How to utilize Spark filter pushdown in Spark Streaming connected to a Kafka source?












I'm having issues designing an efficient Spark pipeline for an ETL use case. It's a Spark Streaming application that is connected to a Kafka topic at its source. The actual data is not sent via Kafka, though; it sits on HDFS or a NoSQL backend. All that is sent over Kafka is a JSON message providing context information (such as the location of the data to process).
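
To make this concrete, a context message and the way I currently consume it look roughly like the sketch below (broker address, topic name and JSON field names are all made up for the example; the real message only points at the data):

    # Rough sketch, not the actual code. The Kafka value is a small JSON
    # "pointer" such as:
    #   {"dataset": "sensor_42", "path": "hdfs:///data/sensor_42/2018-11-22",
    #    "from": 1542844800, "to": 1542931200}
    import json

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    sc = SparkContext(appName="etl-context-stream")
    ssc = StreamingContext(sc, batchDuration=30)

    stream = KafkaUtils.createDirectStream(
        ssc, ["context-topic"], {"metadata.broker.list": "broker:9092"})

    # each record is a (key, value) pair; the value holds the JSON context
    context_msgs = stream.map(lambda kv: json.loads(kv[1]))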



One direction is to implement a map or flatMap in which a connection to the storage backend is opened, a query is run, and the resulting data is yielded (as a generator) back into Spark's DataFrame; I've sketched this below, after the two issues. However, I'm having trouble with this approach because:



1) Upon yielding data back into the DataFrame, the context record (as received from Kafka) is lost, unless its details are glued onto every returned record, which causes huge overhead since we are talking about time-series data.



2) In Spark you should really query for data through the Data Sources API, so that you get partition pruning and filter pushdown from Spark (or from the data source extension). But I don't see how I could use that API when I already have a DStream connected to Kafka. Should I use foreachBatch at the driver to create a new DStream? (See the second sketch below.)
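
To show what I mean by the map/flatMap direction (and why issue 1 hurts), here is a rough sketch; BackendClient is a made-up placeholder for whatever HDFS/NoSQL reader we would actually use:

    # context_msgs is the DStream of parsed JSON messages from the sketch above.
    # BackendClient is hypothetical; it stands in for the real storage client
    # and is assumed to return rows as dicts.
    def fetch_records(ctx_iter):
        client = BackendClient()  # one connection per partition
        try:
            for ctx in ctx_iter:
                for row in client.query(ctx["path"], ctx["from"], ctx["to"]):
                    # issue 1: the Kafka context has to be glued onto every
                    # single time-series row, which duplicates it massively
                    yield dict(row, **ctx)
        finally:
            client.close()

    records = context_msgs.mapPartitions(fetch_records)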
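For 2), this is roughly what I imagine, if I understand foreachBatch correctly (it appears to be a Structured Streaming feature in Spark 2.4+, not a DStream one). Column names, paths and the output location are made up; the point is that the per-batch reads go through the normal data source API so pruning and pushdown can apply:

    import json

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("etl-foreachbatch").getOrCreate()

    kafka_df = (spark.readStream
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "context-topic")
                .load())

    def process_batch(batch_df, batch_id):
        # the context messages are tiny, so collecting them on the driver is cheap
        contexts = [json.loads(r.value) for r in
                    batch_df.selectExpr("CAST(value AS STRING) AS value").collect()]
        for ctx in contexts:
            # a plain batch read per context message, so partition pruning and
            # filter pushdown can be applied by the source
            data = (spark.read.parquet(ctx["path"])
                    .where(col("ts").between(ctx["from"], ctx["to"])))
            data.write.mode("append").parquet("hdfs:///output/" + ctx["dataset"])

    (kafka_df.writeStream
     .foreachBatch(process_batch)
     .start()
     .awaitTermination())

Is this the right approach, or does it defeat the purpose of streaming?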



Any advice or thoughts are appreciated.



Paul










apache-spark apache-kafka spark-streaming






edited Nov 22 '18 at 17:47









cricket_007
asked Nov 22 '18 at 10:00









Paul Bormans
    You need to deserialize every Kafka message in full, so I doubt there is a query optimization for push downs in Kafka, or other stream sources. It's not like Parquet or ORC where you can target a specific field

    – cricket_007
    Nov 22 '18 at 17:49















