It's best to use DataFrames. You can read from Kafka either as a stream or as a batch.
More details here:

https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html#creating-a-kafka-source-for-batch-queries
https://databricks.com/blog/2017/04/26/processing-data-in-apache-kafka-with-structured-streaming-in-apache-spark-2-2.html
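As a rough sketch, a batch read from the earliest to the latest offsets across all partitions could look like the following (assuming Spark 2.2+ with the spark-sql-kafka-0-10 package on the classpath; the broker address and topic name are placeholders you'd replace with your own):

    import org.apache.spark.sql.SparkSession

    object KafkaBatchRead {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("KafkaBatchRead")
          .getOrCreate()

        // Batch read: pulls everything currently in the topic, across all partitions.
        val df = spark.read
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker address
          .option("subscribe", "my-topic")                      // placeholder topic name
          .option("startingOffsets", "earliest")
          .option("endingOffsets", "latest")
          .load()

        // key and value arrive as binary; cast to strings to inspect them.
        df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)",
                      "topic", "partition", "offset")
          .show(truncate = false)

        spark.stop()
      }
    }

You'd typically run it with something like spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 (version to match your Spark build).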

On Mon, Aug 7, 2017 at 6:03 PM, shyla deshpande <deshpandesh...@gmail.com>
wrote:

> Hi all,
>
> What is the easiest way to read all the data from Kafka in a batch program
> for a given topic?
> I have 10 Kafka partitions, but the data is not much. I would like to read
> from the earliest offset across all the partitions for a topic.
>
> I appreciate any help. Thanks
>
