For the Kafka 0.8 integration, you are literally starting a streaming
job against a fixed set of topic-partitions. That set will not change
for the lifetime of the job, so you'll need to restart the Spark job
if you add Kafka partitions.
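
To make that concrete, here's a minimal sketch of the 0.8 direct
stream (broker address and topic name are placeholders I made up).
The topic set you pass in is resolved to topic-partitions once, at
stream creation:

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val conf = new SparkConf().setAppName("kafka-0-8-direct")
    val ssc = new StreamingContext(conf, Seconds(10))

    // "broker1:9092" and "mytopic" are placeholder names
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")

    // The topic set is resolved to a fixed set of topic-partitions
    // when the stream is created; partitions added to "mytopic"
    // afterwards are never read. You'd have to restart the job.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("mytopic"))

    stream.foreachRDD { rdd => println(s"records: ${rdd.count()}") }
    ssc.start()
    ssc.awaitTermination()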

For the Kafka 0.10 / Spark 2.0 integration, if you use Subscribe or
SubscribePattern, it should pick up new partitions as they are added.
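
For comparison, a sketch of the 0.10 integration using Subscribe
(again, the broker, topic, and group names are placeholders).
SubscribePattern works the same way with a regex instead of a topic
list; in both cases partitions added while the job is running show up
in later batches:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

    val conf = new SparkConf().setAppName("kafka-0-10-direct")
    val ssc = new StreamingContext(conf, Seconds(10))

    // placeholder broker and group names
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // Subscribe tracks the topic's partition list, so partitions
    // added while the job is running are picked up without a restart.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("mytopic"), kafkaParams)
    )

    // Pattern-based alternative:
    // ConsumerStrategies.SubscribePattern[String, String](
    //   java.util.regex.Pattern.compile("my.*"), kafkaParams)

    stream.foreachRDD { rdd => println(s"records: ${rdd.count()}") }
    ssc.start()
    ssc.awaitTermination()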

On Fri, Jul 22, 2016 at 11:29 AM, Srikanth <srikanth...@gmail.com> wrote:
> Hello,
>
> I'd like to understand how Spark Streaming (direct) handles the
> addition of Kafka partitions. Will a running job be aware of new
> partitions and read from them, given that it uses the Kafka APIs to
> query offsets and offsets are handled internally?
>
> Srikanth
