Hi Bharat,

Thanks for your email. If the "Kafka Reader" worker process dies, it will
be replaced by a different machine, and it will resume consuming from the
offset where it left off (for each partition). The same recovery would
apply even if I had an individual Receiver for every partition.
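To illustrate the idea, here is a minimal Scala sketch of resume-from-last-committed-offset logic. This is not the actual consumer code; the OffsetStore below is a hypothetical stand-in for the ZooKeeper-backed offset commits the consumer performs.

    import scala.collection.mutable

    // Hypothetical durable store mapping partition -> next offset to read.
    // The real consumer persists these offsets in ZooKeeper.
    class OffsetStore {
      private val committed = mutable.Map[Int, Long]().withDefaultValue(0L)
      def commit(partition: Int, offset: Long): Unit = committed(partition) = offset
      def lastCommitted(partition: Int): Long = committed(partition)
    }

    // When a replacement worker starts, it asks the store where each
    // partition left off and resumes consuming from there.
    def resumeOffsets(store: OffsetStore, partitions: Seq[Int]): Map[Int, Long] =
      partitions.map(p => p -> store.lastCommitted(p)).toMap

    object FailoverDemo extends App {
      val store = new OffsetStore
      store.commit(partition = 0, offset = 42L)
      store.commit(partition = 1, offset = 7L)
      // Simulated failover: a new worker recovers the per-partition offsets.
      println(resumeOffsets(store, Seq(0, 1, 2))) // Map(0 -> 42, 1 -> 7, 2 -> 0)
    }

Because the offsets are committed per partition, the recovery path is the same whether one process owns all partitions or each partition has its own Receiver.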

Regards,
Dibyendu


On Tue, Aug 26, 2014 at 5:43 AM, bharatvenkat <bvenkat.sp...@gmail.com>
wrote:

> I like this consumer for what it promises - better control over offset and
> recovery from failures.  If I understand this right, it still uses single
> worker process to read from Kafka (one thread per partition) - is there a
> way to specify multiple worker processes (on different machines) to read
> from Kafka?  Maybe one worker process for each partition?
>
> If there is no such option, what happens when the single machine hosting
> the
> "Kafka Reader" worker process dies and is replaced by a different machine
> (like in cloud)?
>
> Thanks,
> Bharat
