I like this consumer for what it promises - better control over offsets and
recovery from failures. If I understand this right, it still uses a single
worker process to read from Kafka (one thread per partition) - is there a
way to specify multiple worker processes (on different machines) to read
from Kafka? Maybe one worker process for each partition? Something along the
lines of the sketch below is what I have in mind.
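For illustration only, this is the kind of layout I mean - several receiver-based
streams unioned together so each receiver (and ideally each partition) can land on a
different executor. I'm using the built-in KafkaUtils.createStream here just as a
stand-in, since I don't know whether this low-level consumer exposes a similar
factory; the ZooKeeper host, group id, topic name, and partition count are made up:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object MultiReceiverSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("multi-receiver-kafka")
        val ssc = new StreamingContext(conf, Seconds(2))

        val numPartitions = 4  // hypothetical partition count
        val streams = (1 to numPartitions).map { _ =>
          // Each call creates its own receiver, which Spark may schedule
          // on a different executor (machine).
          KafkaUtils.createStream(ssc, "zkhost:2181", "my-group", Map("my-topic" -> 1))
        }
        val unified = ssc.union(streams)
        unified.count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

Is something equivalent possible (or planned) with this consumer?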

If there is no such option, what happens when the single machine hosting the
"Kafka Reader" worker process dies and is replaced by a different machine
(as happens in the cloud)?

Thanks,
Bharat

