TD has addressed this. It should be available in 1.2.0.
https://issues.apache.org/jira/browse/SPARK-3495
On Thu, Oct 2, 2014 at 9:45 AM, maddenpj madde...@gmail.com wrote:
I am seeing this same issue. Bumping for visibility.
Hi Dibyendu,
That would be great. One of the biggest drawbacks of Kafka utils, as well as
your implementation, is that I am unable to scale out processing. I am
relatively new to Spark and Spark Streaming, but from what I read and what I
observe with my deployment is that having the RDD created on one
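A common workaround for this (described in the Spark Streaming programming guide) is to create several Kafka input DStreams and union them, so ingestion is spread over multiple receivers rather than a single worker. Below is a minimal sketch of that pattern; it assumes Spark Streaming 1.x with the spark-streaming-kafka artifact, and the ZooKeeper quorum, consumer group, topic name, and parallelism values are placeholders, not settings from this thread. It needs a live cluster and Kafka broker to actually run.

```scala
// Sketch: scale out Kafka ingestion with multiple receivers (Spark Streaming 1.x).
// Each KafkaUtils.createStream call starts one receiver, which occupies a single
// executor; creating several and unioning them distributes the ingest load.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object MultiReceiverSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("multi-receiver-sketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    val zkQuorum = "zk1:2181"            // assumption: your ZooKeeper quorum
    val group = "my-consumer-group"      // assumption: consumer group id
    val topicMap = Map("my-topic" -> 1)  // topic -> consumer threads per receiver

    // One receiver per stream; Spark schedules them on (potentially) different workers.
    val numReceivers = 4
    val kafkaStreams = (1 to numReceivers).map { _ =>
      KafkaUtils.createStream(ssc, zkQuorum, group, topicMap)
    }

    // Union the partial streams, then repartition before heavy processing so the
    // downstream work is spread across the cluster, not tied to the receivers.
    val unified = ssc.union(kafkaStreams).repartition(8)
    unified.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

The `repartition` after the union matters as much as the extra receivers: without it, downstream tasks stay colocated with the blocks the receivers wrote.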
Hi,
To test the resiliency of Kafka Spark Streaming, I killed the worker
reading from the Kafka topic and noticed that the driver is unable to replace
the worker; the job becomes a rogue job that keeps running but does nothing
from that point on.
Is this a known issue? Are there any workarounds?
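The SPARK-3495 fix mentioned earlier in the thread addresses re-scheduling the receiver itself. Independently of that, a common mitigation for losing received data when a receiver's worker dies (available from Spark 1.2) is driver checkpointing combined with the receiver write-ahead log. The sketch below shows that wiring; the checkpoint path and app name are placeholders, and the actual Kafka DStream construction is elided. It requires a cluster and a reliable filesystem such as HDFS, so it is not runnable standalone.

```scala
// Sketch: checkpoint + receiver write-ahead log for recovery (Spark 1.2+).
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object RecoverableSketch {
  val checkpointDir = "hdfs:///checkpoints/app"  // assumption: HDFS checkpoint dir

  def createContext(): StreamingContext = {
    val conf = new SparkConf()
      .setAppName("recoverable-sketch")
      // Persist received blocks to reliable storage so data survives receiver loss.
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    // ... build the Kafka DStream and processing graph here ...
    ssc
  }

  def main(args: Array[String]): Unit = {
    // On restart, rebuild the streaming graph from the checkpoint if one exists,
    // instead of starting from scratch and dropping in-flight data.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that with checkpoint recovery, all DStream setup must happen inside the creating function; transformations added outside it are not restored after a restart.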