Hi,

I am experimenting with Spark Streaming and Kafka. I would appreciate it if
someone could confirm whether the following assumption is correct.

If I have multiple computations (each with its own output) on one stream
(created with KafkaUtils.createDirectStream), then there is a chance of
hitting ConcurrentModificationException: KafkaConsumer is not safe for
multi-threaded access. To avoid this, I should create a separate stream
with a different "group.id" for each computation.
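If that is indeed the cause, here is a sketch of what I have in mind, using the spark-streaming-kafka-0-10 integration (the topic name, group ids, broker address, and batch interval are just placeholders, and "sc" is an existing SparkContext):

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// Hypothetical helper: same Kafka params except for the group.id
def kafkaParams(groupId: String): Map[String, Object] = Map(
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> groupId,
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val ssc = new StreamingContext(sc, Seconds(10))

// One stream per computation, each with its own group.id, so each
// stream is backed by its own KafkaConsumer instance
val streamA = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent,
  Subscribe[String, String](Seq("my-topic"), kafkaParams("group-a")))
val streamB = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent,
  Subscribe[String, String](Seq("my-topic"), kafkaParams("group-b")))

streamA.map(_.value).print()                     // computation 1
streamB.foreachRDD(rdd => println(rdd.count()))  // computation 2

ssc.start()
```

Or is caching the single shared stream (so the second output does not re-read from Kafka) the more idiomatic fix here?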

Am I right?

Best regards,
Anton
