Thanks, Mich, for your reply.
I agree it is not very scalable or efficient, but it works correctly for
Kafka transactions, and committing offsets to Kafka asynchronously has
caused no problems so far.
Let me give you some more details about my streaming job.
The CustomReceiver does not receive anything.
Interesting.
My concern is the infinite loop in *foreachRDD*: the *while(true)* loop
within foreachRDD creates an infinite loop inside each Spark executor. This
might not be the most efficient approach, especially since offsets are
committed asynchronously.
HTH
Mich Talebzadeh,
Technologist
Because Spark Streaming with Kafka transactions does not work correctly for
my needs, I moved to another approach using a raw Kafka consumer, which
handles read_committed messages from Kafka correctly.
My code looks like the following.
JavaDStream stream = ssc.receiverStream(new CustomReceiver());
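For context, the receive loop inside the CustomReceiver is essentially the
pattern below. This is a stdlib-only sketch: the `source` iterator and the
`stored` queue stand in for the real Kafka poll and Spark's
`Receiver.store(...)`, which are my assumptions here, and a stop flag
replaces the bare *while(true)* so the thread can shut down cleanly:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class ReceiverLoopSketch {
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    // Stand-in for Spark's internal block store (Receiver.store in a real job).
    final BlockingQueue<String> stored = new LinkedBlockingQueue<>();

    // Stand-in for consumer.poll(...): yields one batch of records per call.
    private final Iterator<List<String>> source;

    ReceiverLoopSketch(List<List<String>> batches) {
        this.source = batches.iterator();
    }

    // The receive loop: poll, store, repeat until asked to stop.
    void run() {
        while (!stopped.get() && source.hasNext()) {
            for (String record : source.next()) {
                stored.add(record);  // Receiver.store(record) in the real job
            }
            // In the real job, offsets are committed here, after storing.
        }
    }

    void stop() {
        stopped.set(true);  // called from onStop() in a real Receiver
    }
}
```

Compared with a bare *while(true)*, checking a stop flag on every iteration
lets the receiver's onStop() terminate the thread without tearing down the
executor.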