Thank you, Mich, for your reply.
I tried to follow most of your advice.
When spark.streaming.kafka.allowNonConsecutiveOffsets=false, I got the
following error:
Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most
recent failure: Lost task 0.0 in stage 1.0 (TID 3)
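For context, committed Kafka transactions leave gaps in the offset sequence (the transaction markers occupy offsets), which is why the DStream connector typically needs this flag set to true for transactional topics. A minimal sketch of the settings involved, shown as plain key/value pairs rather than a full Spark job (the values here are an assumption about the intended configuration, not taken from the thread):

```java
import java.util.HashMap;
import java.util.Map;

public class SparkKafkaSettings {
    // Sketch: the Spark-side flag and the Kafka consumer parameter
    // discussed in this thread, as plain key/value pairs.
    public static Map<String, String> settings() {
        Map<String, String> conf = new HashMap<>();
        // Transactional producers leave offset gaps (commit/abort markers),
        // so the spark-streaming-kafka-0-10 connector must allow them.
        conf.put("spark.streaming.kafka.allowNonConsecutiveOffsets", "true");
        // Kafka consumer parameter: only read records from committed txns.
        conf.put("isolation.level", "read_committed");
        return conf;
    }
}
```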
Hi Kidong,
There are a few potential reasons why the message counts from your Kafka
producer and your Spark Streaming consumer might not match, especially with
transactional messages and the read_committed isolation level.
1) Ensure that both your Spark Streaming job and the Kafka consumer
written
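When recounting messages to verify the producer/consumer mismatch, a standalone consumer used for the recount should also read with read_committed, or it will count records from aborted transactions. A hypothetical configuration helper for such a verification consumer (the group id and defaults are placeholders, not from the thread):

```java
import java.util.Properties;

public class CountVerifierConfig {
    // Hypothetical helper: properties for a consumer that recounts only
    // committed (non-aborted) transactional records from the beginning.
    public static Properties readCommitted(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("group.id", "count-verifier");        // placeholder group id
        props.put("isolation.level", "read_committed"); // hide aborted txns
        props.put("enable.auto.commit", "false");       // manual offset control
        props.put("auto.offset.reset", "earliest");     // count from the start
        return props;
    }
}
```

With read_uncommitted (the Kafka default), the recount would include records from aborted transactions, which is one common source of a count mismatch.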