http://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
specifically
http://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html#storing-offsets
Have you set enable.auto.commit to false?
The new consumer stores offsets in Kafka itself, so the group's committed offsets survive regardless of what happens to Spark Streaming's checkpoint folder.
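The pattern from the storing-offsets section linked above can be sketched roughly like this (Scala; the broker address, group id, and topic name are placeholders, and `streamingContext` is assumed to already exist):

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",            // placeholder
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group",                      // placeholder
  "auto.offset.reset" -> "earliest",
  // disable auto-commit so the application controls when offsets are stored
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val stream = KafkaUtils.createDirectStream[String, String](
  streamingContext,
  PreferConsistent,
  Subscribe[String, String](Seq("example-topic"), kafkaParams))

stream.foreachRDD { rdd =>
  // capture the offset ranges for this batch before processing
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process rdd ...
  // commit to Kafka only after processing succeeds
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
```

With `enable.auto.commit` set to false, nothing is committed until `commitAsync` runs, so a failed batch can be reprocessed from its recorded offsets.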
I'm using the Kafka direct stream (auto.offset.reset = earliest) and have enabled
Spark Streaming's checkpointing.
The application starts and consumes messages correctly. Then I stop the
application and clean the checkpoint folder.
I restart the application and expect it to consume the old messages again, but it does not.
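One way to see why this happens is to inspect the offsets committed in Kafka for the consumer group; if offsets exist for the group, auto.offset.reset is ignored and the consumer resumes from them. A sketch using Kafka's bundled CLI tool (broker address and group id are placeholders):

```shell
# describe the group's committed offsets per partition
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group example-group
```

If CURRENT-OFFSET is already at the end of each partition, the restarted application will pick up there rather than from the earliest messages.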