Hi,

Upon restarting my Spark Streaming app, it fails with the following error:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 1.0 (TID 6, localhost):
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of
range with no configured reset policy for partitions: {mt-event-2=1710706}

It is correct that the last read offset was deleted by Kafka because the
retention period expired.
I've set auto.offset.reset in my app, but it appears to be overridden here:

https://github.com/apache/spark/blob/master/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala#L160
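For reference, this is roughly how I set it in my consumer config (the
broker address and group id below are placeholders, not my real values):

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",  // placeholder
      "group.id" -> "mt-event-group",           // placeholder
      // What I set; the linked line looks like it forces this back
      // to "none" for the executor-side consumers.
      "auto.offset.reset" -> "earliest"
    )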

How can I force the app to restart in this case? (I'm fully aware of the
potential data loss.)
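For context, here's a sketch of the workaround I'm considering: drop the
old checkpoint, ask the broker from the driver for the earliest offsets
that still exist, and seed the stream with them explicitly via Subscribe.
Topic, broker address, group id, app name, and batch interval are made up
for illustration, and I haven't tested this end to end:

    import scala.collection.JavaConverters._

    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.TopicPartition
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    object RestartFromEarliest {
      def main(args: Array[String]): Unit = {
        val topic = "mt-event"

        val kafkaParams = Map[String, Object](
          "bootstrap.servers" -> "localhost:9092",   // placeholder
          "group.id" -> "mt-event-group",            // placeholder
          "key.deserializer" -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer]
        )

        // Probe the broker from the driver for the earliest offsets that
        // still exist, instead of relying on the stored, now out-of-range
        // offsets.
        val probe = new KafkaConsumer[String, String](kafkaParams.asJava)
        val partitions = probe.partitionsFor(topic).asScala
          .map(pi => new TopicPartition(pi.topic, pi.partition))
        probe.assign(partitions.asJava)
        probe.seekToBeginning(partitions.asJava)
        val fromOffsets = partitions.map(tp => tp -> probe.position(tp)).toMap
        probe.close()

        val ssc = new StreamingContext(
          new SparkConf().setAppName("mt-event-app"), Seconds(10))

        // Passing explicit offsets to Subscribe makes the stream start
        // there, accepting the data loss between the deleted offset and
        // the earliest one still on the broker.
        val stream = KafkaUtils.createDirectStream[String, String](
          ssc,
          PreferConsistent,
          Subscribe[String, String](Seq(topic), kafkaParams, fromOffsets)
        )

        stream.foreachRDD(rdd => println(s"batch count: ${rdd.count()}"))

        ssc.start()
        ssc.awaitTermination()
      }
    }

As I understand it, seeding offsets explicitly sidesteps the executor-side
auto.offset.reset=none override, since the executors only ever seek to
offsets the driver handed them. If there's a supported way to get
auto.offset.reset honored on restart instead, I'd prefer that.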

Srikanth
