I am using Spark Streaming and reading data from Kafka with KafkaUtils.createDirectStream. I have "auto.offset.reset" set to smallest.
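
For reference, the stream is set up roughly like this (the broker list and topic name below are placeholders; I am on the Kafka 0.8 direct stream API from spark-streaming-kafka):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("KafkaDirectStreamJob")
val ssc = new StreamingContext(conf, Seconds(10))

// "smallest" means: start from the earliest offsets Kafka still retains.
val kafkaParams = Map[String, String](
  "metadata.broker.list" -> "broker1:9092,broker2:9092", // placeholder brokers
  "auto.offset.reset" -> "smallest"
)

val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("my-topic")) // placeholder topic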
But for some Kafka partitions I get kafka.common.OffsetOutOfRangeException, and my Spark job crashes. I want to understand whether there is a graceful way to handle this failure without killing the job. I would like to keep ignoring these exceptions, since the other partitions are fine and I am okay with some data loss. Is there any way to handle this so that my Spark job does not crash? Increasing the Kafka retention period is not an option for me.

I tried wrapping the DStream returned by createDirectStream() in a Try construct, roughly as sketched below, but since the exception happens in the executors, the Try had no effect. Do you have any ideas of how to handle this?
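
This is approximately what the failed attempt looked like (ssc and kafkaParams as in the snippet above). As far as I can tell, a Try on the driver can never catch this, because createDirectStream only builds the DStream lineage lazily; the actual fetch from Kafka, where the OffsetOutOfRangeException is thrown, happens later inside executor tasks when each batch runs:

import scala.util.Try
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Wrapping the stream creation on the driver in a Try. In my case this
// always succeeds: createDirectStream just sets up the lazy DStream lineage
// here. The fetch from Kafka, and hence the OffsetOutOfRangeException, only
// happens later, in the executors, when each batch's tasks consume their
// partitions.
val maybeStream = Try {
  KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, Set("my-topic")) // same placeholder topic as above
}
// maybeStream is Success(...) even when offsets are already out of range,
// so the job still dies once the batch actually executes.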