All,

We're running a Spark job that consumes data from a large Kafka cluster using the Kafka direct stream approach. We're seeing intermittent NotLeaderForPartitionExceptions when a partition's leader moves to another broker. Even with retries enabled, the job still fails with this exception. Is there a configuration setting I'm missing? How are these issues typically handled?
User class threw exception: org.apache.spark.SparkException: ArrayBuffer(kafka.common.NotLeaderForPartitionException, org.apache.spark.SparkException: Couldn't find leader offsets for Set([MyTopic,43]))

Thank you,
Bryan Jeffrey
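For reference, this is roughly how we set up the stream. A minimal sketch, assuming Spark Streaming 1.x with the spark-streaming-kafka artifact; the broker addresses, app name, and batch interval are placeholders, and the two retry-related settings (`spark.streaming.kafka.maxRetries` on the driver, and the old consumer property `refresh.leader.backoff.ms` in the Kafka params) are the ones we've been experimenting with:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object MyTopicConsumer {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("MyTopicConsumer") // placeholder app name
      // How many times the driver retries fetching leader offsets
      // before failing the batch (default is 1).
      .set("spark.streaming.kafka.maxRetries", "3")

    val ssc = new StreamingContext(conf, Seconds(10)) // placeholder interval

    val kafkaParams = Map(
      "metadata.broker.list" -> "broker1:9092,broker2:9092", // placeholders
      // Back off before re-fetching leader metadata after a leader moves.
      "refresh.leader.backoff.ms" -> "1000")

    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("MyTopic"))

    stream.foreachRDD(rdd => println(s"Batch record count: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Even with maxRetries raised and the backoff in place, a leader election that outlasts the retry window still kills the job, which is what prompted the question.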