The relevant settings are on the configuration page:

http://spark.apache.org/docs/latest/configuration.html

spark.streaming.kafka.maxRetries controls how many consecutive retries the
driver makes when looking up leader offsets for the direct stream (the
default of 1 means at most two attempts). This is what governs the
"Couldn't find leader offsets" failure you're seeing on the driver.

spark.task.maxFailures controls how many times any single task may fail
before the job is failed (default 4), which covers fetch failures on the
executors.
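
Both can be set on the SparkConf before the streaming context is created.
A minimal sketch (the app name and the retry/failure counts here are
illustrative values, not recommendations):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Illustrative values; tune them to how long a leader election takes
// in your cluster relative to the batch interval.
val conf = new SparkConf()
  .setAppName("MyKafkaJob")
  // Driver-side retries when finding leader offsets (default 1).
  .set("spark.streaming.kafka.maxRetries", "5")
  // Failures tolerated per task before the job fails (default 4).
  .set("spark.task.maxFailures", "8")

val ssc = new StreamingContext(conf, Seconds(10))

Or equivalently on the command line:

spark-submit --conf spark.streaming.kafka.maxRetries=5 \
             --conf spark.task.maxFailures=8 ...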

On Mon, Jun 13, 2016 at 8:25 AM, Bryan Jeffrey <bryan.jeff...@gmail.com> wrote:
> All,
>
> We're running a Spark job that is consuming data from a large Kafka cluster
> using the direct stream API.  We're seeing intermittent
> NotLeaderForPartitionExceptions when the partition leader is moved to
> another broker.  Currently, even with retry enabled, we're seeing the job
> fail with this exception.  Is there a configuration setting I am missing?  How are these
> issues typically handled?
>
> User class threw exception: org.apache.spark.SparkException:
> ArrayBuffer(kafka.common.NotLeaderForPartitionException,
> org.apache.spark.SparkException: Couldn't find leader offsets for
> Set([MyTopic,43]))
>
> Thank you,
>
> Bryan Jeffrey
>
