a failure in the data reader results in a task failure, and Spark will
retry the task for you (IIRC it retries 3 times before failing the job).
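
For reference, the number of attempts is controlled by spark.task.maxFailures
(the default is 4 attempts, i.e. 3 retries after the first failure). A minimal
sketch of setting it explicitly, assuming a cluster deployment (in local mode
the retry count comes from the master string instead, e.g. local[4, 3]):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("reader-retry-demo")              // hypothetical app name
  .config("spark.task.maxFailures", "4")     // attempts per task before the job fails
  .getOrCreate()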

Can you check your Spark log and see if the task fails consistently?
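
If the failures are transient and you would rather handle them inside the
reader itself, a rough sketch (independent of the exact DataSourceV2 reader
interface) is to wrap the calls into the internal system in a bounded retry
loop; internalClient.fetchNextBatch is a hypothetical call into your internal
source:

import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

def withRetries[T](maxAttempts: Int)(op: => T): T = {
  @tailrec
  def loop(attempt: Int): T = Try(op) match {
    case Success(v) => v
    case Failure(_) if attempt < maxAttempts =>
      // a real reader might also re-open the connection here before retrying
      loop(attempt + 1)
    case Failure(e) => throw e
  }
  loop(1)
}

// usage inside the reader's next()/get(), e.g.:
// val batch = withRetries(3) { internalClient.fetchNextBatch(partitionId) }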

On Tue, Jul 3, 2018 at 2:17 PM assaf.mendelson <assaf.mendel...@rsa.com>
wrote:

> Hi All,
>
> I have implemented a data source V2 which integrates with an internal system
> and I need to make it resilient to errors in the internal data source.
>
> The issue is that currently, if there is an exception in the data reader,
> the exception seems to fail the entire task. I would prefer instead to just
> restart the relevant partition.
>
> Is there a way to do it or would I need to solve it inside the iterator
> itself?
>
> Thanks,
>     Assaf.
>
>
>
> --
> Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/
>
