It is controlled by "spark.task.maxFailures", which defaults to 4 (hence the
behavior you are seeing). See
http://spark.apache.org/docs/latest/configuration.html#scheduling.
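
For example, here is a minimal sketch of setting it programmatically in a
Scala application (the app name and the value 8 are just placeholders; note
that the property must be set before the SparkContext is created):

    import org.apache.spark.{SparkConf, SparkContext}

    object RetryConfigExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("cassandra-fetch")   // hypothetical app name
          // Allow up to 8 attempts per task before the job fails
          // (the default is 4, which matches what you describe).
          .set("spark.task.maxFailures", "8")
        val sc = new SparkContext(conf)
        // ... run the job that reads from Cassandra ...
        sc.stop()
      }
    }

The same setting can also be passed on the command line, e.g.
spark-submit --conf spark.task.maxFailures=8 ...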

On Fri, Dec 5, 2014 at 11:02 AM, shahab <shahab.mok...@gmail.com> wrote:

> Hello,
>
> For some (unknown) reason, some of my tasks that fetch data from
> Cassandra fail quite often, and apparently the master removes a task
> that fails more than 4 times (in my case).
>
> Is there any way to increase the number of retries?
>
> best,
> /Shahab
>
