Spark Job triggers second attempt

2015-05-07 Thread ๏̯͡๏
How can I stop Spark from triggering a second attempt when the first one fails? I do not want to wait for the second attempt to fail again, so that I can debug faster. Neither .set("spark.yarn.maxAppAttempts", "0") nor .set("spark.yarn.maxAppAttempts", "1") is helping. -- Deepak
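[Editor's note: a minimal sketch of how this property is normally set from application code; the app name is a placeholder, and as the replies below explain, this only takes effect in yarn-client mode. YARN always makes at least one attempt, so 1 (not 0) is the smallest meaningful value.]

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: setting the property from application code.
// This only works when YARN has not launched the app yet (yarn-client mode).
// YARN always makes at least one attempt, so 1 (not 0) is the smallest useful value.
val conf = new SparkConf()
  .setAppName("debug-job") // placeholder name
  .set("spark.yarn.maxAppAttempts", "1") // exactly one attempt, no automatic re-submission

val sc = new SparkContext(conf)
```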

Re: Spark Job triggers second attempt

2015-05-07 Thread Doug Balog
I bet you are running on YARN in cluster mode. If you are running on YARN in client mode, .set("spark.yarn.maxAppAttempts", "1") works as you expect, because YARN doesn't start your app on the cluster until you call SparkContext(). But if you are running on YARN in cluster mode, the driver
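[Editor's note: following Doug's point, a sketch of the usual cluster-mode workaround, which is to pass the property at submit time rather than in code; the jar and class names below are placeholders.]

```scala
// In yarn-cluster mode the driver runs inside YARN's ApplicationMaster, so a
// SparkConf.set(...) in application code happens only after YARN has already
// decided how many attempts it will allow. Supply the property at submit time
// instead, e.g. (jar and class names are placeholders):
//
//   spark-submit --master yarn --deploy-mode cluster \
//     --conf spark.yarn.maxAppAttempts=1 \
//     --class com.example.MyJob my-job.jar
//
// Inside the driver you can check which value actually took effect:
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf())
println(sc.getConf.get("spark.yarn.maxAppAttempts", "<not set>"))
```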

Re: Spark Job triggers second attempt

2015-05-07 Thread Richard Marscher
Hi, I think you may want to use this setting: spark.task.maxFailures (default: 4), the number of individual task failures before giving up on the job. Should be greater than or equal to 1; number of allowed retries = this value - 1. On Thu, May 7, 2015 at 2:34 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:
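[Editor's note: a minimal sketch of wiring that in (app name is a placeholder). This limits retries of individual tasks, which is a different knob from spark.yarn.maxAppAttempts, which limits re-submissions of the whole application.]

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch. spark.task.maxFailures bounds how often a single task may fail
// before the job is aborted (allowed retries = value - 1, minimum value 1).
// It is separate from spark.yarn.maxAppAttempts, which governs re-attempts of
// the whole application.
val conf = new SparkConf()
  .setAppName("debug-job") // placeholder
  .set("spark.task.maxFailures", "1") // abort the job on the first task failure

val sc = new SparkContext(conf)
```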