FYI, I just confirmed with the latest Spark 1.3 snapshot that the
spark.yarn.maxAppAttempts setting that SPARK-2165 refers to works
perfectly. Great to finally get rid of this problem. The retry also caused
an issue when the eventLogs were enabled, since the spark-events/appXXX
folder already existed when the second attempt started.
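For reference, this is roughly how I pass it at submit time now (the jar
name and event log path below are illustrative, not my actual setup):

    # Cap YARN at a single application attempt so a failed app is not
    # re-run, which also avoids the second attempt colliding with the
    # existing spark-events/appXXX folder when event logging is enabled.
    spark-submit \
      --master yarn-cluster \
      --conf spark.yarn.maxAppAttempts=1 \
      --conf spark.eventLog.enabled=true \
      --conf spark.eventLog.dir=hdfs:///spark-events \
      my-app.jar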
Found a setting that seems to fix this problem, but it does not seem to be
available until Spark 1.3. See
https://issues.apache.org/jira/browse/SPARK-2165
Glad to see that work is being done on the issue, though.
On Tue, Jan 13, 2015 at 8:00 PM, Anders Arpteg arp...@spotify.com wrote:
Yes Andrew, I am. Tried setting spark.yarn.applicationMaster.waitTries to 1
(thanks Sean), but with no luck. Any ideas?
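For the archives, the attempt looked roughly like this (jar name is
illustrative):

    # spark.yarn.applicationMaster.waitTries controls how many times the AM
    # waits for the SparkContext to initialize, not how many times YARN
    # retries the whole app, which may be why it did not help here.
    spark-submit \
      --master yarn-cluster \
      --conf spark.yarn.applicationMaster.waitTries=1 \
      my-app.jar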
On Tue, Jan 13, 2015 at 7:58 PM, Andrew Or and...@databricks.com wrote:
Hi Anders, are you using YARN by any chance?
2015-01-13 0:32 GMT-08:00 Anders Arpteg arp...@spotify.com:
Since I started using Spark 1.2, I've experienced an annoying issue with
failing apps that get executed twice. I'm not talking about tasks inside a
job, which should be retried multiple times before failing the whole app.
I'm talking about the whole app, which seems to close the previous Spark