[ https://issues.apache.org/jira/browse/SPARK-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427123#comment-16427123 ]
Attila Zsolt Piros commented on SPARK-16630:
--------------------------------------------

[~irashid] I would reuse spark.blacklist.application.maxFailedExecutorsPerNode, which already has a default of 2. I think it makes sense to use the same limit for per-node launch failures before adding the node to the blacklist. But in that case I cannot factor spark.yarn.max.executor.failures into the default. (A sketch of the counting logic follows the quoted issue below.)

> Blacklist a node if executors won't launch on it.
> -------------------------------------------------
>
>                 Key: SPARK-16630
>                 URL: https://issues.apache.org/jira/browse/SPARK-16630
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.6.2
>            Reporter: Thomas Graves
>            Priority: Major
>
> On YARN, it's possible that a node is messed up or misconfigured such that a
> container won't launch on it. For instance, the Spark external shuffle
> handler may not have been loaded on it, or maybe it's just some other
> hardware or Hadoop configuration issue.
> It would be nice if we could recognize this happening and stop trying to
> launch executors on it, since that could end up causing us to hit our max
> number of executor failures and then kill the job.
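To make the proposal concrete, here is a minimal sketch in Scala of the counting-and-threshold logic only. The LaunchFailureTracker class and its method names are hypothetical, not actual Spark code; a real change would have to hook into the YARN allocator's handling of failed container launches and be fed the value of spark.blacklist.application.maxFailedExecutorsPerNode from the SparkConf.

{code:scala}
import scala.collection.mutable

// Hypothetical helper, not the actual YARN allocator code: counts executor
// launch failures per node and blacklists a node once the count reaches the
// limit already used by the application-level blacklist
// (spark.blacklist.application.maxFailedExecutorsPerNode, default 2).
class LaunchFailureTracker(maxFailedExecutorsPerNode: Int) {
  private val failuresPerNode = mutable.Map.empty[String, Int].withDefaultValue(0)
  private val blacklistedNodes = mutable.Set.empty[String]

  /** Record a failed launch on `host`; returns true if the node just became blacklisted. */
  def onExecutorLaunchFailure(host: String): Boolean = {
    failuresPerNode(host) += 1
    // Reuse the same per-node threshold as task-failure-based blacklisting;
    // Set.add returns true only on the first crossing, so callers can react once.
    failuresPerNode(host) >= maxFailedExecutorsPerNode && blacklistedNodes.add(host)
  }

  /** The allocator would skip requesting containers on these hosts. */
  def isBlacklisted(host: String): Boolean = blacklistedNodes.contains(host)
}
{code}

The sketch deliberately leaves out how the blacklist is propagated; presumably the allocator would also need to tell YARN's resource manager not to offer containers on those hosts, which is a separate concern from the counting shown here.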