[ https://issues.apache.org/jira/browse/SPARK-29683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17467924#comment-17467924 ]
Apache Spark commented on SPARK-29683:
--------------------------------------

User 'sungpeo' has created a pull request for this issue:
https://github.com/apache/spark/pull/35089

> Job failed due to executor failures all available nodes are blacklisted
> -----------------------------------------------------------------------
>
>                 Key: SPARK-29683
>                 URL: https://issues.apache.org/jira/browse/SPARK-29683
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 3.0.0
>            Reporter: Genmao Yu
>            Priority: Major
>
> My streaming job fails with *due to executor failures all available nodes are blacklisted*. This exception is thrown only when all nodes are blacklisted:
> {code:java}
> def isAllNodeBlacklisted: Boolean = currentBlacklistedYarnNodes.size >= numClusterNodes
>
> val allBlacklistedNodes = excludeNodes ++ schedulerBlacklist ++ allocatorBlacklist.keySet
> {code}
> After diving into the code, I found several critical conditions that are not handled properly:
> - Unchecked `excludeNodes`: this comes from user configuration. If it is set incorrectly, it can by itself make "currentBlacklistedYarnNodes.size >= numClusterNodes" hold. For example, the excluded nodes may not even exist in the YARN cluster:
> {code:java}
> excludeNodes = (invalid1, invalid2, invalid3)
> clusterNodes = (valid1, valid2)
> {code}
> - `numClusterNodes` may equal 0: during a YARN HA failover, it takes some time for all NodeManagers to re-register with the ResourceManager. In that window, `numClusterNodes` may be 0 (or some other transient number), and the Spark driver fails.
> - Too strict a condition check: the Spark driver fails as soon as "currentBlacklistedYarnNodes.size >= numClusterNodes" holds. This condition does not necessarily indicate an unrecoverable failure; for example, some NodeManagers may simply be restarting. We could allow a waiting period before failing the job; a hedged sketch of such a check follows the list below.
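> The three conditions above suggest a more defensive version of the check. The following is only a minimal sketch under stated assumptions, not the change in the linked pull request; the object and method names, and the `gracePeriodMs` / `firstAllBlacklistedTime` parameters, are hypothetical:
> {code:java}
> // Hypothetical sketch only -- not the actual Spark YARN allocator code.
> object BlacklistCheckSketch {
>   def shouldFailOnBlacklist(
>       excludeNodes: Set[String],             // user-configured excludes (may be invalid)
>       schedulerBlacklist: Set[String],
>       allocatorBlacklist: Set[String],
>       clusterNodes: Set[String],             // nodes currently registered with the RM
>       firstAllBlacklistedTime: Option[Long], // when the condition first held, if ever
>       gracePeriodMs: Long,                   // assumed grace period before failing
>       now: Long): Boolean = {
>     // (1) Count only excluded nodes that actually exist in the cluster, so a
>     //     misconfigured exclude list cannot inflate the blacklist size.
>     val effectiveExcludes = excludeNodes.intersect(clusterNodes)
>     val blacklisted = effectiveExcludes ++ schedulerBlacklist ++ allocatorBlacklist
>     if (clusterNodes.isEmpty) {
>       // (2) During an RM failover the cluster may briefly report zero registered
>       //     NodeManagers; never treat an empty view as "all nodes blacklisted".
>       false
>     } else {
>       // (3) Require the all-blacklisted condition to persist for a grace
>       //     period instead of failing the driver immediately.
>       val allBlacklisted = blacklisted.size >= clusterNodes.size
>       allBlacklisted && firstAllBlacklistedTime.exists(t => now - t >= gracePeriodMs)
>     }
>   }
> }
> {code}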