[ https://issues.apache.org/jira/browse/SPARK-29683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363055#comment-17363055 ]

Jogesh Anand commented on SPARK-29683:
--------------------------------------

Facing the same issue with 3.0.1 on a streaming job:

{noformat}
Application Report :
Application-Id : application_123934893489289_0001
Application-Name : yyyyy
Application-Type : SPARK
User : livy
Queue : default
Application Priority : 0
Start-Time : 1622842590806
Finish-Time : 1623111223883
Progress : 100%
State : FINISHED
Final-State : FAILED
Tracking-URL : ip-xx.ec2.internal:18080/history/application_1123934893489289_0001/3
RPC Port : 36535
AM Host : ip-10-160-98-55.ec2.internal
Aggregate Resource Allocation : 41388024201 MB-seconds, 2854390 vcore-seconds
Aggregate Resource Preempted : 0 MB-seconds, 0 vcore-seconds
Log Aggregation Status : TIME_OUT
Diagnostics : Due to executor failures all available nodes are blacklisted
Unmanaged Application : false
Application Node Label Expression : <Not set>
AM container Node Label Expression : <DEFAULT_PARTITION>
TimeoutType : LIFETIME  ExpiryTime : UNLIMITED  RemainingTime : -1seconds
{noformat}

Is there a workaround or any updates on this?


> Job failed due to executor failures all available nodes are blacklisted
> -----------------------------------------------------------------------
>
>                 Key: SPARK-29683
>                 URL: https://issues.apache.org/jira/browse/SPARK-29683
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 3.0.0
>            Reporter: Genmao Yu
>            Priority: Major
>
> My streaming job will fail *due to executor failures all available nodes are
> blacklisted*. This exception is thrown only when all nodes are blacklisted:
> {code:java}
> def isAllNodeBlacklisted: Boolean = currentBlacklistedYarnNodes.size >= numClusterNodes
> val allBlacklistedNodes = excludeNodes ++ schedulerBlacklist ++ allocatorBlacklist.keySet
> {code}
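> To make this concrete, here is a standalone sketch with made-up node names
> (illustration only, not the actual allocator code) of how the two expressions
> interact:
> {code:java}
> // Hypothetical inputs, for illustration only.
> val excludeNodes       = Set("nodeA")        // user-configured exclude list
> val schedulerBlacklist = Set("nodeB")        // nodes excluded by the task scheduler
> val allocatorBlacklist = Map("nodeC" -> 0L)  // nodes excluded by the allocator (node -> expiry)
> val numClusterNodes    = 3
>
> val allBlacklistedNodes = excludeNodes ++ schedulerBlacklist ++ allocatorBlacklist.keySet
> // Assuming currentBlacklistedYarnNodes is refreshed from this union, the check becomes:
> val isAllNodeBlacklisted = allBlacklistedNodes.size >= numClusterNodes  // 3 >= 3 => true
> // and the application fails with
> // "Due to executor failures all available nodes are blacklisted".
> {code}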
> After diving into the code, I found some critical conditions that are not
> handled properly:
>  - unchecked `excludeNodes`: it comes from user configuration. If it is not set
> properly, it can by itself make "currentBlacklistedYarnNodes.size >=
> numClusterNodes" true. For example, we may exclude nodes that are not in the
> YARN cluster at all:
> {code:java}
> excludeNodes = (invalid1, invalid2, invalid3)
> clusterNodes = (valid1, valid2)
> {code}
>  - `numClusterNodes` may equal 0: during a YARN ResourceManager HA failover, it
> takes some time for all NodeManagers to re-register with the ResourceManager.
> In that window, `numClusterNodes` may be 0 or some other small number, and the
> Spark driver fails.
>  - too strong a condition check: the Spark driver fails as soon as
> "currentBlacklistedYarnNodes.size >= numClusterNodes" holds. This condition
> alone does not indicate an unrecoverable failure; for example, some
> NodeManagers may simply be restarting. We could allow a grace period before
> failing the job (see the rough sketch after this list).
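> A rough sketch of the guards suggested above (the class and all names are made
> up for illustration; this is not a patch against the actual allocator code):
> {code:java}
> // Illustration only: not Spark's actual implementation.
> class BlacklistFailureGuard(
>     clusterNodeNames: () => Set[String],   // current NodeManagers as reported by the RM
>     configuredExcludeNodes: Set[String],   // the user-configured excludeNodes above
>     gracePeriodMs: Long) {
>
>   private var firstAllBlacklistedTime: Option[Long] = None
>
>   def shouldFail(schedulerBlacklist: Set[String],
>                  allocatorBlacklist: Set[String],
>                  now: Long): Boolean = {
>     val nodes = clusterNodeNames()
>     // 1. Ignore user-excluded nodes that are not actually in the cluster.
>     val effectiveExclude = configuredExcludeNodes.intersect(nodes)
>     val blacklisted = effectiveExclude ++ schedulerBlacklist ++ allocatorBlacklist
>     // 2. An empty cluster view (e.g. right after an RM failover) does not mean
>     //    "all nodes are blacklisted".
>     val allBlacklisted = nodes.nonEmpty && blacklisted.size >= nodes.size
>     if (!allBlacklisted) {
>       firstAllBlacklistedTime = None
>       false
>     } else {
>       // 3. Only fail once the condition has persisted for the grace period.
>       firstAllBlacklistedTime = firstAllBlacklistedTime.orElse(Some(now))
>       now - firstAllBlacklistedTime.get >= gracePeriodMs
>     }
>   }
> }
> {code}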


