[ https://issues.apache.org/jira/browse/SPARK-24016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445252#comment-16445252 ]

Saisai Shao commented on SPARK-24016:
-------------------------------------

I think this can be useful if we enable 
"spark.blacklist.killBlacklistedExecutors"; the NM could then avoid relaunching 
executors on the bad nodes.
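
For reference, a minimal sketch of the submit-time configuration being discussed. The two property names are from Spark's blacklisting feature (available as of 2.3); the application class and jar are placeholders, not from this issue:

```shell
# Enable task-based blacklisting, and kill executors on nodes that get
# blacklisted so that replacement containers can avoid known-bad nodes.
spark-submit \
  --master yarn \
  --conf spark.blacklist.enabled=true \
  --conf spark.blacklist.killBlacklistedExecutors=true \
  --class com.example.MyApp \
  myapp.jar
```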

> Yarn does not update node blacklist in static allocation
> --------------------------------------------------------
>
>                 Key: SPARK-24016
>                 URL: https://issues.apache.org/jira/browse/SPARK-24016
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler, YARN
>    Affects Versions: 2.3.0
>            Reporter: Imran Rashid
>            Priority: Major
>
> Task-based blacklisting keeps track of bad nodes, and updates YARN with that 
> set of nodes so that Spark will not receive more containers on that node.  
> However, that only happens with dynamic allocation.  Though it's far more 
> important with dynamic allocation, even with static allocation this matters: 
> if executors die, or if the cluster was too busy at the original resource 
> request to give all the containers, the Spark application will add new 
> containers in the middle.  And we want an updated node blacklist for that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
