wuyi created SPARK-26269:
----------------------------

             Summary: YarnAllocator should have same blacklist behaviour with 
YARN to maximize use of cluster resource
                 Key: SPARK-26269
                 URL: https://issues.apache.org/jira/browse/SPARK-26269
             Project: Spark
          Issue Type: Improvement
          Components: YARN
    Affects Versions: 2.4.0, 2.3.2, 2.3.1
            Reporter: wuyi
             Fix For: 2.4.0


Currently, YarnAllocator may put a node into the blacklist when a completed 
container on it has an exit status other than SUCCESS, PREEMPTED, 
KILLED_EXCEEDED_VMEM, or KILLED_EXCEEDED_PMEM. However, for other exit 
statuses, e.g. KILLED_BY_RESOURCEMANAGER, YARN does not consider the related 
node a candidate for blacklisting (see YARN's explanation for details: 
https://github.com/apache/hadoop/blob/228156cfd1b474988bc4fedfbf7edddc87db41e3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java#L273).
 So, relaxing the current blacklist rule to match YARN's behaviour would 
maximize use of cluster resources.
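
The relaxed rule could be sketched as follows. This is a hypothetical 
illustration, not Spark's actual YarnAllocator code; the exit-status constants 
are copied from org.apache.hadoop.yarn.api.records.ContainerExitStatus so the 
example is self-contained, and the switch mirrors the spirit of YARN's 
Apps.shouldCountTowardsNodeBlacklisting: a node is blacklisted only when the 
exit status plausibly indicates a node-level problem.

```java
// Hypothetical sketch of the relaxed blacklist rule (not Spark's real code).
public class BlacklistPolicy {
    // Values mirror org.apache.hadoop.yarn.api.records.ContainerExitStatus,
    // copied here so the sketch compiles standalone.
    static final int SUCCESS = 0;
    static final int ABORTED = -100;
    static final int DISKS_FAILED = -101;
    static final int PREEMPTED = -102;
    static final int KILLED_EXCEEDED_VMEM = -103;
    static final int KILLED_EXCEEDED_PMEM = -104;
    static final int KILLED_BY_APPMASTER = -105;
    static final int KILLED_BY_RESOURCEMANAGER = -106;
    static final int KILLED_AFTER_APP_COMPLETION = -107;

    /**
     * Returns true only when the exit status plausibly reflects a node
     * problem, following the spirit of YARN's
     * Apps.shouldCountTowardsNodeBlacklisting.
     */
    static boolean shouldCountTowardsBlacklist(int exitStatus) {
        switch (exitStatus) {
            case SUCCESS:
            case PREEMPTED:
            case KILLED_BY_RESOURCEMANAGER:
            case KILLED_BY_APPMASTER:
            case KILLED_AFTER_APP_COMPLETION:
            case ABORTED:
            case KILLED_EXCEEDED_VMEM:
            case KILLED_EXCEEDED_PMEM:
                // By-design or application-side terminations:
                // not the node's fault, so don't penalize the host.
                return false;
            default:
                // e.g. DISKS_FAILED or unknown failures may indicate an
                // unhealthy node, so they count toward the blacklist.
                return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(shouldCountTowardsBlacklist(KILLED_BY_RESOURCEMANAGER));
        System.out.println(shouldCountTowardsBlacklist(DISKS_FAILED));
    }
}
```

Under the current rule, KILLED_BY_RESOURCEMANAGER would blacklist the node; 
under the sketch above it does not, keeping the node available for allocation.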

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
