[
https://issues.apache.org/jira/browse/SPARK-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386789#comment-16386789
]
Attila Zsolt Piros commented on SPARK-16630:
--------------------------------------------
I have checked the existing sources and I would like to open a discussion about
a possible solution.
As far as I have seen, YarnAllocator#processCompletedContainers could be extended
to track the number of failures by host. YarnAllocator is also responsible for
updating the task-level blacklisted nodes with YARN (by calling
AMRMClient#updateBlacklist). So a relatively easy solution would be to keep a
separate counter here (independent of task-level failures) with its own
configured limit, and to update YARN with the union of the task-level and
"allocator-level" blacklisted nodes. What is your opinion?
> Blacklist a node if executors won't launch on it.
> -------------------------------------------------
>
> Key: SPARK-16630
> URL: https://issues.apache.org/jira/browse/SPARK-16630
> Project: Spark
> Issue Type: Improvement
> Components: YARN
> Affects Versions: 1.6.2
> Reporter: Thomas Graves
> Priority: Major
>
> On YARN, it's possible that a node is broken or misconfigured such that a
> container won't launch on it. For instance, the Spark external shuffle
> service may not have been loaded on it, or there may be some other hardware
> or Hadoop configuration issue.
> It would be nice if we could recognize this happening and stop trying to launch
> executors on that node, since otherwise we could end up hitting our max number
> of executor failures and killing the job.