[
https://issues.apache.org/jira/browse/YARN-3416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
mai shurong updated YARN-3416:
------------------------------
Description:
I submitted a big job, with 500 maps and 350 reduces, to a queue
(fairscheduler) with a maximum of 300 cores. By the time the job reaches 100%
map completion, 300 reduces have occupied all 300 cores in the queue. Then a
map fails and is retried, waiting for a core, while the 300 reduces wait for
the failed map to finish. So a deadlock occurs. As a result, the job is
blocked, and later jobs in the queue cannot run because no cores are available
in the queue.
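The stuck state above can be sketched as a simple condition check. This is a
minimal, hypothetical model (not YARN or fair-scheduler code; the function and
parameter names are illustrative only): reduces each hold one core in a capped
queue, and a retried map needs a free core that can never appear because the
reduces are waiting on that map's output.

```python
def is_deadlocked(max_cores, running_reduces, pending_map_retries):
    """Detect the reported cycle: reduces hold every core in the queue,
    a failed map needs a core to rerun, and the reduces cannot finish
    without that map's output."""
    cores_held = running_reduces          # one core per running reduce task
    free_cores = max_cores - cores_held
    # Deadlock: a map is waiting for a core, and no core can ever free up.
    return pending_map_retries > 0 and free_cores <= 0

# Scenario from the report: 300-core queue, 300 reduces running, 1 failed map.
print(is_deadlocked(300, 300, 1))   # the retried map can never get a core
```

A scheduler-side fix would have to break this cycle, e.g. by preempting a
reduce or reserving headroom for map retries, so that `free_cores` can become
positive again.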
was:
I submitted a big job, with 500 maps and 350 reduces, to a queue
(fairscheduler) with a maximum of 300 cores. By the time the job reaches 100%
map completion, 300 reduces have occupied all 300 cores in the queue. Then a
map fails and is retried, waiting for a core, while the 300 reduces wait for
the failed map to finish. So a deadlock occurs, the job is blocked, and later
jobs in the queue cannot run because no cores are available in the queue.
> deadlock in a job between map and reduce core allocation
> ---------------------------------------------------------
>
> Key: YARN-3416
> URL: https://issues.apache.org/jira/browse/YARN-3416
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 2.6.0
> Reporter: mai shurong
>
> I submitted a big job, with 500 maps and 350 reduces, to a queue
> (fairscheduler) with a maximum of 300 cores. By the time the job reaches
> 100% map completion, 300 reduces have occupied all 300 cores in the queue.
> Then a map fails and is retried, waiting for a core, while the 300 reduces
> wait for the failed map to finish. So a deadlock occurs. As a result, the
> job is blocked, and later jobs in the queue cannot run because no cores are
> available in the queue.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)