[ https://issues.apache.org/jira/browse/MAPREDUCE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14940517#comment-14940517 ]

Karthik Kambatla commented on MAPREDUCE-6302:
---------------------------------------------

Preempting reducers to run mappers doesn't always lead to higher throughput. 
A preempted reducer loses the map outputs it has already copied and, once 
rescheduled, might spend more time re-copying them from every mapper than the 
mappers in question take to run. That said, I understand preemption will 
likely make sense in the vast majority of cases. 
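
To make the trade-off concrete, here is a rough, purely illustrative 
heuristic (not something from the patches on this JIRA): preemption only pays 
off when the blocked mappers take longer to run than the preempted reducer 
would spend re-fetching its map outputs after a restart.

{code:java}
// Illustrative only: these estimates are hypothetical inputs, not
// quantities the MR AM actually tracks today.
class PreemptionTradeoff {
  static boolean preemptionLikelyPaysOff(long pendingMapRuntimeMs,
                                         long reducerRecopyCostMs) {
    // Preempting hurts throughput when re-fetching all map outputs after
    // a restart costs more than the blocked maps take to run.
    return reducerRecopyCostMs < pendingMapRuntimeMs;
  }
}
{code}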

I propose we do the following:
# In this JIRA, let us just fix starvation. Stick to the logic of preempting 
just enough resources to run one mapper (see the sketch after this list).
# In follow-up JIRA(s), let us improve this preemption to
## preempt reducers until we are able to meet the slowstart threshold
## prioritize preempting reducers that are still in the SHUFFLE phase, as 
Jason mentioned
## add an option to never preempt reducers that are past the SHUFFLE phase, 
irrespective of slowstart, as long as one mapper can run
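
A minimal sketch of item 1, plus the SHUFFLE-first ordering from item 2, 
assuming hypothetical stand-in types (ReducerInfo, Phase); the real logic 
lives in the MR AM's container allocator, and this is not the patch:

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Purely illustrative; ReducerInfo and Phase are stand-ins, not the
// actual types used by the MR AM's container allocator.
class ReducerPreemptionSketch {

  enum Phase { SHUFFLE, SORT, REDUCE }  // declaration order = preference order

  static class ReducerInfo {
    Phase phase;
    int memoryMb;
    int vcores;
  }

  // Pick just enough reducers to free what one mapper needs, preferring
  // reducers still in SHUFFLE since they lose the least completed work.
  static List<ReducerInfo> pickVictims(List<ReducerInfo> running,
                                       int mapMemoryMb, int mapVcores) {
    running.sort(Comparator.comparing((ReducerInfo r) -> r.phase));
    List<ReducerInfo> victims = new ArrayList<>();
    int freedMem = 0, freedCores = 0;
    for (ReducerInfo r : running) {
      if (freedMem >= mapMemoryMb && freedCores >= mapVcores) {
        break;  // one mapper's worth is enough for the starvation fix
      }
      victims.add(r);
      freedMem += r.memoryMb;
      freedCores += r.vcores;
    }
    return victims;
  }
}
{code}

Item 2.1 would then generalize the stopping condition from "one mapper's 
worth" to "enough to reach the slowstart threshold".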

> Incorrect headroom can lead to a deadlock between map and reduce allocations 
> -----------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6302
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6302
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>            Assignee: Karthik Kambatla
>            Priority: Critical
>         Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz, 
> log.txt, mr-6302-1.patch, mr-6302-2.patch, mr-6302-3.patch, mr-6302-4.patch, 
> mr-6302-prelim.patch, queue_with_max163cores.png, queue_with_max263cores.png, 
> queue_with_max333cores.png
>
>
> I submitted a big job, with 500 maps and 350 reduces, to a fair-scheduler 
> queue with a maximum of 300 cores. Once the job's maps had all finished, 300 
> reduces occupied all 300 cores in the queue. Then a map failed and was 
> retried, waiting for a core, while the 300 reduces were waiting for the 
> failed map to finish: a deadlock. As a result, the job was blocked, and 
> later jobs in the queue could not run because no cores were available in 
> the queue.
> I think there is a similar issue for the memory of a queue.


