[ https://issues.apache.org/jira/browse/MAPREDUCE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945760#comment-14945760 ]

Anubhav Dhoot commented on MAPREDUCE-6302:
------------------------------------------

Should we make MR_JOB_REDUCER_UNCONDITIONAL_PREEMPT_DELAY_SEC and 
MR_JOB_REDUCER_PREEMPT_DELAY_SEC consistent in how they treat negative 
values?
Today MR_JOB_REDUCER_PREEMPT_DELAY_SEC treats a negative value the same as 
zero, which does not allow you to turn it off, while the newly proposed 
MR_JOB_REDUCER_UNCONDITIONAL_PREEMPT_DELAY_SEC uses a negative value as a way 
to turn off preemption. The latter seems preferable, and since the default is 
zero and the doc does not mention negative values, I think it should be OK to 
change this behavior. Thoughts?

It's better to reword
 // Duration to wait before forcibly preempting a reducer when there is room
to
 // Duration to wait before forcibly preempting a reducer irrespective of whether there is room
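
A minimal sketch of the proposed semantics (hypothetical class, field, and 
method names, not the actual RMContainerAllocator code), assuming the 
existing delay keeps clamping negatives to zero while the new delay treats a 
negative value as "disabled":

  // Sketch only: illustrates how the two configs could treat negative values.
  public class ReducerPreemptionDelays {

    // Corresponds to MR_JOB_REDUCER_PREEMPT_DELAY_SEC: negative is treated
    // the same as zero, so this preemption cannot be turned off.
    private final int preemptDelaySec;

    // Corresponds to MR_JOB_REDUCER_UNCONDITIONAL_PREEMPT_DELAY_SEC: the raw
    // value is kept, so a negative setting means "never preempt unconditionally".
    private final int unconditionalPreemptDelaySec;

    public ReducerPreemptionDelays(int preemptDelaySec,
                                   int unconditionalPreemptDelaySec) {
      this.preemptDelaySec = Math.max(0, preemptDelaySec);
      this.unconditionalPreemptDelaySec = unconditionalPreemptDelaySec;
    }

    // Duration to wait before forcibly preempting a reducer irrespective of
    // whether there is room; disabled when configured negative.
    boolean shouldUnconditionallyPreempt(long secondsSinceMapsStarved) {
      return unconditionalPreemptDelaySec >= 0
          && secondsSinceMapsStarved >= unconditionalPreemptDelaySec;
    }

    // Duration to wait before forcibly preempting a reducer when there is room.
    boolean shouldPreemptWhenRoomExists(long secondsSinceMapsStarved) {
      return secondsSinceMapsStarved >= preemptDelaySec;
    }
  }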


> Incorrect headroom can lead to a deadlock between map and reduce allocations 
> -----------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6302
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6302
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>            Assignee: Karthik Kambatla
>            Priority: Critical
>         Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz, 
> log.txt, mr-6302-1.patch, mr-6302-2.patch, mr-6302-3.patch, mr-6302-4.patch, 
> mr-6302-5.patch, mr-6302-prelim.patch, queue_with_max163cores.png, 
> queue_with_max263cores.png, queue_with_max333cores.png
>
>
> I submitted a big job, with 500 maps and 350 reduces, to a queue 
> (FairScheduler) with a maximum of 300 cores. When the job had run 100% of 
> its maps, the 300 running reduces had occupied all 300 cores of the queue. 
> Then a map failed and was retried, waiting for a core, while the 300 
> reduces were waiting for the failed map to finish, so a deadlock occurred. 
> As a result, the job was blocked, and later jobs in the queue could not run 
> because no cores were available in the queue.
> I think there is a similar issue for the memory of a queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
