[
https://issues.apache.org/jira/browse/MAPREDUCE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Lowe updated MAPREDUCE-5689:
----------------------------------
Fix Version/s: 0.23.11
Thanks Lohit and Karthik! I pulled this into branch-0.23 as well.
> MRAppMaster does not preempt reducers when scheduled maps cannot be fulfilled
> -----------------------------------------------------------------------------
>
> Key: MAPREDUCE-5689
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5689
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 3.0.0, 2.2.0
> Reporter: Lohit Vijayarenu
> Assignee: Lohit Vijayarenu
> Priority: Critical
> Fix For: 3.0.0, 0.23.11, 2.3.0
>
> Attachments: MAPREDUCE-5689.1.patch, MAPREDUCE-5689.2.patch
>
>
> We saw a corner case where jobs running on a cluster were hung. The scenario
> was as follows: a job was running within a pool that was at its capacity.
> All available containers were occupied by reducers and the last 2 mappers,
> with a few more reducers waiting in the pipeline to be scheduled. At this
> point the two running mappers failed and went back to the scheduled state.
> The two freed containers were assigned to reducers, so the whole pool was
> now full of reducers waiting on the two maps to complete. Those 2 maps
> never got scheduled because the pool was full.
> Ideally, reducer preemption should have kicked in to make room for the
> mappers via this code in RMContainerAllocator:
> {code}
> int completedMaps = getJob().getCompletedMaps();
> int completedTasks = completedMaps + getJob().getCompletedReduces();
> if (lastCompletedTasks != completedTasks) {
>   lastCompletedTasks = completedTasks;
>   recalculateReduceSchedule = true;
> }
> if (recalculateReduceSchedule) {
>   preemptReducesIfNeeded();
>   ...
> }
> {code}
> But in this scenario lastCompletedTasks always equals completedTasks because
> no maps were completing, so the job would hang forever. As a workaround, if
> we killed a few reducers, the mappers got scheduled and the job completed.
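The failure mode above can be sketched in a minimal, self-contained simulation. The class and method names are illustrative only (not the actual RMContainerAllocator internals), and the "hardened" trigger is one possible approach, not necessarily the committed patch: it also recalculates whenever maps are waiting to be scheduled, so failed maps that re-enter the scheduled state still cause reducer preemption to be considered.

```java
// Minimal simulation of the recalculation trigger quoted above.
// Names are illustrative, not the real RMContainerAllocator internals.
public class PreemptTriggerSketch {
    private int lastCompletedTasks = -1;

    // Original trigger: recalculate only when the completed-task count moves.
    public boolean originalTrigger(int completedMaps, int completedReduces) {
        int completedTasks = completedMaps + completedReduces;
        if (lastCompletedTasks != completedTasks) {
            lastCompletedTasks = completedTasks;
            return true; // would lead to preemptReducesIfNeeded()
        }
        return false;
    }

    // Hardened trigger (one possible fix): also recalculate whenever maps
    // are waiting to be scheduled, even if no task has completed since the
    // last heartbeat.
    public boolean hardenedTrigger(int completedMaps, int completedReduces,
                                   int scheduledMaps) {
        return originalTrigger(completedMaps, completedReduces)
                || scheduledMaps > 0;
    }

    public static void main(String[] args) {
        PreemptTriggerSketch s = new PreemptTriggerSketch();
        // Heartbeat 1: completed counts change, so the trigger fires.
        System.out.println(s.originalTrigger(10, 5));    // true
        // Heartbeat 2: two maps failed and went back to "scheduled";
        // completed counts are unchanged, so the original trigger stays
        // false and the job hangs with the pool full of reducers.
        System.out.println(s.originalTrigger(10, 5));    // false
        // The hardened trigger still fires because 2 maps are scheduled.
        System.out.println(s.hardenedTrigger(10, 5, 2)); // true
    }
}
```

The key point is that the original condition conflates "nothing changed" with "no recalculation needed", while map failures change the scheduling picture without moving the completed-task count.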
--
This message was sent by Atlassian JIRA
(v6.2#6252)