[ https://issues.apache.org/jira/browse/MAPREDUCE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13861698#comment-13861698 ]

Hudson commented on MAPREDUCE-5689:
-----------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #4954 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4954/])
MAPREDUCE-5689. MRAppMaster does not preempt reducers when scheduled maps 
cannot be fulfilled. (lohit via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1555161)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java


> MRAppMaster does not preempt reducers when scheduled maps cannot be fulfilled
> -----------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5689
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5689
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Lohit Vijayarenu
>            Assignee: Lohit Vijayarenu
>            Priority: Critical
>             Fix For: 3.0.0, 2.4.0
>
>         Attachments: MAPREDUCE-5689.1.patch, MAPREDUCE-5689.2.patch
>
>
> We saw a corner case where jobs running on the cluster were hung. The 
> scenario was as follows: a job was running within a pool that was at 
> capacity. All available containers were occupied by reducers and the last 2 
> mappers, and a few more reducers were waiting in the pipeline to be 
> scheduled. At this point the two running mappers failed and went back to the 
> scheduled state. The two freed containers were assigned to reducers, so the 
> whole pool was now full of reducers waiting on two maps to complete. Those 
> 2 maps never got scheduled because the pool was full.
> Ideally, reducer preemption should have kicked in to make room for the 
> mappers via this code in RMContainerAllocator:
> {code}
> int completedMaps = getJob().getCompletedMaps();
> int completedTasks = completedMaps + getJob().getCompletedReduces();
> if (lastCompletedTasks != completedTasks) {
>   lastCompletedTasks = completedTasks;
>   recalculateReduceSchedule = true;
> }
> if (recalculateReduceSchedule) {
>   preemptReducesIfNeeded();
> {code}
> But in this scenario lastCompletedTasks always equals completedTasks, 
> because no maps ever completed. This caused the job to hang forever. As a 
> workaround, if we killed a few reducers, the mappers got scheduled and the 
> job completed.
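Per the commit message, the fix makes the AM preempt reducers when scheduled maps cannot be fulfilled, rather than only when the completed-task count changes. The following is a minimal standalone sketch of that idea, not the actual patch; the class name, the `shouldPreempt` method, and the `scheduledMaps` parameter are hypothetical stand-ins for the AM's internal state:

```java
// Simplified illustration of the preemption trigger described above.
// Original guard: recalculate the reduce schedule only when the number of
// completed tasks changes. The hang occurs because failed-and-rescheduled
// maps change nothing in that count. The sketch adds a second trigger:
// any map stuck in the scheduled state also forces a recalculation.
public class PreemptionSketch {
  private int lastCompletedTasks = 0;
  private boolean recalculateReduceSchedule = false;

  // completedMaps/completedReduces/scheduledMaps stand in for values the
  // real allocator reads from the job and its pending-request table.
  public boolean shouldPreempt(int completedMaps, int completedReduces,
                               int scheduledMaps) {
    int completedTasks = completedMaps + completedReduces;
    if (lastCompletedTasks != completedTasks) {
      lastCompletedTasks = completedTasks;
      recalculateReduceSchedule = true;
    }
    // The essence of the fix: unfulfilled scheduled maps alone should
    // trigger a recalculation, even when no task has completed.
    if (scheduledMaps > 0) {
      recalculateReduceSchedule = true;
    }
    return recalculateReduceSchedule;
  }

  public static void main(String[] args) {
    PreemptionSketch s = new PreemptionSketch();
    // No tasks have completed since the last check, but 2 maps are stuck
    // in the scheduled state; with the extra guard this still returns true.
    System.out.println(s.shouldPreempt(0, 0, 2)); // prints "true"
  }
}
```

Under the original guard alone, the same call would return false and `preemptReducesIfNeeded()` would never run, reproducing the hang described in the report.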



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
