[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681045#comment-14681045
 ] 

Ben Podgursky commented on MAPREDUCE-5817:
------------------------------------------

Option (2) sounds a lot trickier (and more of a risk tradeoff) because tasks do 
realistically die or get preempted after copying, and it could hurt runtime 
*not* to restart tasks when that happens.  So you'd want that to be 
configurable either way.  In an ideal world, I would just kill all map tasks 
that were still running when the last reduce finished... but I don't know what 
implications that would have for counters, etc.

So I'd vote for (1) for now since I think it's pure upside, and leave (2) for a 
later JIRA since it involves more tradeoffs.
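For what it's worth, here's a rough illustration of the guard I have in mind for 
(1): once every reducer has finished, just skip rescheduling the maps that ran on 
the unusable node.  This is a standalone sketch, not the actual JobImpl code; the 
class and names (RescheduleGuard, mapsToReschedule, completedReducers, 
totalReducers, mapsOnNode) are made up for illustration only.

{code:java}
// Standalone sketch of the option (1) guard.  Names here are illustrative
// and are not the actual JobImpl fields or methods.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RescheduleGuard {

  // Decide which completed map attempts on an unusable node need to be rerun.
  // Once every reducer has finished, the map outputs can never be fetched
  // again, so rescheduling those maps is pure waste.
  public static List<String> mapsToReschedule(List<String> mapsOnNode,
                                              int completedReducers,
                                              int totalReducers) {
    boolean allReducersDone = totalReducers > 0 && completedReducers >= totalReducers;
    if (allReducersDone) {
      return Collections.emptyList();    // option (1): skip rescheduling entirely
    }
    return new ArrayList<>(mapsOnNode);  // current behavior: rerun every map from the node
  }

  public static void main(String[] args) {
    List<String> maps = List.of("attempt_m_000001", "attempt_m_000002");
    // Node becomes unusable while reducers are still running: maps are rerun.
    System.out.println(mapsToReschedule(maps, 3, 10));   // [attempt_m_000001, attempt_m_000002]
    // Node becomes unusable after the last reducer finished: nothing is rerun.
    System.out.println(mapsToReschedule(maps, 10, 10));  // []
  }
}
{code}

In the sketch a map-only job (totalReducers == 0) keeps the current behavior, so 
the guard only kicks in when there actually are reducers to wait on.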

> mappers get rescheduled on node transition even after all reducers are 
> completed
> --------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5817
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5817
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster
>    Affects Versions: 2.3.0
>            Reporter: Sangjin Lee
>            Assignee: Sangjin Lee
>              Labels: BB2015-05-TBR
>         Attachments: mapreduce-5817.patch
>
>
> We're seeing a behavior where a job runs long after all reducers were already 
> finished. We found that the job was rescheduling and running a number of 
> mappers beyond the point of reducer completion. In one situation, the job ran 
> for some 9 more hours after all reducers completed!
> This happens because whenever a node transition (to an unusable state) comes 
> into the app master, it just reschedules all mappers that already ran on the 
> node in all cases.
> Therefore, any node transition has the potential to extend the job's runtime. 
> Once this window opens, another node transition can prolong it, and this can 
> happen indefinitely in theory.
> If there is some instability in the pool (unhealthy, etc.) for a duration, 
> then any big job is severely vulnerable to this problem.
> If all reducers have completed, JobImpl.actOnUnusableNode() should not 
> reschedule mapper tasks: the mapper outputs are no longer needed at that point, 
> and any rescheduled mappers would produce output that is never consumed anyway.



