[ https://issues.apache.org/jira/browse/YARN-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866914#comment-15866914 ]

Chris Douglas commented on YARN-6191:
-------------------------------------

This is related to a 
[discussion|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201702.mbox/%3CCACO5Y4wVm-9_3uES+qVvi2ypzsGTvu9jbEgVfTb79unPH-E=t...@mail.gmail.com%3E]
 on mapreduce-dev@ about the incomplete, work-conserving preemption logic. 
The MR AM should react to a preemption message by killing reducers 
(checkpointing their state, if possible), rather than letting the maps be shot.
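A minimal sketch of that AM-side policy, using hypothetical class and method names (not MapReduce's actual AM API): when asked to give back containers, prefer reducers, whose state can be checkpointed, and fall back to maps only if reducers alone cannot cover the request.

```java
import java.util.*;

// Hypothetical sketch of the suggested AM reaction to a preemption
// message: release reducer containers first instead of letting the RM
// preempt the low-priority maps. Names here are illustrative only.
class PreemptionPolicySketch {
    enum TaskType { MAP, REDUCE }

    static final class Container {
        final String id;
        final TaskType type;
        Container(String id, TaskType type) { this.id = id; this.type = type; }
    }

    // Pick containers to release: reducers first (their state can be
    // checkpointed); maps only if reducers cannot satisfy the request.
    static List<Container> selectForRelease(List<Container> running, int needed) {
        List<Container> picked = new ArrayList<>();
        for (Container c : running) {
            if (picked.size() == needed) break;
            if (c.type == TaskType.REDUCE) picked.add(c);
        }
        for (Container c : running) {
            if (picked.size() == needed) break;
            if (c.type == TaskType.MAP) picked.add(c);
        }
        return picked;
    }
}
```

With two reducers and one map running, a request to release two containers would return only the reducers, leaving the map to finish.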

> CapacityScheduler preemption by container priority can be problematic for 
> MapReduce
> -----------------------------------------------------------------------------------
>
>                 Key: YARN-6191
>                 URL: https://issues.apache.org/jira/browse/YARN-6191
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Jason Lowe
>
> A MapReduce job with thousands of reducers and just a couple of maps left to 
> go was running in a preemptable queue.  Periodically other queues would get 
> busy and the RM would preempt some resources from the job, but it _always_ 
> picked the job's map tasks first because they use the lowest-priority 
> containers.  Even though the reducers had a shorter running time, most were 
> spared, but the maps were always shot.  Since the map tasks ran for longer 
> than the preemption period, the job was in a perpetual preemption loop.
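A toy model of the failure mode described above (names are illustrative, not the CapacityScheduler's API): each preemption round the scheduler picks the container whose priority ranks lowest. Within a MapReduce application a larger container-priority value ranks lower, and maps request the lowest-ranked priority, so the maps are selected every round while the reducers go untouched.

```java
import java.util.*;

// Toy model of container selection by priority. Assumes the convention
// described in the report: within an application, maps run at the
// lowest-ranked container priority, so they are always chosen first.
class PreemptionLoopSketch {
    // Preempt the container whose priority ranks lowest
    // (here modeled as the largest numeric priority value).
    static String pickVictim(Map<String, Integer> priorities) {
        return Collections.max(priorities.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }
}
```

Because the preempted map restarts from scratch and its runtime exceeds the preemption period, the same selection repeats every round, producing the perpetual loop the report describes.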



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
