[ https://issues.apache.org/jira/browse/YARN-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249884#comment-16249884 ]

Sunil G commented on YARN-7469:
-------------------------------

Hi [~eepayne],
This is a nice catch, and a nasty one to debug too.

I think the proposed patch solves the issue. From a broader perspective, I 
think we are lacking a dead zone here; in effect, the minimum container size 
is currently acting as the dead zone. If the user had more control over this, 
more oscillations could probably be avoided. Maybe we can take that up in 
another ticket. A rough sketch of the idea follows below.
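
To illustrate the suggestion, here is a minimal sketch of what a user-configurable dead zone around the user limit could look like. This is not existing YARN code or configuration: `shouldPreemptFor` and `deadZone` are hypothetical names chosen for the example, and the real intra-queue preemption decisions live in the capacity preemption policy, not in a helper like this.

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
import org.apache.hadoop.yarn.util.resource.Resources;

/**
 * Illustrative only: a tunable "dead zone" below the user limit. Today the
 * minimum container size effectively plays this role; the idea is to let the
 * admin widen it so small gaps do not trigger preemption and cause
 * oscillations. None of these names are actual YARN APIs.
 */
public class DeadZoneSketch {

  /**
   * Preempt for a user only when its usage is below (userLimit - deadZone);
   * gaps smaller than the dead zone are tolerated instead of preempted.
   */
  public static boolean shouldPreemptFor(ResourceCalculator rc,
      Resource cluster, Resource userUsed, Resource userLimit,
      Resource deadZone) {
    Resource trigger = Resources.subtract(userLimit, deadZone);
    return Resources.lessThan(rc, cluster, userUsed, trigger);
  }

  public static void main(String[] args) {
    ResourceCalculator rc = new DefaultResourceCalculator();
    Resource cluster = Resource.newInstance(20 * 1024, 20);

    // A user sitting 256MB under an 8GB limit, with a 0.5GB dead zone
    // (one minimum container): no preemption is triggered on its behalf.
    Resource used = Resource.newInstance(7 * 1024 + 768, 8);
    Resource limit = Resource.newInstance(8 * 1024, 8);
    Resource deadZone = Resource.newInstance(512, 1);

    System.out.println(
        shouldPreemptFor(rc, cluster, used, limit, deadZone)); // false
  }
}
{code}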

> Capacity Scheduler Intra-queue preemption: User can starve if newest app is 
> exactly at user limit
> -------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7469
>                 URL: https://issues.apache.org/jira/browse/YARN-7469
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler, yarn
>    Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: UnitTestToShowStarvedUser.patch, YARN-7469.001.patch
>
>
> Queue Configuration:
> - Total Memory: 20GB
> - 2 Queues
> -- Queue1
> --- Memory: 10GB
> --- MULP (minimum-user-limit-percent): 10%
> --- ULF (user-limit-factor): 2.0
> - Minimum Container Size: 0.5GB
> Use Case:
> - User1 submits app1 to Queue1 and consumes 20GB
> - User2 submits app2 to Queue1 and requests 7.5GB
> - Preemption monitor preempts 7.5GB from app1. Capacity Scheduler gives those 
> resources to User2
> - User3 submits app3 to Queue1. To begin with, app3 requests only 1 
> container for its AM.
> - Preemption monitor never preempts a container.
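
To make the quoted scenario's numbers concrete, here is a small illustrative sketch (plain Java, not CapacityScheduler code; all class and variable names are mine). With Queue1 at 10GB and a user-limit-factor of 2.0, a single user may grow to 20GB, i.e. the whole cluster; after 7.5GB is preempted for User2, the cluster is still fully allocated (12.5GB + 7.5GB), so app3's single 0.5GB AM container can only be satisfied by further preemption, which per the report never happens.

{code:java}
/**
 * Illustrative arithmetic only -- not CapacityScheduler code. Walks the
 * reported scenario to show why app3 stays starved once preemption stops.
 */
public class StarvedUserScenario {
  public static void main(String[] args) {
    final double clusterGB = 20.0;
    final double queue1GB = 10.0;           // Queue1 capacity
    final double userLimitFactor = 2.0;     // ULF: one user may use 2x queue capacity
    final double minContainerGB = 0.5;      // minimum allocation

    // One user may therefore grow to queue1GB * userLimitFactor = 20GB,
    // i.e. the whole cluster.
    double app1 = queue1GB * userLimitFactor;   // step 1: User1/app1 takes 20GB
    double app2 = 0.0;

    // Steps 2-3: the preemption monitor takes 7.5GB from app1 for User2/app2.
    app1 -= 7.5;
    app2 += 7.5;
    double free = clusterGB - (app1 + app2);    // 20 - (12.5 + 7.5) = 0GB free

    // Step 4: User3/app3 asks for one minimum-size AM container (0.5GB).
    double app3Pending = minContainerGB;

    // The cluster is fully allocated, so app3 can only start if the monitor
    // preempts again -- and per the bug report it never does.
    System.out.printf("app1=%.1fGB app2=%.1fGB free=%.1fGB app3 pending=%.1fGB%n",
        app1, app2, free, app3Pending);
  }
}
{code}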


