[ https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15748894#comment-15748894 ]
Eric Payne commented on YARN-5889:
----------------------------------

bq. If we all agree on this behavior (FIFO order comes before user limit), we can use the approach from Jason.

I agree with this statement. I think this is the current scheduler's behavior.

{quote}
I personally do not want preemption to give users resources beyond the minimum. For example, if the MULP is configured to 25% and there are two users in the queue, A at 70% usage and B at 30%, I'd rather not lose work by shooting A's containers to give B resources beyond the configured minimum limit. Preemption can be very costly to an application, so I don't think this should be completely fair (that's the job of the FairScheduler). We should only preempt work to get users up to the minimum limit, and only if others are above the minimum limit. Thoughts on this?
{quote}

I agree. I recognize that if A gives up resources, the current behavior of the scheduler is to give those to B. But preempting A's resources under these circumstances is not what we want, because the cost of lost work is high.

> Improve user-limit calculation in capacity scheduler
> ----------------------------------------------------
>
>          Key: YARN-5889
>          URL: https://issues.apache.org/jira/browse/YARN-5889
>      Project: Hadoop YARN
>   Issue Type: Bug
>   Components: capacity scheduler
>     Reporter: Sunil G
>     Assignee: Sunil G
>  Attachments: YARN-5889.v0.patch, YARN-5889.v1.patch, YARN-5889.v2.patch
>
>
> Currently, the user limit is computed during every heartbeat allocation cycle while holding a write lock. To improve performance, this ticket focuses on moving the user-limit calculation out of the heartbeat allocation flow.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
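The preemption rule discussed in the comment above (preempt only enough to bring a user up to the configured minimum-user-limit-percent, and nothing for users already at or above it) can be sketched as follows. This is a minimal illustration under stated assumptions, not the CapacityScheduler's actual code: the class and method names are hypothetical, and usage is represented as integer percentages of queue capacity for clarity.

```java
// Sketch of the "preempt only up to the minimum user limit" rule.
// Hypothetical helper, not a real CapacityScheduler API.
public class UserLimitPreemption {

    /**
     * Amount (in percent of queue capacity) to preempt on behalf of a user,
     * given the user's current usage and the configured minimum user limit
     * (MULP). A user at or above the minimum receives nothing via preemption,
     * even if another user holds more of the queue.
     */
    static int amountToPreempt(int userUsagePct, int minUserLimitPct) {
        // Preempt only the shortfall below the minimum, never beyond it.
        return Math.max(0, minUserLimitPct - userUsagePct);
    }

    public static void main(String[] args) {
        final int mulp = 25; // MULP configured to 25%, as in the example above

        // A at 70%, B at 30%: B is already above the minimum, so nothing is
        // preempted from A, even though A holds more than B.
        System.out.println(amountToPreempt(30, mulp)); // prints 0

        // If B were instead at 10%, preempt only the 15% needed to reach MULP.
        System.out.println(amountToPreempt(10, mulp)); // prints 15
    }
}
```

This matches the position in the thread: fairness beyond the minimum is the FairScheduler's job, so the cost of killing A's containers is paid only to satisfy the configured minimum, not to equalize usage.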