[ https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511694#comment-16511694 ]

Eric Payne commented on YARN-8379:
----------------------------------

[~Zian Chen], I attached the confs I used to create my pseudo cluster. I was 
using patch 003.
{quote}3. The reason we add FifoCandidatesSelector to 
candidatesSelectionPolicies twice is that we want to make conservative 
preemption when we do the balance.
{quote}
I don't see why this is necessary. In 2.8 (and earlier 3.x releases prior to 
YARN-5864), the balancing was done all at once inside the 
{{FifoCandidatesSelector}} by properly adjusting the ideal-assigned values per 
queue and the amounts of resources offered to each queue. Why can't we adjust 
these values to either 1) keep the same behavior or 2) balance the queues, 
depending on the setting of the new property 
({{fairness-balance-queue-after-satisfied.enabled}})?
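To make this concrete, here is a rough single-pass sketch of what I mean. All 
names below are made up for illustration and are not the actual 
CapacityScheduler preemption code; it only shows computing the per-queue 
targets once, gated on the new property, instead of running 
{{FifoCandidatesSelector}} a second time.
{code:java}
import java.util.Collection;

// Illustrative sketch only -- not the actual CapacityScheduler code.
class IdealAssignedSketch {

  static class QueueSnapshot {
    final String name;
    final double guaranteed;  // guaranteed share of the cluster, e.g. percent
    final double used;        // current usage
    double idealAssigned;     // target this preemption round works toward

    QueueSnapshot(String name, double guaranteed, double used) {
      this.name = name;
      this.guaranteed = guaranteed;
      this.used = used;
    }
  }

  /**
   * One pass over the queues, gated on the new property, for the
   * all-queues-satisfied case discussed here.
   */
  static void computeIdealAssigned(Collection<QueueSnapshot> queues,
                                   boolean balanceSatisfiedQueues) {
    if (!balanceSatisfiedQueues) {
      // Keep today's behavior: a queue already holding at least its
      // guarantee is not trimmed just for the sake of balance.
      for (QueueSnapshot q : queues) {
        q.idealAssigned = Math.max(q.guaranteed, q.used);
      }
      return;
    }

    // Balance: add up the capacity left idle by queues using less than
    // their guarantee, and the guarantees of the busy queues.
    double idle = 0;
    double busyGuaranteed = 0;
    for (QueueSnapshot q : queues) {
      if (q.used < q.guaranteed) {
        idle += q.guaranteed - q.used;
      } else {
        busyGuaranteed += q.guaranteed;
      }
    }

    // Re-offer the idle capacity to the busy queues in proportion to their
    // guarantees, so equal-guarantee queues converge to equal shares.
    for (QueueSnapshot q : queues) {
      if (q.used < q.guaranteed) {
        q.idealAssigned = q.used;  // not asking for more right now
      } else {
        q.idealAssigned = q.guaranteed
            + idle * (q.guaranteed / busyGuaranteed);
      }
    }
  }
}
{code}
With the flag off nothing is trimmed once every queue has its guarantee, and 
with the flag on the over-extended queues are pulled back toward a balanced 
share, all inside a single selector pass.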
{quote}4. The reason for this code is explained in item 3.
{quote}
My question here is: why is {{selectedCandidates}} always returned after the 
first time through the for loop? If that is the intention, the for loop is not 
necessary. It looks like the intention was to return only if containers exist 
in {{selectedCandidates}} (the for loop) AND {{if (!containers.isEmpty())}}. 
Did you want the return to be inside the {{if (!containers.isEmpty())}}?
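To make the question concrete, this is the shape I am reading, reduced to 
placeholders (the types and the "conservative handling" bodies are simplified 
and not the actual patch code):
{code:java}
import java.util.Collection;
import java.util.Map;

// Simplified placeholders only -- not the actual patch code.
class ReturnPlacementSketch {

  // How patch 003 reads to me: the return sits after the if, so the method
  // returns on the very first pass through the loop, whether or not that
  // entry had any containers.
  static <K, C> Map<K, Collection<C>> asWritten(
      Map<K, Collection<C>> selectedCandidates) {
    for (Collection<C> containers : selectedCandidates.values()) {
      if (!containers.isEmpty()) {
        // ... conservative handling ...
      }
      return selectedCandidates;   // always hit on the first iteration
    }
    return selectedCandidates;
  }

  // What the intention appears to be: return early only once a non-empty
  // container list is actually found.
  static <K, C> Map<K, Collection<C>> asIntended(
      Map<K, Collection<C>> selectedCandidates) {
    for (Collection<C> containers : selectedCandidates.values()) {
      if (!containers.isEmpty()) {
        // ... conservative handling ...
        return selectedCandidates;  // return moved inside the if
      }
    }
    return selectedCandidates;
  }
}
{code}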

> Add an option to allow Capacity Scheduler preemption to balance satisfied 
> queues
> --------------------------------------------------------------------------------
>
>                 Key: YARN-8379
>                 URL: https://issues.apache.org/jira/browse/YARN-8379
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Wangda Tan
>            Assignee: Zian Chen
>            Priority: Major
>         Attachments: YARN-8379.001.patch, YARN-8379.002.patch, 
> YARN-8379.003.patch, ericpayne.confs.tgz
>
>
> The existing capacity scheduler only supports preemption to let an 
> under-utilized queue reach its guaranteed resources. In addition to that, 
> there is a requirement to get a better balance between queues when all of 
> them have reached their guaranteed resources but hold different amounts of 
> resources beyond the guarantee.
> For example, take 3 queues with capacities queue_a = 30%, queue_b = 30%, 
> queue_c = 40%. At time T, queue_a is using 30% and queue_b is using 70%. 
> Existing scheduler preemption won't happen, but this is unfair to queue_a 
> since it has the same guaranteed resources as queue_b.
> Before YARN-5864, the capacity scheduler did additional preemption to 
> balance queues. We changed that logic since it could preempt too many 
> containers between queues when all queues were satisfied.
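
Worked numbers for the example above, assuming "balance" means splitting the 
capacity queue_c leaves idle between the two equal-guarantee queues (that 
split is an assumption, not something stated in the issue):
{code:java}
// Worked numbers for the 30/30/40 example; the proportional split of idle
// capacity is an assumption about what "balance" would target.
public class BalanceExample {
  public static void main(String[] args) {
    double guarA = 30, guarB = 30, guarC = 40;  // guaranteed capacity, percent
    double usedA = 30, usedB = 70, usedC = 0;   // usage at time T

    // queue_c leaves its whole 40% idle; queue_a and queue_b have equal
    // guarantees, so a balanced split gives each half of that idle 40%.
    double idle = guarC - usedC;                              // 40
    double targetA = guarA + idle * guarA / (guarA + guarB);  // 50
    double targetB = guarB + idle * guarB / (guarA + guarB);  // 50

    // Balancing preemption would trim queue_b from 70% down to 50% and let
    // queue_a grow from 30% to 50%; today's preemption does nothing here.
    System.out.printf("queue_a: %.0f%% -> %.0f%%, queue_b: %.0f%% -> %.0f%%%n",
        usedA, targetA, usedB, targetB);
  }
}
{code}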


