[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil G updated YARN-2113:
--------------------------
    Attachment: YARN-2113.0013.patch

[~eepayne]
Thank you very much.

I have some more thoughts to add here. The problem you mentioned is already 
partially handled in 
{{FifoIntraQueuePreemptionPlugin#validateOutSameAppPriorityFromDemand}}:
{code}
+            // Ideally if any application has a higher priority, then it can
+            // force to preempt any lower priority app from any user. However
+            // if admin enforces user-limit over priority, preemption module
+            // will not choose lower priority apps from users who have not
+            // yet met their user-limit.
+            TempUserPerPartition tmpUser = usersPerPartition
+                .get(apps[lPriority].getUser());
+            if ((!apps[hPriority].getUser().equals(apps[lPriority].getUser()))
+                && (tmpUser.isUserLimitReached(rc, cluster) == false)
+                && (intraQueuePreemptionOrder
+                    .equals(IntraQueuePreemptionOrder.USERLIMIT_FIRST))) {
+              continue;
+            }
{code}

This means that we will not preempt resources from other users who are still 
under their user-limit. With the priority_first order, however, resources could 
be preempted from such users.
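
To make the two orders concrete, here is a tiny standalone sketch (not the 
patch code; the class, method and user names below are made up purely for 
illustration) of the same skip decision:
{code}
// Minimal, self-contained sketch of the skip rule quoted above.
// All names and values here are illustrative assumptions, not patch code.
public class PreemptionOrderSketch {
  enum Order { USERLIMIT_FIRST, PRIORITY_FIRST }

  /**
   * Skip the lower priority app when: the two apps belong to different users,
   * the victim's user is still under its user-limit, and the admin configured
   * the userlimit_first order.
   */
  static boolean skipLowerPriorityApp(String demandingUser, String victimUser,
      boolean victimUserLimitReached, Order order) {
    return !demandingUser.equals(victimUser)
        && !victimUserLimitReached
        && order == Order.USERLIMIT_FIRST;
  }

  public static void main(String[] args) {
    // user2 is under its user-limit; user1 has a higher priority app demanding resources.
    System.out.println(skipLowerPriorityApp("user1", "user2", false, Order.USERLIMIT_FIRST)); // true: skip
    System.out.println(skipLowerPriorityApp("user1", "user2", false, Order.PRIORITY_FIRST));  // false: may preempt
  }
}
{code}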

I guess the [latest issue mentioned by 
you|https://issues.apache.org/jira/browse/YARN-2113?focusedCommentId=15985032&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15985032] 
is caused only by the logic added to find the FAT container that would bring 
the usage down under the UL. To identify such a fat container, I think we must 
iterate through all containers.
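
For reference, a rough sketch of what iterating through all containers to pick 
such a fat container could look like (not the patch code; the 
{{SimpleContainer}} type and the memory-only accounting are simplifying 
assumptions):
{code}
import java.util.List;

// Rough sketch only: find the smallest container whose removal alone would
// bring the user's usage to or below its user-limit.
public class FatContainerSketch {
  record SimpleContainer(String id, long memoryMB) {}

  static SimpleContainer findFatContainer(List<SimpleContainer> userContainers,
      long usedMB, long userLimitMB) {
    long overMB = usedMB - userLimitMB;
    if (overMB <= 0) {
      return null; // already at or under the user-limit
    }
    SimpleContainer best = null;
    for (SimpleContainer c : userContainers) { // must scan all of the user's containers
      if (c.memoryMB() >= overMB
          && (best == null || c.memoryMB() < best.memoryMB())) {
        best = c; // smallest container that still brings usage under UL
      }
    }
    return best;
  }

  public static void main(String[] args) {
    List<SimpleContainer> cs = List.of(
        new SimpleContainer("c1", 2048),
        new SimpleContainer("c2", 8192),
        new SimpleContainer("c3", 4096));
    // User at 25GB with a 20GB user-limit: only c2 (8GB) can cover the 5GB excess.
    System.out.println(findFatContainer(cs, 25 * 1024, 20 * 1024));
  }
}
{code}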

Without the logic added to find the FAT container, priority-based preemption 
still works even though the order is set to userlimit-first. Hence, in my 
latest patch, I tried to handle this specific case step by step.

# In the first pass of preemption, we could find that one user "user1" has 25GB 
of used resources while its UL (after calculation) is 20GB. Given the demand 
from other users, we have to preempt 4GB (avoiding the last container). There 
could also be a higher priority app within "user1" that has more demand, but we 
won't consider this in round 1.
# In the second or later passes, once the user-limit is normalized, we could 
preempt for that high priority app within "user1" itself (see the rough sketch 
after this list). This is possible through Step 3) mentioned in my earlier 
comment; the v12 patch was handling the same.
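
To put rough numbers on these passes, a purely illustrative sketch (not the 
preemption monitor code; the 1GB container size and the 2GB high-priority 
demand are assumptions):
{code}
// Illustrative sketch of the two passes described above; numbers follow the
// example (25GB used, 20GB user-limit), 1GB containers are an assumption.
public class MultiPassSketch {
  public static void main(String[] args) {
    long usedGB = 25, userLimitGB = 20, containerGB = 1;

    // Pass 1: preempt from user1 for other users' demand, but keep the last
    // container whose removal would push usage below the user-limit.
    long overGB = usedGB - userLimitGB;   // 5GB over the computed UL
    long pass1GB = overGB - containerGB;  // 4GB preempted, last container kept
    usedGB -= pass1GB;
    System.out.println("Pass 1: preempt " + pass1GB + "GB; user1 now at " + usedGB + "GB");

    // Pass 2+: once user-limits are normalized, demand from a higher priority
    // app *within* user1 can be served by preempting user1's own lower
    // priority containers (Step 3 from the earlier comment).
    long highPriorityDemandGB = 2; // assumed demand inside user1
    System.out.println("Pass 2+: up to " + highPriorityDemandGB
        + "GB may be preempted from user1's own lower priority apps");
  }
}
{code}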

Attaching the v13 patch with better refactoring and test cases. Please share 
your thoughts.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---------------------------------------------------------------
>
>                 Key: YARN-2113
>                 URL: https://issues.apache.org/jira/browse/YARN-2113
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: scheduler
>            Reporter: Vinod Kumar Vavilapalli
>            Assignee: Sunil G
>         Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



