[ https://issues.apache.org/jira/browse/YARN-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16161546#comment-16161546 ]

Eric Payne commented on YARN-7149:
----------------------------------

bq. You could check the unit test code to see if that matches your expectation.
I see that the patch for YARN-5889 needed to change the expected headroom values in 
{{TestLeafQueue}} for the assertions in {{testComputeUserLimitAndSetHeadroom}} and 
{{testHeadroomWithMaxCap}}:
{code}
@@ -1123,9 +1129,9 @@ public void testComputeUserLimitAndSetHeadroom() throws IOException {
     //testcase3 still active - 2+2+6=10
     assertEquals(10*GB, qb.getUsedResources().getMemorySize());
     //app4 is user 0
-    //maxqueue 16G, userlimit 13G, used 8G, headroom 5G
+    //maxqueue 16G, userlimit 7G, used 8G, headroom 0G
     //(8G used is 6G from this test case - app4, 2 from last test case, app_1)
-    assertEquals(5*GB, app_4.getHeadroom().getMemorySize());
+    assertEquals(0*GB, app_4.getHeadroom().getMemorySize());
   }

   @Test
@@ -1309,8 +1315,8 @@ public void testHeadroomWithMaxCap() throws Exception {
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemorySize());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemorySize());
     // TODO, fix headroom in the future patch
-    assertEquals(1*GB, app_0.getHeadroom().getMemorySize());
-      // User limit = 4G, 2 in use
+    assertEquals(0*GB, app_0.getHeadroom().getMemorySize());
+      // User limit = 2G, 2 in use
     assertEquals(0*GB, app_1.getHeadroom().getMemorySize());
       // the application is not yet active

@@ -1322,15 +1328,15 @@ public void testHeadroomWithMaxCap() throws Exception {
     assertEquals(3*GB, a.getUsedResources().getMemorySize());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemorySize());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemorySize());
-    assertEquals(1*GB, app_0.getHeadroom().getMemorySize()); // 4G - 3G
-    assertEquals(1*GB, app_1.getHeadroom().getMemorySize()); // 4G - 3G
+    assertEquals(0*GB, app_0.getHeadroom().getMemorySize()); // 2G limit, 3G used, clamped to 0
+    assertEquals(0*GB, app_1.getHeadroom().getMemorySize()); // 2G limit, 3G used, clamped to 0

     // Submit requests for app_1 and set max-cap
     a.setMaxCapacity(.1f);
     app_2.updateResourceRequests(Collections.singletonList(
{code} 
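
For what it's worth, the formula implied by the updated assertions appears to be headroom = max(0, min(userLimit, queueMaxCapacity) - used). A minimal sketch of that arithmetic (my own illustration, not the scheduler's actual code; the method name is hypothetical):
{code}
// Hypothetical sketch of the headroom math the updated assertions imply:
//   headroom = max(0, min(userLimit, queueMaxCapacity) - used)
static long computeHeadroomGB(long userLimitGB, long queueMaxCapGB, long usedGB) {
  return Math.max(0L, Math.min(userLimitGB, queueMaxCapGB) - usedGB);
}

// testComputeUserLimitAndSetHeadroom: min(7, 16) - 8 = -1, clamped to 0
// testHeadroomWithMaxCap: user limit 2G with 2G (and later 3G) in use, so 0
{code}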

> Cross-queue preemption sometimes starves an underserved queue
> -------------------------------------------------------------
>
>                 Key: YARN-7149
>                 URL: https://issues.apache.org/jira/browse/YARN-7149
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler
>    Affects Versions: 2.9.0, 3.0.0-alpha3
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: YARN-7149.demo.unit-test.patch
>
>
> In branch-2 and trunk, I am consistently seeing some use cases where 
> cross-queue preemption does not happen when it should. I do not see this in 
> branch-2.8.
> Use Case:
> | | *Size* | *Minimum Container Size* |
> |MyCluster | 20 GB | 0.5 GB |
> | *Queue Name* | *Capacity* | *Absolute Capacity* | *Minimum User Limit Percent (MULP)* | *User Limit Factor (ULF)* |
> |Q1 | 50% = 10 GB | 100% = 20 GB | 10% = 1 GB | 2.0 |
> |Q2 | 50% = 10 GB | 100% = 20 GB | 10% = 1 GB | 2.0 |
> - {{User1}} launches {{App1}} in {{Q1}} and consumes all resources (20 GB)
> - {{User2}} launches {{App2}} in {{Q2}} and requests 10 GB
> - _Note: containers are 0.5 GB._
> - Preemption monitor kills 2 containers (equals 1 GB) from {{App1}} in {{Q1}}.
> - Capacity Scheduler assigns 2 containers (equals 1 GB) to {{App2}} in {{Q2}}.
> - _No more containers are ever preempted, even though {{Q2}} is far underserved_.
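
Working through the numbers in that description (a back-of-the-envelope sketch under the stated configuration, not scheduler code):
{code}
// Illustrative arithmetic for the scenario above.
long clusterGB      = 20;
long q2GuaranteeGB  = clusterGB * 50 / 100;         // Q2 capacity: 50% = 10 GB
long userLimitCapGB = (long) (q2GuaranteeGB * 2.0); // ULF 2.0 allows one user up to 20 GB,
                                                    // which is how User1 filled the cluster
long q2UsedGB       = 1;                            // 2 containers x 0.5 GB after preemption
long shortfallGB    = q2GuaranteeGB - q2UsedGB;     // 9 GB: Q2 is still far underserved
{code}
After the first round, {{Q2}} holds only 1 GB of its 10 GB guarantee, yet nothing further is preempted from {{Q1}}.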



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
