[ https://issues.apache.org/jira/browse/YARN-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16509074#comment-16509074 ]

zhuqi commented on YARN-7467:
-----------------------------

[~templedf], thanks for your comment; I should of course improve my code.
 * I only work with memory because the original computeShares() function only 
uses the memory resource type in its computation, and I want to match that 
behavior. The MEMORY here is the resource type:

           ComputeFairShares.computeShares(schedulables, totalResources, MEMORY)
 * I use ceiling because I tested the original implementation against my 
change and found that the original actually produces the ceiling result. For 
example, with the original code, if the total fair share of a queue is 8G 
(8192M) and there are 3 runnable apps in the queue, each app gets 2731M, 
whereas plain integer division would give 2730M, so I added the ceiling to 
match the original behavior.
 * I tried to confirm the ceiling result again today. My test cluster has 
768G = 786432M, and I submitted 7 apps to one queue; each app's fair share is 
112348M. Since 786432 / 7 = 112347.42..., without the ceiling the result would 
be 112347. Here is the test result:

!image-2018-06-12-10-02-25-724.png!
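
For illustration, a minimal, self-contained sketch of the ceiling division described above (the class and variable names are just illustrative, not the actual FSLeafQueue or ComputeFairShares code):

{code:java}
// Sketch only: shows the difference between floor and ceiling division
// for the 768G / 7 apps case from the test above.
public class CeilShareExample {
    public static void main(String[] args) {
        long totalMemoryMb = 786432L; // 768G test cluster
        int numApps = 7;

        // Plain integer division floors: 786432 / 7 = 112347
        long floorShare = totalMemoryMb / numApps;

        // Ceiling division matches the original computeShares() result: 112348
        long ceilShare = (totalMemoryMb + numApps - 1) / numApps;

        System.out.println(floorShare + " vs " + ceilShare); // 112347 vs 112348
    }
}
{code}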

Thanks.

> FSLeafQueue unnecessarily calls ComputeFairShares.computeShare() to calculate 
> fair share for apps
> -------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7467
>                 URL: https://issues.apache.org/jira/browse/YARN-7467
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: fairscheduler
>    Affects Versions: 3.1.0
>            Reporter: Daniel Templeton
>            Assignee: Daniel Templeton
>            Priority: Critical
>
> All apps have the same weight, the same max share (unbounded), and the same 
> min share (none).  There's no reason to call {{computeShares()}} at all.  
> Just divide the resources by the number of apps.



