[ https://issues.apache.org/jira/browse/YARN-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yeliang Cang updated YARN-8668:
-------------------------------
    Description: 
We have observed that, with CapacityScheduler and DefaultResourceCalculator, when a node has a large amount of memory and is running a heavy workload, the available vcores of that node can become negative!

I noticed that CapacityScheduler.java uses the code below to decide whether a node has resources available for allocating containers:

{code}
if (calculator.computeAvailableContainers(Resources
    .add(node.getUnallocatedResource(), node.getTotalKillableResources()),
    minimumAllocation) <= 0) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("This node or this node partition doesn't have available or"
        + " killable resource");
  }
  ...
}
{code}
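DefaultResourceCalculator compares resources by memory alone, so this check can pass even when the node has no vcores left. Below is a self-contained sketch of the failure mode (the class, fields, and numbers are hypothetical stand-ins for illustration, not real YARN code): repeated allocations keep passing the memory-only guard while vcores are driven below zero.

{code}
// Hypothetical stand-alone demo of the memory-only availability check;
// the numbers are illustrative and none of these fields are YARN classes.
public class NegativeVcoresDemo {
  static long availMemMB = 100 * 1024; // lots of unallocated memory (100 GB)
  static int availVcores = 4;          // but only 4 vcores left

  // Mirrors DefaultResourceCalculator's behavior: available containers are
  // computed from memory alone; vcores never enter the calculation.
  static long computeAvailableContainers(long memMB, long minAllocMB) {
    return memMB / minAllocMB;
  }

  public static void main(String[] args) {
    long containerMemMB = 1024; // each container asks for 1 GB...
    int containerVcores = 2;    // ...and 2 vcores

    for (int i = 0; i < 4; i++) {
      // This guard never trips: plenty of memory remains on the node.
      if (computeAvailableContainers(availMemMB, 1024) <= 0) {
        break;
      }
      availMemMB -= containerMemMB;
      availVcores -= containerVcores;
    }
    // Prints "available vcores: -4" -- the symptom described above.
    System.out.println("available vcores: " + availVcores);
  }
}
{code}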

while in the FairScheduler, FSAppAttempt.java performs the same check with Resources.fitsIn:

{code}
// Can we allocate a container on this node?
if (Resources.fitsIn(capability, available)) {
  ...
}
{code}
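Resources.fitsIn, by contrast, requires the request to fit in every dimension, so exhausted vcores block further allocation. A rough sketch of its semantics (simplified; the actual implementation also covers extended resource types):

{code}
// Simplified view of the fitsIn contract: "smaller" fits only if it
// fits along both the memory axis and the vcore axis.
static boolean fitsIn(Resource smaller, Resource bigger) {
  return smaller.getMemorySize() <= bigger.getMemorySize()
      && smaller.getVirtualCores() <= bigger.getVirtualCores();
}
{code}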

Why the inconsistency? I think we should use Resources.fitsIn(smaller, bigger) in CapacityScheduler as well!
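For illustration, a minimal sketch of what that change might look like in CapacityScheduler (hypothetical and untested; the rest of the branch body is elided, as in the excerpt above):

{code}
// Hypothetical rewrite of the guard shown earlier: require the minimum
// allocation to fit in *all* resource dimensions, not just memory.
if (!Resources.fitsIn(minimumAllocation,
    Resources.add(node.getUnallocatedResource(),
        node.getTotalKillableResources()))) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("This node or this node partition doesn't have available or"
        + " killable resource");
  }
  // ... skip this node, as the original branch does (body elided)
}
{code}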

 

> Inconsistency between capacity and fair scheduler in the aspect of computing 
> node available resource
> ----------------------------------------------------------------------------------------------------
>
>                 Key: YARN-8668
>                 URL: https://issues.apache.org/jira/browse/YARN-8668
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Yeliang Cang
>            Assignee: Yeliang Cang
>            Priority: Major
>


