[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated
[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Payne updated YARN-11082:
------------------------------
    Fix Version/s:     (was: 3.1.1)

I removed the Fix Version. That field should only be filled in by the committer when they resolve the ticket.

> Use node label resource as denominator to decide which resource is dominated
> ----------------------------------------------------------------------------
>
>                 Key: YARN-11082
>                 URL: https://issues.apache.org/jira/browse/YARN-11082
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler
>    Affects Versions: 3.1.1
>            Reporter: Bo Li
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: YARN-11082.001.patch
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We used cluster resource as the denominator to decide which resource is
> dominant in AbstractCSQueue#canAssignToThisQueue. However, nodes in our
> cluster are configured differently.
> {quote}2021-12-09 10:24:37,069 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1637412555366_1588993_01 container=null queue=root.a.a1.a2 clusterResource= type=RACK_LOCAL requestedPartition=x
> 2021-12-09 10:24:37,069 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue: Used resource= exceeded maxResourceLimit of the queue =
> 2021-12-09 10:24:37,069 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Failed to accept allocation proposal
> {quote}
> We can see that even though root.a.a1.a2 used 687/687 vcores, the following
> check in AbstractCSQueue#canAssignToThisQueue still returns false:
> {quote}Resources.greaterThanOrEqual(resourceCalculator, clusterResource, usedExceptKillable, currentLimitResource)
> {quote}
> clusterResource =
> usedExceptKillable =
> currentLimitResource =
> currentLimitResource:
> memory : 3381248/175117312 = 0.01930847362
> vCores : 687/40222 = 0.01708020486
> usedExceptKillable:
> memory : 3384320/175117312 = 0.01932601615
> vCores : 688/40222 = 0.01710506687
> DRF will consider memory the dominant resource and return false in this scenario.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
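The dominant-share arithmetic in the report can be sketched as follows. The cluster totals and the queue's used/limit values below are the ones quoted above; the node-label totals (labelMem, labelVcores) are hypothetical stand-ins, since the actual partition resources were elided from the log excerpt. This is only an illustration of how the choice of denominator changes which resource DRF treats as dominant, not the actual Hadoop implementation.

```java
public class DrfDenominatorSketch {

    // Dominant share of (mem, vcores) measured against the given totals:
    // the larger of the two per-resource fractions.
    static double dominantShare(long mem, long vcores,
                                long totalMem, long totalVcores) {
        return Math.max((double) mem / totalMem, (double) vcores / totalVcores);
    }

    public static void main(String[] args) {
        long clusterMem = 175117312L, clusterVcores = 40222L; // whole cluster
        long usedMem = 3384320L, usedVcores = 688L;           // usedExceptKillable
        long limitMem = 3381248L, limitVcores = 687L;         // currentLimitResource

        // With the whole cluster as denominator, the memory shares (~0.0193)
        // exceed the vcore shares (~0.0171) for both operands, so DRF ends up
        // comparing memory against memory even though vcores are exhausted.
        double usedShare = dominantShare(usedMem, usedVcores, clusterMem, clusterVcores);
        double limitShare = dominantShare(limitMem, limitVcores, clusterMem, clusterVcores);
        System.out.printf("cluster denominator: used=%.11f limit=%.11f%n",
                usedShare, limitShare);

        // Hypothetical totals for partition "x" (NOT from the report): if the
        // label only holds 687 vcores, the vcore fraction (688/687 > 1.0)
        // dominates, and the comparison is driven by the exhausted resource.
        long labelMem = 4194304L, labelVcores = 687L; // assumed values
        double usedShareLabel = dominantShare(usedMem, usedVcores, labelMem, labelVcores);
        System.out.printf("label denominator: used dominant share=%.4f%n", usedShareLabel);
    }
}
```

With the cluster denominator the two dominant shares differ only in the fifth decimal place (0.01932… vs 0.01930…), which is why a fully used vcore partition barely registers; with a per-label denominator the vcore share immediately exceeds 1.0.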
[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated

[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bo Li updated YARN-11082:
-------------------------
    Description: (edited)
[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated

[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated YARN-11082:
----------------------------------
    Labels: pull-request-available  (was: )
[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated

[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bo Li updated YARN-11082:
-------------------------
    Attachment: YARN-11082.001.patch
[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated

[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bo Li updated YARN-11082:
-------------------------
    Attachment:     (was: YARN-11082.patch)
[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated

[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bo Li updated YARN-11082:
-------------------------
    Target Version/s: 3.1.1
[jira] [Updated] (YARN-11082) Use node label resource as denominator to decide which resource is dominated

[ https://issues.apache.org/jira/browse/YARN-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bo Li updated YARN-11082:
-------------------------
    Description: (edited)