[ https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483530#comment-16483530 ]
Wilfred Spiegelenburg commented on YARN-4677:
---------------------------------------------

This is the exception thrown when the issue happens:
{code:java}
FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling event type NODE_UPDATE to the scheduler
java.lang.NullPointerException
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:892)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1089)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:122)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:709)
	at java.lang.Thread.run(Thread.java:748)
{code}
The NPE is not caught, which triggers the uncaught exception handler and, in turn, the ResourceManager exit. The code is a custom code base patched with a number of things; this is the snippet that matches the specific codebase (the statement starting at line 892 dereferences the result of getSchedulerNode(), which can be null):
{code:java}
891       if (nm.getState() == NodeState.DECOMMISSIONING) {
892         this.rmContext
893             .getDispatcher()
894             .getEventHandler()
895             .handle(
896                 new RMNodeResourceUpdateEvent(nm.getNodeID(), ResourceOption
897                     .newInstance(getSchedulerNode(nm.getNodeID())
898                         .getUsedResource(), 0)));
899       }
{code}

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --------------------------------------------------------------------------
>
>                 Key: YARN-4677
>                 URL: https://issues.apache.org/jira/browse/YARN-4677
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: graceful, resourcemanager, scheduler
>    Affects Versions: 2.7.1
>            Reporter: Brook Zhou
>            Assignee: Wilfred Spiegelenburg
>            Priority: Major
>         Attachments: YARN-4677-branch-2.001.patch, YARN-4677-branch-2.002.patch, YARN-4677.01.patch
>
> When a node is in the decommissioning state, there is a time window between completedContainer() and the RMNodeResourceUpdateEvent being handled in scheduler.nodeUpdate() (YARN-3223). If a scheduling effort happens within this window, a new container can still get allocated on this node. The even worse case is when a scheduling effort happens after the RMNodeResourceUpdateEvent is sent out but before it is propagated to the SchedulerNode: the total resource is then lower than the used resource and the available resource is a negative value.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
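The NPE in the snippet above boils down to a classic check-then-act race: the node can be removed from the scheduler's node map between the DECOMMISSIONING state check and the getSchedulerNode() dereference. A minimal self-contained sketch of the problem and the null-check guard (hypothetical stand-in classes, not the real YARN types or the attached patch):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the scheduler-side node tracking; the real
// FairScheduler keeps a nodes map that another event can mutate.
public class NodeUpdateRaceSketch {
    static class SchedulerNode {
        final int usedResource;
        SchedulerNode(int used) { this.usedResource = used; }
    }

    // Nodes known to the scheduler; a NODE_REMOVED event can delete an
    // entry concurrently with a NODE_UPDATE being processed.
    static final Map<String, SchedulerNode> nodes = new ConcurrentHashMap<>();

    // Mirrors getSchedulerNode(): returns null once the node is gone,
    // which is exactly what the unguarded dereference at line 897 misses.
    static SchedulerNode getSchedulerNode(String nodeId) {
        return nodes.get(nodeId);
    }

    // Guarded version: fetch once, check for null, then use the result.
    // Returns null to mean "node already removed, skip the update".
    static Integer usedResourceForUpdate(String nodeId) {
        SchedulerNode node = getSchedulerNode(nodeId);
        if (node == null) {
            return null; // node removed between events: no NPE, no update
        }
        return node.usedResource;
    }

    public static void main(String[] args) {
        nodes.put("node-1", new SchedulerNode(4096));
        System.out.println(usedResourceForUpdate("node-1")); // 4096
        nodes.remove("node-1"); // simulates the racing removal
        System.out.println(usedResourceForUpdate("node-1")); // null, no NPE
    }
}
```

The key point is fetching the SchedulerNode into a local variable and null-checking it once, rather than calling getSchedulerNode() inline and dereferencing whatever comes back.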