[ https://issues.apache.org/jira/browse/YARN-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andy Skelton updated YARN-2273:
---
Description:
One DN experienced memory errors and entered a cycle of rebooting and rejoining
the cluster. After the second time the node went away, the RM produced this:
{code}
2014-07-09 21:47:36,571 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1404858438119_4352_01 released container container_1404858438119_4352_01_04 on node: host: node-A16-R09-19.hadoop.dfw.wordpress.com:8041 #containers=0 available=<memory:8192, vCores:8> used=<memory:0, vCores:0> with event: KILL
2014-07-09 21:47:36,571 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Removed node node-A16-R09-19.hadoop.dfw.wordpress.com:8041 cluster capacity: <memory:335872, vCores:328>
2014-07-09 21:47:36,571 ERROR org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[ContinuousScheduling,5,main] threw an Exception.
java.lang.NullPointerException
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1044)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1040)
        at java.util.TimSort.countRunAndMakeAscending(TimSort.java:329)
        at java.util.TimSort.sort(TimSort.java:203)
        at java.util.TimSort.sort(TimSort.java:173)
        at java.util.Arrays.sort(Arrays.java:659)
        at java.util.Collections.sort(Collections.java:217)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousScheduling(FairScheduler.java:1012)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.access$600(FairScheduler.java:124)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$2.run(FairScheduler.java:1306)
        at java.lang.Thread.run(Thread.java:744)
{code}
A few flapping cycles later, YARN was crippled: the RM was still running and jobs could be submitted, but containers were never assigned and no progress was made. Restarting the RM resolved it.
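Judging from the stack trace, the NodeAvailableResourceComparator appears to look each node ID back up in the scheduler's node map while the node list is being sorted, so a node that is removed concurrently (as in the "Removed node" log line above) can make that lookup return null mid-sort. The sketch below reproduces that failure mode in isolation; the names NodeInfo, nodes, and byAvailableResource are hypothetical stand-ins chosen for illustration, not the actual FairScheduler internals:
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class NodeSortRaceSketch {

    // Hypothetical stand-in for the scheduler's per-node bookkeeping.
    static class NodeInfo {
        final int availableMemoryMb;
        NodeInfo(int availableMemoryMb) { this.availableMemoryMb = availableMemoryMb; }
    }

    // Shared map of live nodes, mutated when a node registers or is removed.
    static final ConcurrentHashMap<String, NodeInfo> nodes =
        new ConcurrentHashMap<String, NodeInfo>();

    public static void main(String[] args) {
        nodes.put("node-A16-R09-19.hadoop.dfw.wordpress.com:8041", new NodeInfo(8192));
        nodes.put("node-A16-R09-20.hadoop.dfw.wordpress.com:8041", new NodeInfo(4096));

        // Snapshot of node IDs taken before sorting, analogous to the node list
        // that continuousScheduling sorts by available resources.
        List<String> nodeIds = new ArrayList<String>(nodes.keySet());

        // Simulate the flapping node being removed between the snapshot and the
        // sort (in the RM this would happen on another thread when the NM is lost).
        nodes.remove("node-A16-R09-19.hadoop.dfw.wordpress.com:8041");

        // Comparator that looks each ID back up in the shared map.
        Comparator<String> byAvailableResource = new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                // Once the node is gone, nodes.get(...) returns null and this
                // dereference throws the NullPointerException that TimSort
                // propagates in the trace above.
                return Integer.compare(nodes.get(b).availableMemoryMb,
                                       nodes.get(a).availableMemoryMb);
            }
        };

        Collections.sort(nodeIds, byAvailableResource);  // throws NullPointerException
    }
}
{code}
If this is indeed the mechanism, a fix would need the comparator to tolerate a concurrently removed node (for example by null-checking the lookup, or by sorting a snapshot of the node objects rather than their IDs), or the ContinuousScheduling thread to catch and log such failures instead of dying, since the uncaught exception is what leaves the RM up but no longer assigning containers.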
Summary: NPE in ContinuousScheduling Thread crippled RM after DN flap (was: Flapping node caused NPE in FairScheduler)
NPE in ContinuousScheduling Thread crippled RM after DN flap
Key: YARN-2273
URL: https://issues.apache.org/jira/browse/YARN-2273
Project: Hadoop YARN
Issue Type: Bug
Components: fairscheduler, resourcemanager
Affects Versions: 2.3.0
Environment: cdh5.0.2