Thanks, Yi Tian!
Yes, I use the Fair Scheduler.
In the ResourceManager log I can see the container's launch script:
/home/export/Data/hadoop/tmp/nm-local-dir/usercache/hpc/appcache/application_1411693809133_0002/container_1411693809133_0002_01_02/launch_container.sh
At the end it runs:
exec /bin/bash -c "$JAVA_HO
You should check the ResourceManager log from when you submitted this job to YARN. It records how much of each resource your Spark application actually requested from the ResourceManager for each container.
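If it helps, here is a rough sketch of pulling those per-container grants out of the ResourceManager log. The sample log line below is illustrative only (the exact wording and log path vary by Hadoop version and cluster layout), but scheduler allocation lines generally include a `capacity <memory:..., vCores:...>` field:

```shell
# Write one illustrative scheduler allocation line to a temp file
# (in a real cluster you would grep the actual ResourceManager log
# under $HADOOP_LOG_DIR instead).
cat > /tmp/rm-sample.log <<'EOF'
2014-09-25 17:40:01,001 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Assigned container container_1411693809133_0002_01_000002 of capacity <memory:2048, vCores:1> on host node1:45454
EOF

# Extract the memory actually granted to the container.
grep -o "memory:[0-9]*" /tmp/rm-sample.log
# prints memory:2048
```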
Did you use the Fair Scheduler?
There is a Fair Scheduler config parameter:
“yarn.scheduler.increm
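If that truncated parameter is the Fair Scheduler's `yarn.scheduler.increment-allocation-mb` (an assumption; the name is cut off above), then each memory request is rounded up to the nearest multiple of the increment, which is why a container can receive more memory than the application asked for. A sketch of that round-up rule:

```shell
# Assumed round-up behavior of the Fair Scheduler increment setting;
# 1024 MB is the usual default increment.
request=1100   # MB the application asked for
increment=1024 # MB allocation granularity
granted=$(( (request + increment - 1) / increment * increment ))
echo "$granted"
# prints 2048
```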
My yarn-site.xml config:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>
ENV:
Spark: 0.9.0-incubating
Hadoop: 2.3.0

I run a Spark task on YARN and see this log in the NodeManager:

2014-09-25 17:43:34,141 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 549 for container-id container_1411635522254_0001_
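For context, a complete ContainersMonitorImpl line in Hadoop 2.x usually continues with the physical and virtual memory figures. The sample line below is illustrative, not from the original post, and the exact wording may vary by version; the sketch shows how to pull the physical-memory usage out of such a line:

```shell
# Illustrative (assumed) full form of the NodeManager monitor line.
line="Memory usage of ProcessTree 549 for container-id container_1411635522254_0001_01_000002: 2.1 GB of 4 GB physical memory used; 5.0 GB of 8.4 GB virtual memory used"

# Extract the physical-memory portion.
echo "$line" | grep -o "[0-9.]* GB of [0-9.]* GB physical memory used"
# prints 2.1 GB of 4 GB physical memory used
```

Comparing that figure against the container's granted capacity (from the ResourceManager log) shows whether the container is close to the limit at which the NodeManager kills it.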