YARN container launch failed exception and mapred-site.xml configuration

2015-02-20 Thread tesm...@gmail.com
I have 7 nodes in my Hadoop cluster [8 GB RAM and 4 vCPUs per node]: 1
NameNode + 6 DataNodes.

**EDIT-1@ARNON:** I followed the link to the Hortonworks guide [
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html],
made calculations according to the hardware configuration of my nodes, and
added the updated mapred-site.xml and yarn-site.xml files to my question.
Still my application is crashing with the same exception.
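For concreteness, the container-sizing arithmetic described in that guide can be sketched as follows. The formula shape (min of cores-based, disk-based, and RAM-based limits) is taken from the guide, but `disks=2`, `reserved_gb=2`, and `min_container_gb=1` are illustrative assumptions for an 8 GB / 4-vCPU node, not values confirmed by my cluster:

```shell
# Sketch of the HDP-style YARN sizing arithmetic (assumed formula;
# disks, reserved_gb and min_container_gb are illustrative guesses).
ram_gb=8; cores=4; disks=2; reserved_gb=2; min_container_gb=1

by_cores=$(( 2 * cores ))
by_disks=$(awk -v d="$disks" 'BEGIN { print int(1.8 * d) }')
by_ram=$(( (ram_gb - reserved_gb) / min_container_gb ))

# containers = min(by_cores, by_disks, by_ram)
containers=$by_cores
if [ "$by_disks" -lt "$containers" ]; then containers=$by_disks; fi
if [ "$by_ram" -lt "$containers" ]; then containers=$by_ram; fi

ram_per_container_gb=$(( (ram_gb - reserved_gb) / containers ))
if [ "$ram_per_container_gb" -lt "$min_container_gb" ]; then
  ram_per_container_gb=$min_container_gb
fi

echo "containers: $containers"                                        # → containers: 3
echo "minimum-allocation-mb: $(( ram_per_container_gb * 1024 ))"      # → 2048
echo "resource.memory-mb: $(( containers * ram_per_container_gb * 1024 ))"  # → 6144
```

With these assumed inputs the sketch reproduces the values in my config below (2048 MB minimum allocation, 6144 MB per NodeManager).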

My mapreduce application has 34 input splits with a block size of 128MB.

**mapred-site.xml** has the following properties:

mapreduce.framework.name = yarn
mapred.child.java.opts   = -Xmx2048m
mapreduce.map.memory.mb  = 4096
mapreduce.map.java.opts  = -Xmx2048m

**yarn-site.xml** has the following properties:

yarn.resourcemanager.hostname        = hadoop-master
yarn.nodemanager.aux-services        = mapreduce_shuffle
yarn.nodemanager.resource.memory-mb  = 6144
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 6144
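For reference, in the actual files these properties are written in Hadoop's standard XML form. A sketch for yarn-site.xml with the same values as listed above (mapred-site.xml follows the same pattern):

```xml
<!-- yarn-site.xml (sketch; same values as the listing above) -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>6144</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>6144</value>
  </property>
</configuration>
```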


 Exception from container-launch: ExitCodeException exitCode=134:
/bin/bash: line 1:  3876 Aborted  (core dumped)
/usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx8192m
-Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_11/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842
attempt_1424264025191_0002_m_05_0 11 

1>/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stdout
2>/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stderr
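Note that exit code 134 is not arbitrary: in POSIX shells a process killed by signal N exits with status 128 + N, and SIGABRT is signal 6, so 134 means the JVM aborted (core dumped) rather than exiting on its own. A minimal demonstration, assuming bash:

```shell
# A child shell that aborts itself via SIGABRT (signal 6)
bash -c 'kill -ABRT $$'
# The parent observes exit status 128 + 6 = 134
echo $?   # → 134
```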


How can I avoid this? Any help is appreciated.

Is there an option to restrict the number of containers on Hadoop nodes?

