I have 7 nodes in my Hadoop cluster [8 GB RAM and 4 vCPUs on each node]: 1
NameNode + 6 DataNodes.

I followed this link from Hortonworks [
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
] and made the calculations according to the hardware configuration of my
nodes. I have added the updated mapred-site.xml and yarn-site.xml files to my
question. My application is still crashing with the same exception.

My MapReduce application has 34 input splits with a block size of 128 MB.

**mapred-site.xml** has the following properties:

    mapreduce.framework.name  = yarn
    mapred.child.java.opts    = -Xmx2048m
    mapreduce.map.memory.mb   = 4096
    mapreduce.map.java.opts   = -Xmx2048m
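
For completeness, this is how I have them in the actual file (just a sketch of the same values shown above, written in Hadoop's configuration XML format):

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>4096</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx2048m</value>
    </property>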

**yarn-site.xml** has the following properties:

    yarn.resourcemanager.hostname        = hadoop-master
    yarn.nodemanager.aux-services        = mapreduce_shuffle
    yarn.nodemanager.resource.memory-mb  = 6144
    yarn.scheduler.minimum-allocation-mb = 2048
    yarn.scheduler.maximum-allocation-mb = 6144
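
If my scheduler arithmetic is right (these are my own back-of-the-envelope numbers, so please correct me), the settings above should already bound how many containers fit on one node:

    max containers per node (minimum-size) = floor(6144 / 2048) = 3
    map containers per node (4096 MB each) = floor(6144 / 4096) = 1
    map tasks for this job = number of input splits = 34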


    Exception from container-launch: ExitCodeException exitCode=134:
    /bin/bash: line 1:  3876 Aborted  (core dumped)
    /usr/lib/jvm/java-7-openjdk-amd64/bin/java
    -Djava.net.preferIPv4Stack=true
    -Dhadoop.metrics.log.level=WARN -Xmx8192m
    -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_000011/tmp
    -Dlog4j.configuration=container-log4j.properties
    -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011
    -Dyarn.app.container.log.filesize=0
    -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild
    192.168.0.12 50842 attempt_1424264025191_0002_m_000005_0 11
    > /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stdout
    2> /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stderr


How can I avoid this? Any help is appreciated. (One thing I notice in the
launch command above: the JVM is started with -Xmx8192m, even though I set
-Xmx2048m in both mapred.child.java.opts and mapreduce.map.java.opts.)

It looks to me like YARN is trying to launch all the containers
simultaneously, not according to the available resources. Is there an
option to restrict the number of containers on the Hadoop nodes? A sketch of
what I think are the relevant knobs follows.
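
From what I have read (I may be wrong here), these properties decide how many containers a NodeManager will run, so I list them with the values I would guess for my 8 GB / 4 vCPU nodes; as far as I understand, the vcores setting is only honoured if the scheduler is configured to count CPU as well as memory:

    yarn.nodemanager.resource.memory-mb  = 6144   (already set; total memory offered per node)
    yarn.nodemanager.resource.cpu-vcores = 4      (guessed value; total vcores offered per node)
    yarn.scheduler.minimum-allocation-mb = 2048   (already set; memory-mb / minimum ~= max containers per node)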

Regards,
Tariq
