Thanks for your answer, Nair.
Is installing the Oracle JDK on Ubuntu really as complicated as described in
this link?
http://askubuntu.com/questions/56104/how-can-i-install-sun-oracles-proprietary-java-jdk-6-7-8-or-jre

Is there an alternative?
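
The one I keep seeing suggested is the WebUpd8 PPA (rough sketch below; I
have not tried it myself, and it pulls in a third-party repository) -- is
that the recommended route?

    # add the third-party WebUpd8 PPA that packages an Oracle JDK installer
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    # downloads Oracle JDK 7 from Oracle and prompts to accept the license
    sudo apt-get install oracle-java7-installer
    # optionally set JAVA_HOME and the default java/javac
    sudo apt-get install oracle-java7-set-default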

Regards


On Sat, Feb 21, 2015 at 6:50 AM, R Nair <ravishankar.n...@gmail.com> wrote:

> I had a very similar issue and fixed it by switching to the Oracle JDK.
> At first glance I don't see anything wrong with your configuration, thanks.
>
> Regards,
> Nair
>
> On Sat, Feb 21, 2015 at 1:42 AM, tesm...@gmail.com <tesm...@gmail.com>
> wrote:
>
>> I have 7 nodes in my Hadoop cluster [8 GB RAM and 4 vCPUs on each node]:
>> 1 NameNode + 6 DataNodes.
>>
>> I followed the Hortonworks guide [
>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
>> ] and did the calculations according to the hardware configuration of my
>> nodes. I have added the updated mapred-site.xml and yarn-site.xml files to
>> my question. Still, my application crashes with the same exception.
>>
>> My MapReduce application has 34 input splits with a block size of 128 MB.
>>
>> **mapred-site.xml** has the following properties:
>>
>>     mapreduce.framework.name  = yarn
>>     mapred.child.java.opts    = -Xmx2048m
>>     mapreduce.map.memory.mb   = 4096
>>     mapreduce.map.java.opts   = -Xmx2048m
>>
>> **yarn-site.xml** has the following properties:
>>
>>     yarn.resourcemanager.hostname        = hadoop-master
>>     yarn.nodemanager.aux-services        = mapreduce_shuffle
>>     yarn.nodemanager.resource.memory-mb  = 6144
>>     yarn.scheduler.minimum-allocation-mb = 2048
>>     yarn.scheduler.maximum-allocation-mb = 6144
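>>
>> (For completeness, in the actual XML files each of these is a <property>
>> entry, e.g. in yarn-site.xml:
>>
>>     <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>6144</value>
>>     </property>
>>     <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>2048</value>
>>     </property>
>>
>> and mapred-site.xml uses the same format.)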
>>
>>
>>  Exception from container-launch: ExitCodeException exitCode=134:
>> /bin/bash: line 1:  3876 Aborted  (core dumped)
>> /usr/lib/jvm/java-7-openjdk-amd64/bin/java
>> -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx8192m
>> -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_000011/tmp
>> -Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011
>> -Dyarn.app.container.log.filesize=0
>> -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild
>> 192.168.0.12 50842 attempt_1424264025191_0002_m_000005_0 11 >
>>
>> /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stdout
>> 2>
>>
>> /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stderr
>>
>>
>> How can I avoid this? Any help is appreciated.
>>
>> It looks to me like YARN is trying to launch all the containers
>> simultaneously and not according to the available resources. Is there
>> an option to restrict the number of containers on the Hadoop nodes?
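>>
>> As a rough back-of-the-envelope (my own reading, assuming YARN sizes each
>> map container from mapreduce.map.memory.mb), I would expect about
>>
>>     6144 MB (yarn.nodemanager.resource.memory-mb)
>>       / 4096 MB (mapreduce.map.memory.mb) = 1 map container per node
>>
>> so the 34 map tasks should not all be able to start at once on 6 DataNodes.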
>>
>> Regards,
>> Tariq
>>
>>
>
>
> --
> Warmest Regards,
>
> Ravi Shankar
>
