On Fri, Oct 11, 2013 at 10:28 PM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Hi,
I'm running a 14-node Hadoop cluster with tasktrackers running on all
nodes.
Have set the jobtracker default memory size in hadoop-env.sh
*HADOOP_HEAPSIZE=1024*
Have set the mapred.child.java.opts value in mapred-site.xml.
Please don't cross-post.
HADOOP_HEAPSIZE of 1024 is too low. You might want to bump it up to 16G or
more, depending on:
* #jobs
* Scheduler you use.
Arun
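Arun's suggestion maps to a one-line change in hadoop-env.sh. The 16384 below is only an illustration of the 16G figure he mentions, not a value taken from this thread:

```shell
# hadoop-env.sh -- heap for the Hadoop daemon JVMs, in MB.
# 16384 MB (16G) is the illustrative figure from Arun's advice; size it
# to your job count, your scheduler, and the RAM free on the JT host.
export HADOOP_HEAPSIZE=16384
```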
Hi Arun,
Will not cross post hereafter.
I had the same heap size value, the same number of jobs, and the same
scheduler, and it worked fine on Hadoop 1.0.4 for 8 to 9 months, but I'm
facing this JT OOME issue only on Hadoop 1.2.1.
Even though I tried setting the heap size to a max of 16G, it eats the whole
heap. JT OOM?? Refer to https://issues.apache.org/jira/browse/MAPREDUCE-5508 (but
JT OOM should happen over a period of time, not immediately).
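One way to check whether the JT heap really fills gradually rather than immediately (a diagnostic sketch of mine, not something prescribed in this thread) is to sample the JobTracker JVM with jstat, which ships with the JDK:

```shell
# Sketch: find the JobTracker PID via jps and sample its heap occupancy.
# Assumes a JDK on the PATH and a JobTracker on this host; falls through otherwise.
JT_PID=$(jps 2>/dev/null | awk '/JobTracker/ {print $1}')
if [ -n "$JT_PID" ]; then
  # -gcutil prints per-generation occupancy percentages: 6 samples, 10s apart
  jstat -gcutil "$JT_PID" 10000 6
else
  echo "JobTracker process not found"
fi
```

A steadily climbing old-generation column between full GCs would point at a leak building up over time, consistent with the MAPREDUCE-5508 symptom.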
On Tue, Oct 15, 2013 at 7:44 AM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Have set the mapred.child.java.opts value in mapred-site.xml as:
<property>
  <name>mapred.child.java.opts</name>
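The message is cut off before the property's value element. For orientation only, a complete mapred-site.xml stanza has this shape; the -Xmx figure here is a placeholder, not the value from the original mail:

```xml
<property>
  <name>mapred.child.java.opts</name>
  <!-- placeholder value; the original message is truncated before it -->
  <value>-Xmx512m</value>
</property>
```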