Hi Harsh,

Appreciate the response.

Thanks Reyane.

Thanks,
Viswa.J
On Oct 12, 2013 5:04 AM, "Reyane Oukpedjo" <oukped...@gmail.com> wrote:

> Hi there,
> I had a similar issue with hadoop-1.2.0: the JobTracker kept crashing until I
> set HADOOP_HEAPSIZE="2048". I did not have this kind of issue with
> previous versions. If you have the memory to spare, you can try this and
> see. In my case the issue was gone after I set it as above.
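The fix above amounts to one line in conf/hadoop-env.sh (a sketch; 2048 MB is the value reported to work here, size it to what the host can spare):

```shell
# conf/hadoop-env.sh -- maximum heap for the Hadoop daemon JVMs, in MB.
# 2048 is the value that stopped the JobTracker crashes in this report.
export HADOOP_HEAPSIZE=2048
```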
>
> Thanks
>
>
> Reyane OUKPEDJO
>
>
> On 11 October 2013 13:08, Viswanathan J <jayamviswanat...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm running a 14-node Hadoop cluster, with datanodes and tasktrackers
>> running on all nodes.
>>
>> *Apache Hadoop :* 1.2.1
>>
>> It shows the heap size currently as follows:
>>
>> *Cluster Summary (Heap Size is 5.7/8.89 GB)*
>>
>> In the summary above, what does the *8.89* GB represent? Is it the
>> maximum heap size for the JobTracker, and if so, how is it
>> calculated?
>>
>> I assume *5.7* is the heap currently in use by running jobs; how is that calculated?
>>
>> I have set the JobTracker default heap size in hadoop-env.sh:
>>
>> *HADOOP_HEAPSIZE="1024"*
>>
>> I have also set the mapred.child.java.opts value in mapred-site.xml:
>>
>>  <property>
>>   <name>mapred.child.java.opts</name>
>>   <value>-Xmx2048m</value>
>> </property>
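Worth noting (my reading of the 1.x launch scripts, so treat this as an assumption): mapred.child.java.opts sizes only the map/reduce child JVMs, while the JobTracker daemon's heap comes from hadoop-env.sh, where a daemon-specific -Xmx overrides the global default:

```shell
# conf/hadoop-env.sh (sketch) -- these size the *daemon* JVMs, not the tasks:
export HADOOP_HEAPSIZE=1024                # global daemon default -Xmx, in MB
export HADOOP_JOBTRACKER_OPTS="-Xmx2048m"  # later -Xmx wins: JobTracker gets 2 GB
# mapred.child.java.opts in mapred-site.xml applies only to child task JVMs.
```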
>>
>> Even after setting the properties above, I am still getting a JobTracker
>> OOME. Why does the JobTracker's memory grow gradually? Within a week of
>> restarting the JT, I get the OOME again.
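One known source of gradual JobTracker heap growth on 1.x is the in-memory state retained for completed jobs; if that is what is leaking here, capping it in mapred-site.xml may help (a sketch; 25 is an arbitrary example, the 1.x default is 100):

```xml
<!-- mapred-site.xml: cap the completed jobs per user that the
     JobTracker retains in memory (Hadoop 1.x default: 100). -->
<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>25</value>
</property>
```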
>>
>> How can I resolve this? It is in production and critical. Please help.
>> Thanks in advance.
>>
>> --
>> Regards,
>> Viswa.J
>>
>
>