Hi,
Not yet updated in the production environment. Will keep you posted once it is
done.
In which Apache Hadoop release will this issue be fixed? Or is it already
fixed in hadoop-1.2.1, as in the link below?
https://issues.apache.org/jira/browse/MAPREDUCE-5351
Thanks a lot, Antonio.
I'm using Apache Hadoop; I hope this issue will be resolved in the upcoming
Apache Hadoop releases.
Do I need to restart the whole cluster after changing mapred-site.xml as
you mentioned?
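For context, the mapred-site.xml knob most often cited for JobTracker memory growth in Hadoop 1.x is the limit on completed jobs retained in memory; a sketch is below (whether this is the exact setting being discussed in this thread is an assumption, and the value 25 is only an example):

```xml
<!-- mapred-site.xml: cap how many completed jobs per user the JobTracker
     keeps in memory (the Hadoop 1.x default is 100). -->
<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>25</value>
</property>
```

As far as I know, JobTracker-side settings like this are read at daemon startup, so they take effect after restarting the JobTracker rather than the whole cluster.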
What is the following bug ID?
https://issues.apache.org/jira/i#browse/MAPREDU
Hi guys,
Appreciate your response.
Thanks,
Viswa.J
On Oct 12, 2013 11:29 PM, "Viswanathan J"
wrote:
Hi Guys,
But I can see that the JobTracker OOME issue was fixed in Hadoop 1.2.1, per
the Hadoop release notes.
Please check this URL:
https://issues.apache.org/jira/browse/MAPREDUCE-5351
How come the issue still persists? Am I asking a valid thing?
Do I need to configure anything o
Thanks Antonio, I hope the memory leak issue will be resolved. It's really a
nightmare every week.
In which release will this issue be resolved?
How do we solve this issue? Please help, because we are facing it in the
production environment.
Please share the configuration and the cron job to do that cleanup process.
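For illustration only (the actual cleanup process being asked about is not described in this thread), a retention cron entry of this general shape is common; the history path and the 7-day window are invented for the example:

```shell
# Hypothetical crontab entry: purge old JobTracker job-history files nightly
# at 03:00. The path /var/log/hadoop/history and the 7-day retention are
# assumptions; adjust both to your deployment.
0 3 * * * find /var/log/hadoop/history -type f -mtime +7 -delete
```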
Hi Harsh,
Appreciate the response.
Thanks Reyane.
Thanks,
Viswa.J
On Oct 12, 2013 5:04 AM, "Reyane Oukpedjo" wrote:
Hi there,
I had a similar issue with hadoop-1.2.0: the JobTracker kept crashing until I
set HADOOP_HEAPSIZE="2048". I did not have this kind of issue with previous
versions. But you can try this if you have the memory available and see. In
my case the issue was gone after I set it as above.
Thanks
Reyane OUKPEDJO
Hi,
I'm running a 14-node Hadoop cluster with datanodes and tasktrackers running
on all nodes.
*Apache Hadoop:* 1.2.1
It currently shows the heap size as follows:
*Cluster Summary (Heap Size is 5.7/8.89 GB)*
In the above summary, what does the *8.89* GB define? Does the *8.89* define
the maximum