Hey Saptarshi,
Watch the running child process with "ps", "top", or Ganglia
monitoring. Does the map task actually get 16GB of memory, or is the
heap setting not taking effect?
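That check can be sketched as a couple of shell commands (a minimal sketch: the PID here is this shell itself as a stand-in, and the `pgrep` pattern for locating the Hadoop child JVM is an assumption, not something from this thread):

```shell
# Minimal sketch of inspecting a process's real memory use with ps.
# $$ (this shell) stands in for the map-task child PID; on the worker node
# you might locate the child with something like:
#   pgrep -f 'org.apache.hadoop.mapred.Child'
pid=$$
ps -o pid=,rss=,vsz= -p "$pid"   # RSS and VSZ are reported in kilobytes
```

If RSS stays far below 16GB while the task fails, the -Xmx setting is likely not reaching the child JVM; the child's full command line (visible in `ps -o args=`) shows which -Xmx flag it was actually started with.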
Brian
On Dec 28, 2008, at 3:00 PM, Saptarshi Guha wrote:
Hello,
My worker machines have 32GB of RAM, and I have allocated 16GB to the heap size:
==hadoop-env.sh==
export HADOOP_HEAPSIZE=16384
==hadoop-site.xml==
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx16384m</value>
</property>
The same code runs fine when executed outside of Hadoop, but it fails
when run inside a map task.
Are there other places where I can specify the memory available to the map tasks?
Regards
Saptarshi
--
Saptarshi Guha - saptarshi.g...@gmail.com