These variables have to be set at runtime through a config file, not at compile
time. You can set them in conf/hadoop-env.sh: uncomment the line with export
HADOOP_HEAPSIZE=<size in MB> to set the heap size for all Hadoop processes, or
uncomment one of the per-daemon _OPTS lines to change options for a specific
command.
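For example, here's roughly what that looks like (a sketch, not a
recommendation; the 512 MB figure is just an illustration):

# In conf/hadoop-env.sh
# Maximum heap in MB used by all Hadoop daemons; the file ships with
# this line commented out and a default of 1000.
export HADOOP_HEAPSIZE=512

# Or override the JVM options for a single daemon, e.g. the namenode:
export HADOOP_NAMENODE_OPTS="-Xmx512m"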
Those settings are for the Hadoop processes themselves, though; if you are
getting the error in the tasks you're running, you can set the child JVM
options in your hadoop-site.xml through the mapred.child.java.opts property,
as follows:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
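Each task then runs in its own child JVM launched with those options. If I
remember right, the default value is only -Xmx200m, so tasks won't see your
daemon heap settings unless you override this property.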

By the way, -J-Xmx isn't doing what you want: the -J prefix is javac syntax
for passing an option through to the JVM that runs the compiler, so it only
raises the heap during compilation. At runtime it's the plain -Xmx and -Xms
forms that matter.
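To make the distinction concrete (MyJob is just a made-up class name here):

javac -J-Xmx512m MyJob.java   # 512 MB heap for the compiler's JVM only
java -Xmx512m MyJob           # 512 MB heap for the JVM running your code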

On Wed, Feb 25, 2009 at 5:05 PM, madhuri72 <akil...@gmail.com> wrote:

>
> Hi,
>
> I'm trying to run hadoop version 19 on ubuntu with java build 1.6.0_11-b03.
> I'm getting the following error:
>
> Error occurred during initialization of VM
> Could not reserve enough space for object heap
> Could not create the Java virtual machine.
> make: *** [run] Error 1
>
> I searched the forums and found some advice on setting the VM's memory via
> the javac options
>
> -J-Xmx512m or -J-Xms256m
>
> I have tried this with various sizes between 128 and 1024 MB.  I am adding
> this tag when I compile the source.  This isn't working for me, and
> allocating 1 GB of memory is a lot for the machine I'm using.  Is there
> some
> way to make this work with hadoop?  Is there somewhere else I can set the
> heap memory?
>
> Thanks.
