Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Arijit Mukherjee
Thanks, guys - I have a clearer picture now :-)

Cheers
Arijit


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread souravm
On a 32-bit machine you are limited to a 4 GB heap size per JVM.
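
A quick way to check which you have (typical Linux commands; the exact output wording varies by distro and JVM):

  uname -m          # x86_64 means a 64-bit kernel; i386/i686 means 32-bit
  java -version     # a 64-bit JVM normally identifies itself as a "64-Bit Server VM"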


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Matei Zaharia
Yes, the namenode, jobtracker, datanode, etc. are separate processes.
Typically, in a large cluster you'd have one machine running a namenode, one
running a jobtracker, and many slave machines each running a datanode and a
tasktracker. The datanodes and tasktrackers don't need large amounts of
memory. The namenode needs more if you have a large filesystem (large
numbers of files), and the jobtracker needs more if you have many large jobs
(tens of thousands of tasks). However, these should not be problems on
smaller clusters. The child memory settings are the ones to fix if your
tasks are failing.
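
For reference, a rough sketch of how that can be expressed in conf/hadoop-env.sh (variable names as in 0.19-era releases - worth double-checking against your copy; the values are only illustrative):

  # default heap, in MB, used by all Hadoop daemons
  export HADOOP_HEAPSIZE=1000
  # extra per-daemon options, e.g. a larger heap for the namenode and jobtracker
  export HADOOP_NAMENODE_OPTS="-Xmx2000m $HADOOP_NAMENODE_OPTS"
  export HADOOP_JOBTRACKER_OPTS="-Xmx2000m $HADOOP_JOBTRACKER_OPTS"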


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Arijit Mukherjee
Mine is 32-bit. As of now it has only 2 GB of RAM, but I'm planning to acquire
more hardware, so a clarification on this would help me decide the specs of
the new cluster.

Arijit


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Arijit Mukherjee
I'm just guessing here.

Are the namenode, datanode, jobtracker and tasktracker separate Java
processes, each with its own heap space? (Running jps would show them
separately - I guess they are separate Java processes with separate heaps.)
If that's true, and the total memory is X while Y is allocated as the heap
size, then 4*Y must be much less than X.
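
As an illustration, on a single-node (pseudo-distributed) setup I'd expect jps to list each daemon as its own JVM, roughly like this (the PIDs are made up):

  4212 NameNode
  4310 DataNode
  4418 SecondaryNameNode
  4506 JobTracker
  4601 TaskTracker
  4702 Jps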

I'm not sure - this is only speculation. Can anyone confirm this, please?

Cheers
Arijit

-- 
"And when the night is cloudy,
There is still a light that shines on me,
Shine on until tomorrow, let it be."


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread souravm
Is your machine 32-bit or 64-bit?


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Nick Cen
I have a question related to the HADOOP_HEAPSIZE variable. My machine has
16 GB of memory, but when I set HADOOP_HEAPSIZE to 4 GB, it threw the
exception referred to in this thread. How can I make full use of my memory? Thanks.

-- 
http://daily.appspot.com/food/


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Arijit Mukherjee
I was getting similar errors too while running the MapReduce samples. I
fiddled with hadoop-env.sh (where HADOOP_HEAPSIZE is specified) and the
hadoop-site.xml file, and fixed it after some trial and error, but I would
like to know if there is a rule of thumb for this. Right now I have a Core
Duo machine with 2 GB of RAM running Ubuntu 8.10, and I've found that a
HEAPSIZE of 256 MB works without any problems. Anything more than that gives
the same error (even when nothing else is running on the machine).
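
A rough back-of-envelope that might explain it (just my guess, not an official rule): a single node runs the namenode, datanode, secondary namenode, jobtracker and tasktracker, each of which gets the same HADOOP_HEAPSIZE, plus the task child JVMs and the OS itself, so:

  5 daemons x 256 MB = 1280 MB  -> fits within 2 GB alongside everything else
  5 daemons x 512 MB = 2560 MB  -> already more than the 2 GB of physical RAM

which would explain why 256 MB works here while larger values fail to reserve their heaps.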

Arijit

-- 
"And when the night is cloudy,
There is still a light that shines on me,
Shine on until tomorrow, let it be."


Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Anum Ali
If the solution given by Matei Zaharia doesn't work - which I guess it
won't if you are using Eclipse 3.3.0, because this is a bug that was
resolved in a later version, Eclipse 3.4 (Ganymede) - it's better to
upgrade your Eclipse version.






Re: Could not reserve enough space for heap in JVM

2009-02-25 Thread Matei Zaharia
These variables have to be set at runtime through a config file, not at compile
time. You can set them in hadoop-env.sh: uncomment the line with export
HADOOP_HEAPSIZE=<size in MB> to set the heap size for all Hadoop processes, or
change the options for specific commands. Those settings are for the Hadoop
processes themselves; if you are getting the error in tasks you're
running, you can set the heap in your hadoop-site.xml through the
mapred.child.java.opts property, as follows:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>


By the way, I'm not sure if -J-Xmx is the right syntax; I've always seen -Xmx
and -Xms.
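
As a rough sketch of both points (MyJob is a made-up class name and the heap sizes are only illustrative):

  # conf/hadoop-env.sh -- heap, in MB, for the Hadoop daemons
  export HADOOP_HEAPSIZE=1000

  # -J-Xmx only raises the heap of javac's own JVM at compile time;
  # the heap of the program is set when it is launched:
  javac -J-Xmx512m MyJob.java
  java -Xmx512m MyJob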



Could not reserve enough space for heap in JVM

2009-02-25 Thread madhuri72

Hi,

I'm trying to run Hadoop version 0.19 on Ubuntu with Java build 1.6.0_11-b03.
I'm getting the following error:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
make: *** [run] Error 1

I searched the forums and found some advice on setting the VM's memory via
the javac options 

-J-Xmx512m or -J-Xms256m

I have tried this with various sizes between 128 and 1024 MB, adding the flag
when I compile the source. This isn't working for me, and allocating 1 GB of
memory is a lot for the machine I'm using. Is there some way to make this work
with Hadoop? Is there somewhere else I can set the heap memory?

Thanks.





-- 
View this message in context: 
http://www.nabble.com/Could-not-reserve-enough-space-for-heap-in-JVM-tp22215608p22215608.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.