Hi,

     I have 6 instances allocated.
I haven't tried adding more instances because I have a maximum of 30,000
rows in my HBase tables. What do you recommend?
I have at most 4-5 concurrent map/reduce tasks on one node.
How do we characterize the memory usage of the mappers and reducers?
I am running Spinn3r in addition to regular Hadoop/HBase, and Spinn3r is
being called from one of my map tasks.
I am not running Ganglia or any other program to characterize resource
usage over time.
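
One cheap way to start characterizing per-task memory (a minimal sketch of an assumed approach, not something from this thread — the class and method names are hypothetical) is to have each mapper log the JVM's own heap statistics:

```java
// Hypothetical helper: log JVM heap usage from inside a map task to get
// a rough picture of per-mapper memory consumption.
public class MemoryProbe {

    // Returns a one-line summary of current heap usage.
    public static String heapStats() {
        Runtime rt = Runtime.getRuntime();
        // Bytes currently in use = allocated heap minus free space within it.
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        // Upper bound the JVM may grow the heap to (e.g. set via -Xmx).
        long maxMb = rt.maxMemory() / (1024 * 1024);
        return "heap used=" + usedMb + "MB max=" + maxMb + "MB";
    }

    public static void main(String[] args) {
        System.out.println(heapStats());
    }
}
```

Calling something like this periodically from the map() method and reading the task logs would show whether the mappers themselves are the ones exhausting memory. Note this only covers JVM heap, not native memory used by forked processes.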

Thanks,
Raakhi

On Sat, Apr 18, 2009 at 7:09 PM, Andrew Purtell <apurt...@apache.org> wrote:

>
> Hi,
>
> This is an OS level exception. Your node is out of memory
> even to fork a process.
>
> How many instances do you currently have allocated? Have
> you increased the number of instances over time to try and
> spread the load of your application around? How many
> concurrent mapper and/or reducer processes do you execute
> on a node? Can you characterize the memory usage of your
> mappers and reducers? Are you running other processes
> external to hadoop/hbase which consume a lot of memory? Are
> you running Ganglia or similar to track and characterize
> resource usage over time?
>
> You may find you are trying to solve a 100 node problem
> with 10.
>
>   - Andy
>
> > From: Rakhi Khatwani
> > Subject: Re: Ec2 instability
> > To: hbase-u...@hadoop.apache.org, core-user@hadoop.apache.org
> > Date: Friday, April 17, 2009, 9:44 AM
> > Hi,
> > This is the exception I have been getting in the map/reduce tasks:
> >
> > java.io.IOException: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory
> >       at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
> >       at org.apache.hadoop.util.Shell.run(Shell.java:134)
> >       at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
> >       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:321)
> >       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
> >       at org.apache.hadoop.mapred.MapOutputFile.getOutputFileForWrite(MapOutputFile.java:61)
> >       at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1199)
> >       at
> > org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:857)
> >       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
> >       at org.apache.hadoop.mapred.Child.main(Child.java:155)
> > Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
> >       at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
> >       at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> >       at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
> >       ... 10 more
