I increased the heap size as you suggested, and I can now run a MapReduce job
on the cluster.
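
For anyone who hits the same error, the property to bump is the one Amar
mentioned; a hadoop-site.xml entry overrides the value in hadoop-default.xml.
The snippet below is only a sketch (the -Xmx value is an example, not what I
necessarily used; pick one that fits the RAM and task slots on your nodes):

  <!-- goes inside <configuration> in conf/hadoop-site.xml -->
  <!-- heap size below is an example value only -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>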

thanks

On Mon, Mar 10, 2008 at 10:58 AM, Amar Kamat <[EMAIL PROTECTED]> wrote:

> What is the heap size you are using for your tasks? Check
> 'mapred.child.java.opts' in your hadoop-default.xml and try increasing it.
> This will happen if you run the random-writer + sort examples with the
> default parameters: the maps are not able to spill the data to disk.
> Btw, which version of Hadoop are you using?
> Amar
> On Mon, 10 Mar 2008, Ved Prakash wrote:
>
> > Hi friends,
> >
> > I have set up a cluster of 3 machines: one master and 2 slaves. I executed
> > a MapReduce job on the master, but the execution terminates after the map
> > phase and the reduce phase never completes. I have checked DFS and no
> > output folder gets created.
> >
> > This is the error I see:
> >
> > 08/03/10 10:35:21 INFO mapred.JobClient: Task Id : task_200803101001_0001_m_000064_0, Status : FAILED
> > java.lang.OutOfMemoryError: Java heap space
> >        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:95)
> >        at java.io.DataOutputStream.write(DataOutputStream.java:90)
> >        at org.apache.hadoop.io.Text.write(Text.java:243)
> >        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:347)
> >        at org.apache.hadoop.examples.WordCount$MapClass.map(WordCount.java:72)
> >        at org.apache.hadoop.examples.WordCount$MapClass.map(WordCount.java:59)
> >        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> >        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:192)
> >        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1787)
> >
> > 08/03/10 10:35:22 INFO mapred.JobClient:  map 55% reduce 17%
> > 08/03/10 10:35:31 INFO mapred.JobClient:  map 56% reduce 17%
> > 08/03/10 10:35:51 INFO mapred.JobClient:  map 57% reduce 17%
> > 08/03/10 10:36:04 INFO mapred.JobClient:  map 58% reduce 17%
> > 08/03/10 10:36:07 INFO mapred.JobClient:  map 57% reduce 17%
> > 08/03/10 10:36:07 INFO mapred.JobClient: Task Id : task_200803101001_0001_m_000071_0, Status : FAILED
> > java.lang.OutOfMemoryError: Java heap space
> >        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:95)
> >        at java.io.DataOutputStream.write(DataOutputStream.java:90)
> >        at org.apache.hadoop.io.Text.write(Text.java:243)
> >        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:347)
> >        at org.apache.hadoop.examples.WordCount$MapClass.map(WordCount.java:72)
> >        at org.apache.hadoop.examples.WordCount$MapClass.map(WordCount.java:59)
> >        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> >        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:192)
> >        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1787)
> >
> > Though the framework retries the failed tasks, the MapReduce application
> > never creates any output. Can anyone tell me why this is happening?
> >
> > Thanks
> >
>
