Hi Arun,

I did what you suggested, and it works now. I guess I don't completely
understand how the tuning parameters affect each other. Thanks a lot!

Boyu

On Thu, Mar 11, 2010 at 12:27 PM, Arun C Murthy <a...@yahoo-inc.com> wrote:

> Moving to mapreduce-user@, bcc: common-user
>
> Have you tried bumping up the heap for the map task?
>
> Since you are setting io.sort.mb to 256M, please set the heap size to at
> least 512M, if not more. The sort buffer is allocated inside the map task's
> JVM heap, and the default child heap (-Xmx200m in 0.20) cannot hold a 256M
> buffer.
>
> mapred.child.java.opts -> -Xmx512M or -Xmx1024m
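>
> For example, in mapred-site.xml (or per-job via JobConf), something like the
> following should work; the exact heap value depends on your job:
>
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx512m</value>
>   </property>
>   <property>
>     <name>io.sort.mb</name>
>     <value>256</value>
>   </property>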
>
> Arun
>
>
> On Mar 11, 2010, at 8:24 AM, Boyu Zhang wrote:
>
>  Dear All,
>>
>> I am running a Hadoop job processing data. The map output is really large,
>> and it spills 15 times, so I tried setting io.sort.mb = 256 instead of the
>> default 100 and left everything else at the defaults. I am using version
>> 0.20.2. When I run the job, I get the following errors:
>>
>> 2010-03-11 11:09:37,581 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=MAP, sessionId=
>> 2010-03-11 11:09:38,073 INFO org.apache.hadoop.mapred.MapTask:
>> numReduceTasks: 1
>> 2010-03-11 11:09:38,086 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 256
>> 2010-03-11 11:09:38,326 FATAL org.apache.hadoop.mapred.TaskTracker:
>> Error running child : java.lang.OutOfMemoryError: Java heap space
>>        at
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
>>        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>
>>
>> I can't figure out why; could anyone please give me a hint? Any help will
>> be appreciated! Thanks a lot!
>>
>> Sincerely,
>>
>> Boyu
>>
>
>
