Arun,

Actually CDH3 is Hadoop 0.20, but with 0.21 features backported, so I am
using the 0.21 API whenever I can.

Mark
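
On the heap question below: the "global" 5 GB heap setting applies to the Hadoop daemons, not to the child JVMs that run map and reduce tasks. In CDH3 / Hadoop 0.20 the task heap is controlled by mapred.child.java.opts, and the map-side spill buffer by io.sort.mb. A minimal sketch (the values are illustrative, not a recommendation):

```xml
<!-- mapred-site.xml -->
<!-- Per-task JVM heap; independent of the daemons' heap setting -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
<!-- In-memory sort buffer used for map-side spills; must fit inside the task heap -->
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
```

Since the OutOfMemoryError below is thrown inside the task (IFile$Reader, TaskLog), raising mapred.child.java.opts is the first thing to try.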

On Wed, May 23, 2012 at 9:40 PM, Mark Kerzner <mark.kerz...@shmsoft.com> wrote:

> Arun,
>
> I am running the latest CDH3, which I re-installed yesterday, so I believe
> it is Hadoop 0.21.
>
> I have about 6000 maps emitted and 16 spills; then I see the Mapper's
> cleanup() being called, after which I get this error:
>
> 2012-05-23 20:22:58,108 FATAL org.apache.hadoop.mapred.Child: Error
> running child : java.lang.OutOfMemoryError: Java heap space
>     at org.apache.hadoop.mapred.IFile$Reader.readNextBlock(IFile.java:355)
>
> Thank you,
> Mark
>
>
> On Wed, May 23, 2012 at 9:29 PM, Arun C Murthy <a...@hortonworks.com> wrote:
>
>> What version of hadoop are you running?
>>
>> On May 23, 2012, at 12:16 PM, Mark Kerzner wrote:
>>
>> > Hi, all,
>> >
>> > I got the exception below in the mapper. My global Hadoop heap is
>> > already set to 5 GB, but is there another specific setting I am missing?
>> > Or should I troubleshoot for a memory leak?
>> >
>> > But the same application works in the IDE.
>> >
>> > Thank you!
>> >
>> > Mark
>> >
>> > *stderr logs*
>> >
>> > Exception in thread "Thread for syncLogs" java.lang.OutOfMemoryError:
>> > Java heap space
>> >       at
>> java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
>> >       at
>> java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:59)
>> >       at
>> org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:292)
>> >       at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:365)
>> >       at org.apache.hadoop.mapred.Child$3.run(Child.java:157)
>> > Exception in thread "communication thread" java.lang.OutOfMemoryError:
>> > Java heap space
>> >
>> > Exception: java.lang.OutOfMemoryError thrown from the
>> > UncaughtExceptionHandler in thread "communication thread"
>>
>> --
>> Arun C. Murthy
>> Hortonworks Inc.
>> http://hortonworks.com/
>>
>>
>>
>