Hi Uma,

I tried various settings, but it keeps giving me the memory error.
From what I can tell there should be plenty of memory available, since
I'm running on a 4 GB node and trying to copy a 100 KB file.

Is this the correct place to adjust the memory setting for a child
process forked from my Java mapper code?
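
To be concrete, the kind of setting I mean is something like this
(simplified sketch, not my exact job setup; "copy-job" and the rest
are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Simplified job setup - heap size for the spawned map-task JVMs:
    Configuration conf = new Configuration();
    conf.set("mapred.map.child.java.opts", "-Xmx512M");
    Job job = new Job(conf, "copy-job");  // throws IOException
    // ... set mapper class, input/output paths, submit, etc.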

Shouldn't the default settings be able to execute this type of task?

Is it perhaps because I have a child process copying files into HDFS
while I'm running a higher-level Hadoop mapper?
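
(The child process is just running "hadoop fs -copyFromLocal", per my
original mail below.) Would it make more sense to skip the fork and use
the FileSystem API in-process instead? Something like this untested
sketch, where the method name and paths are placeholders:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper inside my Mapper subclass:
    private void copyResultToHdfs(Context context) throws IOException {
        FileSystem fs = FileSystem.get(context.getConfiguration());
        // Copies the ~100 KB local file into HDFS in-process, so there
        // is no second JVM that has to reserve its own heap:
        fs.copyFromLocalFile(new Path("/tmp/local-output.dat"),
                             new Path("/user/joris/output.dat"));
    }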

Again, any help greatly appreciated - cheers,

Joris

On Sun, Sep 25, 2011 at 9:41 PM, Uma Maheswara Rao G 72686
<mahesw...@huawei.com> wrote:
> Hello Joris,
>
> It looks like you have configured mapred.map.child.java.opts to -Xmx512M;
> that much memory is required to spawn the child process.
> Can you check what other processes are occupying memory on your machine?
> Your current task is not getting enough memory to initialize. Or try
> reducing mapred.map.child.java.opts to 256 MB, if your map task can
> execute with that memory.
>
> Regards,
> Uma
>
> ----- Original Message -----
> From: Joris Poort <gpo...@gmail.com>
> Date: Saturday, September 24, 2011 5:50 am
> Subject: Hadoop java mapper -copyFromLocal heap size error
> To: mapreduce-user <mapreduce-user@hadoop.apache.org>
>
>> As part of my Java mapper I have a command that executes some code on
>> the local node and copies a local output file to the Hadoop FS.
>> Unfortunately I'm getting the following output:
>>
>>    "Error occurred during initialization of VM"
>>    "Could not reserve enough space for object heap"
>>
>> I've tried adjusting mapred.map.child.java.opts to -Xmx512M, but
>> unfortunately no luck.
>>
>> When I ssh into the node, I can run the -copyFromLocal command without
>> any issues. The output files are also quite small, around 100 KB.
>>
>> Any help would be greatly appreciated!
>>
>> Cheers,
>>
>> Joris
>>
>
