As you described below, the MR job uses a 6 GB file. How many tasks does this 
job have, and how much input does each task get?
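As a rough reference, the map-task count can be estimated from input size and block size (this assumes the default 128 MB HDFS block size, which may differ on your cluster):

```shell
# Hypothetical estimate: map tasks ~= input size / HDFS block size
FILE_MB=6144      # 6 GB input file
BLOCK_MB=128      # assumed dfs.blocksize; check your cluster's actual value
echo $((FILE_MB / BLOCK_MB))   # number of map tasks, each reading ~128 MB
```

With these assumed values the job would run about 48 map tasks; a different block size changes the count proportionally.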

Is there any chance the tasks are holding more data in memory? Could you check 
your map function to see why it cannot run within 2 GB of memory?

Could you also check the failing tasks' log files? They will give you a better 
idea of what is causing the problem.

Thanks
Devaraj k

From: Ramya S [mailto:ram...@suntecgroup.com]
Sent: 25 June 2013 15:50
To: user@hadoop.apache.org
Subject: RE: Error:java heap size

Hi,

I have set the properties in mapred-site.xml as follows:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2048M</value>
  </property>
But I am still getting the same error in the AM logs, although the mapping 
process now progresses further.

I have found the property "mapreduce.task.io.sort.mb" set to 512. 
Would changing this value help resolve the error?

Thanks,
Ramya

________________________________
From: Devaraj k [mailto:devara...@huawei.com]
Sent: Tue 6/25/2013 3:08 PM
To: user@hadoop.apache.org
Subject: RE: Error:java heap size
Hi Ramya,

We need to change the -Xmx value for your job's tasks according to the memory 
allocated for the map/reduce containers.

You can pass the -Xmx value for the map and reduce YARN child processes using 
the configurations "mapreduce.map.java.opts" and "mapreduce.reduce.java.opts".

If you are allocating 2 GB for the map container, you can probably pass the same 
value as -Xmx in mapreduce.map.java.opts, and likewise for the reducer.
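As an illustration, one common variant of this setup leaves some headroom between the container size and the task heap, since the JVM also uses non-heap memory. The 1638 MB figure below is only an example (roughly 80% of a 2048 MB container), not a tuned recommendation:

```xml
<!-- Sketch: 2 GB map container with the task heap set below the container limit -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <!-- heap somewhat below 2048 MB, leaving room for non-heap JVM memory -->
  <value>-Xmx1638m</value>
</property>
```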


Thanks
Devaraj k

From: Ramya S [mailto:ram...@suntecgroup.com]
Sent: 25 June 2013 14:39
To: user@hadoop.apache.org
Subject: RE: Error:java heap size


Hi,

Error is in AM log, which is as follows:

  *   FATAL [IPC Server handler 10 on 49363] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1372143291407_0003_m_000001_0 - exited : Java heap space

  *   INFO [IPC Server handler 10 on 49363] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1372143291407_0003_m_000001_0: Error: Java heap space

  *   INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1372143291407_0003_m_000001_0: Error: Java heap space

Thanks,
Ramya
________________________________
From: Devaraj k [mailto:devara...@huawei.com]
Sent: Tue 6/25/2013 2:09 PM
To: user@hadoop.apache.org
Subject: RE: Error:java heap size
Hi Ramya,

Where did you get the java heap size error?

Do you see the error in the client-side, RM, or AM log? What is the detailed error?

Thanks
Devaraj k

From: Ramya S [mailto:ram...@suntecgroup.com]
Sent: 25 June 2013 13:10
To: user@hadoop.apache.org
Subject: Error:java heap size

Hi,

I am using the hadoop-2.0.0-cdh4.3.0 version (YARN), and when I tried to run an MR 
job (6 GB file) I got the following error:

ERROR: Java heap space

Please give me a solution to this...

Ramya
