Awesome, that worked. Thanks.
To summarize the root cause and resolution as I understand it:
At the end of the first job, Pig listed the task reports for the job, and that 
apparently pushed its heap requirement up to about 1.7G. The existing, default 
heap size for the Pig JVM didn't allow that and gave an out of memory error. 
Setting HADOOP_HEAPSIZE to 8192 raised the max heap size of the Pig process to 
8192m and the problem went away.
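For reference, a minimal sketch of the fix: export the variable in the shell (or in hadoop-env.sh) before invoking Pig, so the client JVM gets the larger heap. The script name below is illustrative, not from the thread.

```shell
# Raise the heap for the Hadoop/Pig client JVM (value in MB).
# The default was too small for Pig's end-of-job task-report fetch.
export HADOOP_HEAPSIZE=8192

# Then launch the Pig script as usual (hypothetical script name):
# pig myscript.pig
```

Note this only sizes the client-side process that submits and monitors the jobs; map and reduce task heaps are controlled separately (e.g. via mapred.child.java.opts).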

Thanks,
Pankaj
On Jun 18, 2012, at 1:23 AM, Aniket Mokashi wrote:

> export HADOOP_HEAPSIZE=<something more than what it is right now>
> 
> Thanks,
> Aniket
> 
> On Sun, Jun 17, 2012 at 11:16 PM, Pankaj Gupta <[email protected]> wrote:
> 
>> Hi,
>> 
>> I am getting an out of memory error while running Pig. I am running a
>> pretty big job with one master node and over 100 worker nodes. Pig divides
>> the execution into two map-reduce jobs. Both jobs succeed with a small
>> data set. With a large data set I get an out of memory error at the end of
>> the first job. This happens right after all the mappers and reducers of
>> the first job are done, before the second job has started. Here is the error:
>> 
>> 2012-06-18 03:15:29,565 [Low Memory Detector] INFO
>> org.apache.pig.impl.util.SpillableMemoryManager - first memory handler
>> call - Collection threshold init = 187039744(182656K) used =
>> 390873656(381712K) committed = 613744640(599360K) max = 699072512(682688K)
>> 2012-06-18 03:15:31,137 [Low Memory Detector] INFO
>> org.apache.pig.impl.util.SpillableMemoryManager - first memory handler
>> call- Usage threshold init = 187039744(182656K) used = 510001720(498048K)
>> committed = 613744640(599360K) max = 699072512(682688K)
>> Exception in thread "IPC Client (47) connection to /10.217.23.253:9001 from
>> hadoop" java.lang.RuntimeException:
>> java.lang.reflect.InvocationTargetException
>> Caused by: java.lang.reflect.InvocationTargetException
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>       at org.apache.hadoop.mapred.TaskReport.<init>(TaskReport.java:46)
>>       at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown
>> Source)
>>       at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>       at
>> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113)
>>       at
>> org.apache.hadoop.io.WritableFactories.newInstance(WritableFactories.java:53)
>>       at
>> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:236)
>>       at
>> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:171)
>>       at
>> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:219)
>>       at
>> org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
>>       at
>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:807)
>>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:742)
>> Exception in thread "Low Memory Detector" java.lang.OutOfMemoryError: Java
>> heap space
>>       at
>> sun.management.MemoryUsageCompositeData.getCompositeData(MemoryUsageCompositeData.java:40)
>>       at
>> sun.management.MemoryUsageCompositeData.toCompositeData(MemoryUsageCompositeData.java:34)
>>       at
>> sun.management.MemoryNotifInfoCompositeData.getCompositeData(MemoryNotifInfoCompositeData.java:42)
>>       at
>> sun.management.MemoryNotifInfoCompositeData.toCompositeData(MemoryNotifInfoCompositeData.java:36)
>>       at sun.management.MemoryImpl.createNotification(MemoryImpl.java:168)
>>       at
>> sun.management.MemoryPoolImpl$CollectionSensor.triggerAction(MemoryPoolImpl.java:300)
>>       at sun.management.Sensor.trigger(Sensor.java:120)
>> 
>> I would really appreciate any suggestions on how to go about debugging and
>> rectifying this issue.
>> 
>> Thanks,
>> Pankaj
> 
> 
> 
> 
> -- 
> "...:::Aniket:::... Quetzalco@tl"