Hi Franco Maria,

Try with:
MAP_MEMORY=1500
JAVA_CHILD_OPTS=-Xmx1G
 -Dmapred.job.map.memory.mb="${MAP_MEMORY}" \
 -Dmapred.map.child.java.opts="${JAVA_CHILD_OPTS}" \
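In case it helps, here is a rough sketch of where those flags would go in the full command; the jar name, driver class, and input/output paths below are just placeholders, and it assumes your driver goes through ToolRunner so the -D options get picked up:

MAP_MEMORY=1500
JAVA_CHILD_OPTS="-Xmx1G"

# placeholder jar/class/paths -- substitute your own
hadoop jar pagerank.jar PageRankDriver \
 -Dmapred.job.map.memory.mb="${MAP_MEMORY}" \
 -Dmapred.map.child.java.opts="${JAVA_CHILD_OPTS}" \
 /path/to/input /path/to/output

The "GC overhead limit exceeded" in the TaskReporter thread generally just means the map child JVM is running out of heap, so giving it a larger -Xmx (with a matching slot size via mapred.job.map.memory.mb) is the usual remedy.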

Cheers,
--
Gianmarco



On Thu, Sep 13, 2012 at 11:01 PM, Franco Maria Nardini <francomaria.nard...@isti.cnr.it> wrote:

> Hi all,
>
> I am running the PageRank code on a single-machine Hadoop installation.
> In particular, I am running the code with four workers on a graph of
> 100,000 nodes. I am getting this exception:
>
> 2012-09-13 22:54:38,379 INFO org.apache.hadoop.mapred.Task: Communication exception: java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessList(ProcfsBasedProcessTree.java:365)
>         at org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:138)
>         at org.apache.hadoop.util.LinuxResourceCalculatorPlugin.getProcResourceValues(LinuxResourceCalculatorPlugin.java:401)
>         at org.apache.hadoop.mapred.Task.updateResourceCounters(Task.java:808)
>         at org.apache.hadoop.mapred.Task.updateCounters(Task.java:830)
>         at org.apache.hadoop.mapred.Task.access$600(Task.java:66)
>         at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:666)
>         at java.lang.Thread.run(Thread.java:662)
>
> It seems to be a problem related to the communication layer. Do I have
> to specify or modify some Hadoop options to avoid this?
>
> Best,
>
> Franco Maria
>