Did you extend the heap available to the mapper tasks, e.g. through
mapred.child.java.opts?
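For example, in mapred-site.xml (the 4g heap below is just an
illustrative value; on Hadoop 2 the per-task equivalents are
mapreduce.map.java.opts and mapreduce.reduce.java.opts):

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx4g</value>
    </property>

You can also pass it per job on the hadoop jar command line, e.g.
-Dmapred.child.java.opts=-Xmx4g.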


On Tue, Sep 10, 2013 at 12:50 AM, Alexander Asplund
<alexaspl...@gmail.com>wrote:

> Thanks for the reply.
>
> I tried setting giraph.maxPartitionsInMemory to 1, but I'm still
> getting OutOfMemoryError: GC overhead limit exceeded.
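>
> For reference, I'm launching roughly like this (the jar, class, and
> path names are placeholders, not my actual ones):
>
>   hadoop jar my-giraph-job.jar org.apache.giraph.GiraphRunner \
>       my.app.MyComputation \
>       -vif my.app.MyVertexInputFormat \
>       -vip /input -op /output -w 4 \
>       -ca giraph.useOutOfCoreGraph=true \
>       -ca giraph.maxPartitionsInMemory=1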
>
> Are there any particular cases the OOC will not be able to handle, or
> is it supposed to work in all cases? If the latter, it might be that I
> have made some configuration error.
>
> I do have one concern that might indicate I have done something wrong:
> to allow OOC to activate without crashing, I had to modify the trunk
> code. This was because Giraph relied on guava-12, and
> DiskBackedPartitionStore used hashInt() - a method which does not
> exist in guava-11, which hadoop 2 depends on. At runtime, guava-11 was
> being used.
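>
> (An alternative to patching trunk might have been to shade Guava into
> the job jar so the guava-12 classes don't clash with Hadoop's
> guava-11 - a sketch, assuming a Maven build, using the
> maven-shade-plugin's <relocations> section:
>
>   <relocation>
>     <pattern>com.google.common</pattern>
>     <shadedPattern>shaded.com.google.common</shadedPattern>
>   </relocation>
>
> but I haven't tried that.)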
>
> I suppose this problem might indicate I'm submitting the job using
> the wrong binary. Currently I am including the Giraph dependencies in
> the jar, and running it using hadoop jar.
>
> On 9/7/13, Claudio Martella <claudio.marte...@gmail.com> wrote:
> > OOC is also used during the input superstep. Try decreasing the
> > number of partitions kept in memory.
> >
> >
> > On Sat, Sep 7, 2013 at 1:37 AM, Alexander Asplund
> > <alexaspl...@gmail.com>wrote:
> >
> >> Hi,
> >>
> >> I'm trying to process a graph that is about 3 times the size of
> >> available memory. On the other hand, there is plenty of disk space. I
> >> have enabled the giraph.useOutOfCoreGraph property, but it still
> >> crashes with OutOfMemoryError: GC overhead limit exceeded when I run
> >> my job.
> >>
> >> I'm wondering if the spilling is supposed to work during the input
> >> step. If so, are there any additional steps that must be taken to
> >> ensure it functions?
> >>
> >> Regards,
> >> Alexander Asplund
> >>
> >
> >
> >
> > --
> >    Claudio Martella
> >    claudio.marte...@gmail.com
> >
>
>
> --
> Alexander Asplund
>



-- 
   Claudio Martella
   claudio.marte...@gmail.com
