the distributed cache is stored.
Could you provide the stacktrace Giraph is spitting when failing?
On Thu, Sep 12, 2013 at 12:54 AM, Alexander Asplund alexaspl...@gmail.com wrote:
Hi,
I'm still trying to get Giraph to work on a graph that requires more memory than is available. The problem is that when the Workers try to offload partitions, the offloading fails. The DiskBackedPartitionStore fails
Actually, why is it saying it fails to create the directory in the first place, when it is trying to write files?
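On the directory-versus-file confusion: a file cannot be opened for writing until its parent directories exist, so a disk-backed store has to create them first, and that is the step whose failure gets reported. A minimal sketch of that pattern (the path mirrors the `_bsp` layout from the thread but is illustrative, not Giraph's actual code):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class MkdirsDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical partition file, mirroring the _bsp layout from the thread
        File part = new File(System.getProperty("java.io.tmpdir"),
                "_bsp/_partitions/job-demo/part-vertices-0");

        // The parent directories must exist before the file can be written.
        // mkdirs() returns false on failure (and also when the directory
        // already exists), so the store checks isDirectory() as well; a
        // genuine mkdirs() failure is why the error mentions directories.
        File parent = part.getParentFile();
        if (!parent.mkdirs() && !parent.isDirectory()) {
            System.out.println("could not create " + parent);
            return;
        }
        try (FileWriter w = new FileWriter(part)) {
            w.write("vertex data");
        }
        System.out.println("wrote " + part.length() + " bytes");
    }
}
```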
On Sep 12, 2013 3:04 PM, Alexander Asplund alexaspl...@gmail.com wrote:
I can also add that there is no such issue with DiskBackedMessageStore. It
successfully creates a large number
Hi,
I'm still trying to get Giraph to work on a graph that requires more memory than is available. The problem is that when the Workers try to offload partitions, the offloading fails. The DiskBackedPartitionStore fails to create the directory
_bsp/_partitions/job-/part-vertices-xxx (roughly
really fragmented.
There are options that you can set with Java to change the type of garbage collection and how it is scheduled as well.
You might up the heap size slightly - what is the default heap size on your cluster?
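For reference, both knobs mentioned above are usually passed through the task JVM options. A sketch assuming a Hadoop 1.x-style setup; the heap size and GC flags below are illustrative starting points, not tuned values:

```xml
<!-- mapred-site.xml (or -D on the job command line): raise the task heap
     and switch the collector; adjust sizes to your cluster's task slots -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2g -XX:+UseConcMarkSweepGC -verbose:gc</value>
</property>
```

Note that the heap you give each mapper is multiplied by the number of task slots per node, so raising it "slightly" is the safe first step.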
On 9/9/2013 8:33 PM, Alexander Asplund wrote:
A small note: I'm not seeing any partitions directory being formed under _bsp, which is where I have understood that they should be appearing.
Correction: the computation does not actually stall - it does complain a bit that the directories cannot be created and then eventually moves on to the next superstep. I guess this means I'm actually fitting all the data in memory?
On 9/10/13, Alexander Asplund alexaspl...@gmail.com wrote:
Thanks
also at input superstep. Try to decrease the number of partitions kept in memory.
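Concretely, the out-of-core settings are passed as custom arguments to the runner. A sketch assuming the Giraph 1.0-era option names (the job class, input/output formats, and paths are placeholders):

```shell
# Illustrative run enabling out-of-core partitions; MyComputation and the
# -vif/-vof/-vip/-op values are placeholders for your own job
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner MyComputation \
  -ca giraph.useOutOfCoreGraph=true \
  -ca giraph.maxPartitionsInMemory=1 \
  -vif MyInputFormat -vip /input \
  -vof MyOutputFormat -op /output -w 4
```

giraph.maxPartitionsInMemory is the option discussed later in this thread; 1 is the most aggressive setting, so if it still OOMs the problem is likely elsewhere (e.g. messages or input splits).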
On Sat, Sep 7, 2013 at 1:37 AM, Alexander Asplund alexaspl...@gmail.com wrote:
Hi,
I'm trying to process a graph that is about 3 times the size of
available memory. On the other hand, there is plenty of disk
AM, Alexander Asplund alexaspl...@gmail.com wrote:
Thanks for the reply.
I tried setting giraph.maxPartitionsInMemory to 1, but I'm still
getting OOM: GC limit exceeded.
Are there any particular cases the OOC will not be able to handle, or
is it supposed to work in all cases? If the latter
A small note: I'm not seeing any partitions directory being formed
under _bsp, which is where I have understood that they should be
appearing.
On 9/10/13, Alexander Asplund alexaspl...@gmail.com wrote:
Really appreciate the swift responses! Thanks again.
I have now both increased mapper tasks