Unfortunately there are some restrictions that mean I don't really have
them handy, BUT pointing me towards the local disk helped me partially
resolve it. There were permission issues with this directory, but I was
able to get past them by manually creating a separate giraph directory
under the MapReduce local storage (mapred/local/giraph) and setting the
Giraph options to point to <local storage>/giraph/partitions and /messages
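For reference, the invocation I used looked roughly like the following. The option names are the out-of-core settings as I remember them from my setup (treat the exact names and paths as assumptions, and check them against your Giraph version):

```shell
# Hypothetical sketch of pointing Giraph's out-of-core storage at local disk.
# Paths and jar/class names are placeholders for my actual job.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  MyComputationClass \
  -ca giraph.useOutOfCoreGraph=true \
  -ca giraph.useOutOfCoreMessages=true \
  -ca giraph.partitionsDirectory=/mapred/local/giraph/partitions \
  -ca giraph.messagesDirectory=/mapred/local/giraph/messages
```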

Then something strange happens. The job successfully creates exactly 30
directories, and then starts failing again. This happened both times I
ran the job: 30 directories are created in the partitions directory, and
then it subsequently prints something like the following to the task log:

DiskBackedPartitionStorage: offloadPartition: Failed to create directory <...>

...and then no further directories are created after those 30. It will
attempt to create more partition directories, but it keeps failing
after the initial 30. It is quite strange.
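In case it helps anyone debug the same symptom: File.mkdirs(), which this style of offloading code typically relies on, only returns false on failure and hides the reason. A quick way to surface *why* directory creation stops working (permissions, quota, "too many open files", etc.) is to retry the same path with java.nio, which throws a descriptive exception. This is just a standalone probe I'd run on the affected node, not Giraph code:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MkdirProbe {
    public static void main(String[] args) throws Exception {
        // Pass the directory that Giraph failed to create, e.g. a
        // path under mapred/local/giraph/partitions (placeholder below).
        String dir = args.length > 0 ? args[0] : "/tmp/giraph-probe/partitions/part-0";

        // File.mkdirs() just returns a boolean, so a failure here tells
        // us nothing about the cause.
        boolean ok = new File(dir).mkdirs();
        System.out.println("mkdirs returned: " + ok);

        // Files.createDirectories() throws AccessDeniedException,
        // FileSystemException, etc., so the underlying OS error message
        // becomes visible instead of a silent false.
        if (!ok && !new File(dir).isDirectory()) {
            Files.createDirectories(Paths.get(dir));
        }
    }
}
```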

On 9/12/13, Claudio Martella <claudio.marte...@gmail.com> wrote:
> Giraph does not offload partitions or messages to HDFS in the out-of-core
> module. It uses the local disk on the computing nodes. By default, it uses
> the tasktracker's local directory, where, for example, the distributed
> cache is stored.
>
> Could you provide the stacktrace Giraph is spitting when failing?
>
>
> On Thu, Sep 12, 2013 at 12:54 AM, Alexander Asplund
> <alexaspl...@gmail.com>wrote:
>
>> Hi,
>>
>> I'm still trying to get Giraph to work on a graph that requires more
>> memory than is available. The problem is that when the Workers try to
>> offload partitions, the offloading fails. The DiskBackedPartitionStore
>> fails to create the directory
>> _bsp/_partitions/job-xxxx/part-vertices-xxx (roughly from recall).
>>
>> The input or computation will then continue for a while, which I
>> believe is because it is still managing to hold everything in memory -
>> but at some point it reaches the limit where there simply is no more
>> heap space, and it crashes with OOM.
>>
>> Has anybody had this problem with giraph failing to make HDFS
>> directories?
>>
>
>
>
> --
>    Claudio Martella
>    claudio.marte...@gmail.com
>


-- 
Alexander Asplund
