Hi all,
I'm seeing the same problem. I'm pasting here part of the logs that
looks more relevant in case it helps. This appears on the log of every
hadoop slave node.
2013-09-23 12:34:29,908 INFO
org.apache.giraph.comm.SendPartitionCache: SendPartitionCache:
maxVerticesPerTransfer = 1
Weird.
This is the code:
if (!parent.exists()) {
  if (!parent.mkdirs()) {
    LOG.error("offloadPartition: Failed to create directory " +
        parent.getAbsolutePath());
  }
}
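For reference, mkdirs() returning false usually means one of a few distinct things, and it helps to tell them apart before blaming permissions. A minimal stand-alone sketch (the MkdirsCheck class and diagnose helper are mine, not Giraph code):

```java
import java.io.File;

public class MkdirsCheck {
    // Distinguishes the common reasons File.mkdirs() returns false:
    // the path already exists (possibly created concurrently by another
    // task, which is harmless), or it cannot be created at all, which
    // typically points to a permissions problem on an ancestor directory.
    static String diagnose(File parent) {
        if (parent.mkdirs()) {
            return "created";
        }
        if (parent.isDirectory()) {
            // Another task may have created it between exists() and
            // mkdirs(); mkdirs() returns false here, but it is not an error.
            return "already-exists";
        }
        if (parent.exists()) {
            return "exists-but-not-a-directory";
        }
        // Nothing exists and creation failed: likely missing write
        // permission somewhere along the path.
        return "cannot-create";
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                "giraph-demo-" + System.nanoTime());
        System.out.println(diagnose(dir)); // "created"
        System.out.println(diagnose(dir)); // "already-exists"
        dir.delete();
    }
}
```

Note the "already-exists" case: if several tasks race to create the same partition directory, all but one will see mkdirs() return false even though the directory is there, which would match seeing these errors while the directories still fill up.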
The question is why parent.mkdirs() is returning false. It could be a
permissions problem. Could you
Unfortunately there are some restrictions that mean I don't have them
handy, BUT pointing me towards the local disk helped me partially
resolve it. There are rights issues with that directory, but I was able
to get past them by manually creating a separate giraph directory in
the mapreduce local storage.
Actually, I take that back. It seems it does succeed in creating
partitions - it just struggles sometimes. Should I be worried about
these errors if the partition directories seem to be filling up?
On Sep 11, 2013 6:38 PM, Claudio Martella claudio.marte...@gmail.com
wrote:
Giraph does not
Actually, why does it say it fails to create a directory in the first place,
when it is trying to write files?
On Sep 12, 2013 3:04 PM, Alexander Asplund alexaspl...@gmail.com wrote:
I can also add that there is no such issue with DiskBackedMessageStore. It
successfully creates a large number of
Hi,
I'm still trying to get Giraph to work on a graph that requires more
memory than is available. The problem is that when the workers try to
offload partitions, the offloading fails. The DiskBackedPartitionStore
fails to create the directory
_bsp/_partitions/job-/part-vertices-xxx (roughly
Giraph does not offload partitions or messages to HDFS in the out-of-core
module. It uses the local disk on the computing nodes. By default, it uses
the tasktracker's local directory, where, for example, the distributed
cache is stored.
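To rule out a rights problem quickly, you can run a small probe on each slave node against the configured local directory. A minimal sketch, assuming only the JDK (the LocalDirCheck class and the probe approach are mine, not part of Giraph; substitute your tasktracker's actual mapred.local.dir path):

```java
import java.io.File;
import java.io.IOException;

public class LocalDirCheck {
    // Returns true only if we can actually create and delete a file under
    // dir. This is a stronger check than File.canWrite(), which can be
    // misleading on some filesystems.
    static boolean writable(File dir) {
        if (!dir.isDirectory()) {
            return false;
        }
        try {
            File probe = File.createTempFile("giraph-probe", null, dir);
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical default: falls back to the JVM temp dir when no
        // path is given on the command line.
        File local = new File(args.length > 0 ? args[0]
                : System.getProperty("java.io.tmpdir"));
        System.out.println(local + " writable: " + writable(local));
    }
}
```

If this reports not writable for the directory Giraph spills to, that would explain the mkdirs() failures above independently of anything Giraph does.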
Could you provide the stack trace Giraph is spitting out when it fails?