Hi,
by default, how many threads are used for the compute method? I thought
that Giraph would automatically use multiple threads by default, but then I
stumbled onto this log message:
2013-09-11 11:51:44,501 INFO org.apache.giraph.graph.GraphTaskManager:
execute: 6 partitions to process with 1
By default, Giraph uses one compute thread per worker. It uses multiple
threads for I/O (Netty, etc.). The number of compute threads you can afford
depends on the number of workers per machine: imagine a machine in your
Hadoop cluster with 8 cores and 8 mapper tasks (roughly the basic setup).
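If you want more than one compute thread per worker, Giraph exposes that as a custom argument. A rough sketch of how you might pass it on the command line (the option name giraph.numComputeThreads is my assumption here; check GiraphConstants in your Giraph version):

```shell
# Run the shortest-paths example with 4 compute threads per worker.
# The -ca option name is assumed; verify it against your Giraph version.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /input/tiny_graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /output/shortestpaths \
  -w 1 \
  -ca giraph.numComputeThreads=4
```

With that set, the log line above should report the configured number of compute threads instead of 1.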
Hi,
When running in pseudo-distributed mode, I can successfully run the
examples.
However, when running in the distributed scenario (Shortest Paths), I
can't get any output for the example.
Using the command hadoop fs -cat FOLDER/FILE/*, only a folder
named _logs appears.
Scenario:
- Master is a
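One thing worth checking first is whether the output directory contains any part files at all, separate from the _logs folder. A quick sketch (the paths are placeholders; substitute your own output directory):

```shell
# List everything under the job's output directory; a successful Giraph
# job typically writes part-* files alongside the _logs folder.
hadoop fs -ls /user/hadoop/output/shortestpaths

# Concatenate only the part files, skipping _logs.
hadoop fs -cat /user/hadoop/output/shortestpaths/part-*
```

If only _logs is present, the job most likely failed or wrote nothing, and the task logs are the next place to look.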
Hi,
I'm still trying to get Giraph to work on a graph that requires more
memory than is available. The problem is that when the workers try to
offload partitions, the offloading fails. The DiskBackedPartitionStore
fails to create the directory
_bsp/_partitions/job-/part-vertices-xxx (roughly
Giraph does not offload partitions or messages to HDFS in the out-of-core
module. It uses the local disk of the compute nodes. By default, it uses
the TaskTracker's local directory, where, for example, the distributed
cache is stored.
Could you provide the stack trace Giraph spits out when it fails?
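For reference, the out-of-core behavior is driven by a few custom arguments; a hedged sketch of what enabling it might look like (option names are my assumption from the out-of-core module of that era, so verify them against GiraphConstants in your version):

```shell
# Enable out-of-core graph support and cap how many partitions stay in
# memory; the partitions directory is on the node's local disk, not HDFS.
# Option names assumed; check your Giraph version.
-ca giraph.useOutOfCoreGraph=true \
-ca giraph.maxPartitionsInMemory=10 \
-ca giraph.partitionsDirectory=_bsp/_partitions
```

If the directory creation fails, it is also worth checking that the mapred local directories on each node are writable and have free space.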