Ok, so I added the partitions flag, going with
hadoop jar target/giraph-0.1-jar-with-dependencies.jar
org.apache.giraph.examples.SimpleShortestPathsVertex
-Dgiraph.SplitMasterWorker=false -Dgiraph.numComputeThreads=12
-Dhash.userPartitionCount=12 input output 12 1
but still I got no overall
Folks,
I have some of the same questions as Alexandros below. What exactly is a
worker? I am not sure I understood Avery's answer below. I have a 4-node
cluster. Each node has 24 cores. My first node is functioning (in MapReduce
parlance) as both a job tracker and a task tracker.
Hello Bence,
So, you have 96 cores at your disposal. My guess would be that 3 workers
are not enough to use all of them; you should either try with a lot more,
or try to multithread them as Avery said (that is, try 4 workers with 24
threads each). However, as I already reported, I tried this myself
Hi Alexandros,
I increased my number of workers to 30, but my job just hangs at 3%:
./giraph -Dgiraph.useSuperstepCounters=false
-DSimpleShortestPathsVertex.sourceId=100 ../target/giraph.jar
org.apache.giraph.examples.SimpleShortestPathsVertex -if