Hello Alok,
Regarding question 3.a: yes, the framework will indeed try to allocate the
workers locally.
Each worker is actually a map-only task. Because the Hadoop framework aims
for data locality, it will try to run the map tasks (and thus the workers)
where the input data is stored.
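To make the idea concrete, here is a toy sketch of locality-aware assignment. This is not Giraph or Hadoop API; the names (`Split`, `assignTasks`) are hypothetical, and the greedy policy is only a simplified stand-in for what the Hadoop scheduler actually does:

```java
import java.util.*;

public class LocalityDemo {
    // A split lives on a set of hosts (think HDFS block replicas).
    record Split(String id, Set<String> hosts) {}

    // Greedy sketch: prefer a node that already stores the split's data;
    // otherwise fall back to any remaining node (a remote read).
    static Map<String, String> assignTasks(List<Split> splits, List<String> nodes) {
        Map<String, String> assignment = new LinkedHashMap<>();
        Iterator<String> fallback = nodes.iterator();
        for (Split s : splits) {
            String chosen = null;
            for (String n : nodes) {
                if (s.hosts().contains(n)) { chosen = n; break; } // data-local
            }
            if (chosen == null && fallback.hasNext()) chosen = fallback.next();
            assignment.put(s.id(), chosen);
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<Split> splits = List.of(
            new Split("split-0", Set.of("nodeA", "nodeB")),
            new Split("split-1", Set.of("nodeC")));
        // Both splits get a data-local node.
        System.out.println(assignTasks(splits, List.of("nodeA", "nodeB", "nodeC")));
    }
}
```

Since each Giraph worker is a map task over an input split, the same preference applies to where workers end up running.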
Hi,
I got the following exception when I ran a Giraph-1.0.0 PageRank job on a
60-machine cluster over 28GB of input data:
java.lang.IllegalStateException: run: Caught an unrecoverable
exception resolveMutations: Already has missing vertex on this worker
for 20464109
at or