Looking at your logs, there's a NullPointerException; it looks like a bug to
me. What version are you running? What command are you using to run the job?
On Fri, Oct 18, 2013 at 9:03 AM, Jianqiang Ou oujianqiang...@gmail.com wrote:
Hi Claudio,
The version of Hadoop should be 0.20.203.0, but I am not quite sure about
the version of Giraph; I got it from:
git clone https://github.com/apache/giraph.git
and the command I used is something like the one below, but I might also
have used the giraph.maxPartitionsInMemory=1 option at
Thanks, I just tried another dataset, which can be successfully handled
by my cluster in memory. However, exceptions still occurred with the
-Dgiraph.useOutOfCoreGraph=true option, but it works fine with only the
-Dgiraph.useOutOfCoreMessages=true
option, so do you still think it is the dir
Thanks very much. So are you saying that if I use -Dgiraph.maxPartitionsInMemory
and -Dgiraph.maxMessagesInMemory to set them both to smaller numbers, then it
might work?
Thanks again,
Jian
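For reference, a run with both knobs lowered could look like the sketch below. The jar name and the specific values (1 partition, 1000000 messages) are illustrative assumptions, not taken from this thread; tune them to your cluster.

```shell
# Sketch only: jar path and the numeric values are assumptions,
# not confirmed anywhere in this thread.
$HADOOP_HOME/bin/hadoop jar giraph-examples.jar \
  org.apache.giraph.GiraphRunner \
  -Dgiraph.useOutOfCoreGraph=true \
  -Dgiraph.maxPartitionsInMemory=1 \
  -Dgiraph.useOutOfCoreMessages=true \
  -Dgiraph.maxMessagesInMemory=1000000 \
  ...
```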
On Thu, Oct 17, 2013 at 12:56 AM, Jyotirmoy Sundi sundi...@gmail.comwrote:
You need to tune it per your cluster.
Apart from these, you might also want to check the permissions of the dir path
where offloading of vertices and messages happens.
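A quick way to check this is sketched below; the path is a hypothetical example (where the offload directory actually lives, e.g. what giraph.partitionsDirectory resolves to, depends on your configuration), and the point is just that the user running the tasks must be able to write there.

```shell
# Hypothetical offload path: substitute wherever your configuration
# points the vertex/message spill directory on the task nodes.
ls -ld /tmp/giraph/partitions
# If the directory is missing or not writable by the Hadoop user,
# create it and open it up (sticky bit, like /tmp):
mkdir -p /tmp/giraph/partitions
chmod 1777 /tmp/giraph/partitions
```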
Ideally, Giraph is not meant for out-of-core: if your graph is much bigger
than what the cluster can handle in memory, using Giraph defeats the purpose in
this case.
On Thu, Oct
Hi, I have a question about out-of-core Giraph. It is said that, in
order to use the disk to store the partitions, we need to set
giraph.useOutOfCoreGraph=true, but where should I put this statement?
BTW, I am just trying to use the PageRank or shortest-paths example to test
the out of core
Put it as -Dgiraph.useOutOfCoreMessages=true
-Dgiraph.useOutOfCoreGraph=true after GiraphRunner,
like:
hadoop jar giraph.jar org.apache.giraph.GiraphRunner
-Dgiraph.useOutOfCoreMessages=true
-Dgiraph.useOutOfCoreGraph=true ...
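To make the "..." concrete, a fuller command line might look like the sketch below. The jar name, example computation class, and I/O format classes are assumptions based on the Giraph examples module of that era; adjust them to match your actual build and paths.

```shell
# Sketch: class and jar names below are assumptions, check your build.
# The -D properties must come right after GiraphRunner, before the
# computation class and its arguments.
hadoop jar giraph-examples-with-dependencies.jar \
  org.apache.giraph.GiraphRunner \
  -Dgiraph.useOutOfCoreMessages=true \
  -Dgiraph.useOutOfCoreGraph=true \
  org.apache.giraph.examples.SimpleShortestPathsVertex \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/me/input/tiny_graph.txt \
  -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/me/output/shortestpaths \
  -w 2
```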
On Wed, Oct 16, 2013 at 7:29 AM, Jianqiang Ou wrote:
Hi Sundi,
I just tried your method, but somehow the job failed; attached is the
history of the job. It ran fine without the out-of-core options. Do you
have any clue why that is?
The command I used to run the program is below:
$HADOOP_HOME/bin/hadoop jar
You need to tune it per your cluster. This is what is mentioned in the docs:
*It is difficult to decide a general policy to use out-of-core capabilities*,
as it depends on the behavior of the algorithm and the input graph. The
exact number of partitions and messages to keep in memory depends on the