I'm also trying to implement it, but with poor results. My target is
betweenness centrality.
The only thing I did was generate Giraph code through Green-Marl, but
that is very far from being runnable code.
Keep us updated if there is progress.
But if the complexity is quadratic in terms of space, it's
Hi Sebastian,
I read the article; it's very hard, at least for me, to implement the
pseudo-code algorithms for Effective Closeness and LineRank.
Do you know of any implementations in Apache Hadoop (MapReduce) or in Apache
Giraph?
2013/10/16 Silvio Di gregorio silvio.digrego...@gmail.com
thank you
Hi, I have a question about out-of-core Giraph. It is said that, in
order to use disk to store the partitions, we need to set
giraph.useOutOfCoreGraph=true, but where should I put this option?
BTW, I am just trying to use the PageRank or shortest-paths example to test
out-of-core.
Put them as -Dgiraph.useOutOfCoreMessages=true
-Dgiraph.useOutOfCoreGraph=true after GiraphRunner,
like:
hadoop jar giraph.jar org.apache.giraph.GiraphRunner
-Dgiraph.useOutOfCoreMessages=true
-Dgiraph.useOutOfCoreGraph=true ...
On Wed, Oct 16, 2013 at 7:29 AM, Jianqiang Ou
Thanks Sebastian for your reply.
Would you please help me a little more?
Suppose my algorithm has two parts. After all vertices have executed the
first part, they need to execute the other part. Is that possible?
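The usual way to do this in Giraph is to branch on the superstep counter inside compute(), so each vertex runs part one for the first N supersteps and part two afterwards. Here is a minimal self-contained sketch of that pattern (the class name and the phase boundary PHASE_ONE_STEPS are made up for illustration; in real Giraph code the counter comes from getSuperstep()):

```java
// Sketch of the two-phase pattern used in Giraph compute() methods:
// branch on the superstep counter to decide which part of the
// algorithm to run. Self-contained model, not actual Giraph API.
public class TwoPhaseSketch {
    // Assumed boundary: supersteps 0..2 run part one, the rest part two.
    static final long PHASE_ONE_STEPS = 3;

    // Decides which part of the algorithm a vertex runs this superstep.
    // In Giraph this argument would be getSuperstep().
    static String phaseFor(long superstep) {
        return superstep < PHASE_ONE_STEPS ? "part1" : "part2";
    }

    public static void main(String[] args) {
        for (long s = 0; s < 5; s++) {
            System.out.println("superstep " + s + " -> " + phaseFor(s));
        }
    }
}
```

If part two must not start until every vertex has finished part one, the superstep barrier already guarantees that, since all vertices advance supersteps together.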
On Wed, Oct 16, 2013 at 9:32 AM, Sebastian Schelter s...@apache.org wrote:
Hi
Hey Guys,
I've only been using Giraph for a few days, so I'm very new to it. I'm
currently using Giraph 1.0.0. I'm getting the error below when I try
to send an ArrayListWritableText message. The error happens between
supersteps: if you run the sample code I've included, "Superstep 1"
never gets printed.
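One thing worth checking: between supersteps the framework deserializes messages by reflectively creating an empty instance of the message class and calling readFields() on it, so the class needs a working no-arg constructor and must know its element type. The sketch below is not Giraph code, just a JDK-only model of that round-trip (the TextList and RoundTrip names are made up) showing the shape a list-of-text message has to serialize in:

```java
import java.io.*;

// Self-contained model of the message round-trip the framework performs
// between supersteps: serialize, then reflectively construct an empty
// instance and call readFields() on the bytes.
public class RoundTrip {
    // Toy list-of-strings message with the write/read shape that
    // ArrayListWritable-style classes use: a count, then the elements.
    public static class TextList {
        public final java.util.ArrayList<String> items = new java.util.ArrayList<>();

        public TextList() { }  // required: reflection calls this before readFields

        void write(DataOutput out) throws IOException {
            out.writeInt(items.size());
            for (String s : items) out.writeUTF(s);
        }

        void readFields(DataInput in) throws IOException {
            items.clear();
            int n = in.readInt();
            for (int i = 0; i < n; i++) items.add(in.readUTF());
        }
    }

    // Serialize, then rebuild via no-arg constructor + readFields,
    // mimicking what happens to a message between supersteps.
    public static TextList roundTrip(TextList msg) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            msg.write(new DataOutputStream(buf));
            TextList copy = TextList.class.getDeclaredConstructor().newInstance();
            copy.readFields(new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray())));
            return copy;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        TextList msg = new TextList();
        msg.items.add("hello");
        msg.items.add("superstep 1");
        System.out.println(roundTrip(msg).items);
    }
}
```

If the message subclass lacks a public no-arg constructor, or doesn't tell the framework which element class to instantiate, this round-trip is exactly where it blows up.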
Hi.
I have data of around 4B vertices and 13548192791 edges, with an
average out-degree of about 4, so roughly 4B * 40 bytes = 160GB. When I load
this into memory across 165 mappers, each with 6GB of memory, all memory gets
occupied. I am confused here, as we have around 1TB of memory in total. What is it that's
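A quick back-of-envelope check of those numbers (the 40 bytes/vertex figure is the question's own assumption; in practice Java object headers, references, and boxed Writables commonly inflate the in-memory footprint several times over the raw data size, which is why 160GB of raw data can exhaust far more than 160GB of heap):

```java
// Back-of-envelope heap estimate for the numbers in the question.
// bytesPerVertex = 40 is the questioner's assumption (vertex + ~4 edges);
// JVM per-object overhead usually multiplies the real footprint.
public class HeapEstimate {
    static final long VERTICES = 4_000_000_000L;
    static final long BYTES_PER_VERTEX = 40;   // assumed, from the question
    static final long WORKERS = 165;
    static final long HEAP_PER_WORKER_GIB = 6;

    // Raw serialized data size in GiB.
    static long rawDataGiB() {
        return VERTICES * BYTES_PER_VERTEX / (1L << 30);
    }

    // Total heap available across all workers in GiB.
    static long clusterHeapGiB() {
        return WORKERS * HEAP_PER_WORKER_GIB;
    }

    public static void main(String[] args) {
        System.out.println("raw data   : ~" + rawDataGiB() + " GiB");
        System.out.println("total heap : ~" + clusterHeapGiB() + " GiB");
    }
}
```

So the cluster has roughly 990 GiB of heap against ~149 GiB of raw data; a 6-7x in-memory blow-up, which is plausible for object-heavy graph representations, would already fill it.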
Hi Sundi,
I just tried your method, but somehow the job failed; attached is the
history of the job. It ran fine without the out-of-core options. Do you
have any clue why that is?
The command I used to run the program is below:
$HADOOP_HOME/bin/hadoop jar
You need to tune it per your cluster. This is what is mentioned in the docs:
*It is difficult to decide a general policy to use out-of-core
capabilities*, as it depends on the behavior of the algorithm and the input
graph. The exact number of partitions and messages to keep in memory depends on the