Hi Jyoti,
You can find a couple of very large graphs in KONECT [1] and on the
website of the Laboratory for Web Algorithmics at the University of
Milan [2]. You will probably have to convert them to an appropriate
format for Giraph.
Best,
Sebastian
[1] http://konect.uni-koblenz.de/
[2]
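As a starting point for the conversion, here is a minimal sketch (file names, the vertex value 0, and the constant edge weight 1 are placeholders) that turns a plain "source destination" edge list into the JSON adjacency format read by Giraph's JsonLongDoubleFloatDoubleVertexInputFormat, i.e. one `[vertexId,vertexValue,[[destId,edgeWeight],...]]` array per line:

```shell
# Toy input: one "src dst" edge per line.
printf '1 2\n1 3\n2 3\n' > edges.txt

# Group edges by source vertex and emit one JSON array per vertex.
# Note: vertices with no outgoing edges are omitted in this sketch.
awk '{ adj[$1] = adj[$1] "[" $2 ",1]," }
     END { for (v in adj) { a = adj[v]; sub(/,$/, "", a);
           print "[" v ",0,[" a "]]" } }' edges.txt | sort > graph.json

cat graph.json
# graph.json now contains:
#   [1,0,[[2,1],[3,1]]]
#   [2,0,[[3,1]]]
```

The `sort` is only there to make the output order deterministic; Giraph does not care about line order.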
Hi Sebastian,
Thanks for the links to the big graphs.
Actually, I want to tell you about a problem I am facing.
Initially I was working with *hadoop 0.20.203*. I built Giraph there and it
was running fine.
Now, to test a very big graph-related problem and to compare the
performance,
Hi Dionysios,
We were going to send an e-mail to the giraph mailing lists soon, but
it's good to see you already discovered the site.
I didn't discover it; your colleague Alexandros presented Okapi at the
RecSys meetup in Berlin :) I'm very eager to see it, especially the
modifications you
Hi Mirko..
Thanks for your reply. All MapReduce programs are running fine on this
system, and it is a YARN setup.
Please guide me on how to build Giraph with this Hadoop version. Do I also
need to install an external ZooKeeper?
Thanks in advance,
Jyoti
On Sat, Mar 1, 2014 at 6:31 PM, Mirko
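For what it's worth, building Giraph for YARN is usually a matter of activating the right Maven profile. A sketch only: the `hadoop_yarn` profile name and the Hadoop version below are assumptions, so verify them against the `<profiles>` section of the pom.xml in your Giraph checkout:

```shell
# Sketch: profile name and hadoop.version are placeholders -- check
# Giraph's pom.xml for the profiles your checkout actually defines.
mvn -Phadoop_yarn -Dhadoop.version=2.2.0 -DskipTests clean package
```

On the ZooKeeper question: as far as I know, Giraph can launch its own internal ZooKeeper when `giraph.zkList` is not set, so a separate external ZooKeeper install is optional rather than required.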
Here is my work log with some steps I need to prep for building Giraph:
Requires Maven 3.x
mvn -version
Install JDK 1.7
http://www.if-not-true-then-false.com/2010/install-sun-oracle-java-jdk-jre-7-on-fedora-centos-red-hat-rhel/
## java ##
sudo alternatives --install /usr/bin/java java
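The `alternatives` line above is cut off; a typical full invocation looks like the sketch below, where the JDK path and the priority number are placeholders to adjust for your actual install:

```shell
# Placeholder JDK path and priority -- point these at wherever your
# JDK 1.7 actually lives.
sudo alternatives --install /usr/bin/java java /usr/java/jdk1.7.0/bin/java 200000
sudo alternatives --config java   # interactively select the 1.7 entry
java -version                     # verify it now reports 1.7.x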
Hi,
I'm trying to process a very big input file (~70GB) through Giraph.
I'm running the Giraph program on a 40-node Linux cluster, but the program
just gets stuck after it reads in a small fraction of the input file.
Although each node has 16GB of memory, it looks like only one node reads the
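One thing worth checking (a sketch, not a diagnosis): the worker count. If the job runs with a single worker, only that worker reads the input splits. GiraphRunner's `-w` flag sets the number of workers; the class names and HDFS paths below are illustrative placeholders:

```shell
# Sketch: run with roughly one worker per node so input splits are read
# in parallel instead of by a single worker. Paths and the computation
# class are placeholders.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/me/input/graph.json \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/me/output \
  -w 39
```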
Hi Folks,
The job was working properly in MR1 without any issue. I am trying to run
a simple CC sample Giraph job on YARN. I have attached the stack trace and
a few errors. Any pointers on the errors below would be really helpful.
*1. BspServiceMaster (YARN profile) is FAILING this task,