Hi Nishant,

It looks like there is not enough memory to hold the data. How much aggregate memory is available in your cluster?

Thanks,
Khaled
Hi,
I am using Giraph-1.1.0 to do large graph processing. I was trying to run a
hashMin (WCC) algorithm on a large graph, but it failed with an out-of-memory
error. I thought the out-of-core option might help, but it did not.
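As an aside on the algorithm itself: hashMin is just repeated min-label propagation until convergence. A minimal single-machine Java sketch of that superstep logic (illustrative only, not the Giraph API):

```java
import java.util.*;

// Minimal hashMin connected-components sketch: every vertex repeatedly
// adopts the smallest label seen among itself and its neighbors until
// no label changes (each pass mirrors one Giraph superstep).
public class HashMin {
    public static int[] components(int n, int[][] edges) {
        int[] label = new int[n];
        for (int v = 0; v < n; v++) label[v] = v; // start with own id
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int[] e : edges) {
                int min = Math.min(label[e[0]], label[e[1]]);
                if (label[e[0]] != min) { label[e[0]] = min; changed = true; }
                if (label[e[1]] != min) { label[e[1]] = min; changed = true; }
            }
        }
        return label; // vertices with equal labels share a component
    }

    public static void main(String[] args) {
        int[][] edges = {{0, 1}, {1, 2}, {3, 4}};
        System.out.println(Arrays.toString(components(5, edges)));
        // prints [0, 0, 0, 3, 3]
    }
}
```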
Is there any advice about how to enable out-of-core processing?
I followed
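For what it's worth, out-of-core support in Giraph 1.1 is usually enabled through custom arguments on the job. The option names below are what I believe the 1.1 release uses, and the computation class is a placeholder, so please verify both against your version:

```shell
# Enable out-of-core graph partitions and messages (Giraph 1.1 option
# names; verify against your release). -ca passes custom arguments.
# my.app.HashMinComputation is a hypothetical computation class.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  my.app.HashMinComputation \
  -ca giraph.useOutOfCoreGraph=true \
  -ca giraph.maxPartitionsInMemory=8 \
  -ca giraph.useOutOfCoreMessages=true \
  -ca giraph.maxMessagesInMemory=1000000
```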
Hi Udbhav,
Please tell us more about your cluster setup, and your Hadoop version and
configuration.
Thanks,
-Khaled
On Fri, Jul 24, 2015 at 9:16 AM, Udbhav Agarwal udbhav.agar...@syncoms.com
wrote:
Hi,
I am running a Giraph job on 100,000 connected vertices for shortest path
calculation.
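On the computation itself: Pregel-style single-source shortest paths relaxes distances superstep by superstep until no distance improves. A minimal single-machine Java sketch of that logic (illustrative only, not the Giraph API):

```java
import java.util.*;

// Pregel-style SSSP sketch: each iteration plays the role of a superstep
// in which every vertex offers (its distance + edge weight) to neighbors;
// the loop stops once no distance improves, like vertices voting to halt.
public class Sssp {
    public static int[] shortestPaths(int n, int[][] edges, int source) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE); // unreachable by default
        dist[source] = 0;
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int[] e : edges) { // e = {from, to, weight}
                if (dist[e[0]] != Integer.MAX_VALUE && dist[e[0]] + e[2] < dist[e[1]]) {
                    dist[e[1]] = dist[e[0]] + e[2];
                    changed = true;
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        int[][] edges = {{0, 1, 1}, {1, 2, 1}, {0, 2, 5}};
        System.out.println(Arrays.toString(shortestPaths(3, edges, 0)));
        // prints [0, 1, 2]
    }
}
```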
In my experience, the Blocks framework is much easier to use and it naturally
suits your needs.
On Wed, Jun 3, 2015 at 11:02 AM, Khaled Ammar khaled.am...@gmail.com
wrote:
Thank you Sergey,
This is exactly what I am looking for. I would like to run multiple
computation classes
Why do you want to use I/O formats? You can use
different computation classes within one application and you don't need to
do I/O between them. All intermediate results can be kept in vertex and
edge data.
Regards,
Sergey Edunov
On Tue, Jun 2, 2015 at 12:46 PM, Khaled Ammar khaled.am
Hi all,
There are InMemory input and output formats for Giraph. These could be
useful when a specific computation should be executed until convergence and
then another computation is needed. Instead of writing intermediate results
to HDFS and reading them again, the InMemoryVertex format sounds very
I don't think Giraph is suitable for this task, because in this case you
probably want to visit graph vertices in order, which leaves no chance for
parallelization. In fact, even running a shortest path query on such a graph
will not perform as well as it would on web or social network graphs.
These