Thanks for the quick response.

Partitioning the graph into subgraphs and later combining the results is
too complicated for my purposes; I would prefer a simpler method.

Currently, I do not want to parallelize the breadth-first search from a
single source. I just want to run 100 breadth-first searches from 100
source nodes as 100 map tasks running in parallel.
The problem is that these 100 tasks do not seem to run in parallel;
they appear to run sequentially. I have searched online, and some
people mention that all tasks are put into a queue to wait for free
MapReduce slots, so the cause might be that there are not enough slots.
How can I deal with this problem?
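From what I found online, the number of map tasks each TaskTracker will
run at once is capped in mapred-site.xml. A sketch of what I understand
(the value below is only an illustration; I am not sure this is the
whole story):

  <!-- mapred-site.xml on each TaskTracker node -->
  <property>
    <!-- Map slots offered by this TaskTracker; the default of 2 means
         100 map tasks on a small cluster mostly sit in the queue and
         look as if they run sequentially. -->
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>
  </property>

Is raising this (or adding nodes) the right way to make the 100
searches actually overlap?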

Wei


-----Original Message-----
From: Ted Dunning [mailto:tdunn...@maprtech.com] 
Sent: Wednesday, December 22, 2010 2:01 PM
To: common-user@hadoop.apache.org
Subject: Re: breadth-first search

The Mahout math package has a number of basic algorithms that use
algorithmic efficiencies when given sparse graphs.

A number of other algorithms use only the product of a sparse matrix
with another matrix or a vector.  Since these algorithms never change
the original sparse matrix, they are safe against fill-in problems.
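As a rough illustration of why sparsity matters here (a plain Java
sketch, not the Mahout API): with the graph in compressed sparse row
form, one matrix-vector product touches each stored edge exactly once,
so it costs O(v + e) instead of the O(v^2) a dense product would:

  // y = A * x with A in compressed sparse row (CSR) form.
  // rowPtr has v+1 entries; colIdx and val have one entry per edge,
  // so the loops below do O(v + e) work in total.
  static double[] multiply(int[] rowPtr, int[] colIdx, double[] val,
                           double[] x) {
    double[] y = new double[rowPtr.length - 1];
    for (int row = 0; row < y.length; row++) {
      double sum = 0.0;
      for (int k = rowPtr[row]; k < rowPtr[row + 1]; k++) {
        sum += val[k] * x[colIdx[k]];
      }
      y[row] = sum;
    }
    return y;
  }

Iterating that product is also the matrix view of breadth-first search,
which is where bounds like O(v*e) rather than O(v^3) come from.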

The random projection technique avoids O(v^3) algorithms for computing
SVD or related matrix decompositions.  See
http://arxiv.org/abs/0909.4061 and
https://issues.apache.org/jira/browse/MAHOUT-376
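Very roughly, the trick in that paper (a sketch of the idea, not
Mahout's implementation) is to multiply A by a small random Gaussian
matrix and orthonormalize the result, so that A is only ever touched
through matrix products and a sparse A stays sparse:

  // Randomized range finder: Y = A * Omega for a random n x k Gaussian
  // Omega, then orthonormalize Y's columns with modified Gram-Schmidt.
  // Dense arrays are used here purely for brevity.
  static double[][] randomRange(double[][] a, int k, java.util.Random rnd) {
    int m = a.length, n = a[0].length;
    double[][] y = new double[m][k];
    for (int j = 0; j < k; j++) {
      double[] omegaCol = new double[n];
      for (int i = 0; i < n; i++) omegaCol[i] = rnd.nextGaussian();
      for (int i = 0; i < m; i++) {
        double sum = 0.0;
        for (int t = 0; t < n; t++) sum += a[i][t] * omegaCol[t];
        y[i][j] = sum;
      }
    }
    for (int j = 0; j < k; j++) {          // orthonormalize column j
      for (int p = 0; p < j; p++) {        // against earlier columns
        double dot = 0.0;
        for (int i = 0; i < m; i++) dot += y[i][p] * y[i][j];
        for (int i = 0; i < m; i++) y[i][j] -= dot * y[i][p];
      }
      double norm = 0.0;
      for (int i = 0; i < m; i++) norm += y[i][j] * y[i][j];
      norm = Math.sqrt(norm);
      for (int i = 0; i < m; i++) y[i][j] /= norm;
    }
    return y;
  }

The expensive decomposition then happens on the small k x n matrix
Q^T * A instead of on A itself.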

None of these algorithms are specific to graph theory, but all deal
with methods that are useful with sparse graphs.

On Wed, Dec 22, 2010 at 10:46 AM, Ricky Ho <rickyphyl...@yahoo.com>
wrote:

> Can you point me to matrix algorithms that are tuned for sparse graphs?
> What I mean is going from O(v^3) to O(v*e), where v = number of
> vertices and e = number of edges.
>
