Hama is a proposed distributed matrix system that uses map-reduce to
implement basic matrix operations and HBase to store the matrices.
Getting useful performance out of this substrate for dense matrix
operations is likely to be fairly challenging due to I/O costs.  For
sparse operations on matrices that exceed memory, it may be more
attractive.
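
To see where the I/O goes, here is a toy, single-process sketch in R of
the textbook one-pass map-reduce dense multiply (an illustration only,
not Hama's actual implementation): each element of A is re-emitted once
per output column and each element of B once per output row, so the
shuffle carries on the order of 2 n^3 records for an n x n multiply.

# Toy sketch of one-pass map-reduce matrix multiply (not Hama's code).
n <- 4
A <- matrix(runif(n * n), n)
B <- matrix(runif(n * n), n)

# "Map" phase: key each record by the output cell (i, j) it feeds.
# Every A[i, k] is emitted n times (once per j) and every B[k, j]
# n times (once per i), so the shuffle holds ~2 * n^3 records.
shuffle <- list()
for (i in 1:n) for (k in 1:n) for (j in 1:n) {
  key <- paste(i, j)
  shuffle[[key]] <- append(shuffle[[key]],
    list(list(src = "A", k = k, x = A[i, k]),
         list(src = "B", k = k, x = B[k, j])))
}

# "Reduce" phase: for each output cell, join the A and B records on k
# and sum the products.
C <- matrix(0, n, n)
for (key in names(shuffle)) {
  a <- b <- numeric(n)
  for (v in shuffle[[key]]) {
    if (v$src == "A") a[v$k] <- v$x else b[v$k] <- v$x
  }
  ij <- as.integer(strsplit(key, " ")[[1]])
  C[ij[1], ij[2]] <- sum(a * b)
}
stopifnot(isTRUE(all.equal(C, A %*% B)))

A dense in-memory multiply does the same n^3 flops without moving any
element over a network, which is roughly the gap the numbers below
reflect.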

Hama has been around for nearly two years.  So far, it appears that
there is an implementation of matrix multiply and add.  Performance
numbers are underwhelming for dense matrices: on a sample problem of
multiplying two 5000 x 5000 random matrices, Hama on 8 workstations
runs at about 1/3 the speed of R on a laptop.
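
For reference, the laptop-side baseline amounts to something like the
following (the exact script isn't shown here, so treat this as a
minimal equivalent; absolute timings will vary with the machine and
the BLAS library R is linked against):

a <- matrix(runif(5000 * 5000), nrow = 5000)
b <- matrix(runif(5000 * 5000), nrow = 5000)
system.time(a %*% b)  # "elapsed" is the wall-clock figure to compare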

Performance on sparse matrices may be better.

On Fri, May 22, 2009 at 10:13 AM, Edward J. Yoon <edwardy...@apache.org> wrote:

> > Is Hama related to Hadoop ?
>
> Yes, it is.

-- 
Ted Dunning, CTO
DeepDyve
