This is the agenda that I'm interested in too.
I believe Item-Based Recommendation in Mahout (not only in Mahout,
though) must spend some time multiplying the cooccurrence matrix by the
user preference vector.
If we could offload this multiplication task to a GPGPU, that would be a
great acceleration.
What I'm not really clear on is how a double-precision multiplication task
inside the Java Virtual Machine can take advantage of the HW accelerator. (I
mean, how can you make the GPGPU visible to Mahout through the JVM?)
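
The multiplication in question can be sketched in plain Java. This is a minimal illustration, not Mahout's actual implementation: the class name, the tiny matrix C, and the preference vector p are all made up for the example. The inner dot-product loop is the hot spot one would hope to offload to a GPGPU.

```java
// Minimal sketch of item-based recommendation as a cooccurrence-matrix /
// preference-vector product. Hypothetical data; not Mahout code.
public class CooccurrenceRecommend {

    // r = C * p : recommendation score for item i is the sum over items j
    // of cooccurrence(i, j) * preference(j).
    static double[] multiply(double[][] C, double[] p) {
        double[] r = new double[C.length];
        for (int i = 0; i < C.length; i++) {
            double sum = 0.0;
            for (int j = 0; j < p.length; j++) {
                sum += C[i][j] * p[j];
            }
            r[i] = sum;
        }
        return r;
    }

    public static void main(String[] args) {
        // C[i][j] = number of users who interacted with both items i and j
        double[][] C = {
            {0, 3, 1},
            {3, 0, 2},
            {1, 2, 0}
        };
        // p[j] = current user's preference for item j (0 = unseen)
        double[] p = {1.0, 0.0, 2.0};

        double[] r = multiply(C, p);
        for (int i = 0; i < r.length; i++) {
            System.out.println("item " + i + " score " + r[i]);
        }
    }
}
```

Since the scores are simple sums of products, this is exactly the kind of dense linear-algebra kernel GPUs handle well; the open question in this thread is how to reach the device from inside the JVM.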

If we could get over this, in addition to what Ted Dunning presented the
other day on Solr involvement in building/loading the cooccurrence matrix for
Mahout recommendation, it would be a big leap in innovating Mahout
recommendation.

Am I missing something, or am I just dreaming?
Regards,
Y.Mandai

2013/2/20 Sean Owen <sro...@gmail.com>

> I think all of the code uses double-precision floats. I imagine much of it
> could work as well with single-precision floats.
>
> MapReduce and a GPU are very different things though, and I'm not sure how
> you would use both together effectively.
>
>
> On Wed, Feb 20, 2013 at 7:10 AM, shruti ranade <shrutiranad...@gmail.com
> >wrote:
>
> > Hi,
> >
> > I am a beginner in mahout. I am working on k-means MR implementation and
> > trying to run it on a GPGPU.* I wanted to know if mahout computations are
> > all double precision or single precision. *
> >
> > Suggest me any documentation that I need to refer to.
> >
> > Thanks,
> > Shruti
> >
>
