Hi all,

I was wondering if anyone here has tried using the GPU of a Hadoop node to
speed up MapReduce processing?

I've read about it, but the examples always come down to heavy computations
such as matrix multiplications and Monte Carlo algorithms.

Has anyone tried it with MapReduce jobs that analyze logs, or with other
text-mining workloads?

Is there a trade-off here (I'm guessing there is) between data size/complexity
and the computation required?

Thanks,

Amit.
