Dot products are an example of something a GPU can't help with. The problem 
is that there are as many flops as memory operations, and memory is slow. 
 

To get acceleration you need lots of flops per memory fetch. Usually you need 
at least a matrix-by-matrix multiply with both matrices dense. Scalable 
algorithms depend on sparsity in many cases, so you are left with a problem. 
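A back-of-envelope sketch of the point above, counting flops per byte moved (numbers assume 8-byte doubles and no caching; this is an illustration, not a benchmark):

```python
# Hypothetical arithmetic-intensity estimates (flops per byte moved),
# assuming double-precision (8-byte) values and counting each element
# as loaded once from memory.

def dot_intensity(n):
    # Dot product of two length-n vectors:
    # ~2n flops (n multiplies + n adds), 2n loads of 8 bytes each.
    flops = 2 * n
    bytes_moved = 2 * n * 8
    return flops / bytes_moved  # constant ~0.125, independent of n

def matmul_intensity(n):
    # Dense n x n matrix multiply:
    # ~2n^3 flops over only 3n^2 matrices' worth of unique data.
    flops = 2 * n ** 3
    bytes_moved = 3 * n ** 2 * 8
    return flops / bytes_moved  # grows linearly with n

print(dot_intensity(10 ** 6))    # 0.125 flops/byte: memory bound
print(matmul_intensity(1000))    # ~83 flops/byte: compute bound
```

The dot product's ratio never improves no matter how big the vectors get, which is why the memory bus, not the GPU's flop rate, sets its speed; dense matrix multiply reuses each loaded element ~n times, which is where GPUs shine.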


On Jul 9, 2012, at 9:31 AM, mohsen jadidi <mohsen.jad...@gmail.com> wrote:

> Thanks for clarifications and comments.
> 
> 
> On Mon, Jul 9, 2012 at 10:18 AM, Sean Owen <sro...@gmail.com> wrote:
> 
>> The factorization is the heavy number crunching. The client of a
>> recommender needs to do very little computation in comparison, like a
>> vector-matrix product. While a GPU might make this happen faster, it's
>> already on the order of microseconds. Compare with the cost of
>> downloading the whole factored matrix which may run into gigabytes
>> though.
>> 
>> On Mon, Jul 9, 2012 at 9:11 AM, Dan Brickley <dan...@danbri.org> wrote:
>>> Just a quick and possible innumerate thought re WebGL (which is OpenGL
>>> exposed as Web browser content via Javascript).
>>> 
>>> Perhaps the big heavy number-crunching can be done on server-side
>>> Mahout / Hadoop, but with a role for *delivery* of computed matrices
>>> in the browser? The memory concerns are still relevant, but if you can
>>> get data into GPU shaders (via texture) there might be modern Web
>>> application scenarios where doing some computations locally on the
>>> GPU is worthwhile. Last time I looked, getting floats off the
>>> graphics card wasn't easy with standard WebGL, though there's
>>> WebCL looming too.
>>> 
>>> Dan
>> 
> 
> 
> 
> -- 
> Mohsen Jadidi
