Re: mahout on GPU

2012-07-13 Thread mohsen jadidi
the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about it, or if it is at all possible. For example, speeding up the Map and Reduce phases by parallelising computations on nodes. Of course I am not aware of communication costs

Re: mahout on GPU

2012-07-13 Thread Ted Dunning
… but I am more interested in getting faster computation by combining the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about it, or if it is at all possible. For example, speeding up the Map and Reduce phases by parallelising

Re: mahout on GPU

2012-07-10 Thread mohsen jadidi
mohsen.jad...@gmail.com wrote: yes, it makes sense. But I am more interested in getting faster computation by combining the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about it, or if it is at all possible. For example, speeding up the Map and Reduce

Re: mahout on GPU

2012-07-10 Thread mohsen jadidi
in this project. On Mon, Jul 9, 2012 at 6:07 PM, mohsen jadidi mohsen.jad...@gmail.com wrote: yes, it makes sense. But I am more interested in getting faster computation by combining the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about

Re: mahout on GPU

2012-07-10 Thread Sean Owen
I don't think this result holds in general -- they chose a very CPU-intensive problem, without much data movement. This won't work for, say, Mahout jobs. I don't really see the point in porting Hadoop to a GPU. If you're on a GPU you don't need most of what Hadoop does! That is, I imagine this is

Re: mahout on GPU

2012-07-10 Thread Ted Dunning
: yes, it makes sense. But I am more interested in getting faster computation by combining the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about it, or if it is at all possible. For example, speeding up the Map and Reduce phases

Re: mahout on GPU

2012-07-09 Thread Manuel Blechschmidt
Hi Mohsen, hello Sean, there is already a lot of research going on into doing recommendations, especially matrix factorization, on GPUs: e.g. http://www.slideshare.net/NVIDIA/1034-gtc09 20x - 300x faster, or http://www.multicoreinfo.com/research/papers/2009/ipdps09-lahabar.pdf 60x faster over
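For readers new to the topic, the matrix factorization these papers accelerate boils down to a huge number of small updates like the one below. This is a minimal, hypothetical SGD sketch (learning rate and regularization values are arbitrary), not Mahout's or the papers' implementation:

```python
# One SGD step of rating-matrix factorization: R ~ U . V^T.
# u, v are the latent-factor vectors for one user and one item.
def sgd_step(u, v, rating, lr=0.01, reg=0.05):
    pred = sum(a * b for a, b in zip(u, v))   # predicted rating
    err = rating - pred                        # observed error
    # Move both factor vectors a little toward reducing the error,
    # with L2 regularization pulling them back toward zero.
    u_new = [a + lr * (err * b - reg * a) for a, b in zip(u, v)]
    v_new = [b + lr * (err * a - reg * b) for a, b in zip(u, v)]
    return u_new, v_new
```

Millions of such updates over the whole rating matrix are embarrassingly data-parallel in the inner arithmetic, which is why GPUs can deliver the quoted speedups.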

Re: mahout on GPU

2012-07-09 Thread Sean Owen
(I agree, it's quite a useful approach -- was answering the question about whether there was any such thing in Mahout. This all assumes you can fit the data in memory in the GPU but that is true for moderately large data sets.) On Mon, Jul 9, 2012 at 9:04 AM, Manuel Blechschmidt

Re: mahout on GPU

2012-07-09 Thread Dan Brickley
Just a quick and possibly innumerate thought re WebGL (which is OpenGL exposed as Web browser content via JavaScript). Perhaps the big heavy number-crunching can be done on server-side Mahout / Hadoop, but with a role for *delivery* of computed matrices in the browser? The memory concerns are

Re: mahout on GPU

2012-07-09 Thread Sean Owen
The factorization is the heavy number crunching. The client of a recommender needs to do very little computation in comparison, like a vector-matrix product. While a GPU might make this happen faster, it's already on the order of microseconds. Compare with the cost of downloading the whole
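Sean's point (that the client of a recommender only does a small vector-matrix product against precomputed factors) can be sketched as follows; the factor values and item names here are hypothetical, just to show how little work is left at serving time:

```python
# Score items for one user from precomputed latent factors.
# user_vec: k latent features; item_factors: one k-vector per item.
def recommend(user_vec, item_factors, top_n=2):
    # One dot product per item: k multiply-adds each, so microseconds
    # for realistic k (tens) and item counts (thousands).
    scores = [
        (item, sum(u * f for u, f in zip(user_vec, factors)))
        for item, factors in item_factors.items()
    ]
    scores.sort(key=lambda s: s[1], reverse=True)
    return [item for item, _ in scores[:top_n]]

user_vec = [0.9, 0.1]                        # hypothetical 2-factor user
item_factors = {"a": [1.0, 0.0],
                "b": [0.0, 1.0],
                "c": [0.5, 0.5]}
print(recommend(user_vec, item_factors))     # -> ['a', 'c']
```

The expensive step, producing `item_factors` in the first place, is the factorization itself, which is where GPU acceleration would matter.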

Re: mahout on GPU

2012-07-09 Thread mohsen jadidi
Thanks for the clarifications and comments. On Mon, Jul 9, 2012 at 10:18 AM, Sean Owen sro...@gmail.com wrote: The factorization is the heavy number crunching. The client of a recommender needs to do very little computation in comparison, like a vector-matrix product. While a GPU might make this

Re: mahout on GPU

2012-07-09 Thread Ted Dunning
Dot products are an example of something that a GPU can't help with. The problem is that there are the same number of flops as memory operations, and memory is slow. To get acceleration you need lots of flops per memory fetch. Usually you need at least a matrix-by-matrix multiply with both matrices dense.
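Ted's rule of thumb can be made concrete with an idealized flops-per-fetch count. The counts below assume each operand moves through memory exactly once, which is optimistic, but the trend is what matters:

```python
# Back-of-the-envelope arithmetic intensity (flops per value moved),
# assuming each operand is fetched from memory exactly once.
def dot_intensity(n):
    flops = 2 * n          # n multiplies + n adds
    moved = 2 * n          # read two length-n vectors
    return flops / moved   # ~1 flop per fetch: memory-bound

def gemm_intensity(n):
    flops = 2 * n ** 3     # n*n results, ~2n ops each
    moved = 3 * n ** 2     # read A and B, write C
    return flops / moved   # ~2n/3: grows with n, so a GPU can help

print(dot_intensity(1000), gemm_intensity(1000))  # 1.0 vs ~667
```

A dot product stays pinned at one flop per fetched value no matter how large n gets, so it is limited by memory bandwidth; dense matrix-matrix multiply does ever more arithmetic per byte moved, which is the regime where GPUs pay off.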

Re: mahout on GPU

2012-07-09 Thread mohsen jadidi
yes, it makes sense. But I am more interested in getting faster computation by combining the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about it, or if it is at all possible. For example, speeding up the Map and Reduce phases by parallelising computations

Re: mahout on GPU

2012-07-09 Thread Sean Owen
interested in getting faster computation by combining the Mahout and GPU capabilities. I just wanted to know if people involved in Mahout have thought about it, or if it is at all possible. For example, speeding up the Map and Reduce phases by parallelising computations on nodes. Of course I am

mahout on GPU

2012-07-08 Thread mohsen jadidi
Hello, this is my first post here and I just started reading about Hadoop, Mahout and all. I was wondering if there is any solution for using Mahout with parallel computing on GPUs (mainly CUDA)? I know it's a bit of a weird question to ask because CUDA is C-based and Mahout is Java-based, but I just ask

Re: mahout on GPU

2012-07-08 Thread Sean Owen
More than that, Mahout is mostly Hadoop-based, which is well up the stack from Java. No, there is nothing CUDA-related in the project. The closest things are the pure-Java, non-Hadoop-based recommender pieces. But they are still far from CUDA. I think CUDA is intriguing since a lot of ML is a bunch of

Re: mahout on GPU

2012-07-08 Thread Ted Dunning
In general, large scale machine learning is I/O bound already. There are some things that would not be, but to really feed a GPU reasonably, data almost has to be memory resident. For more information on CUDA from Java, see (among others) http://www.jcuda.de/ On Sun, Jul 8, 2012 at 4:04 PM,

Re: mahout on GPU

2012-07-08 Thread Lance Norskog
To put it a little differently: the GPU architecture has been developed around video games. In a video game architecture, you have a fairly small amount of data (models and textures) going into the GPU memory via the bus, and then a lot of data coming out of the GPU hardware substrate to the video