On Wed, Aug 5, 2009 at 4:45 AM, Romain Brette <romain.bre...@ens.fr> wrote:
> Hi everyone,
>
> I was wondering if you had any plan to incorporate some GPU support to
> numpy, or perhaps as a separate module. What I have in mind is something
> that would mimic the syntax of numpy arrays, with a new dtype (gpufloat),
> like this:
>
> from gpunumpy import *
> x=zeros(100,dtype='gpufloat') # Creates an array of 100 elements on the GPU
> y=ones(100,dtype='gpufloat')
> z=exp(2*x+y) # z is on the GPU, all operations on GPU with no transfer
> z_cpu=array(z,dtype='float') # z is copied to the CPU
> i=(z>2.3).nonzero()[0] # operation on GPU, returns a CPU integer array
>
> I came across a paper about something like that but couldn't find any
> public release:
> http://www.tricity.wsu.edu/~bobl/personal/mypubs/2009_gpupy_toms.pdf
>
> There is a library named GPULib (http://www.txcorp.com/products/GPULib/)
> that does similar things, but unfortunately they don't support Python
> (I think their main Python developer left).
>
> I think this would be very useful for many people. For our project (a
> neural network simulator, http://www.briansimulator.org) we use PyCuda
> (http://mathema.tician.de/software/pycuda), which is great, but it is
> mainly for low-level GPU programming.

What sort of functionality are you looking for? It could be that you could
slip in a small mod that would do what you want. In the larger picture, the
use of GPUs has been discussed on the list several times, going back at
least a year. The main problems with using GPUs were that CUDA was only
available for NVIDIA video cards and there didn't seem to be any hope for a
CUDA version of LAPACK.

Chuck
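As a side note, much of the array-level workflow sketched in the quoted
example can already be expressed with PyCuda's gpuarray module (the library
Romain mentions), without a new numpy dtype. A minimal sketch, assuming
PyCuda with a working CUDA device; the names below are PyCuda's own API, not
a proposed gpunumpy interface:

import numpy as np
import pycuda.autoinit               # initializes the CUDA context
import pycuda.gpuarray as gpuarray
import pycuda.cumath as cumath       # elementwise math on GPU arrays

x = gpuarray.zeros(100, dtype=np.float32)             # array lives on the GPU
y = gpuarray.to_gpu(np.ones(100, dtype=np.float32))   # explicit host-to-device copy
z = cumath.exp(2 * x + y)                             # arithmetic stays on the GPU
z_cpu = z.get()                                       # explicit copy back to the CPU
i = np.nonzero(z_cpu > 2.3)[0]                        # comparison done on the CPU copy

The main difference from the proposed syntax is that transfers are explicit
(to_gpu/get) rather than driven by a dtype, so data stays on the device only
as long as you keep it in a GPUArray.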