A friend of mine wrote a simple wrapper around CUBLAS using ctypes that basically exposes a Python class that keeps a 2D array of single-precision floats on the GPU for you. I keep telling him to release it, but he thinks it's too hackish.
It did inspire some of our colleagues in Montreal to create this, though:
http://code.google.com/p/cuda-ndarray/

I gather it is VERY early in development, but I'm sure they'd love contributions!

David

On 5-Aug-09, at 6:45 AM, Romain Brette wrote:

> Hi everyone,
>
> I was wondering if you had any plans to incorporate GPU support into
> numpy, or perhaps as a separate module. What I have in mind is something
> that would mimic the syntax of numpy arrays, with a new dtype (gpufloat),
> like this:
>
> from gpunumpy import *
> x = zeros(100, dtype='gpufloat')  # creates an array of 100 elements on the GPU
> y = ones(100, dtype='gpufloat')
> z = exp(2*x + y)  # z is on the GPU; all operations run on the GPU with no transfer
> z_cpu = array(z, dtype='float')  # z is copied to the CPU
> i = (z > 2.3).nonzero()[0]  # operation on the GPU, returns a CPU integer array
>
> There is a library named GPULib (http://www.txcorp.com/products/GPULib/)
> that does similar things, but unfortunately they don't support Python
> (I think their main Python developer left).
> I think this would be very useful for many people. For our project
> (a neural network simulator, http://www.briansimulator.org) we use PyCuda
> (http://mathema.tician.de/software/pycuda)

Neat project, though at first I was sure that was a typo :)

"He can't be simulating Brians...."

- David

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
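[Editor's note: the gpufloat-style interface proposed above can be sketched as a small wrapper class. This is a hypothetical mock backed by plain numpy so it runs without a GPU; the class and function names are illustrative, not part of any real gpunumpy package. A real implementation would keep `self._data` in device memory (e.g. a pycuda.gpuarray.GPUArray) and dispatch the arithmetic to CUDA kernels.]

```python
# Hypothetical sketch of the proposed gpufloat-style API, mocked with
# plain numpy so it runs anywhere (no GPU or PyCuda required).
import numpy as np

class GPUArrayMock:
    """Stand-in for a device-resident single-precision array."""
    def __init__(self, data):
        # In a real wrapper this would be a device buffer, not a numpy array.
        self._data = np.asarray(data, dtype=np.float32)

    # Elementwise ops would stay on the GPU; here they are just numpy calls.
    def __add__(self, other):
        return GPUArrayMock(self._data + _raw(other))

    def __rmul__(self, scalar):
        return GPUArrayMock(scalar * self._data)

    def __gt__(self, threshold):
        return GPUArrayMock(self._data > threshold)

    def nonzero(self):
        # Mirrors the proposal: comparison happens "on the GPU",
        # but the index arrays come back to the CPU.
        return self._data.nonzero()

    def to_cpu(self):
        # Explicit device-to-host copy in the real thing.
        return self._data.copy()

def _raw(x):
    return x._data if isinstance(x, GPUArrayMock) else x

def zeros(n):
    return GPUArrayMock(np.zeros(n))

def ones(n):
    return GPUArrayMock(np.ones(n))

def exp(a):
    return GPUArrayMock(np.exp(_raw(a)))

# Usage matching the quoted example:
x = zeros(100)
y = ones(100)
z = exp(2 * x + y)          # would stay on the GPU, no host transfer
z_cpu = z.to_cpu()          # explicit copy back to the host
i = (z > 2.3).nonzero()[0]  # indices land on the CPU
```

The design point the mock illustrates is that the expensive elementwise work composes on the device and only `to_cpu()` and `nonzero()` cross the PCIe bus, which is exactly where a numpy-syntax GPU dtype would win or lose on performance.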