Hello,

> from gpunumpy import *
> x=zeros(100,dtype='gpufloat') # Creates an array of 100 elements on the GPU
> y=ones(100,dtype='gpufloat')
> z=exp(2*x+y) # z is on the GPU, all operations on GPU with no transfer
> z_cpu=array(z,dtype='float') # z is copied to the CPU
> i=(z>2.3).nonzero()[0] # operation on GPU, returns a CPU integer array
PyCuda already supports most of this through its gpuarray interface. As soon as Nvidia allows us to combine the Driver and Runtime APIs, we'll be able to integrate libraries like CUBLAS, CUFFT, and any other runtime-dependent library. We could probably get access to the CUBLAS/CUFFT source code, since Nvidia released the 1.1 version in the past:

http://sites.google.com/site/cudaiap2009/materials-1/extras/online-resources#TOC-CUBLAS-and-CUFFT-1.1-Source-Code

but it would be easier to just use the libraries directly (and 1.1 is outdated now).

For those of you who are interested, we recently forked python-cuda and started to add some numpy "sugar". The goal of python-cuda is to *complement* PyCuda by providing an equivalent to the CUDA Runtime API (i.e. not Pythonic) using automatically-generated ctypes bindings. With it you can use CUBLAS, CUFFT, and the emulation mode (so you don't need a GPU to develop):

http://github.com/npinto/python-cuda/tree/master

HTH

Best,

--
Nicolas Pinto
Ph.D. Candidate, Brain & Computer Sciences
Massachusetts Institute of Technology, USA
http://web.mit.edu/pinto
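P.S. For the curious, here is a rough, untested sketch of what the quoted gpunumpy example might look like with PyCuda's gpuarray interface (the exact calls are from memory, so please check the PyCuda docs; the GPU-side nonzero in the last line has no direct gpuarray equivalent that I know of, so it is done on the CPU here):

import numpy as np
import pycuda.autoinit                 # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
import pycuda.cumath as cumath

x = gpuarray.zeros(100, dtype=np.float32)            # 100-element array allocated on the GPU
y = gpuarray.to_gpu(np.ones(100, dtype=np.float32))  # copy a numpy array of ones to the GPU
z = cumath.exp(2 * x + y)                             # elementwise ops, all on the GPU, no transfer
z_cpu = z.get()                                       # copy the result back to the CPU as a numpy array
i = np.nonzero(z_cpu > 2.3)[0]                        # comparison/nonzero done on the CPU copy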