Re: [Numpy-discussion] GPU Numpy

2009-08-21 Thread Nicolas Pinto
Agreed! What would be the best name? Our package will provide non-Pythonic
bindings to CUDA (e.g. import cuda; cuda.cudaMemcpy( ... )) and some numpy
sugar (e.g. from cuda import sugar; sugar.fft.fftconvolve(ndarray_a,
ndarray_b, 'same')).
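For concreteness, the ctypes pattern behind such auto-generated bindings looks roughly like this. This is a hand-written sketch using libm as a stand-in, since the CUDA runtime may not be installed; a real binding would load libcudart and declare cudaMemcpy and friends the same way.

```python
import ctypes
import ctypes.util

# Load a shared library by name -- the same mechanism python-cuda uses
# for libcudart/libcublas, here applied to libm as a stand-in.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declaring argtypes/restype is the crucial step: without it, ctypes
# assumes int everywhere and silently corrupts doubles and pointers.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```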

How about cuda-ctypes or ctypes-cuda? Any suggestion?

At the same time, we may wait for Nvidia to resolve this Driver/Runtime
interoperability issue, so we don't need this anymore.

Best,

N

2009/8/21 Stéfan van der Walt 

> 2009/8/21 Nicolas Pinto :
> > For those of you who are interested, we forked python-cuda recently and
> > started to add some numpy "sugar". The goal of python-cuda is to
> > *complement* PyCuda by providing an equivalent to the CUDA Runtime API
> > (understand: not Pythonic) using automatically-generated ctypes bindings.
> > With it you can use CUBLAS, CUFFT and the emulation mode (so you don't
> need
> > a GPU to develop):
> > http://github.com/npinto/python-cuda/tree/master
>
> Since you forked the project, it may be worth giving it a new name.
> PyCuda vs. python-cuda is bound to confuse people horribly!
>
> Cheers
> Stéfan
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>



-- 
Nicolas Pinto
Ph.D. Candidate, Brain & Computer Sciences
Massachusetts Institute of Technology, USA
http://web.mit.edu/pinto


Re: [Numpy-discussion] GPU Numpy

2009-08-21 Thread Stéfan van der Walt
2009/8/21 Nicolas Pinto :
> For those of you who are interested, we forked python-cuda recently and
> started to add some numpy "sugar". The goal of python-cuda is to
> *complement* PyCuda by providing an equivalent to the CUDA Runtime API
> (understand: not Pythonic) using automatically-generated ctypes bindings.
> With it you can use CUBLAS, CUFFT and the emulation mode (so you don't need
> a GPU to develop):
> http://github.com/npinto/python-cuda/tree/master

Since you forked the project, it may be worth giving it a new name.
PyCuda vs. python-cuda is bound to confuse people horribly!

Cheers
Stéfan


Re: [Numpy-discussion] GPU Numpy

2009-08-21 Thread Nicolas Pinto
Hello

from gpunumpy import *
> x=zeros(100,dtype='gpufloat') # Creates an array of 100 elements on the GPU
> y=ones(100,dtype='gpufloat')
> z=exp(2*x+y) # z is on the GPU, all operations on GPU with no transfer
> z_cpu=array(z,dtype='float') # z is copied to the CPU
> i=(z>2.3).nonzero()[0] # operation on GPU, returns a CPU integer array
>

PyCuda already supports this through the gpuarray interface. As soon as
Nvidia allows us to combine Driver and Runtime APIs, we'll be able to
integrate libraries like CUBLAS, CUFFT, and any other runtime-dependent
library. We could probably get access to CUBLAS/CUFFT source code as Nvidia
released the 1.1 version in the past:
http://sites.google.com/site/cudaiap2009/materials-1/extras/online-resources#TOC-CUBLAS-and-CUFFT-1.1-Source-Code
but it would be easier to just use the libraries (and 1.1 is outdated now).
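The gpuarray interface mentioned above can be sketched roughly as follows. This is a sketch assuming PyCuda and a CUDA device are available; the except branch falls back to plain numpy, so the shape of the API is visible (and the script runs) either way.

```python
import math
import numpy as np

try:
    import pycuda.autoinit              # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray
    import pycuda.cumath as cumath
    x = gpuarray.zeros(100, dtype=np.float32)
    y = gpuarray.zeros(100, dtype=np.float32) + 1
    z = cumath.exp(2 * x + y)           # elementwise, stays on the GPU
    z_cpu = z.get()                     # explicit device-to-host transfer
except Exception:                       # no PyCuda or no CUDA device: numpy fallback
    x = np.zeros(100, dtype=np.float32)
    y = np.ones(100, dtype=np.float32)
    z_cpu = np.exp(2 * x + y)

print(z_cpu[0])  # exp(2*0 + 1) = e, on either backend
```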

For those of you who are interested, we forked python-cuda recently and
started to add some numpy "sugar". The goal of python-cuda is to
*complement* PyCuda by providing an equivalent to the CUDA Runtime API
(understand: not Pythonic) using automatically-generated ctypes bindings.
With it you can use CUBLAS, CUFFT and the emulation mode (so you don't need
a GPU to develop):
http://github.com/npinto/python-cuda/tree/master

HTH

Best,

-- 
Nicolas Pinto
Ph.D. Candidate, Brain & Computer Sciences
Massachusetts Institute of Technology, USA
http://web.mit.edu/pinto


Re: [Numpy-discussion] GPU Numpy

2009-08-06 Thread Romain Brette
David Warde-Farley  cs.toronto.edu> writes:
> It did inspire some of our colleagues in Montreal to create this,  
> though:
> 
>   http://code.google.com/p/cuda-ndarray/
> 
> I gather it is VERY early in development, but I'm sure they'd love  
> contributions!
> 

Hi David,
That does look quite close to what I imagined, probably a good start then!
Romain




Re: [Numpy-discussion] GPU Numpy

2009-08-06 Thread Romain Brette
Ian Mallett  gmail.com> writes:

> On Wed, Aug 5, 2009 at 11:34 AM, Charles R Harris  gmail.com> wrote:
> > It could be you could slip in a small mod that would do what you want.
>
> I'll help, if you want.  I'm good with GPUs, and I'd appreciate the
> numerical power it would afford.

That would be great actually if we could gather a little team! Anyone else
interested?

As Trevor said, OpenCL could be a better choice than Cuda.

By the way, there is a Matlab toolbox that seems to have similar functionality:
http://www.accelereyes.com/

Cheers
Romain




Re: [Numpy-discussion] GPU Numpy

2009-08-06 Thread Romain Brette
Charles R Harris  gmail.com> writes:

> What sort of functionality are you looking for? It could be you could slip in
> a small mod that would do what you want. In the larger picture, the use of
> GPUs has been discussed on the list several times going back at least a year.
> The main problems with using GPUs were that CUDA was only available for nvidia
> video cards and there didn't seem to be any hope for a CUDA version of LAPACK.
>
> Chuck

So for our project what we need is:
* element-wise operations on vectors (arithmetical, exp/log, exponentiation)
* same but on views (x[2:7])
* assignment (x[:]=2*y)
* boolean operations on vectors (x>2.5) and the nonzero() method
* possibly, multiplying an N*M matrix by an M*M matrix, where N is large and M
is small (but this could be done with vector operations)
* random number generation would be great too (gpurand(N))

What is very important to me is that the syntax be the same as with normal
arrays, so that you could easily switch the GPU on/off (depending on whether a
GPU was detected).
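A minimal sketch of that dtype-based dispatch (hypothetical names; the GPU branch here is just a placeholder for whatever backend gets detected, and the fallback keeps the user-facing syntax identical):

```python
import numpy as np

GPU_AVAILABLE = False   # would be set by probing for a CUDA/OpenCL device


def zeros(n, dtype='float'):
    """Hypothetical factory: route 'gpufloat' to a GPU backend when one is
    present, otherwise fall back transparently to a plain numpy array."""
    if dtype == 'gpufloat':
        if GPU_AVAILABLE:
            raise NotImplementedError("allocate on the device here")
        return np.zeros(n, dtype=np.float32)   # CPU fallback, same syntax
    return np.zeros(n, dtype=dtype)


x = zeros(100, dtype='gpufloat')   # user code is identical with or without a GPU
print(x.dtype, x.shape)
```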

Cheers
Romain




Re: [Numpy-discussion] GPU Numpy

2009-08-06 Thread Sturla Molden
Olivier Grisel wrote:
> As usual, MS reinvents the wheel with DirectX Compute but vendors such
> as AMD and nvidia propose both the OpenCL API +runtime binaries for
> windows and their DirectX Compute counterpart, based on mostly the
> same underlying implementation, e.g. CUDA in nvidia's case.
>
>   
Here is a DirectX Compute tutorial I found:

http://www.gamedev.net/community/forums/topic.asp?topic_id=516043

It pretty much says all we need to know. I am not investing any of my 
time learning that shitty API. Period.

Let's just hope OpenCL makes it to Windows without Microsoft breaking it
for "security reasons" (as they did with OpenGL).



Sturla





Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread Olivier Grisel
2009/8/6 David Cournapeau :
> Olivier Grisel wrote:
>> OpenCL is definitely the way to go for a cross platform solution with
>> both nvidia and AMD having released beta runtimes to their respective
>> developer networks (free as in beer subscription required for the beta
>> download pages). Final public releases are expected around 2009 Q3.
>>
>
> What's the status of opencl on windows ? Will MS have its own direct-x
> specific implementation ?

As usual, MS reinvents the wheel with DirectX Compute but vendors such
as AMD and nvidia propose both the OpenCL API +runtime binaries for
windows and their DirectX Compute counterpart, based on mostly the
same underlying implementation, e.g. CUDA in nvidia's case.

-- 
Olivier
http://twitter.com/ogrisel - http://code.oliviergrisel.name


Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread David Cournapeau
Olivier Grisel wrote:
> OpenCL is definitely the way to go for a cross platform solution with
> both nvidia and AMD having released beta runtimes to their respective
> developer networks (free as in beer subscription required for the beta
> download pages). Final public releases are expected around 2009 Q3.
>   

What's the status of opencl on windows ? Will MS have its own direct-x
specific implementation ?

cheers,

David


Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread Olivier Grisel
OpenCL is definitely the way to go for a cross platform solution with
both nvidia and AMD having released beta runtimes to their respective
developer networks (free as in beer subscription required for the beta
download pages). Final public releases are expected around 2009 Q3.

OpenCL is an open, royalty-free, standardized API and runtime specification
for heterogeneous computing with a mix of CPU and GPU cores. The nvidia
implementation is based on the CUDA runtime, and programming in OpenCL is
very similar to programming in C for CUDA.

The developer of PyCUDA is also working on PyOpenCL

  http://pypi.python.org/pypi/pyopencl/

Both nvidia and AMD use LLVM to compile the cross-platform OpenCL kernel
sources into device-specific binaries loaded at runtime.
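That runtime-compilation model can be sketched like this, assuming the PyOpenCL package mentioned below and an OpenCL device are available; the kernel is plain C source handed to the driver, which compiles it for whatever device is present. The saxpy kernel and the guard are illustrative, not part of any particular release.

```python
# OpenCL C source: compiled by the driver at runtime, not ahead of time.
KERNEL_SRC = """
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y)
{
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
"""

try:
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL_SRC).build()   # compiled here, at runtime

    x = np.arange(8, dtype=np.float32)
    y = np.ones(8, dtype=np.float32)
    mf = cl.mem_flags
    x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)
    prog.saxpy(queue, x.shape, None, np.float32(2.0), x_buf, y_buf)
    cl.enqueue_copy(queue, y, y_buf)
    print(y)   # 2*x + 1, computed on the device
except Exception:
    print("pyopencl or an OpenCL device unavailable; the source above is still valid OpenCL C")
```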

Official OpenCL specs:

 http://www.khronos.org/registry/cl/specs/opencl-1.0.29.pdf

Wikipedia page:

  http://en.wikipedia.org/wiki/OpenCL

nvidia runtime:

  http://www.nvidia.com/object/cuda_opencl.html

AMD runtime (currently x86 and x86_64 with SSE3 only):

  http://developer.amd.com/GPU/ATISTREAMSDKBETAPROGRAM/Pages/default.aspx

Intel and IBM were also members of the standards committee, so we can
reasonably expect runtimes for their chips in the future (e.g. Larrabee
and Cell BE).

-- 
Olivier
http://twitter.com/ogrisel - http://code.oliviergrisel.name


Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread David Warde-Farley
A friend of mine wrote a simple wrapper around CUBLAS using ctypes that
basically exposes a Python class that keeps a 2D array of single-precision
floats on the GPU for you and lets you call CUBLAS routines on it. I keep
telling him to release it, but he thinks it's too hackish.

It did inspire some of our colleagues in Montreal to create this,  
though:

http://code.google.com/p/cuda-ndarray/

I gather it is VERY early in development, but I'm sure they'd love  
contributions!

David

On 5-Aug-09, at 6:45 AM, Romain Brette wrote:

> Hi everyone,
>
> I was wondering if you had any plan to incorporate some GPU support  
> to numpy, or
> perhaps as a separate module. What I have in mind is something that  
> would mimick
> the syntax of numpy arrays, with a new dtype (gpufloat), like this:
>
> from gpunumpy import *
> x=zeros(100,dtype='gpufloat') # Creates an array of 100 elements on  
> the GPU
> y=ones(100,dtype='gpufloat')
> z=exp(2*x+y) # z is on the GPU, all operations on GPU with no transfer
> z_cpu=array(z,dtype='float') # z is copied to the CPU
> i=(z>2.3).nonzero()[0] # operation on GPU, returns a CPU integer array


> There is a library named GPULib (http://www.txcorp.com/products/GPULib/ 
> ) that
> does similar things, but unfortunately they don't support Python (I  
> think their
> main Python developer left).
> I think this would be very useful for many people. For our project  
> (a neural
> network simulator, http://www.briansimulator.org) we use PyCuda
> (http://mathema.tician.de/software/pycuda)

Neat project, though at first I was sure that was a typo :) "He can't  
be simulating Brians"

- David


Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread Trevor Clarke
With OpenCL implementations making their way into the wild, that's probably
a better target than CUDA.

On Wed, Aug 5, 2009 at 3:39 PM, Ian Mallett  wrote:

> On Wed, Aug 5, 2009 at 11:34 AM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>> It could be you could slip in a small mod that would do what you want.
>
> I'll help, if you want.  I'm good with GPUs, and I'd appreciate the
> numerical power it would afford.
>
>> The main problems with using GPUs were that CUDA was only available for
>> nvidia video cards and there didn't seem to be any hope for a CUDA version
>> of LAPACK.
>
> You don't have to use CUDA, although it would make it easier.
>


Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread Ian Mallett
On Wed, Aug 5, 2009 at 11:34 AM, Charles R Harris  wrote:

> It could be you could slip in a small mod that would do what you want.

I'll help, if you want.  I'm good with GPUs, and I'd appreciate the
numerical power it would afford.

> The main problems with using GPUs were that CUDA was only available for
> nvidia video cards and there didn't seem to be any hope for a CUDA version
> of LAPACK.

You don't have to use CUDA, although it would make it easier.


Re: [Numpy-discussion] GPU Numpy

2009-08-05 Thread Charles R Harris
On Wed, Aug 5, 2009 at 4:45 AM, Romain Brette  wrote:

> Hi everyone,
>
> I was wondering if you had any plan to incorporate some GPU support to
> numpy, or
> perhaps as a separate module. What I have in mind is something that would
> mimick
> the syntax of numpy arrays, with a new dtype (gpufloat), like this:
>
> from gpunumpy import *
> x=zeros(100,dtype='gpufloat') # Creates an array of 100 elements on the GPU
> y=ones(100,dtype='gpufloat')
> z=exp(2*x+y) # z is on the GPU, all operations on GPU with no transfer
> z_cpu=array(z,dtype='float') # z is copied to the CPU
> i=(z>2.3).nonzero()[0] # operation on GPU, returns a CPU integer array
>
> I came across a paper about something like that but couldn't find any
> public
> release:
> http://www.tricity.wsu.edu/~bobl/personal/mypubs/2009_gpupy_toms.pdf
>
> There is a library named GPULib (http://www.txcorp.com/products/GPULib/)
> that
> does similar things, but unfortunately they don't support Python (I think
> their
> main Python developer left).
> I think this would be very useful for many people. For our project (a
> neural
> network simulator, http://www.briansimulator.org) we use PyCuda
> (http://mathema.tician.de/software/pycuda), which is great, but it is
> mainly for
> low-level GPU programming.
>

What sort of functionality are you looking for? It could be you could slip
in a small mod that would do what you want. In the larger picture, the use
of GPUs has been discussed on the list several times going back at least a
year. The main problems with using GPUs were that CUDA was only available
for nvidia video cards and there didn't seem to be any hope for a CUDA
version of LAPACK.

Chuck