Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Gael Varoquaux
> The other packages are nice but I would really love to just use scipy/
> sklearn and have decompositions, factorizations, etc for big matrices
> go a little faster without recoding the algorithms.  Thanks

If you have very big matrices, scikit-learn's PCA already uses randomized
linear algebra, which buys you more than GPUs.
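For concreteness, a minimal sketch of what that looks like (array sizes are
arbitrary; with the default svd_solver='auto', scikit-learn will usually pick
the randomized solver on its own for large inputs, but it can be requested
explicitly):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.RandomState(0)
    X = rng.standard_normal((20000, 2000))   # arbitrary "big matrix"

    # Randomized SVD only computes the leading components, which is what
    # makes it competitive with brute-force (even GPU) linear algebra here.
    pca = PCA(n_components=50, svd_solver='randomized', random_state=0)
    X_reduced = pca.fit_transform(X)
    print(X_reduced.shape)   # (20000, 50)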

Gaël


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Lev E Givon
On Jan 2, 2018 8:35 PM, "Matthew Harrigan"  wrote:

Is it possible to have NumPy use a BLAS/LAPACK library that is GPU
accelerated for certain problems?  Any recommendations or readmes on how
that might be set up?  The other packages are nice but I would really love
to just use scipy/sklearn and have decompositions, factorizations, etc for
big matrices go a little faster without recoding the algorithms.  Thanks


Depending on what operation you want to accelerate, scikit-cuda may provide
a scipy-like interface to GPU-based implementations that you can use. It
isn't a drop-in replacement for numpy/scipy, however.

http://github.com/lebedov/scikit-cuda
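A rough sketch of the kind of call it enables (assuming a working CUDA
install plus pycuda and scikit-cuda; shown here only for a matrix product
executed by cuBLAS):

    import numpy as np
    import pycuda.autoinit            # sets up a CUDA context
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as linalg

    linalg.init()

    a = np.random.rand(2000, 2000).astype(np.float32)
    b = np.random.rand(2000, 2000).astype(np.float32)

    # Explicit host -> device transfers; the data then lives on the GPU.
    a_gpu = gpuarray.to_gpu(a)
    b_gpu = gpuarray.to_gpu(b)

    # scipy.linalg-style call, executed on the device.
    c_gpu = linalg.dot(a_gpu, b_gpu)

    c = c_gpu.get()                   # device -> host transfer back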

L


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Stefan Seefeld
On 02.01.2018 16:36, Matthieu Brucher wrote:
> Hi,
>
> Let's say that NumPy provided a GPU version of its arrays. How would that
> work with all the packages that expect the memory to be allocated on the CPU?
> It's not that NumPy refuses a GPU implementation; it's that one wouldn't
> solve the problem of the GPU and CPU having different memory. When/if
> nVidia (finally) decides that memory should also be accessible from
> the CPU (like an AMD APU), then this argument becomes void.

I actually doubt that. Sure, having a unified memory is convenient for
the programmer. But as long as copying data between host and GPU is
orders of magnitude slower than copying data locally, performance will
suffer. Addressing this performance issue requires some NUMA-like
approach, moving the operation to where the data resides, rather than
treating all data locations as equal.
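A rough way to see this for yourself, using CuPy only because it is one of
the libraries mentioned in this thread (sizes are arbitrary, and the actual
numbers depend entirely on the hardware and the PCIe link):

    import time
    import numpy as np
    import cupy as cp   # any GPU array library with explicit transfers would do

    x = np.random.rand(4096, 4096).astype(np.float32)

    t0 = time.perf_counter()
    x_gpu = cp.asarray(x)                  # host -> device copy
    cp.cuda.Stream.null.synchronize()
    t1 = time.perf_counter()
    y_gpu = x_gpu @ x_gpu                  # compute stays on the device
    cp.cuda.Stream.null.synchronize()
    t2 = time.perf_counter()

    print("transfer: %.4f s   matmul: %.4f s" % (t1 - t0, t2 - t1))
    # Unless the computation is big enough to amortize the copy, the
    # transfer dominates -- which is exactly the NUMA-like concern above.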

Stefan

-- 

  ...ich hab' noch einen Koffer in Berlin...




Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Robert Kern
On Tue, Jan 2, 2018 at 1:21 PM, Yasunori Endo  wrote:
>
> Hi all
>
> Numba looks like a nice library to try.
> Thanks for the information.
>
>> This suggests a new, higher-level data model which supports replicating
>> data into different memory spaces (e.g. host and GPU). Then users (or some
>> higher layer in the software stack) can dispatch operations to suitable
>> implementations to minimize data movement.
>>
>> Given NumPy's current raw-pointer C API this seems difficult to
>> implement, though, as it is very hard to track memory aliases.
>
> I understand that modifying numpy.ndarray for the GPU is technically difficult.
>
> So my next basic question is: why doesn't NumPy offer an
> ndarray-like interface (e.g. numpy.gpuarray)?
> I wonder why everybody is making a *separate* library, which confuses users.

Because there is no settled way to do this. All of those separate library
implementations are trying different approaches. We are learning from each
of their attempts. They can each move at their own pace rather than being
tied down to numpy's slow rate of development and strict backwards
compatibility requirements. They can try new things and aren't locked into
their first mistakes. The user may well be confused by all of the different
options currently available. I don't think that's avoidable: there are lots
of meaningful options. Picking just one to stick into numpy is a disservice
to the community that needs the other options.

> Is there any policy that NumPy refuses a standard GPU implementation?

Not officially, but I'm pretty sure that there is no appetite among the
developers for incorporating proper GPU support into numpy (by which I mean
that a user would build numpy with certain settings then make use of the
GPU using just numpy APIs). numpy is a mature project with a relatively
small development team. Much of that effort is spent more on maintenance
than new development.

What there is appetite for is to listen to the needs of the GPU-using
libraries and to make sure that numpy's C and Python APIs are flexible
enough to do what the GPU libraries need. This ties into the work that's
being done to make ndarray subclasses better and to formalize the notion of
an "array-like" interface that things like pandas Series, etc. can
implement so that they play well with the rest of numpy.
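One concrete piece of that machinery that already exists is the
__array_ufunc__ protocol (NEP 13, NumPy >= 1.13). A toy sketch of an
"array-like" hooking into it -- the class and its behaviour are invented
purely for illustration; a real GPU array would dispatch to device kernels
instead of logging:

    import numpy as np

    class LoggedArray(object):
        """Toy array-like: wraps an ndarray and logs every ufunc call."""

        def __init__(self, data):
            self.data = np.asarray(data)

        def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
            # Unwrap any LoggedArray inputs down to plain ndarrays.
            raw = [i.data if isinstance(i, LoggedArray) else i for i in inputs]
            print("dispatching %s.%s" % (ufunc.__name__, method))
            return LoggedArray(getattr(ufunc, method)(*raw, **kwargs))

    a = LoggedArray([1.0, 2.0, 3.0])
    b = np.add(a, 10)                  # numpy defers to a.__array_ufunc__
    c = np.multiply(a, np.arange(3))   # same protocol, any ufunc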

--
Robert Kern


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Matthieu Brucher
Hi,

Let's say that NumPy provided a GPU version of its arrays. How would that work
with all the packages that expect the memory to be allocated on the CPU?
It's not that NumPy refuses a GPU implementation; it's that one wouldn't
solve the problem of the GPU and CPU having different memory. When/if nVidia
(finally) decides that memory should also be accessible from the CPU (like
an AMD APU), then this argument becomes void.

Matthieu

2018-01-02 22:21 GMT+01:00 Yasunori Endo :

> Hi all
>
> Numba looks like a nice library to try.
> Thanks for the information.
>
>> This suggests a new, higher-level data model which supports replicating
>> data into different memory spaces (e.g. host and GPU). Then users (or some
>> higher layer in the software stack) can dispatch operations to suitable
>> implementations to minimize data movement.
>>
>> Given NumPy's current raw-pointer C API this seems difficult to
>> implement, though, as it is very hard to track memory aliases.
>>
> I understand that modifying numpy.ndarray for the GPU is technically difficult.
>
> So my next basic question is: why doesn't NumPy offer an
> ndarray-like interface (e.g. numpy.gpuarray)?
> I wonder why everybody is making a *separate* library, which confuses users.
> Is there any policy that NumPy refuses a standard GPU implementation?
>
> Thanks.
>
>
> --
> Yasunori Endo
>


-- 
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Yasunori Endo
Hi all

Numba looks like a nice library to try.
Thanks for the information.

> This suggests a new, higher-level data model which supports replicating
> data into different memory spaces (e.g. host and GPU). Then users (or some
> higher layer in the software stack) can dispatch operations to suitable
> implementations to minimize data movement.
>
> Given NumPy's current raw-pointer C API this seems difficult to implement,
> though, as it is very hard to track memory aliases.
>
I understand that modifying numpy.ndarray for the GPU is technically difficult.

So my next basic question is: why doesn't NumPy offer an
ndarray-like interface (e.g. numpy.gpuarray)?
I wonder why everybody is making a *separate* library, which confuses users.
Is there any policy that NumPy refuses a standard GPU implementation?

Thanks.


-- 
Yasunori Endo


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Stefan Seefeld
On 02.01.2018 15:22, Jerome Kieffer wrote:
> On Tue, 02 Jan 2018 15:37:16 +
> Yasunori Endo  wrote:
>
>> If the reason is just about human resources,
>> I'd like to try implementing GPU support in my NumPy fork.
>> My goal is to create a standard NumPy interface that supports
>> both CUDA and OpenCL, and more devices if available.
> I think this initiative already exists ... something that merges the
> approaches of CUDA and OpenCL, but I have no idea of the momentum
> behind it.
>
>> Are there other reasons not to support the GPU in NumPy?
> Yes. Matlab has such support, and the performance gain is on the order
> of 2x, versus 10x when addressing the GPU directly. All the time is
> spent sending data back and forth. Numba is indeed a good candidate,
> but it is limited to PTX assembly (i.e. CUDA, hence nVidia hardware)

This suggests a new, higher-level data model which supports replicating
data into different memory spaces (e.g. host and GPU). Then users (or
some higher layer in the software stack) can dispatch operations to
suitable implementations to minimize data movement.

Given NumPy's current raw-pointer C API this seems difficult to
implement, though, as it is very hard to track memory aliases.
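To make that slightly more concrete, here is a deliberately naive, pure-Python
sketch of such a model -- every name is invented for illustration, and np.copy
stands in for a real host/device transfer: a handle that tracks which memory
spaces hold an up-to-date replica and copies lazily.

    import numpy as np

    class ReplicatedArray(object):
        """Toy handle for one logical array replicated across memory spaces."""

        def __init__(self, data, space="host"):
            self._replicas = {space: np.asarray(data)}
            self._valid = {space}          # spaces with an up-to-date copy

        def in_space(self, space):
            """Return the replica in `space`, transferring only if needed."""
            if space not in self._valid:
                src = next(iter(self._valid))
                # np.copy stands in for an actual host<->device transfer.
                self._replicas[space] = np.copy(self._replicas[src])
                self._valid.add(space)
            return self._replicas[space]

        def write(self, space, value):
            """Write in one space and invalidate every other replica."""
            self._replicas[space] = np.asarray(value)
            self._valid = {space}

    x = ReplicatedArray(np.arange(10), space="host")
    x.in_space("gpu0")   # first touch pays for one (simulated) transfer
    x.in_space("gpu0")   # second touch is free: the replica is still valid

A dispatcher sitting on top of something like this could then route each
operation to an implementation living in whatever space already holds a valid
replica, which is the data-movement minimization described above.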

Regards,

Stefan

-- 

  ...ich hab' noch einen Koffer in Berlin...




Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Jerome Kieffer
On Tue, 02 Jan 2018 15:37:16 +
Yasunori Endo  wrote:

> If the reason is just about human resources,
> I'd like to try implementing GPU support in my NumPy fork.
> My goal is to create a standard NumPy interface that supports
> both CUDA and OpenCL, and more devices if available.

I think this initiative already exists ... something that merges the
approaches of CUDA and OpenCL, but I have no idea of the momentum behind
it.

> Are there other reasons not to support the GPU in NumPy?

Yes. Matlab has such support, and the performance gain is on the order
of 2x, versus 10x when addressing the GPU directly. All the time is
spent sending data back and forth. Numba is indeed a good candidate,
but it is limited to PTX assembly (i.e. CUDA, hence nVidia hardware).
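For reference, a minimal sketch of that Numba/CUDA route (assuming numba with
CUDA support and nVidia hardware); keeping the arrays on the device between
kernel launches is what avoids the back-and-forth cost described above:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_add(x, y, out):
        i = cuda.grid(1)
        if i < out.size:
            out[i] = 2.0 * x[i] + y[i]

    n = 1000000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    # Copy the inputs to the device once and keep them there; round-tripping
    # to the host for every operation is where the speed-up is lost.
    d_x = cuda.to_device(x)
    d_y = cuda.to_device(y)
    d_out = cuda.device_array_like(d_x)

    threads = 256
    blocks = (n + threads - 1) // threads
    scale_add[blocks, threads](d_x, d_y, d_out)

    result = d_out.copy_to_host()     # one transfer back at the very end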

Cheers,

Jerome


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Lev E Givon
On Tue, Jan 2, 2018 at 10:37 AM, Yasunori Endo  wrote:
> Hi
>
> I recently started working with Python and GPUs, and
> found that there are lots of libraries that provide an
> ndarray-like interface, such as CuPy/PyOpenCL/PyCUDA/etc.
> I got confused about which one to use.
>
> Is there any reason not to support GPU computation
> directly in NumPy itself?
> I want NumPy to support GPU computation as standard.
>
> If the reason is just about human resources,
> I'd like to try implementing GPU support in my NumPy fork.
> My goal is to create a standard NumPy interface that supports
> both CUDA and OpenCL, and more devices if available.
>
> Are there other reasons not to support the GPU in NumPy?
>
> Thanks.
> --
> Yasunori Endo

Check out numba - it may already address some of your needs:

https://numba.pydata.org/
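For instance, a hedged sketch of its ufunc-style entry point (the plain CPU
and 'parallel' targets need only numba itself; target='cuda' additionally
needs nVidia hardware and drivers):

    import numpy as np
    from numba import vectorize

    # The same scalar function can be compiled for 'cpu', 'parallel' or 'cuda'.
    @vectorize(['float32(float32, float32)'], target='cuda')
    def scaled_sum(a, b):
        return 2.0 * a + b

    x = np.random.rand(1000000).astype(np.float32)
    y = np.random.rand(1000000).astype(np.float32)
    z = scaled_sum(x, y)   # transfers and kernel launch happen implicitly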
-- 
Lev E. Givon, PhD
http://lebedov.github.io



[Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Yasunori Endo
Hi

I recently started working with Python and GPUs, and
found that there are lots of libraries that provide an
ndarray-like interface, such as CuPy/PyOpenCL/PyCUDA/etc.
I got confused about which one to use.

Is there any reason not to support GPU computation
directly in NumPy itself?
I want NumPy to support GPU computation as standard.

If the reason is just about human resources,
I'd like to try implementing GPU support in my NumPy fork.
My goal is to create a standard NumPy interface that supports
both CUDA and OpenCL, and more devices if available.

Are there other reasons not to support the GPU in NumPy?

Thanks.
-- 
Yasunori Endo