Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Gael Varoquaux
> The other packages are nice but I would really love to just use scipy/
> sklearn and have decompositions, factorizations, etc for big matrices
> go a little faster without recoding the algorithms. Thanks

If you have very big matrices, scikit-learn's PCA already uses randomized linear algebra, …
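The randomized approach Gael refers to can be sketched in plain NumPy (a simplified Halko-style randomized SVD; the function name and parameters below are illustrative, not scikit-learn's actual API, which is reached via `PCA(svd_solver='randomized')`):

```python
import numpy as np

def randomized_svd(A, k, n_oversample=10, rng=None):
    """Approximate top-k SVD of A via random projection (Halko et al. style sketch)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Project A onto a small random subspace to capture its dominant range.
    Omega = rng.standard_normal((n, k + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the sampled range
    # Solve the small problem in the subspace, then lift the left factors back.
    B = Q.T @ A                          # (k + n_oversample) x n, much smaller than A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# A low-rank test matrix: the approximation should be nearly exact.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err < 1e-8)
```

The point of the trick is that the expensive dense SVD runs on the small sketch `B` rather than on `A` itself, which is why it helps for very large matrices.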

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Lev E Givon
On Jan 2, 2018 8:35 PM, "Matthew Harrigan" wrote:

> Is it possible to have NumPy use a BLAS/LAPACK library that is GPU
> accelerated for certain problems? Any recommendations or readme's on how
> that might be set up? The other packages are nice but I would really love
> to just use scipy/sklearn and h…

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Matthew Harrigan
Is it possible to have NumPy use a BLAS/LAPACK library that is GPU accelerated for certain problems? Any recommendations or readme's on how that might be set up? The other packages are nice but I would really love to just use scipy/sklearn and have decompositions, factorizations, etc for big matrices go a little faster without recoding the algorithms. Thanks
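As a starting point you can inspect which BLAS/LAPACK NumPy was built against; swapping in a GPU-backed BLAS (NVIDIA ships NVBLAS as a drop-in over a reference BLAS, for example) is a link-time/configuration matter rather than a NumPy code change. A minimal check (the exact output of `show_config` varies by NumPy version):

```python
import numpy as np

# Print the BLAS/LAPACK build configuration NumPy was compiled against.
# Whether a GPU-backed library is actually used depends on which shared
# library those symbols resolve to at load time, not on NumPy itself.
np.show_config()

# Large matmuls and decompositions are the calls such a library accelerates.
a = np.random.default_rng(0).standard_normal((200, 200))
q, r = np.linalg.qr(a)  # dispatched to the linked LAPACK
print(q.shape, r.shape)
```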

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Stefan Seefeld
On 02.01.2018 16:36, Matthieu Brucher wrote:
> Hi,
>
> Let's say that Numpy provides a GPU version on GPU. How would that
> work with all the packages that expect the memory to be allocated on CPU?
> It's not that Numpy refuses a GPU implementation, it's that it
> wouldn't solve the problem of GPU/…

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Robert Kern
On Tue, Jan 2, 2018 at 1:21 PM, Yasunori Endo wrote:
>
> Hi all
>
> Numba looks like a nice library to try.
> Thanks for the information.
>
>> This suggests a new, higher-level data model which supports replicating
>> data into different memory spaces (e.g. host and GPU). Then users (or some
>> higher layer…

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Matthieu Brucher
Hi,

Let's say that Numpy provides a GPU version on GPU. How would that work with all the packages that expect the memory to be allocated on CPU? It's not that Numpy refuses a GPU implementation, it's that it wouldn't solve the problem of GPU/CPU having different memory. When/if nVidia decides…
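The core of Matthieu's objection is that the host/device boundary has to be crossed explicitly somewhere. With CuPy that boundary looks roughly like this (a sketch; it falls back to pure NumPy when CuPy or a GPU is absent, in which case the "transfers" are no-ops):

```python
import numpy as np

try:
    import cupy as cp
    xp = cp            # GPU-resident arrays
except ImportError:
    cp = None
    xp = np            # CPU fallback: device "transfers" become no-ops

host = np.arange(6, dtype=np.float64)

# Host -> device copy (or a plain CPU array in the fallback case).
dev = xp.asarray(host)
dev = dev * 2.0                      # computed where the data lives

# Device -> host copy: any package expecting CPU memory needs this step,
# which is exactly why a silently-GPU NumPy would break such packages.
back = cp.asnumpy(dev) if cp is not None else np.asarray(dev)
print(back)
```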

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Yasunori Endo
Hi all,

Numba looks like a nice library to try. Thanks for the information.

> This suggests a new, higher-level data model which supports replicating
> data into different memory spaces (e.g. host and GPU). Then users (or some
> higher layer in the software stack) can dispatch operations to suitable…

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Stefan Seefeld
On 02.01.2018 15:22, Jerome Kieffer wrote:
> On Tue, 02 Jan 2018 15:37:16 + Yasunori Endo wrote:
>
>> If the reason is just about human resources,
>> I'd like to try implementing GPU support on my NumPy fork.
>> My goal is to create standard NumPy interface which supports
>> both CUDA and OpenCL…

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Jerome Kieffer
On Tue, 02 Jan 2018 15:37:16 + Yasunori Endo wrote:

> If the reason is just about human resources,
> I'd like to try implementing GPU support on my NumPy fork.
> My goal is to create standard NumPy interface which supports
> both CUDA and OpenCL, and more devices if available.

I think this…

Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Lev E Givon
On Tue, Jan 2, 2018 at 10:37 AM, Yasunori Endo wrote:

> Hi
>
> I recently started working with Python and GPUs, and found that there are
> lots of libraries providing an ndarray-like interface, such as
> CuPy/PyOpenCL/PyCUDA/etc. I got confused about which one to use.
>
> Is there any reason not to support G…

[Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Yasunori Endo
Hi,

I recently started working with Python and GPUs, and found that there are lots of libraries providing an ndarray-like interface, such as CuPy/PyOpenCL/PyCUDA/etc. I got confused about which one to use.

Is there any reason not to support GPU computation directly in NumPy itself? I want NumPy to support…