Is it possible to have NumPy use a BLAS/LAPACK library that is
GPU-accelerated for certain problems? Any recommendations or READMEs on
how that might be set up? The other packages are nice, but I would really
love to just use scipy/sklearn and have decompositions, factorizations,
etc. for big matrices go a little faster without recoding the algorithms.
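
To be concrete about what I mean (a sketch only, assuming CuPy and a CUDA
GPU are available; none of this is part of NumPy itself), this is the kind
of thing I can do today with a NumPy-like GPU array library, but it means
rewriting call sites instead of having scipy/sklearn pick up a faster
backend underneath:

    # Sketch only: assumes a CUDA GPU and the CuPy package (not NumPy itself).
    import numpy as np
    import cupy as cp  # GPU array library with a NumPy-like API

    a_host = np.random.rand(4096, 4096).astype(np.float32)
    a_gpu = cp.asarray(a_host)             # explicit host -> device copy

    # GPU-backed SVD through CuPy's NumPy-like linalg namespace
    u, s, vt = cp.linalg.svd(a_gpu, full_matrices=False)

    s_host = cp.asnumpy(s)                 # copy the result back to the host

That works, but it touches every call site, which is exactly what I am
hoping to avoid.  Thanks!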

On Tue, Jan 2, 2018 at 5:04 PM, Stefan Seefeld <ste...@seefeld.name> wrote:

> On 02.01.2018 16:36, Matthieu Brucher wrote:
>
> Hi,
>
> Let's say that NumPy provided a GPU implementation. How would that work
> with all the packages that expect the memory to be allocated on the CPU?
> It's not that NumPy refuses a GPU implementation; it's that one wouldn't
> solve the problem of the GPU and CPU having separate memory spaces.
> When/if NVIDIA (finally) decides that GPU memory should also be
> accessible from the CPU (as with AMD APUs), this argument becomes void.
>
>
> I actually doubt that. Sure, having unified memory is convenient for the
> programmer. But as long as copying data between the host and the GPU is
> orders of magnitude slower than copying data locally, performance will
> suffer. Addressing this performance issue requires a NUMA-like approach:
> moving the operation to where the data resides, rather than treating all
> data locations as equal.
>
>
> --
>
>       ...ich hab' noch einen Koffer in Berlin...
>
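
On the copy-cost point above: a quick (machine-dependent) way to see it,
again a sketch assuming CuPy and a CUDA device, is to time a host-to-device
transfer against a copy that stays in device memory:

    # Sketch only: assumes CuPy and a CUDA device; absolute times vary by machine.
    import time
    import numpy as np
    import cupy as cp

    x_host = np.random.rand(8192, 8192).astype(np.float32)
    x_gpu = cp.asarray(x_host)            # warm-up copy; initializes the CUDA context
    cp.cuda.Device().synchronize()

    t0 = time.perf_counter()
    _ = cp.asarray(x_host)                # host -> device transfer (PCIe)
    cp.cuda.Device().synchronize()
    t1 = time.perf_counter()

    _ = x_gpu.copy()                      # copy within device memory
    cp.cuda.Device().synchronize()
    t2 = time.perf_counter()

    print("host->device: %.3fs  device-local: %.3fs" % (t1 - t0, t2 - t1))
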
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
