Hi,

Let's say that NumPy provided a GPU array type. How would that work
with all the packages that expect the memory to be allocated on the CPU?
It's not that NumPy refuses a GPU implementation; it's that one wouldn't
solve the problem of the GPU and CPU having separate memory spaces. When/if
NVIDIA (finally) decides that GPU memory should also be accessible from the
CPU (as on AMD APUs), then this argument becomes void.
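To make the mismatch concrete, here is a toy sketch of the kind of
higher-level data model the earlier message describes: a container that
tracks which memory space each copy of its data lives in and copies on
demand. All names here (ReplicatedArray, cpu_only_sum) are hypothetical,
and plain list copies stand in for real host/device transfers; libraries
like CuPy or Numba need far more machinery than this.

```python
class ReplicatedArray:
    """Toy container keeping one copy of its data per memory space."""

    def __init__(self, data, space="cpu"):
        # Store the initial copy under the space it was allocated in.
        self._copies = {space: list(data)}

    def to(self, space):
        # "Move" the data: lazily copy into the requested space if absent.
        if space not in self._copies:
            src = next(iter(self._copies.values()))
            self._copies[space] = list(src)  # stands in for a real transfer
        return self._copies[space]


def cpu_only_sum(arr):
    # Stands in for a third-party package that assumes CPU-resident memory.
    return sum(arr.to("cpu"))


a = ReplicatedArray([1, 2, 3], space="gpu")
print(cpu_only_sum(a))  # prints 6: the layer copied to "cpu" first
```

The point of the sketch is that every consumer has to go through the
dispatch layer (`to()`); code that grabs a raw pointer, as NumPy's C API
allows, bypasses it and breaks the model.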

Matthieu

2018-01-02 22:21 GMT+01:00 Yasunori Endo <jo7...@gmail.com>:

> Hi all
>
> Numba looks like a nice library to try.
> Thanks for the information.
>
> This suggests a new, higher-level data model which supports replicating
>> data into different memory spaces (e.g. host and GPU). Then users (or some
>> higher layer in the software stack) can dispatch operations to suitable
>> implementations to minimize data movement.
>>
>> Given NumPy's current raw-pointer C API this seems difficult to
>> implement, though, as it is very hard to track memory aliases.
>>
> I understand that modifying numpy.ndarray for GPU support is technically
> difficult.
>
> So my next basic question is: why doesn't NumPy offer an ndarray-like
> GPU interface (e.g. numpy.gpuarray)?
> I wonder why everybody is making *separate* libraries, which leaves
> users confused.
> Is there a policy that NumPy refuses a standard GPU implementation?
>
> Thanks.
>
>
> --
> Yasunori Endo
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
>


-- 
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
