Re: [Numpy-discussion] Extending C with Python

2018-01-31 Thread Stefan Seefeld
On 31.01.2018 17:58, Chris Barker wrote:
> I'm guessing you could use Cython to make this easier.

... or Boost.Python (http://boostorg.github.io/python), which has
built-in support for NumPy
(http://boostorg.github.io/python/doc/html/numpy/index.html), and
supports both directions: extending Python with C++, as well as
embedding Python into C++ applications.


Stefan

-- 

  ...ich hab' noch einen Koffer in Berlin...


___
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Stefan Seefeld
On 02.01.2018 16:36, Matthieu Brucher wrote:
> Hi,
>
> Let's say that NumPy provided a GPU version. How would that
> work with all the packages that expect the memory to be allocated on the CPU?
> It's not that NumPy refuses a GPU implementation; it's that one
> wouldn't solve the problem of the GPU and CPU having separate memory. When/if
> nVidia (finally) decides that memory should also be accessible from
> the CPU (as with AMD APUs), then this argument becomes moot.

I actually doubt that. Sure, unified memory is convenient for the
programmer. But as long as copying data between host and GPU is
orders of magnitude slower than accessing local memory, performance will
suffer. Addressing this performance issue requires a NUMA-like
approach: move the operation to where the data resides, rather than
treating all data locations as equal.
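To make the trade-off concrete, here is a small illustrative cost model in
Python. The bandwidth figures are rough assumptions (a PCIe-class host/GPU
link vs. on-device memory), not measurements:

```python
# Illustrative cost model: shipping the data vs. shipping the operation.
# Both bandwidth numbers below are rough assumptions, not measurements.
HOST_GPU_BW = 16e9    # assumed host<->GPU transfer rate, bytes/s (PCIe-class)
DEVICE_BW   = 900e9   # assumed on-device memory bandwidth, bytes/s

def copy_to_host_cost(nbytes):
    """Cost of moving the data to where the code runs (one transfer)."""
    return nbytes / HOST_GPU_BW

def dispatch_to_device_cost(nbytes):
    """Cost of running the operation where the data already resides
    (one streaming pass over device memory)."""
    return nbytes / DEVICE_BW

nbytes = 1 << 30  # a 1 GiB array
print(f"copy to host:  {copy_to_host_cost(nbytes) * 1e3:.1f} ms")
print(f"run on device: {dispatch_to_device_cost(nbytes) * 1e3:.1f} ms")
```

Under these assumed numbers the transfer alone dwarfs the on-device pass,
which is the NUMA-like argument in a nutshell.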

Stefan



Re: [Numpy-discussion] Direct GPU support on NumPy

2018-01-02 Thread Stefan Seefeld
On 02.01.2018 15:22, Jerome Kieffer wrote:
> On Tue, 02 Jan 2018 15:37:16 +
> Yasunori Endo  wrote:
>
>> If the reason is just about human resources,
>> I'd like to try implementing GPU support on my NumPy fork.
>> My goal is to create standard NumPy interface which supports
>> both CUDA and OpenCL, and more devices if available.
> I think this initiative already exists ... something which merges the
> approaches of CUDA and OpenCL, but I have no idea of the momentum behind
> it.
>
>> Are there other reason not to support GPU on NumPy?
> yes. Matlab has such support, and the performance gain is on the order
> of 2x, vs 10x when addressing the GPU directly. All the time is spent
> sending data back and forth. Numba is indeed a good candidate, but it is
> limited to PTX assembly (i.e. CUDA, hence nVidia hardware).

This suggests a new, higher-level data model which supports replicating
data into different memory spaces (e.g. host and GPU). Then users (or
some higher layer in the software stack) can dispatch operations to
suitable implementations to minimize data movement.

Given NumPy's current raw-pointer C API this seems difficult to
implement, though, as it is very hard to track memory aliases.
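A minimal sketch of what such a data model could look like, in plain Python,
with the "gpu" space simulated by a second NumPy buffer (the class and method
names here are hypothetical, not a proposed API):

```python
import numpy as np

class MultiSpaceArray:
    """Hypothetical sketch: an array whose data may be replicated across
    memory spaces, with operations dispatched to where a valid copy
    already lives. The 'gpu' space is simulated by a NumPy buffer."""

    def __init__(self, data):
        self.copies = {"host": np.asarray(data, dtype=float)}  # valid copies
        self.transfers = 0                                     # movements done

    def _ensure(self, space):
        # Replicate the data into `space` only if no valid copy is there.
        if space not in self.copies:
            src = next(iter(self.copies.values()))
            self.copies[space] = src.copy()   # stands in for a real transfer
            self.transfers += 1
        return self.copies[space]

    def scale(self, factor, space="gpu"):
        # Dispatch the operation to `space`, moving data only if needed.
        buf = self._ensure(space)
        buf *= factor
        self.copies = {space: buf}            # other copies are now stale
        return self

a = MultiSpaceArray([1.0, 2.0, 3.0])
a.scale(2.0).scale(3.0)   # the second op finds the data already on "gpu"
print(a.copies["gpu"], a.transfers)
```

The point of the sketch is the bookkeeping: because the wrapper knows where
valid copies live, the second operation triggers no transfer at all, which is
exactly the bookkeeping NumPy's raw-pointer C API cannot do today.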

Regards,

Stefan
