> Date: Wed, 31 Aug 2016 13:28:21 +0200
> From: Michael Bieri <mibi...@gmail.com>
>
> I'm not quite sure which approach is state-of-the-art as of 2016. How would
> you do it if you had to make a C/C++ library available in Python right now?
>
> In my case, I have a C library with some scientific functions on matrices
> and vectors. You will typically call a few functions to configure the
> computation, then hand over some pointers to existing buffers containing
> vector data, then start the computation, and finally read back the data.
> The library can also use MPI to parallelize.
>

Depending on how minimal and universal you want to keep things, I use
the ctypes approach quite often, i.e. treat your numpy inputs and
outputs as arrays of doubles etc. using the ndpointer(...) syntax. I
find it works well if you have a small number of well-defined
functions (not too many options) which are numerically very heavy.
With this approach I usually wrap each C function in a thin Python
function that checks the inputs for contiguity, passes in the sizes
etc., and allocates the numpy array for the result (see the sketch
below).
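
For illustration, here is a minimal sketch of that pattern. It assumes
a hypothetical shared library libvecmath.so exporting
void scale_vector(const double *in, double *out, size_t n, double factor);
the library name and signature are made up, but the ndpointer
declarations and the thin wrapper are the relevant part:

import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

# Load the (hypothetical) shared library; name and path are assumptions.
lib = ctypes.CDLL("./libvecmath.so")

# Declare the C signature once; ndpointer lets the function accept
# numpy arrays directly and checks dtype/contiguity at call time.
lib.scale_vector.restype = None
lib.scale_vector.argtypes = [
    ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),  # input buffer
    ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),  # output buffer
    ctypes.c_size_t,                                   # number of elements
    ctypes.c_double,                                    # scale factor
]

def scale_vector(x, factor):
    # Thin Python wrapper: enforce dtype and contiguity, allocate the
    # result array, and pass the size explicitly.
    x = np.ascontiguousarray(x, dtype=np.float64)
    out = np.empty_like(x)
    lib.scale_vector(x, out, x.size, factor)
    return out

If you prefer, numpy.ctypeslib.load_library can be used instead of
ctypes.CDLL so numpy picks the right platform-specific extension for
you.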

Peter