My goal was to be able to use ctypes, though, to avoid having to do manual memory management. Meanwhile, I managed to code something in C++ that may be useful (see attachment). It (should) work as follows.

1) On the Python side: convert a numpy array to a ctypes structure, and feed this to the C function:

    arg = c_ndarray(array)
    mylib.myfunc(arg)

2) On the C++ side: receive the numpy array in a C structure:

    myfunc(numpyArray<double> array)

3) Again on the C++ side: convert the C structure to an Ndarray class (e.g. for a 3D array):

    Ndarray<double,3> a(array)

No data copying is involved in any conversion, of course. Step 2 is required to keep ctypes happy. I can now use a[i][j][k], and the conversion from [i][j][k] to i*strides[0] + j*strides[1] + k*strides[2] is done at compile time using template metaprogramming. The price to pay is that the number of dimensions of the Ndarray has to be known at compile time (to instantiate the template), which I think is reasonable given the gain in convenience.

My first tests seem satisfactory. I would really appreciate it if someone could have a look at it and tell me whether it can be done much better than what I cooked up. If it turns out to interest more people, I'll put it on the scipy wiki.

Cheers,
Joris
ndarray.h
On 19 Mar 2008, at 16:22, Matthieu Brucher wrote: Hi,
_______________________________________________ Numpy-discussion mailing list [email protected] http://projects.scipy.org/mailman/listinfo/numpy-discussion
