On Tuesday, 26 July 2016 15:21:14 UTC+1, Peter Otten  wrote:
> 
> > I'm using ctypes to interface with a binary which returns a void pointer
> > (ctypes c_void_p) to a nested 64-bit float array:
> > [[1.0, 2.0], [3.0, 4.0], … ]
> > I need to read the values out, then return the pointer so it can be freed.
> > 
> > I'm using the following code to de-reference it:
> > 
> > # a 10 x 2 array (20 float64 values)
> > shape = (10, 2)
> > array_size = np.prod(shape)
> > mem_size = 8 * array_size
> > array_str = ctypes.string_at(ptr, mem_size)
> > # convert to a NumPy array and copy to a list
> > ls = np.frombuffer(array_str, dtype="float64",
> >                    count=array_size).reshape(shape).tolist()
> > # return pointer so it can be freed
> > drop_array(ptr)
> > return ls
> > 
> > This works correctly and consistently on Linux and OS X using NumPy 1.11.0,
> > but fails on both 32-bit and 64-bit Windows about 50% of the time, returning
> > nonsense values. Am I doing something wrong? Is there a better way to do
> > this?
> 
> I'd verify that the underlying memory has not been freed by the "binary" 
> when you are doing the ctypes/numpy processing. You might get the correct 
> values only when you are "lucky" and the memory has not yet been reused for 
> something else, and you are "lucky" on Linux/OSX more often than on 
> Windows...

I'm pretty sure the binary isn't freeing the memory prematurely: I wrote it,
I'm testing it, and the Python tests run 10^6 iterations of
array retrieval -> NumPy array allocation + copy to a Python list -> passing the
original array back to be freed.
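
For concreteness, each iteration looks roughly like this (get_array and the
library path are stand-in names I'm using here; drop_array is the same call as
in the code above):

import ctypes
import numpy as np

lib = ctypes.CDLL("./example.so")  # placeholder path to the binary
lib.get_array.restype = ctypes.c_void_p
lib.drop_array.argtypes = [ctypes.c_void_p]

shape = (10, 2)
array_size = int(np.prod(shape))
mem_size = 8 * array_size  # 8 bytes per float64

for _ in range(10 ** 6):
    ptr = lib.get_array()                        # array retrieval
    array_str = ctypes.string_at(ptr, mem_size)  # copy the raw bytes
    ls = np.frombuffer(array_str, dtype="float64",
                       count=array_size).reshape(shape).tolist()
    lib.drop_array(ptr)                          # return it to be freed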

I'm not completely ruling it out (it's difficult to run a .dylib / .so under
valgrind), but getting the ctypes / NumPy side right would at least let me
eliminate one potential source of problems.
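
One alternative I'm considering for the dereference step skips the intermediate
string_at() bytes copy: cast the pointer and view the memory with
np.ctypeslib.as_array, then let tolist() make the single copy before the
pointer is handed back. A sketch (deref is my own wrapper name, not part of
the library):

import ctypes
import numpy as np

def deref(ptr, shape):
    """Copy the float64 array behind a c_void_p into a Python list."""
    # View the foreign memory as float64 without an intermediate bytes copy.
    double_ptr = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_double))
    view = np.ctypeslib.as_array(double_ptr, shape=shape)
    # tolist() copies the data, so the result stays valid after the
    # pointer has been handed back with drop_array(ptr).
    return view.tolist()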