On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden <stu...@molden.no> wrote:
>
> I have a few radical suggestions:
>
> 1. Use ctypes as glue to the core DLL, so we can completely forget about
> refcounts and similar mess. Why put manual reference counting and error
> handling in the core? It's stupid.

I totally agree. I thought the refactoring was supposed to provide
simple data structures and simple algorithms to perform the C equivalents of
sin, cos, exp, +, -, *, /, dot, inv, ...

Let me explain with an example what I expected:

In the core C numpy library there would be a new "numpy_array" struct
with attributes

 numpy_array->buffer
 numpy_array->dtype
 numpy_array->ndim
 numpy_array->shape
 numpy_array->strides
 numpy_array->owndata
etc.

that replaces the current PyArrayObject, which contains Python C API stuff:

typedef struct PyArrayObject {
        PyObject_HEAD
        char *data;             /* pointer to raw data buffer */
        int nd;                 /* number of dimensions, also called ndim */
        npy_intp *dimensions;       /* size in each dimension */
        npy_intp *strides;          /* bytes to jump to get to the
                                   next element in each dimension */
        PyObject *base;         /* This object should be decref'd
                                   upon deletion of array */
                                /* For views it points to the original array */
                                /* For creation from buffer object it points
                                   to an object that should be decref'd on
                                   deletion */
                                /* For UPDATEIFCOPY flag this is an array
                                   to-be-updated upon deletion of this one */
        PyArray_Descr *descr;   /* Pointer to type structure */
        int flags;              /* Flags describing array -- see below*/
        PyObject *weakreflist;  /* For weakreferences */
        void *buffer_info;      /* Data used by the buffer interface */
} PyArrayObject;
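If the core became a plain DLL as suggested below, the proposed numpy_array
struct could be mirrored on the Python side with ctypes. A minimal sketch --
the field names follow the proposal above, but the concrete C types are my
assumption, since the struct does not exist yet:

```python
import ctypes

# Hypothetical ctypes mirror of the proposed numpy_array struct.
# Field names come from the proposal; the C types are an assumption.
class numpy_array(ctypes.Structure):
    _fields_ = [
        ("buffer",  ctypes.c_void_p),                   # pointer to raw data
        ("dtype",   ctypes.c_int),                      # type code
        ("ndim",    ctypes.c_int),                      # number of dimensions
        ("shape",   ctypes.POINTER(ctypes.c_ssize_t)),  # size in each dimension
        ("strides", ctypes.POINTER(ctypes.c_ssize_t)),  # bytes between elements
        ("owndata", ctypes.c_int),                      # does it own the buffer?
    ]

# Describe a contiguous 2 x 3 array of doubles.  The Python side keeps
# data/shape/strides alive; the struct only points at them.
data    = (ctypes.c_double * 6)()
shape   = (ctypes.c_ssize_t * 2)(2, 3)
strides = (ctypes.c_ssize_t * 2)(3 * 8, 8)

a = numpy_array(ctypes.cast(data, ctypes.c_void_p), 0, 2,
                ctypes.cast(shape, ctypes.POINTER(ctypes.c_ssize_t)),
                ctypes.cast(strides, ctypes.POINTER(ctypes.c_ssize_t)), 0)
```

No refcounting anywhere in sight: the struct is plain data, and ownership
stays with whoever allocated the buffer.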



Example:
--------------

If one calls the following Python code
x = numpy.zeros((N,M,K), dtype=float)
the memory allocation would be done on the Python side.

Calling a ufunc like
y = numpy.sin(x)
would first allocate the memory for y on the Python side
and then call a C function à la
numpy_unary_ufunc( double (*fcn_ptr)(double), numpy_array *x, numpy_array *y)

If y is already allocated, one would call
y = numpy.sin(x, out=y)

Similarly, z = x*y
would first allocate the memory for z and then call a C function à la
numpy_binary_ufunc( double (*fcn_ptr)(double, double), numpy_array *x,
numpy_array *y, numpy_array *z)


Similarly, other functions like dot:
z = dot(x, y, out=z)

would simply call a C function à la
numpy_dot( numpy_array *x, numpy_array *y, numpy_array *z)


If one wants to use numpy functions on the C side only, one would use
the numpy_array struct directly, i.e. one has to do the memory
management oneself in C. That is perfectly OK, since one is just
interested in using the algorithms.
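The calling convention above -- caller allocates, core fills in the
preallocated output -- can be sketched in pure Python. This is a toy model of
the proposed C loops (flat buffers only, function names taken from the
hypothetical signatures above, none of this is real NumPy API):

```python
import math

def numpy_unary_ufunc(fcn, x, y):
    """Toy model of the proposed C routine: apply fcn elementwise,
    writing into the preallocated output buffer y."""
    assert len(x) == len(y), "caller allocates an output of matching size"
    for i in range(len(x)):
        y[i] = fcn(x[i])

def numpy_binary_ufunc(fcn, x, y, z):
    """Same convention for binary operations: z[i] = fcn(x[i], y[i])."""
    assert len(x) == len(y) == len(z)
    for i in range(len(x)):
        z[i] = fcn(x[i], y[i])

x = [0.0, math.pi / 2, math.pi]
y = [0.0] * 3                      # "allocated on the Python side"
numpy_unary_ufunc(math.sin, x, y)  # y = sin(x), in place
```

The point of the convention is that the core never allocates or frees
anything, so it needs no error handling for out-of-memory and no notion of
ownership.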

>
> 2. The core should be a plain DLL, loadable with ctypes. (I know David
> Cournapeau and Robert Kern are going to hate this.) But if Python can have a
> custom loader for .pyd files, so can NumPy for its core DLL. For ctypes we
> just need to specify a fully qualified path to the DLL, which can be read
> from a config file or whatever.

That would probably be the easiest way.
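Loading such a core DLL with ctypes and declaring prototypes explicitly would
look roughly like this. Since the core DLL does not exist yet, libm stands in
for it here; on the real thing one would declare numpy_unary_ufunc etc. the
same way:

```python
import ctypes
import ctypes.util

# Stand-in for the hypothetical core DLL: locate libm by name, the way
# the proposal's "fully qualified path from a config file" would work.
libm_path = ctypes.util.find_library("m")
core = ctypes.CDLL(libm_path)  # CDLL releases the GIL around each call

# Declare the prototype explicitly, as one would for numpy_unary_ufunc.
core.sin.restype = ctypes.c_double
core.sin.argtypes = [ctypes.c_double]
```

ctypes.PyDLL would be the variant that keeps the GIL, for non-threadsafe
legacy code, as Sturla notes below.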

>
> 3. ctypes will take care of all the mess regarding the GIL. Again there is
> no need to re-invent the wheel. ctypes.CDLL releases the GIL when calling
> into C, and re-acquires before making callbacks to Python. ctypes.PyDLL
> keeps the GIL for legacy library code that are not threadsafe.

I don't have much experience with parallelizing code.
If one implements the algorithms side-effect free, it should be quite
easy to parallelize them on the C level, no?

>
> ctypes will also make porting to other Python implementations (or
> even other languages: Ruby, JavaScript) easier. Not to mention that it will
> make NumPy impervious to changes in the Python C API.
>
> 4. Write the core in C++ or Fortran (95/2003), not ANSI C (aka C89). Use
> std::vector<> instead of malloc/free for C++, and possibly templates for
> generics on various dtypes.

The only reason I see for C++ is the possibility of metaprogramming,
which is very ill-designed. I'd rather see some simple code
preprocessing of C code than
C++ template metaprogramming. And it should be possible to avoid
mallocs in the C code, no?
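That "simple code preprocessing" could be as little as a Python script that
expands one C loop template per dtype. A hypothetical sketch (this is not an
actual NumPy build step; the template and dtype table are made up for
illustration):

```python
# Hypothetical generator: expand one C loop per dtype from a string
# template, instead of using C++ template metaprogramming.
TEMPLATE = """\
static void unary_loop_{name}({ctype} (*fcn)({ctype}),
                              const {ctype} *x, {ctype} *y, size_t n)
{{
    for (size_t i = 0; i < n; i++)
        y[i] = fcn(x[i]);
}}
"""

DTYPES = [("float64", "double"), ("float32", "float")]

def generate_loops():
    """Return the C source for all dtype-specialized loops."""
    return "\n".join(TEMPLATE.format(name=name, ctype=ctype)
                     for name, ctype in DTYPES)

print(generate_loops())
```

The generated C stays plain C89, debuggable and readable, which template
metaprogramming output is not.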


>
> 5. Allow OpenMP pragmas in the core. If arrays are above a certain size, it
> should switch to multi-threading.
>
> Sturla


Just my 2 cents,

Sebastian



> On 10.06.2010 15:26, Charles R Harris wrote:
>
> A few thoughts came to mind while reading the initial writeup.
>
> 1) How is the GIL handled in the callbacks?
> 2) What about error handling? That is tricky to get right, especially in C
> and with reference counting.
> 3) Is there a general policy as to how the reference counting should be
> handled in specific functions? That is, who does the reference
> incrementing/decrementing?
> 4) Boost has some reference counted pointers, have you looked at them? C++
> is admittedly a very different animal for this sort of application.
>
> Chuck
>
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
