When writing a numpy extension module, what is the preferred way to deal with all the possible types an ndarray can have?
I have some data-processing functions I need to implement, and they need to be generic and work for all the possible numerical dtypes. I do not want to re-implement the same C code for every type, so the way I approached it was to use a C++ function template to implement the processing. Then I have a dispatching function that checks the type of the input ndarray and calls the correct instantiation. Is there a better way?

For example, suppose I have a processing function template:

    template <typename T>
    int Resize(T *datai)
    {
        ...
    }

Then in my dispatch function, I do:

    switch (PyArray_TYPE(bufi)) {
    case NPY_UBYTE:
        Resize<npy_ubyte>((npy_ubyte *) PyArray_DATA(bufi));
        break;
    case NPY_BYTE:
        Resize<npy_byte>((npy_byte *) PyArray_DATA(bufi));
        break;
    case NPY_USHORT:
        Resize<npy_ushort>((npy_ushort *) PyArray_DATA(bufi));
        break;
    case NPY_SHORT:
        Resize<npy_short>((npy_short *) PyArray_DATA(bufi));
        break;
    case NPY_UINT:
        Resize<npy_uint>((npy_uint *) PyArray_DATA(bufi));
        break;
    ...
    }

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion