Re: [Numpy-discussion] Fortran order arrays to and from numpy arrays
Alexander Schmolck wrote:

> 2. Despite this overhead, copying around large arrays (e.g. >= 1e5 elements) in the above way causes notable additional overhead. Whilst I don't think there's a sane way to avoid copying by sharing data between numpy and matlab, the copying could likely be done better.

Alex, what do you think about hybrid arrays?
http://www.mail-archive.com/numpy-discussion@lists.sourceforge.net/msg03748.html

___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Questions about cross-compiling extensions for mac-ppc and mac-intel
Hi folks,

I've been doing a lot of web-reading on the subject, but have not been completely able to synthesize all of the disparate bits of advice about building python extensions as Mac-PPC and Mac-Intel fat binaries, so I'm turning to the wisdom of this list for a few questions.

My general goal is to make a double-clickable Mac installer of a set of tools built around numpy, numpy's distutils, a very hacked-up version of PIL, and some Fortran code too. To this end, I need to figure out how to get the numpy distutils to cross-compile, generating PPC code and Intel code in separate builds -- and/or generating a universal binary all in one go. (I'd like to distribute a universal version of numpy, but I think that my own code needs to be built/distributed separately for each architecture due to endianness issues.) Is there explicit support in distutils for this, or is it a matter of setting the proper environment variables to entice gcc and gfortran to generate code for a specific architecture?

One problem is that PIL is a tricky beast, even in the neutered form that I'm using it in. It does a compile-time check for the endianness of the system, and a compile-time search for the zlib to use, both of which are problematic. To address the former, I'd like to be able to (say) include something like 'config_endian --big' on the 'python setup.py' command line, and have that information trickle down to the PIL config script (a few subpackages deep). Is this easy or possible?

To address the latter, I think I need to have the PIL extensions dynamically link against '/Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libz.dylib', which is the fat-binary version of the library, using the headers from '/Developer/SDKs/MacOSX10.4u.sdk/usr/include/zlib.h'. Right now, PIL is using system_info from numpy.distutils to find the valid library paths on which libz and its headers might live. This is nice and more or less platform-neutral, which I like.
How best should I convince/configure numpy.distutils.system_info to put '/Developer/SDKs/MacOSX10.4u.sdk/usr/{lib,include}' on the output of get_include_dirs() and get_lib_dirs()?

Thanks for any advice or counsel,

Zach Pincus
Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine
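For the cross-compilation part of the question, a common approach on OS X of that era (a sketch under assumptions, not advice from this thread) was to push per-architecture flags through the environment variables that distutils hands to the compilers:

```shell
# Sketch: build a universal (ppc + i386) extension by passing -arch flags
# through the environment distutils forwards to gcc. The SDK path matches
# the one mentioned above; the -arch/-isysroot flags are standard Apple gcc
# options, but whether numpy.distutils forwards all of them to gfortran as
# well may vary by version -- treat this as a starting point.
export SDK=/Developer/SDKs/MacOSX10.4u.sdk
export CFLAGS="-arch ppc -arch i386 -isysroot $SDK"
export LDFLAGS="-arch ppc -arch i386 -Wl,-syslibroot,$SDK"
python setup.py build

# For separate single-architecture builds, use one -arch flag per pass:
#   CFLAGS="-arch ppc -isysroot $SDK" python setup.py build
```
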
Re: [Numpy-discussion] Fortran order arrays to and from numpy arrays
Andrew Straw [EMAIL PROTECTED] writes:

>> 2. Despite this overhead, copying around large arrays (e.g. >= 1e5 elements) in the above way causes notable additional overhead. Whilst I don't think there's a sane way to avoid copying by sharing data between numpy and matlab, the copying could likely be done better.
>
> Alex, what do you think about hybrid arrays?
> http://www.mail-archive.com/numpy-discussion@lists.sourceforge.net/msg03748.html

Oh, that's why I said no *sane* way :) I read about hybrid arrays, but as far as I can tell the only way to use them to avoid copying stuff around is to create your own hybrid array memory pool (as you suggested downthread), which seems a highly unattractive pain-for-gain trade-off, especially given that you *do* have to reorder the data as you go from numpy to matlab -- unless of course I'm missing something.

I have the impression that just memcpy'ing a single chunk of data isn't that much of a performance sink, but the reordering copy that mlabwrap currently does seems rather expensive. In other words, I think going from matlab to numpy is fine (just memcpy into a newly created fortran-order numpy array, or more or less equivalently, memcpy into a C-order array, transpose and reshape); the question appears to be how best to go from numpy to matlab when the numpy array isn't fortran-contiguous.

I assume that it makes more sense to rely on some numpy functionality than to use a custom reordering-copy routine, especially if I want to move to ctypes later. Is there anything better than:

1. allocating a matlab array
2. transposing and reshaping the numpy array
3. allocating (or keeping around) a temporary numpy array with data pointing to the matlab array data
4. using some function (PyArray_CopyInto?) to copy from the transposed, reshaped numpy array into the temporary numpy array

thereby filling the matlab array with an appropriately reordered copy of the original array?
cheers, alex
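The four-step scheme above can be sketched in pure Python. Here a bytearray stands in for the matlab-allocated buffer (which would really come from something like mxCreateDoubleMatrix), and the final assignment plays the role PyArray_CopyInto plays at the C level:

```python
import numpy as np

# A C-ordered numpy array we want to hand to matlab in Fortran order.
a = np.arange(6.0).reshape(2, 3)

# Stand-in for memory allocated on the matlab side; a plain writable
# buffer of the right size (step 1 in the scheme above).
matlab_buf = bytearray(a.nbytes)

# Step 3: a temporary numpy array whose data points at the matlab buffer,
# viewed with Fortran strides so element (i, j) lands at offset i + rows*j.
dest = np.frombuffer(matlab_buf, dtype=a.dtype).reshape(a.shape, order='F')

# Step 4: this assignment is the reordering copy itself -- one pass from
# the C-ordered source into the Fortran-ordered destination, with no
# intermediate copy of the data.
dest[...] = a

# matlab_buf now holds the elements in column-major order: 0 3 1 4 2 5
```

The point of the sketch is that steps 2-4 collapse into a single strided copy once the destination is viewed with Fortran strides; no explicit transpose/reshape of the source is needed.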
Re: [Numpy-discussion] Fortran order arrays to and from numpy arrays
Charles R Harris [EMAIL PROTECTED] writes:

>> Unfortunately I don't see an easy way to use the same approach the other way (matlab doesn't seem to offer much on the C level to manipulate arrays), so I'd presumably need something like:
>>
>> stuff_into_matlab_array(a.T.reshape(a.shape).copy())
>>
>> the question is how to avoid doing two copies. Any comments appreciated,
>
> The easiest way to deal with the ordering is to use the order keyword in numpy:
>
> In [4]: a = array([0,1,2,3]).reshape((2,2), order='F')
> In [5]: a
> Out[5]:
> array([[0, 2],
>        [1, 3]])
>
> You would still need to get access to something to reshape, shared memory or something, but the key is that you don't have to reorder the elements, you just need the correct strides and offsets to address the elements in Fortran order. I have no idea if this works in numeric.

It doesn't work in Numeric, but that isn't much of an issue because I think it ought to be pretty much equivalent to transposing and reshaping. However, the problem is that I *do* need to reorder the elements for numpy-to-matlab, and I'm not sure how best to do this (without unnecessary copying and temporary numpy array creation, but using numpy functionality if possible).

'as
___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Fortran order arrays to and from numpy arrays
On 25 Feb 2007 01:44:01 +, Alexander Schmolck [EMAIL PROTECTED] wrote:

>>> Unfortunately I don't see an easy way to use the same approach the other way (matlab doesn't seem to offer much on the C level to manipulate arrays), so I'd presumably need something like:
>>>
>>> stuff_into_matlab_array(a.T.reshape(a.shape).copy())
>>>
>>> the question is how to avoid doing two copies. Any comments appreciated,
>>
>> The easiest way to deal with the ordering is to use the order keyword in numpy:
>>
>> In [4]: a = array([0,1,2,3]).reshape((2,2), order='F')
>> In [5]: a
>> Out[5]:
>> array([[0, 2],
>>        [1, 3]])
>>
>> You would still need to get access to something to reshape, shared memory or something, but the key is that you don't have to reorder the elements, you just need the correct strides and offsets to address the elements in Fortran order. I have no idea if this works in numeric.
>
> It doesn't work in Numeric, but that isn't much of an issue because I think it ought to be pretty much equivalent to transposing and reshaping. However, the problem is that I *do* need to reorder the elements for numpy-to-matlab, and I'm not sure how best to do this (without unnecessary copying and temporary numpy array creation, but using numpy functionality if possible).

I don't see any way to get around a copy, but you can make numpy do the work. For example:

In [12]: a = array([[0,1],[2,3]])
In [13]: b = array(a, order='f')
In [14]: a.flags
Out[14]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
In [15]: b.flags
Out[15]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

F_CONTIGUOUS is what you want. The trick is to somehow use memory in the construction of the reordered array that is already designated for matlab. I don't know how to do this, but I think it might be doable. Travis is your best bet to answer that question.
Chuck
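Chuck's point can be seen directly from numpy (a small illustration, not matlab-specific): array(a, order='F') -- or equivalently np.asfortranarray(a) -- performs exactly one reordering copy, after which the raw memory is already in the column-major layout matlab expects:

```python
import numpy as np

a = np.array([[0, 1], [2, 3]])   # C-contiguous
b = np.asfortranarray(a)         # one copy, Fortran-contiguous result

# Logically the two arrays are identical...
assert np.array_equal(a, b)

# ...but b's memory is laid out column-major, as ravel(order='K')
# (which walks the elements in memory order) shows:
print(list(b.ravel(order='K')))  # column-major: [0, 2, 1, 3]
```

So the remaining question in the thread is only how to make that one copy land in matlab-owned memory rather than in memory numpy allocates for itself.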
Re: [Numpy-discussion] How to tell numpy to use gfortran as a compiler?
Robert Kern wrote:

>> David Cournapeau wrote:
>>
>> Hi,
>>
>> I try to compile numpy using gfortran, using:
>>
>> python setup.py config --fcompiler=gnu
>>
>> But this does not work. Whatever option I try, numpy's build system uses g77, and as a result I have problems with my ATLAS library, which was compiled with gfortran. What should I do to compile numpy against BLAS/LAPACK compiled with gfortran?
>
> --fcompiler=gnu95

I also tried this option, without any luck. numpy still uses g77 (which is the gcc 3.4 Fortran compiler on my ubuntu system) instead of gfortran.

But now, I think I misunderstand some things: I thought that g77 was the 3.* version of the fortran compiler, and gfortran the 4.* one. But it looks like they also differ in the fortran dialect they support (I know nothing about fortran). So should I use gfortran at all to compile the fortran wrapper for BLAS/LAPACK by ATLAS? Or should I use g77? How can I be sure then that I won't have problems using the gcc 3* series for Fortran and the gcc 4* series for everything else (C, C++, and most if not all libraries compiled on my system)?

cheers,

David
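For reference, the usual way to force the compiler choice (a sketch for numpy.distutils of this era; exact behavior varies by version) is to pass --fcompiler to the config_fc command, which applies it to the whole build, and to ask numpy.distutils which Fortran compilers it can actually find:

```shell
# 'gnu' selects g77 (the gcc 3.x Fortran 77 compiler); 'gnu95' selects
# gfortran (the gcc 4.x Fortran 95 compiler). Passing the flag via
# config_fc makes it stick for all subsequent build steps:
python setup.py config_fc --fcompiler=gnu95 build

# List the Fortran compilers numpy.distutils knows about and which ones
# it can find on this system:
python setup.py config_fc --help-fcompiler
```

If g77 is still picked up, the --help-fcompiler listing is the first thing to check: it shows whether gfortran was detected at all.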