Re: [Numpy-discussion] Compiling numpy on Red-Hat Import Error with lapack_lite.so
That actually makes sense, because I am not sure which GNU compiler it was built with, but I think it is different. I have since compiled gcc myself, then Python, then the ATLAS libraries. Then I tried to install numpy. It got through the install without trouble and found the correct libraries. It failed when I tried to import it, with this error:

>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jspender/lib/python2.6/site-packages/numpy/__init__.py", line 137, in <module>
    import add_newdocs
  File "/home/jspender/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from numpy.lib import add_newdoc
  File "/home/jspender/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in <module>
    from polynomial import *
  File "/home/jspender/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 17, in <module>
    from numpy.linalg import eigvals, lstsq
  File "/home/jspender/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 48, in <module>
    from linalg import *
  File "/home/jspender/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 23, in <module>
    from numpy.linalg import lapack_lite
ImportError: /home/jspender/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_concat_string

Any ideas?
Cheers, Jeff

On Fri, Jul 8, 2011 at 12:37 AM, Bruce Southey bsout...@gmail.com wrote:
On 07/07/2011 05:23 AM, Jeffrey Spencer wrote:
The error is below:

creating build/temp.linux-x86_64-2.6/numpy/core/blasdot
compile options: '-DATLAS_INFO=\"None\" -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.linux-x86_64-2.6/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/home/jspender/include/python2.6 -Ibuild/src.linux-x86_64-2.6/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.6/numpy/core/src/umath -c'
gcc: numpy/core/blasdot/_dotblas.c
numpy/core/blasdot/_dotblas.c: In function ‘dotblas_matrixproduct’:
numpy/core/blasdot/_dotblas.c:239: warning: comparison of distinct pointer types lacks a cast
numpy/core/blasdot/_dotblas.c:257: warning: passing argument 3 of ‘*(PyArray_API + 2240u)’ from incompatible pointer type
numpy/core/blasdot/_dotblas.c:292: warning: passing argument 3 of ‘*(PyArray_API + 2240u)’ from incompatible pointer type
gcc -pthread -shared build/temp.linux-x86_64-2.6/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.6 -lf77blas -lcblas -latlas -o build/lib.linux-x86_64-2.6/numpy/core/_dotblas.so
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: cannot find -lf77blas
collect2: ld returned 1 exit status
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: cannot find -lf77blas
collect2: ld returned 1 exit status
error: Command gcc -pthread -shared build/temp.linux-x86_64-2.6/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.6 -lf77blas -lcblas -latlas -o
build/lib.linux-x86_64-2.6/numpy/core/_dotblas.so failed with exit status 1

Any help would be appreciated.

Python is looking for a 64-bit library, and the one in /usr/local/lib/ is either 32-bit or built with a different compiler version. If you have the correct library in another location then you need to point numpy to it, or just build everything with the same compiler.

Bruce

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
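Bruce's 32-bit vs. 64-bit diagnosis can be checked from the Python side. A minimal sketch using only the standard library (not part of the original thread):

```python
import struct
import platform

# A 64-bit Python cannot link against 32-bit static libraries such as the
# libf77blas.a that ld reports as "incompatible" above.
bits = struct.calcsize("P") * 8    # size of a C pointer, in bits
print(bits)                        # 64 on a 64-bit interpreter
print(platform.architecture()[0])  # e.g. '64bit'
```

If this reports 64 bits, the ATLAS libraries in /usr/local/lib must be 64-bit builds as well, made with the same compiler toolchain.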
Re: [Numpy-discussion] Compiling numpy on Red-Hat Import Error with lapack_lite.so
I'll answer my own question: it was a mix of two different Fortran compilers, so I specified the option python setup.py config_fc --fcompiler=gfortran build. All seems to be going well now.

On 07/08/2011 05:35 PM, Jeffrey Spencer wrote:
That actually makes sense, because I am not sure which GNU compiler it was built with, but I think it is different. I have since compiled gcc myself, then Python, then the ATLAS libraries. Then I tried to install numpy. It got through the install without trouble and found the correct libraries. It failed when I tried to import it, with this error:

>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jspender/lib/python2.6/site-packages/numpy/__init__.py", line 137, in <module>
    import add_newdocs
  File "/home/jspender/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from numpy.lib import add_newdoc
  File "/home/jspender/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in <module>
    from polynomial import *
  File "/home/jspender/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 17, in <module>
    from numpy.linalg import eigvals, lstsq
  File "/home/jspender/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 48, in <module>
    from linalg import *
  File "/home/jspender/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 23, in <module>
    from numpy.linalg import lapack_lite
ImportError: /home/jspender/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_concat_string

Any ideas?
Cheers, Jeff

On Fri, Jul 8, 2011 at 12:37 AM, Bruce Southey bsout...@gmail.com wrote:
On 07/07/2011 05:23 AM, Jeffrey Spencer wrote:
The error is below:

creating build/temp.linux-x86_64-2.6/numpy/core/blasdot
compile options: '-DATLAS_INFO=\"None\" -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.linux-x86_64-2.6/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/home/jspender/include/python2.6 -Ibuild/src.linux-x86_64-2.6/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.6/numpy/core/src/umath -c'
gcc: numpy/core/blasdot/_dotblas.c
numpy/core/blasdot/_dotblas.c: In function ‘dotblas_matrixproduct’:
numpy/core/blasdot/_dotblas.c:239: warning: comparison of distinct pointer types lacks a cast
numpy/core/blasdot/_dotblas.c:257: warning: passing argument 3 of ‘*(PyArray_API + 2240u)’ from incompatible pointer type
numpy/core/blasdot/_dotblas.c:292: warning: passing argument 3 of ‘*(PyArray_API + 2240u)’ from incompatible pointer type
gcc -pthread -shared build/temp.linux-x86_64-2.6/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.6 -lf77blas -lcblas -latlas -o build/lib.linux-x86_64-2.6/numpy/core/_dotblas.so
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: cannot find -lf77blas
collect2: ld returned 1 exit status
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: skipping incompatible /usr/local/lib/libf77blas.a when searching for -lf77blas
/usr/bin/ld: cannot find -lf77blas
collect2: ld returned 1 exit status
error: Command gcc -pthread -shared build/temp.linux-x86_64-2.6/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.6 -lf77blas -lcblas -latlas -o
build/lib.linux-x86_64-2.6/numpy/core/_dotblas.so failed with exit status 1

Any help would be appreciated.

Python is looking for a 64-bit library, and the one in /usr/local/lib/ is either 32-bit or built with a different compiler version. If you have the correct library in another location then you need to point numpy to it, or just build everything with the same compiler.

Bruce

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
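A quick Python-level sanity check related to the undefined _gfortran_concat_string symbol in this thread: that symbol lives in the gfortran runtime library, so a first diagnostic is whether that runtime is findable at all. This is a sketch added for illustration, not part of the thread's actual fix (which was the config_fc option above):

```python
import ctypes.util

# lapack_lite.so referenced a gfortran runtime symbol but the runtime was
# not linked in. Check whether libgfortran is on the loader's search path.
lib = ctypes.util.find_library("gfortran")
print(lib)  # a library name if the gfortran runtime is findable, else None
```

If this prints None, the gfortran runtime is not visible to the dynamic loader, which is consistent with a build mixing two different Fortran compilers.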
Re: [Numpy-discussion] Missing Values Discussion
Hi Travis,

On Fri, Jul 8, 2011 at 5:03 AM, Travis Oliphant oliph...@enthought.com wrote:

Hi all,

I want to first apologize for stepping into this discussion a bit late and for not being able to participate adequately. However, I want to offer a couple of perspectives and my opinion about what we should do, as well as clarify what I have instructed Mark to do as part of his summer work.

First, the discussion has reminded me how valuable it is to get feedback from all points of view. While it does lengthen the process, it significantly enhances the result. I strongly hope we can continue the tradition of respectful discussion on this mailing list where people's views are treated with respect --- even if we don't always have the time to understand them in depth. I also really appreciate people taking the time to visit on the phone call with me, as it gave me a chance to understand many opinions quickly and at least start to form a possibly useful opinion.

Basically, because there is not consensus, and in fact a strong and reasonable opposition to specific points, Mark's NEP as proposed cannot be accepted in its entirety right now. However, I believe an implementation of his NEP is useful and will be instructive in resolving the issues, and so I have instructed him to spend Enthought time on the implementation. Any changes that need to be made to the API before it is accepted into a released form of NumPy can still be made even after most of the implementation is completed, as far as I understand it. This is because most of the disagreement is about the specific ability to manipulate the masks independently of assigning missing data and the creation of an additional np.HIDE (np.IGNORE) concept at the Python level. Despite some powerful arguments on both sides of the discussion, I am confident that we can figure out an elegant solution that will work long term.
My current opinion is that I am very favorable to making easy the use-case that has been repeatedly described: having missing data that is *always* missing, and then having hidden data that you don't want to think about for a particular set of calculations (but also don't want to throw away by over-writing). I think it is important to make it easy to keep that data around without over-writing, but also to keep the idea of that kind of hidden data distinct from the idea of data you don't care about because it just isn't there. I also think it is important for the calculation infrastructure to have just one notion of missing data, which Mark's NEP handles beautifully.

It seems to me that some of the disagreement is one of perspective. Mark articulates very well the generic-programming, make-opaque-the-implementation perspective, with a focus on the implications of missing data for calculations. Nathaniel and Matthew articulate well the perspective of focusing on the data object itself and the desire to keep separate the different ideas behind missing data that have been described --- as well as giving a powerful description of the NumPy tradition of exposing the raw data to the Python side without hiding too much of the implementation from the user. I think it's a healthy discussion.

But, I would like to see Mark's code get completed so that we can start talking about code examples. Please don't interpret my instructing Mark to finish the code as "it's been decided". I simply think it's the best path forward to ultimately resolving the concerns. I would like to see an API worked out before summer's end --- and I'm hopeful everyone will be excited about what the resulting design is. I do think there is room for agreement in the present debate if we all remember to keep listening to each other. It takes a lot of effort to understand somebody else's point of view.
I have been grateful to see evidence of that behavior multiple times (in Mark's revamping of the NEP, in Matthew Brett's re-statement of his interpretation of Mark's views, in Nathaniel's working hard to engage the dialogue even in the throes of finishing his PhD, and many other examples). It makes me very happy to be a part of this community. I look forward to times when I can send more thoughtful and technical emails than this one.

Thanks for this email - it is very helpful. Personally I was worrying that:

A) Mark had not fully grasped our concern
B) Disagreement was not welcome

and this gave me an uncomfortable feeling about A) the resulting API and B) the discussion. You've dealt with both here, and thank you for that.

Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties?

Thanks again,
Matthew
Re: [Numpy-discussion] ANN: EPD 7.1 released
How does this match up with the recently announced release of ETS-4.0? Are the versions of the python modules the same? On Jul 8, 2011, at 12:37 AM, Ilan Schnell wrote: Hello, I am pleased to announce that EPD (Enthought Python Distribution) version 7.1 has been released. The most significant change is the addition of an EPD Free version, which has its own very liberal license, and can be downloaded and used free of any charge by anyone (not only academics). EPD Free includes a subset of the packages included in the full EPD. The highlights of this subset are: numpy, scipy, matplotlib, traits and chaco. To see which libraries are included in the free vs. full version, please see: http://www.enthought.com/products/epdlibraries.php In addition we have opened our PyPI build mirror for everyone. This means that one can type enpkg xyz for 10,000+ packages. However, there are still benefits to becoming an EPD subscriber. http://www.enthought.com/products/getepd.php Apart from the addition of EPD Free, this release includes updates to over 30 packages, including numpy, scipy, ipython and ETS. We have also added PySide, Qt and MDP to this release. Please find the complete list of additions, updates and bug fixes in the change log: http://www.enthought.com/products/changelog.php About EPD - The Enthought Python Distribution (EPD) is a kitchen-sink-included distribution of the Python programming language, including over 90 additional tools and libraries. The EPD bundle includes NumPy, SciPy, IPython, 2D and 3D visualization, and many other tools. EPD is currently available as a single-click installer for Windows XP, Vista and 7, MacOSX (10.5 and 10.6), RedHat 3, 4 and 5, as well as Solaris 10 (x86 and x86_64/amd64 on all platforms). All versions of EPD (32 and 64-bit) are free for academic use. An annual subscription including installation support is available for individual and commercial use. 
Additional support options, including customization, bug fixes and training classes are also available: http://www.enthought.com/products/epd_sublevels.php - Ilan ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Missing Values Discussion
On 07/08/2011 07:15 AM, Matthew Brett wrote:
Hi Travis,

On Fri, Jul 8, 2011 at 5:03 AM, Travis Oliphant oliph...@enthought.com wrote:

Hi all,

I want to first apologize for stepping into this discussion a bit late and for not being able to participate adequately. However, I want to offer a couple of perspectives and my opinion about what we should do, as well as clarify what I have instructed Mark to do as part of his summer work.

First, the discussion has reminded me how valuable it is to get feedback from all points of view. While it does lengthen the process, it significantly enhances the result. I strongly hope we can continue the tradition of respectful discussion on this mailing list where people's views are treated with respect --- even if we don't always have the time to understand them in depth. I also really appreciate people taking the time to visit on the phone call with me, as it gave me a chance to understand many opinions quickly and at least start to form a possibly useful opinion.

Basically, because there is not consensus, and in fact a strong and reasonable opposition to specific points, Mark's NEP as proposed cannot be accepted in its entirety right now. However, I believe an implementation of his NEP is useful and will be instructive in resolving the issues, and so I have instructed him to spend Enthought time on the implementation. Any changes that need to be made to the API before it is accepted into a released form of NumPy can still be made even after most of the implementation is completed, as far as I understand it. This is because most of the disagreement is about the specific ability to manipulate the masks independently of assigning missing data and the creation of an additional np.HIDE (np.IGNORE) concept at the Python level. Despite some powerful arguments on both sides of the discussion, I am confident that we can figure out an elegant solution that will work long term.
My current opinion is that I am very favorable to making easy the use-case that has been repeatedly described: having missing data that is *always* missing, and then having hidden data that you don't want to think about for a particular set of calculations (but also don't want to throw away by over-writing). I think it is important to make it easy to keep that data around without over-writing, but also to keep the idea of that kind of hidden data distinct from the idea of data you don't care about because it just isn't there. I also think it is important for the calculation infrastructure to have just one notion of missing data, which Mark's NEP handles beautifully.

It seems to me that some of the disagreement is one of perspective. Mark articulates very well the generic-programming, make-opaque-the-implementation perspective, with a focus on the implications of missing data for calculations. Nathaniel and Matthew articulate well the perspective of focusing on the data object itself and the desire to keep separate the different ideas behind missing data that have been described --- as well as giving a powerful description of the NumPy tradition of exposing the raw data to the Python side without hiding too much of the implementation from the user. I think it's a healthy discussion.

But, I would like to see Mark's code get completed so that we can start talking about code examples. Please don't interpret my instructing Mark to finish the code as "it's been decided". I simply think it's the best path forward to ultimately resolving the concerns. I would like to see an API worked out before summer's end --- and I'm hopeful everyone will be excited about what the resulting design is. I do think there is room for agreement in the present debate if we all remember to keep listening to each other. It takes a lot of effort to understand somebody else's point of view.
I have been grateful to see evidence of that behavior multiple times (in Mark's revamping of the NEP, in Matthew Brett's re-statement of his interpretation of Mark's views, in Nathaniel's working hard to engage the dialogue even in the throes of finishing his PhD, and many other examples). It makes me very happy to be a part of this community. I look forward to times when I can send more thoughtful and technical emails than this one.

Thanks for this email - it is very helpful. Personally I was worrying that:

A) Mark had not fully grasped our concern
B) Disagreement was not welcome

and this gave me an uncomfortable feeling about A) the resulting API and B) the discussion. You've dealt with both here, and thank you for that.

Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties?
Re: [Numpy-discussion] ANN: EPD 7.1 released
I'm not sure what you mean, when you ask if the Python modules are the same. EPD 7.1 includes ETS 4.0. - Ilan On Fri, Jul 8, 2011 at 8:21 AM, Robert Love rblove_li...@comcast.net wrote: How does this match up with the recently announced release of ETS-4.0? Are the versions of the python modules the same? On Jul 8, 2011, at 12:37 AM, Ilan Schnell wrote: Hello, I am pleased to announce that EPD (Enthought Python Distribution) version 7.1 has been released. The most significant change is the addition of an EPD Free version, which has its own very liberal license, and can be downloaded and used free of any charge by anyone (not only academics). EPD Free includes a subset of the packages included in the full EPD. The highlights of this subset are: numpy, scipy, matplotlib, traits and chaco. To see which libraries are included in the free vs. full version, please see: http://www.enthought.com/products/epdlibraries.php In addition we have opened our PyPI build mirror for everyone. This means that one can type enpkg xyz for 10,000+ packages. However, there are still benefits to becoming an EPD subscriber. http://www.enthought.com/products/getepd.php Apart from the addition of EPD Free, this release includes updates to over 30 packages, including numpy, scipy, ipython and ETS. We have also added PySide, Qt and MDP to this release. Please find the complete list of additions, updates and bug fixes in the change log: http://www.enthought.com/products/changelog.php About EPD - The Enthought Python Distribution (EPD) is a kitchen-sink-included distribution of the Python programming language, including over 90 additional tools and libraries. The EPD bundle includes NumPy, SciPy, IPython, 2D and 3D visualization, and many other tools. EPD is currently available as a single-click installer for Windows XP, Vista and 7, MacOSX (10.5 and 10.6), RedHat 3, 4 and 5, as well as Solaris 10 (x86 and x86_64/amd64 on all platforms). 
All versions of EPD (32 and 64-bit) are free for academic use. An annual subscription including installation support is available for individual and commercial use. Additional support options, including customization, bug fixes and training classes are also available: http://www.enthought.com/products/epd_sublevels.php - Ilan ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Missing Values Discussion
Hi,

Just checking - but is this:

On Fri, Jul 8, 2011 at 2:22 PM, Bruce Southey bsout...@gmail.com wrote:
... The one thing that we do need now is the code that implements the small set of core ideas (array creation and simple numerical operations). Hopefully that will provide a better grasp of the concepts and the performance differences to determine the acceptability of the approach(es).

in reference to this:

On 07/08/2011 07:15 AM, Matthew Brett wrote:
... Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties?

?

Cheers, Matthew

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy 1.6.1 release candidate 2
On 07.07.2011, at 7:16PM, Robert Pyle wrote:

.../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/numeric.py:1922: RuntimeWarning: invalid value encountered in absolute
  return all(less_equal(absolute(x-y), atol + rtol * absolute(y)))

Everything else completes with 3 KNOWNFAILs and 1 SKIP. This warning is not new to this release; I've seen it before but haven't tried tracking it down until today. It arises in allclose(). The comments state "If either array contains NaN, then False is returned." but no test for NaN is done, and NaNs are indeed what cause the warning. Inserting

if any(isnan(x)) or any(isnan(y)):
    return False

before current line number 1916 in numeric.py seems to fix it.

The same warning is still present in the current master; I just never paid attention to it because the tests still pass (it does correctly identify NaNs, because they are not less_equal the tolerance), but of course this should be properly fixed as you suggest.

Cheers, Derek

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
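The behaviour and the proposed guard can be sketched at the Python level. Here allclose_guarded is a hypothetical standalone version of the patched check for illustration, not NumPy's actual source:

```python
import numpy as np

x = np.array([1.0, np.nan])
y = np.array([1.0, np.nan])

# allclose() already returns False when NaN is present (NaN compares
# unequal to everything); the complaint is only about the warning it
# emitted on the way in the 1.6 release candidate.
print(np.allclose(x, y))  # False

def allclose_guarded(x, y, rtol=1.e-5, atol=1.e-8):
    # Proposed guard: bail out before any NaN reaches the comparison,
    # so no "invalid value" warning can be raised.
    if np.any(np.isnan(x)) or np.any(np.isnan(y)):
        return False
    return bool(np.all(np.less_equal(np.abs(x - y), atol + rtol * np.abs(y))))

print(allclose_guarded(x, y))  # False, without touching NaN arithmetic
```

The guard short-circuits on NaN, matching the documented "If either array contains NaN, then False is returned" behaviour explicitly instead of relying on NaN failing the less_equal test.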
Re: [Numpy-discussion] potential bug in PyArray_MoveInto and PyArray_CopyInto?
On Thu, Jul 7, 2011 at 4:59 PM, James Bergstra bergs...@iro.umontreal.ca wrote:
On Thu, Jul 7, 2011 at 1:10 PM, Charles R Harris charlesr.har...@gmail.com wrote:
On Thu, Jul 7, 2011 at 11:03 AM, James Bergstra bergs...@iro.umontreal.ca wrote:

In numpy 1.5.1, the functions PyArray_MoveInto and PyArray_CopyInto don't appear to treat strides correctly. Evidence: PyNumber_InPlaceAdd(dst, src) modifies the correct subarray to which dst points. In the same context, PyArray_MoveInto(dst, src) modifies the first two rows of the underlying matrix instead of the first two columns. PyArray_CopyInto does the same. Is there something subtle going on here?

What are the strides/dims in src and dst? Chuck

In dst: strides = (40, 8), dims = (5, 2); in src: strides = (), dims = (). dst was sliced out of a 5x5 array of doubles. src is a 0-d array. James -- http://www-etud.iro.umontreal.ca/~bergstrj

I figured it out - I had forgotten to call PyArray_UpdateFlags after adjusting some strides. James -- http://www-etud.iro.umontreal.ca/~bergstrj

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
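For reference, the Python-level equivalent of the layout James describes behaves correctly, which is why only the C path (with its stale flags) went wrong. A small sketch, where the stride values assume 8-byte float64 items:

```python
import numpy as np

# dst is a strided view: the first two columns of a 5x5 array of doubles.
a = np.zeros((5, 5))
dst = a[:, :2]
print(dst.shape, dst.strides)   # (5, 2) (40, 8), matching the thread

src = np.array(7.0)             # 0-d array: shape (), strides ()

# Broadcasting assignment honours dst's strides, so the first two
# *columns* are filled, not the first two rows.
dst[...] = src
print(a[:, :2])
print(a[:, 2:])                 # remaining columns stay zero
```

At the C level the same operation only works if the array's contiguity flags agree with its strides, hence the need for PyArray_UpdateFlags after adjusting strides by hand.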
Re: [Numpy-discussion] New arrays in 1.6 not always C-contiguous
On 07.07.2011 15:24, Yoshi Rokuko wrote:

thank you for pointing that out! so how do you change your numpy related c code now, would you like to share?

Regardless of memory layout, we can always access element array[i,j,k] like this:

const int s0 = array->strides[0];
const int s1 = array->strides[1];
const int s2 = array->strides[2];
char *const data = array->data;
dtype *element = (dtype*)(data + i*s0 + j*s1 + k*s2);

To force a particular layout, I usually call np.ascontiguousarray or np.asfortranarray in Python or Cython before calling into C or Fortran. These functions will do nothing if the layout is already correct.

Sturla

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
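Sturla's Python/Cython advice above can be shown as a minimal runnable sketch (assuming NumPy):

```python
import numpy as np

# A Fortran-ordered array, e.g. something destined for LAPACK.
f = np.asfortranarray(np.arange(12, dtype=np.float64).reshape(3, 4))
assert f.flags['F_CONTIGUOUS'] and not f.flags['C_CONTIGUOUS']

# Forcing C layout copies, because f really is Fortran-ordered...
c = np.ascontiguousarray(f)
assert c.flags['C_CONTIGUOUS']

# ...but is a no-op (no copy) when the layout is already correct.
same = np.ascontiguousarray(c)
print(same is c)
```

Calling these helpers unconditionally before handing buffers to C or Fortran is therefore cheap: you only pay for a copy when one is actually needed.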
Re: [Numpy-discussion] New arrays in 1.6 not always C-contiguous
On 07.07.2011 14:10, Jens Jørgen Mortensen wrote:

So, this means I can't count on new arrays being C-contiguous any more. I guess there is a good reason for this.

Work with linear algebra (LAPACK) caused excessive and redundant array transpositions. Arrays would be transposed from C to Fortran order before they were passed to LAPACK, and returned arrays were transposed from Fortran to C order when used in Python. Signal and image processing in SciPy (FFTPACK) suffered from the same issue, as did certain optimization routines (MINPACK). Computer graphics with OpenGL was similarly impaired: the OpenGL library has a C frontend, but requires that all buffers and matrices are stored in Fortran order. The old behaviour of NumPy was very annoying. Now we can rely on NumPy to always use the most efficient memory layout, unless we request one in particular.

Yeah, and it also made NumPy look bad compared to Matlab, which always uses Fortran order for this reason ;-)

Sturla

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
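The behaviour under discussion can be observed directly from Python. A small sketch, assuming a NumPy recent enough (1.6+) to preserve input layout in operations:

```python
import numpy as np

c = np.ones((3, 4))              # C order by default
f = np.ones((3, 4), order='F')   # Fortran order on request
assert c.flags['C_CONTIGUOUS'] and f.flags['F_CONTIGUOUS']

# Since 1.6, an operation may keep the memory layout of its input rather
# than always producing a C-ordered result:
t = c.T + 0.0                    # input view is F-contiguous (a transpose)
print(t.flags['F_CONTIGUOUS'])
```

So code that hands arrays to C extensions should check flags or call np.ascontiguousarray rather than assume C order.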
Re: [Numpy-discussion] Missing Values Discussion
On 07/08/2011 08:58 AM, Matthew Brett wrote:
Hi,

Just checking - but is this:

On Fri, Jul 8, 2011 at 2:22 PM, Bruce Southey bsout...@gmail.com wrote:
... The one thing that we do need now is the code that implements the small set of core ideas (array creation and simple numerical operations). Hopefully that will provide a better grasp of the concepts and the performance differences to determine the acceptability of the approach(es).

in reference to this:

On 07/08/2011 07:15 AM, Matthew Brett wrote:
... Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties?

?

Cheers, Matthew

Simply, I think the time for discussion has passed and it is now time to see the 'cards'. I do not know enough (or anything) about the implementation, so I need code to know the actual 'cost' of Mark's idea in real situations. I am also curious about the implementation, as 'conditional' unmasking could be used to implement some of the missing-values ideas: that is, unmask all values that do not match some special value, like max(int) for int arrays and some IEEE 754 range (like 'Indeterminate') for floats. The reason is that I have major concerns with handling missing values in integer arrays, which Mark's idea hopefully will remove.

Bruce

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Missing Values Discussion
Hi, On Fri, Jul 8, 2011 at 6:38 PM, Bruce Southey bsout...@gmail.com wrote: On 07/08/2011 08:58 AM, Matthew Brett wrote: Hi, Just checking - but is this: On Fri, Jul 8, 2011 at 2:22 PM, Bruce Southeybsout...@gmail.com wrote: ... The one thing that we do need now is the code that implements the small set of core ideas (array creation and simple numerical operations). Hopefully that will provide a better grasp of the concepts and the performance differences to determine the acceptability of the approach(es). in reference to this: On 07/08/2011 07:15 AM, Matthew Brett wrote: ... Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties? ? Cheers, Matthew Simply, I think the time for discussion has passed and it is now time to see the 'cards'. I do not know enough (or anything) about the implementation so I need code to know the actual 'cost' of Mark's idea with real situations. Yes, I thought that was what you were saying. I disagree and think that discussion of the type that Nathaniel has started is a useful way to think more clearly and specifically about the API and what can be agreed. Otherwise we will come to the same impasse when Mark's code arrives. If that happens, we'll either lose the code because the merge is refused, or be forced into something that may not be the best way forward. Best, Matthew ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Missing Values Discussion
On Fri, Jul 8, 2011 at 12:55 PM, Matthew Brett matthew.br...@gmail.com wrote: Hi, On Fri, Jul 8, 2011 at 6:38 PM, Bruce Southey bsout...@gmail.com wrote: On 07/08/2011 08:58 AM, Matthew Brett wrote: Hi, Just checking - but is this: On Fri, Jul 8, 2011 at 2:22 PM, Bruce Southeybsout...@gmail.com wrote: ... The one thing that we do need now is the code that implements the small set of core ideas (array creation and simple numerical operations). Hopefully that will provide a better grasp of the concepts and the performance differences to determine the acceptability of the approach(es). in reference to this: On 07/08/2011 07:15 AM, Matthew Brett wrote: ... Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties? ? Cheers, Matthew Simply, I think the time for discussion has passed and it is now time to see the 'cards'. I do not know enough (or anything) about the implementation so I need code to know the actual 'cost' of Mark's idea with real situations. Yes, I thought that was what you were saying. I disagree and think that discussion of the type that Nathaniel has started is a useful way to think more clearly and specifically about the API and what can be agreed. Otherwise we will come to the same impasse when Mark's code arrives. If that happens, we'll either lose the code because the merge is refused, or be forced into something that may not be the best way forward. Best, Matthew ___ Unfortunately we need code from either side as an API etc. is not sufficient to judge anything. But I do not think we will be forced into anything as in the extreme situation you can keep old versions or fork the code in the really extreme case. Bruce ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Missing Values Discussion
Hi, On Fri, Jul 8, 2011 at 8:34 PM, Bruce Southey bsout...@gmail.com wrote: On Fri, Jul 8, 2011 at 12:55 PM, Matthew Brett matthew.br...@gmail.com wrote: Hi, On Fri, Jul 8, 2011 at 6:38 PM, Bruce Southey bsout...@gmail.com wrote: On 07/08/2011 08:58 AM, Matthew Brett wrote: Hi, Just checking - but is this: On Fri, Jul 8, 2011 at 2:22 PM, Bruce Southeybsout...@gmail.com wrote: ... The one thing that we do need now is the code that implements the small set of core ideas (array creation and simple numerical operations). Hopefully that will provide a better grasp of the concepts and the performance differences to determine the acceptability of the approach(es). in reference to this: On 07/08/2011 07:15 AM, Matthew Brett wrote: ... Can I ask - what do you recommend that we do now, for the discussion? Should we be quiet and wait until there is code to test, or, as Nathaniel has tried to do, work at reaching some compromise that makes sense to some or all parties? ? Cheers, Matthew Simply, I think the time for discussion has passed and it is now time to see the 'cards'. I do not know enough (or anything) about the implementation so I need code to know the actual 'cost' of Mark's idea with real situations. Yes, I thought that was what you were saying. I disagree and think that discussion of the type that Nathaniel has started is a useful way to think more clearly and specifically about the API and what can be agreed. Otherwise we will come to the same impasse when Mark's code arrives. If that happens, we'll either lose the code because the merge is refused, or be forced into something that may not be the best way forward. Best, Matthew ___ Unfortunately we need code from either side as an API etc. is not sufficient to judge anything. If I understand correctly, we are not going to get code from either side, we are only going to get code from one side. 
I cannot now see how the code will inform the discussion about the API, unless it turns out that the proposed API cannot be implemented. The substantial points are not about memory use or performance, but about how the API should work. If you can see some way that the code will inform the discussion, please say, I would honestly be grateful.

But I do not think we will be forced into anything as in the extreme situation you can keep old versions or fork the code in the really extreme case.

That would be a terrible waste, and potentially damaging to the community, so of course we want to do all we can to avoid those outcomes.

Best, Matthew
Re: [Numpy-discussion] gist gist: 1068264
Hi Bruce,

I'm replying on the list instead of on github, to make it easier for others to join in the discussion if they want. [For those joining in: this was a comment posted at https://gist.github.com/1068264 ]

On Fri, Jul 8, 2011 at 10:36 AM, bsouthey wrote:

I presume missing float values could be addressed with one of the 'special' ranges such as 'Indeterminate' in IEEE 754 (http://babbage.cs.qc.edu/IEEE-754/References.xhtml). The outcome should be determined by the IEEE special operations.

Right. An IEEE 754 double has IIRC about 2^53 distinct bit-patterns that all mean not-a-number. A few of these are used to signal different invalid operations:

In [20]: hex(np.asarray([np.nan]).view(dtype=np.uint64)[0])
Out[20]: '0x7ff8000000000000L'

In [21]: hex(np.log([0]).view(dtype=np.uint64)[0])
Out[21]: '0xfff0000000000000L'

In [22]: hex(np.divide([0.], [0,]).view(dtype=np.uint64)[0])
Out[22]: '0xfff8000000000000L'

...but that only accounts for, like, 10 of the 2^53 or something. The rest are simply unused. So what R does, and what we would do for dtype-style NAs, is just pick one of those (ideally the same one R uses), and declare that that is *not* not-a-number; it's NA.

So my real concern is handling integer arrays:

1) How will you find where the missing values are in an array? If there is a variable that denotes missing values are present (NA_flags?) then do you have to duplicate code to avoid this searching when an array has no missing values?

Each dtype has a bunch of C functions associated with it that say how to do comparisons, assignment, etc. In the miniNEP design, we add a new function to this list called 'isna', which every dtype that wants to support NAs has to define. Yes, this does mean that code which wants to treat NAs separately has to check for and call this function if it's present, but that seems to be inevitable... *all* of the dtype C functions are supposedly optional, so we have to check for them before calling them and do something sensible if they aren't defined.
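[The bit-pattern scheme described above can be sketched in present-day numpy by viewing the raw float bits. Everything here is illustrative, not numpy API: NA_BITS uses the payload R reportedly chose for NA_real_ (1954, i.e. 0x7a2, in the low bits), and is_na is a hypothetical helper.]

```python
import numpy as np

def bits(x):
    """Hex dump of a float64's raw IEEE 754 bit pattern."""
    return hex(np.asarray([x], dtype=np.float64).view(np.uint64)[0])

print(bits(np.nan))   # typically '0x7ff8000000000000'

# Any pattern with an all-ones exponent and a nonzero mantissa is a NaN, so
# one unused payload can be reserved as the NA marker.  R's NA_real_
# reportedly uses payload 1954 (0x7a2); we borrow it here for illustration.
NA_BITS = np.uint64(0x7FF00000000007A2)

def is_na(arr):
    # Bit-exact comparison: matches NA, but not ordinary NaNs.
    return arr.view(np.uint64) == NA_BITS

a = np.array([1.0, 2.0, 3.0])
a.view(np.uint64)[1] = NA_BITS       # plant an NA in slot 1

print(is_na(a))       # [False  True False]
print(np.isnan(a))    # [False  True False] -- NA is still a NaN to the FPU
```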
We could define a wrapper that calls the function if it's defined, or else just fills the provided buffer with zeros (to mean there are no NAs), and then code which wanted to avoid a special case could use that. But in general we probably do want to handle arrays that might have NAs differently from arrays which don't have NAs, because if there are no NAs present then it's quicker to skip the NA handling altogether. That's true for any NA implementation.

2) What happens if a normal operation equates to that value? If you use max(np.int8), such as when adding 1 to an array with an element of 126, or when overflow occurs:

np.arange(120,127, dtype=np.int8)+2
array([ 122, 123, 124, 125, 126, 127, -128], dtype=int8)

The -128 corresponds to the missing element, but is the second-to-last element now missing? This is worse if the overflow is larger.

Yeah, in the design as written, overflow (among other things) can create accidental NAs. Which kind of sucks. There are a few options:

-- Just live with it.

-- We could add a flag like NPY_NA_AUTO_CHECK, and when this flag is set, the ufunc loop runs 'isna' on its output buffer before returning. If there are any NAs there that did not arise from NAs in the input, then it raises an error. (The reason we would want to make it a flag is that this checking is pointless for dtypes like NA-string, and mostly pointless for dtypes like NA-float.) Also, we'd only want to enable this if we were using the NPY_NA_AUTO_UFUNC ufunc-delegation logic, because if you registered a special ufunc loop *specifically for your NA-dtype*, then presumably it knows what it's doing. This would also allow such an NA-dtype-specific ufunc loop to return NAs on purpose if it wanted to.

-- Use a dtype that adds a separate flag next to the actual integer to indicate NA-ness, instead of stealing one of the integer's values.
So your NA-int8 would actually be 2 bytes, where the first byte was 1 to indicate NA, or 0 to indicate that the second byte contains an actual int8. If you do this with larger integers, say an int32, then you have a choice: you could store your int32 in 8 bytes, in which case arithmetic etc. is fast, but you waste a bit of memory. Or you could store your int32 in 5 bytes, in which case arithmetic etc. becomes somewhat slower, but you don't waste any memory. (This latter case would basically be like using an unaligned or byteswapped array in current numpy, in terms of mechanisms and speed.)

-- Nothing in this design rules out a second implementation of NAs based on masking. Personally, as you know, I'm not a big fan, but if it were added anyway, then you could use that for your integers as well.

A related issue is, of the many ways we *can* do integer NA-dtype, which one *should* we do by default? I don't have a strong opinion, really; I haven't heard anyone say that they have huge quantities of integer-plus-NA data that they want to manipulate and
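[The two-byte flag-plus-payload layout described above can be emulated today with a structured dtype. This is only a sketch of the memory layout, not the proposed implementation; na_int8, add_const, and the NA-propagation rule are all illustrative names.]

```python
import numpy as np

# A structured dtype with an explicit flag byte next to the payload byte,
# mimicking the 2-byte NA-int8 layout: first byte marks NA-ness, second
# byte holds the actual int8 value.
na_int8 = np.dtype([('isna', np.uint8), ('value', np.int8)])
assert na_int8.itemsize == 2   # one flag byte + one payload byte

a = np.zeros(4, dtype=na_int8)
a['value'] = [10, 20, 30, 40]
a['isna'][2] = 1               # mark element 2 as NA

def add_const(arr, c):
    """NA-propagating add: flags are carried through unchanged."""
    out = arr.copy()
    out['value'] = arr['value'] + np.int8(c)   # payload of NA slots is junk
    return out

b = add_const(a, 5)
print(b['value'])   # [15 25 35 45] -- slot 2's payload is meaningless
print(b['isna'])    # [0 0 1 0]
```

Because the flag lives outside the value byte, no integer value has to be sacrificed and overflow can never create an accidental NA, at the cost of the extra byte per element.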
[Numpy-discussion] code review request: masked dtype transfers
I've just made pull request 105: https://github.com/numpy/numpy/pull/105

This adds the public API functions PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs.

One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' and PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up!

Thanks, Mark
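[For those following along, the proposed interface can be exercised like this once the branch is merged; the equivalence to putmask shown here assumes a source the same length as the destination.]

```python
import numpy as np

dst = np.zeros(5)
src = np.arange(5.0)
mask = np.array([True, False, True, False, True])

# New interface from the pull request: copy only where the mask is True.
np.copyto(dst, src, where=mask)
print(dst)   # [0. 0. 2. 0. 4.]

# The call being deprecated does the same thing for same-length sources.
old = np.zeros(5)
np.putmask(old, mask, src)
assert (dst == old).all()
```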
Re: [Numpy-discussion] code review request: masked dtype transfers
On 07/08/2011 01:31 PM, Mark Wiebe wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! Thanks, Mark Mark, I thought I would do a test comparison of putmask and copyto, so I fetched and checked out your branch and tried to build it (after deleting my build directory), but the build failed: numpy/core/src/multiarray/multiarraymodule_onefile.c:41:20: fatal error: nditer.c: No such file or directory compilation terminated. error: Command gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.linux-x86_64-2.7/numpy/core/src/multiarray/multiarraymodule_onefile.o failed with exit status 1 Indeed, with rgrep I see: ./numpy/core/src/multiarray/multiarraymodule_onefile.c:#include nditer.c but no sign of nditer.c in the directory tree. Eric ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] code review request: masked dtype transfers
On Fri, Jul 8, 2011 at 7:48 PM, Eric Firing efir...@hawaii.edu wrote: On 07/08/2011 01:31 PM, Mark Wiebe wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! Thanks, Mark Mark, I thought I would do a test comparison of putmask and copyto, so I fetched and checked out your branch and tried to build it (after deleting my build directory), but the build failed: numpy/core/src/multiarray/multiarraymodule_onefile.c:41:20: fatal error: nditer.c: No such file or directory compilation terminated. error: Command gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.linux-x86_64-2.7/numpy/core/src/multiarray/multiarraymodule_onefile.o failed with exit status 1 Indeed, with rgrep I see: ./numpy/core/src/multiarray/multiarraymodule_onefile.c:#include nditer.c but no sign of nditer.c in the directory tree. This is fixed in master. 
The way to use it is:

git co -b pull-105
curl https://github.com/numpy/numpy/pull/105.patch | git am

and then build. That will apply the new stuff as a patch on top of current master.

Chuck
Re: [Numpy-discussion] code review request: masked dtype transfers
On Fri, Jul 8, 2011 at 8:22 PM, Charles R Harris charlesr.har...@gmail.comwrote: On Fri, Jul 8, 2011 at 7:48 PM, Eric Firing efir...@hawaii.edu wrote: On 07/08/2011 01:31 PM, Mark Wiebe wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! Thanks, Mark Mark, I thought I would do a test comparison of putmask and copyto, so I fetched and checked out your branch and tried to build it (after deleting my build directory), but the build failed: numpy/core/src/multiarray/multiarraymodule_onefile.c:41:20: fatal error: nditer.c: No such file or directory compilation terminated. error: Command gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.linux-x86_64-2.7/numpy/core/src/multiarray/multiarraymodule_onefile.o failed with exit status 1 Indeed, with rgrep I see: ./numpy/core/src/multiarray/multiarraymodule_onefile.c:#include nditer.c but no sign of nditer.c in the directory tree. This is fixed in master. 
The way to use it is:

git co -b pull-105
curl https://github.com/numpy/numpy/pull/105.patch | git am

and then build. That will apply the new stuff as a patch on top of current master.

That's in a clone of github.com/numpy/numpy of course.

Chuck
Re: [Numpy-discussion] code review request: masked dtype transfers
On 07/08/2011 01:31 PM, Mark Wiebe wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! Mark, Looks good! Some quick tests with large and small arrays show copyto is faster than putmask when the source is an array and only a bit slower when the source is a scalar. Eric Thanks, Mark ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
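[A rough version of such a comparison. Sizes and timings are machine-dependent; this is just a harness sketch, not Eric's actual benchmark.]

```python
import numpy as np
from timeit import timeit

n = 1000000
dst = np.zeros(n)
src = np.ones(n)
mask = np.random.rand(n) > 0.5

# Array source: the case reported as faster with copyto.
t_copyto = timeit(lambda: np.copyto(dst, src, where=mask), number=20)
t_putmask = timeit(lambda: np.putmask(dst, mask, src), number=20)
print("copyto:  %.4f s" % t_copyto)
print("putmask: %.4f s" % t_putmask)
```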
Re: [Numpy-discussion] code review request: masked dtype transfers
On Fri, Jul 8, 2011 at 5:31 PM, Mark Wiebe mwwi...@gmail.com wrote:

I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up!

I think it is OK to deprecate PyArray_PutMask but it should still work. It may be a long time before deprecated API functions can be removed, if ever... As to putmask, I don't really have an opinion, but it should probably be reimplemented to use copyto.

Chuck
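[Reimplementing putmask on top of copyto needs one extra step, because putmask cycles a too-short value array while copyto broadcasts. A hypothetical sketch; putmask_via_copyto is not a real numpy function.]

```python
import numpy as np

def putmask_via_copyto(a, mask, values):
    """Hypothetical reimplementation of np.putmask on top of np.copyto.
    putmask cycles `values` when it is shorter than `a`, so tile it with
    np.resize before delegating to copyto."""
    cycled = np.resize(np.asarray(values), a.shape)
    np.copyto(a, cycled, where=mask)

mask = np.array([True, False, True, False, True, False])
a = np.zeros(6, dtype=int)
putmask_via_copyto(a, mask, [7, 8])

b = np.zeros(6, dtype=int)
np.putmask(b, mask, [7, 8])

print(a)              # [7 0 7 0 7 0]
print((a == b).all())  # True -- same semantics
```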
Re: [Numpy-discussion] code review request: masked dtype transfers
On Fri, Jul 8, 2011 at 9:24 PM, Charles R Harris charlesr.har...@gmail.comwrote: On Fri, Jul 8, 2011 at 5:31 PM, Mark Wiebe mwwi...@gmail.com wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! I think it is OK to deprecate PyArray_PutMask but it should still work. It may be a lng time before deprecated API functions can be removed, if ever... As to putmask, I don't really have an opinion, but it should probably be reimplemented to use copyto. One thing about putmask is that it is widely used in masked arrays, so that needs to be fixed before there is a release. Chuck ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] code review request: masked dtype transfers
Sorry, looks like I forgot to rebase against master as Chuck pointed out. -Mark On Fri, Jul 8, 2011 at 8:48 PM, Eric Firing efir...@hawaii.edu wrote: On 07/08/2011 01:31 PM, Mark Wiebe wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! Thanks, Mark Mark, I thought I would do a test comparison of putmask and copyto, so I fetched and checked out your branch and tried to build it (after deleting my build directory), but the build failed: numpy/core/src/multiarray/multiarraymodule_onefile.c:41:20: fatal error: nditer.c: No such file or directory compilation terminated. error: Command gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.linux-x86_64-2.7/numpy/core/src/multiarray/multiarraymodule_onefile.o failed with exit status 1 Indeed, with rgrep I see: ./numpy/core/src/multiarray/multiarraymodule_onefile.c:#include nditer.c but no sign of nditer.c in the directory tree. 
Eric
Re: [Numpy-discussion] code review request: masked dtype transfers
On Fri, Jul 8, 2011 at 10:04 PM, Eric Firing efir...@hawaii.edu wrote:

On 07/08/2011 01:31 PM, Mark Wiebe wrote: I've just made pull request 105: https://github.com/numpy/numpy/pull/105 This adds public API PyArray_MaskedCopyInto and PyArray_MaskedMoveInto, which behave analogously to the corresponding unmasked functions. To expose this with a reasonable interface, I added a function np.copyto, which takes a 'where=' parameter just like the element-wise ufuncs. One thing which needs discussion is that I've flagged 'putmask' and PyArray_PutMask as deprecated, because 'copyto' PyArray_MaskedMoveInto handle what those functions do but in a more flexible fashion. If there are any objections to deprecating 'putmask' and PyArray_PutMask, please speak up! Thanks, Mark

Mark, Looks good! Some quick tests with large and small arrays show copyto is faster than putmask when the source is an array and only a bit slower when the source is a scalar. Eric

With a bit of effort into performance optimization, it can probably be faster in the scalar cases as well. Currently, the masked case is always a function which calls the unmasked inner loop for the values that are unmasked. A faster way would be to create inner loops that handle the mask directly.

-Mark