[Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
Hello,

I've noticed that if you try to increment elements of an array with advanced
indexing, repeated indexes don't get repeatedly incremented. For example:

In [30]: x = zeros(5)

In [31]: idx = array([1,1,1,3,4])

In [32]: x[idx] += [2,4,8,10,30]

In [33]: x
Out[33]: array([  0.,   8.,   0.,  10.,  30.])

I would intuitively expect the output to be array([0, 14, 0, 10, 30]), since
index 1 is incremented by 2+4+8=14, but instead it seems to only increment
by 8. What is numpy actually doing here?

The authors of Theano noticed this behavior a while ago, so they loop through
the values in idx in Python (this kind of calculation is necessary for
calculating gradients), but this is a bit slow for my purposes, so I'd like
to figure out how to get the behavior I expected, but faster.

I'm also not sure how to navigate the numpy codebase; where would I look for
the code responsible for this behavior?

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
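For reference, a self-contained version of the example above, together with the explicit Python loop (the approach the poster attributes to Theano):

```python
import numpy as np

x = np.zeros(5)
idx = np.array([1, 1, 1, 3, 4])
vals = np.array([2.0, 4.0, 8.0, 10.0, 30.0])

# Fancy-indexed in-place add: each position is written only once,
# so index 1 ends up with the *last* value added (8), not the sum (14).
x[idx] += vals
print(x)  # x is now [0, 8, 0, 10, 30]

# The explicit Python loop accumulates every repeat.
y = np.zeros(5)
for i, v in zip(idx, vals):
    y[i] += v
print(y)  # y is now [0, 14, 0, 10, 30]
```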
Re: [Numpy-discussion] lazy evaluation
On 06/06/2012 12:06 AM, mark florisson wrote:
> On 5 June 2012 22:36, Dag Sverre Seljebotn wrote:
>> On 06/05/2012 10:47 PM, mark florisson wrote:
>>> On 5 June 2012 20:17, Nathaniel Smith wrote:
>>>> On Tue, Jun 5, 2012 at 12:55 PM, mark florisson wrote:
>>>>> It would be great if we implement the NEP listed above, but with a few
>>>>> extensions. I think Numpy should handle the lazy evaluation part, and
>>>>> determine when expressions should be evaluated, etc. However, for each
>>>>> user operation, Numpy will call back a user-installed hook
>>>>> implementing some interface, to allow various packages to provide
>>>>> their own hooks to evaluate vector operations however they want. This
>>>>> will include packages such as Theano, which could run things on the
>>>>> GPU, Numexpr, and in the future
>>>>> https://github.com/markflorisson88/minivect (which will likely have an
>>>>> LLVM backend in the future, and possibly be integrated with Numba to
>>>>> allow inlining of numba ufuncs). The project above tries to bring
>>>>> all the different array expression compilers together in a single
>>>>> framework, to provide efficient array expressions specialized for any
>>>>> data layout (nditer on steroids if you will, with SIMD, threaded and
>>>>> inlining capabilities).
>>>>
>>>> A global hook sounds ugly and hard to control -- it's hard to tell
>>>> which operations should be deferred and which should be forced, etc.
>>>
>>> Yes, but for the user the difference should not be visible (unless
>>> operations can raise exceptions, in which case you choose the safe
>>> path, or let the user configure what to do).
>>>
>>>> While it would be less magical, I think a more explicit API would in
>>>> the end be easier to use...
>>>> something like
>>>>
>>>>     a, b, c, d = deferred([a, b, c, d])
>>>>     e = a + b * c     # 'e' is a deferred object too
>>>>     f = np.dot(e, d)  # so is 'f'
>>>>     g = force(f)      # 'g' is an ndarray
>>>>     # or force(f, out=g)
>>>>
>>>> But at that point, this could easily be an external library, right?
>>>> All we'd need from numpy would be some way for external types to
>>>> override the evaluation of ufuncs, np.dot, etc.? We've recently seen
>>>> several reasons to want that functionality, and it seems like
>>>> developing these "improved numexpr" ideas would be much easier if they
>>>> didn't require doing deep surgery to numpy itself...
>>>
>>> Definitely, but besides monkey-patch-chaining I think some
>>> modifications would be required, but they would be reasonably simple.
>>> Most of the functionality would be handled in one function, which most
>>> ufuncs (the ones you care about, as well as ufunc methods like add)
>>> call. E.g.
>>>
>>>     if ((result = NPy_LazyEval("add", op1, op2))) return result;
>>>
>>> which is inserted after argument unpacking and sanity checking. You
>>> could also do a per-module hook, and have the function look at
>>> sys._getframe(1).f_globals, but that is fragile and won't work from C
>>> or Cython code.
>>>
>>> How did you have overrides in mind?
>>
>> My vague idea is that core numpy operations are about as fundamental
>> for scientific users as the Python builtin operations are, so they
>> should probably be overrideable in a similar way. So we'd teach numpy
>> functions to check for methods named like "__numpy_ufunc__" or
>> "__numpy_dot__" and let themselves be overridden if found. Like how
>> __gt__ and __add__ and stuff work. Or something along those lines.
>>
>>> I also found this thread:
>>> http://mail.scipy.org/pipermail/numpy-discussion/2011-June/056945.html
>>> , but I think you want more than just to override ufuncs; you want
>>> numpy to govern when stuff is allowed to be lazy and when stuff should
>>> be evaluated (e.g. when it is indexed, slice assigned (although that
>>> itself may also be lazy), etc).
>>> You don't want some funny object back that doesn't work with things
>>> which are not overridden in numpy.
>>
>> My point is that probably numpy should *not* govern the decision about
>> what stuff should be lazy and what should be evaluated; that should be
>> governed by some combination of the user and
>> Numba/Theano/minivect/whatever. The toy API I sketched out would make
>> those decisions obvious and explicit. (And if the funny objects had an
>> __array_interface__ attribute that automatically forced evaluation
>> when accessed, then they'd work fine with code that was expecting
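The deferred/force API sketched in the quoted text can indeed be prototyped entirely outside numpy. Here is a minimal toy illustration; the `Deferred` class and its implementation are hypothetical, not an actual numpy or numexpr API, and only `+` and `*` are wired up:

```python
import numpy as np

class Deferred:
    """Toy deferred-expression node: records work instead of doing it."""
    def __init__(self, compute):
        self._compute = compute  # zero-argument closure producing an ndarray

    def __add__(self, other):
        return Deferred(lambda: force(self) + force(other))

    def __mul__(self, other):
        return Deferred(lambda: force(self) * force(other))

def deferred(arrays):
    """Wrap each array so subsequent arithmetic builds a graph, not results."""
    return [Deferred(lambda a=a: np.asarray(a)) for a in arrays]

def force(obj):
    """Evaluate a deferred graph down to a concrete ndarray."""
    return obj._compute() if isinstance(obj, Deferred) else np.asarray(obj)

a, b, c = deferred([np.ones(3), np.full(3, 2.0), np.full(3, 3.0)])
e = a + b * c      # no arithmetic has happened yet; 'e' is a Deferred
print(force(e))    # evaluates to ones(3) + 2*3, i.e. an array of 7s
```

A real implementation would build a proper expression graph and hand it to a compiler backend; the point here is only that nothing in this sketch requires changes to numpy itself.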
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
Hi,

I came across the numpy.put [1] function. I'm not sure, but maybe it does
what you want. My memory is fuzzy on this, and the doc of that function
doesn't mention it.

Fred

[1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.put.html

On Wed, Jun 6, 2012 at 4:48 AM, John Salvatier wrote:
> Hello,
>
> I've noticed that if you try to increment elements of an array with
> advanced indexing, repeated indexes don't get repeatedly incremented.
> [snip]
[Numpy-discussion] boolean indexing of structured arrays
Not sure if this is a bug or not. I am using a fairly recent master branch.

>>> # Setting up...
>>> import numpy as np
>>> a = np.zeros((10, 1), dtype=[('foo', 'f4'), ('bar', 'f4'), ('spam', 'f4')])
>>> a['foo'] = np.random.random((10, 1))
>>> a['bar'] = np.random.random((10, 1))
>>> a['spam'] = np.random.random((10, 1))
>>> a
array([[(0.8748096823692322, 0.08278043568134308, 0.2463584989309311)],
       [(0.27129432559013367, 0.9645473957061768, 0.41787904500961304)],
       [(0.4902191460132599, 0.6772263646125793, 0.07460898905992508)],
       [(0.13542482256889343, 0.8646988868713379, 0.98673015832901)],
       [(0.6527929902076721, 0.7392181754112244, 0.5919206738471985)],
       [(0.11248272657394409, 0.5818713903427124, 0.9287213087081909)],
       [(0.47561103105545044, 0.48848700523376465, 0.7108170390129089)],
       [(0.47087424993515015, 0.6080209016799927, 0.6583810448646545)],
       [(0.08447299897670746, 0.39479559659957886, 0.13520188629627228)],
       [(0.7074970006942749, 0.8426893353462219, 0.19329732656478882)]],
      dtype=[('foo', '<f4'), ('bar', '<f4'), ('spam', '<f4')])
>>> b = (a['bar'] > 0.4)
>>> b
array([[False],
       [ True],
       [ True],
       [ True],
       [ True],
       [ True],
       [ True],
       [ True],
       [False],
       [ True]], dtype=bool)
>>> # Boolean indexing of structured array with a (10, 1) boolean array
>>> a[b]['foo']
array([ 0.27129433,  0.49021915,  0.13542482,  0.65279299,  0.11248273,
        0.47561103,  0.47087425,  0.707497  ], dtype=float32)
>>> # Boolean indexing of structured array with a (10,) boolean array
>>> a[b[:,0]]['foo']
array([[(0.27129432559013367, 0.9645473957061768, 0.41787904500961304)],
       [(0.4902191460132599, 0.6772263646125793, 0.07460898905992508)],
       [(0.13542482256889343, 0.8646988868713379, 0.98673015832901)],
       [(0.6527929902076721, 0.7392181754112244, 0.5919206738471985)],
       [(0.11248272657394409, 0.5818713903427124, 0.9287213087081909)],
       [(0.47561103105545044, 0.48848700523376465, 0.7108170390129089)],
       [(0.47087424993515015, 0.6080209016799927, 0.6583810448646545)],
       [(0.7074970006942749, 0.8426893353462219, 0.19329732656478882)]],
      dtype=[('foo', '<f4'), ('bar', '<f4'), ('spam', '<f4')])
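For comparison, here is a small deterministic sketch of the two indexing paths from the session above (the array contents are arbitrary; the shapes shown are what recent NumPy releases produce, which is an assumption for the 2012 master branch the poster used):

```python
import numpy as np

# Deterministic variant of the session above (values are arbitrary).
a = np.zeros((10, 1), dtype=[('foo', 'f4'), ('bar', 'f4'), ('spam', 'f4')])
a['bar'] = np.linspace(0.0, 0.9, 10).reshape(10, 1)

b = a['bar'] > 0.45   # (10, 1) boolean mask; 5 rows satisfy it

# A (10, 1) mask flattens the result down to 1-D:
print(a[b]['foo'].shape)        # (5,)

# A (10,) mask selects rows and keeps the trailing axis:
print(a[b[:, 0]]['foo'].shape)  # (5, 1)
```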
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
Thank you for the suggestion, but it looks like that has the same behavior
too:

In [43]: x = zeros(5)

In [44]: idx = array([1,1,1,3,4])

In [45]: put(x, idx, [2,4,8,10,30])

In [46]: x
Out[46]: array([  0.,   8.,   0.,  10.,  30.])

On Wed, Jun 6, 2012 at 6:07 AM, Frédéric Bastien wrote:
> Hi,
>
> I came across the numpy.put [1] function. I'm not sure, but maybe it does
> what you want.
> [snip]
>
> [1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.put.html
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
On Wed, Jun 6, 2012 at 9:48 AM, John Salvatier wrote:
> I've noticed that if you try to increment elements of an array with
> advanced indexing, repeated indexes don't get repeatedly incremented.
> [snip]
>
> I'm also not sure how to navigate the numpy codebase, where would I look
> for the code responsible for this behavior?

Strictly speaking, it isn't actually in the numpy codebase at all --
what's happening is that the Python interpreter sees this code:

    x[idx] += vals

and then it translates it into this code before running it:

    tmp = x.__getitem__(idx)
    tmp = tmp.__iadd__(vals)
    x.__setitem__(idx, tmp)

So you can find the implementations of the ndarray methods __getitem__,
__iadd__, and __setitem__ (they're called array_subscript_nice,
array_inplace_add, and array_ass_sub in the C code), but there's no way to
fix them so that this works the way you want it to, because there's no way
for __iadd__ to know that the temporary values it's working with are really
duplicate copies of "the same" value in the original array.

It would be nice if numpy had some sort of standard API for doing what
you want, but not sure what a good API would look like, and someone
would have to implement it.

-n
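The desugaring described above can be checked directly: performing the three special-method calls by hand reproduces exactly what `x[idx] += vals` does.

```python
import numpy as np

x = np.zeros(5)
idx = np.array([1, 1, 1, 3, 4])
vals = [2, 4, 8, 10, 30]

# The three steps the interpreter performs for `x[idx] += vals`:
tmp = x.__getitem__(idx)   # gather:  tmp is a new array [0, 0, 0, 0, 0]
tmp = tmp.__iadd__(vals)   # add:     tmp is now [2, 4, 8, 10, 30]
x.__setitem__(idx, tmp)    # scatter: repeated writes to x[1]; the last wins

print(x)  # x is now [0, 8, 0, 10, 30], same as with x[idx] += vals
```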
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
On 06/06/2012 05:06 PM, Nathaniel Smith wrote:
> On Wed, Jun 6, 2012 at 9:48 AM, John Salvatier wrote:
>> I've noticed that if you try to increment elements of an array with
>> advanced indexing, repeated indexes don't get repeatedly incremented.
>> [snip]
>
> [snip]
>
> It would be nice if numpy had some sort of standard API for doing what
> you want, but not sure what a good API would look like, and someone
> would have to implement it.
This operation is also heavily used for finite element assembling, and a
similar question has been raised already several times (e.g.
http://old.nabble.com/How-to-assemble-large-sparse-matrices-effectively-td33833855.html).
So why not add a function np.assemble()?

r.
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
On Wed, Jun 6, 2012 at 4:30 PM, Robert Cimrman wrote:
> On 06/06/2012 05:06 PM, Nathaniel Smith wrote:
>> [snip]
>> It would be nice if numpy had some sort of standard API for doing what
>> you want, but not sure what a good API would look like, and someone
>> would have to implement it.
>
> This operation is also heavily used for finite element assembling, and a
> similar question has been raised already several times (e.g.
> http://old.nabble.com/How-to-assemble-large-sparse-matrices-effectively-td33833855.html).
> So why not add a function np.assemble()?

I read that message, but I don't see what it has to do with this
discussion? It seemed to be about fast ways to assign dense matrices into
sparse matrices, not fast ways of applying in-place arithmetic to specific
spots in a dense matrix.

-n
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
On 06/06/2012 05:34 PM, Nathaniel Smith wrote:
> On Wed, Jun 6, 2012 at 4:30 PM, Robert Cimrman wrote:
>> [snip]
>>> It would be nice if numpy had some sort of standard API for doing what
>>> you want, but not sure what a good API would look like, and someone
>>> would have to implement it.
>>
>> This operation is also heavily used for finite element assembling, and a
>> similar question has been raised already several times (e.g.
>> http://old.nabble.com/How-to-assemble-large-sparse-matrices-effectively-td33833855.html).
>> So why not add a function np.assemble()?
>
> I read that message, but I don't see what it has to do with this
> discussion? It seemed to be about fast ways to assign dense matrices
> into sparse matrices, not fast ways of applying in-place arithmetic to
> specific spots in a dense matrix.

Yes (in that thread), but it applies also to adding/assembling vectors into
a global vector -- this is just x[idx] += vals. I linked that discussion as
it was recent enough for me to recall it, but there were others.

Anyway, my point was that having a function with the "adding" semantics in
NumPy would be handy.

r.
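A pure-Python sketch of the "adding" semantics being proposed; the name `assemble` comes from the suggestion in this thread and is hypothetical, not an actual NumPy function, and a real version would of course be implemented in C:

```python
import numpy as np

def assemble(x, idx, vals):
    """Hypothetical 'adding' assignment: like x[idx] += vals, but
    accumulating over repeated indexes instead of overwriting."""
    for i, v in zip(idx, vals):
        x[i] += v
    return x

out = assemble(np.zeros(5), [1, 1, 1, 3, 4], [2, 4, 8, 10, 30])
print(out)  # index 1 accumulates 2 + 4 + 8 = 14
```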
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
On Wed, Jun 6, 2012 at 4:52 PM, Robert Cimrman wrote:
> Yes (in that thread), but it applies also to adding/assembling vectors
> into a global vector -- this is just x[idx] += vals.
> [snip]
>
> Anyway, my point was that having a function with the "adding" semantics
> in NumPy would be handy.

x += numpy.bincount(idx, vals, minlength=len(x))

--
Robert Kern
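The bincount one-liner above gives exactly the accumulating behavior asked for at the start of the thread, since bincount sums the weights for every occurrence of each index:

```python
import numpy as np

x = np.zeros(5)
idx = np.array([1, 1, 1, 3, 4])
vals = np.array([2.0, 4.0, 8.0, 10.0, 30.0])

# bincount sums the weights `vals` over every occurrence of each index,
# so repeated indexes accumulate; minlength pads the result to len(x).
x += np.bincount(idx, vals, minlength=len(x))
print(x)  # index 1 accumulates 2 + 4 + 8 = 14
```

Note that bincount always scans a range of size max(idx)+1 (at least), so as the follow-up message observes, it is most efficient when the indexed range is dense relative to len(x).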
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
That does seem like it should work well if len(unique(idx)) is close to
len(x). Thanks!

On Wed, Jun 6, 2012 at 9:35 AM, Robert Kern wrote:
> x += numpy.bincount(idx, vals, minlength=len(x))
>
> --
> Robert Kern
Re: [Numpy-discussion] Incrementing with advanced indexing: why don't repeated indexes repeatedly increment?
On 06/06/2012 06:35 PM, Robert Kern wrote:
> x += numpy.bincount(idx, vals, minlength=len(x))

Nice! Looking at the C source, it seems it should be pretty efficient for
this task.

r.
[Numpy-discussion] Pull request: Split maskna support out of mainline into a branch
Just submitted this pull request for discussion:

https://github.com/numpy/numpy/pull/297

As per earlier discussion on the list, this PR attempts to remove exactly
and only the maskna-related code from numpy mainline:

http://mail.scipy.org/pipermail/numpy-discussion/2012-May/062417.html

The suggestion is that we merge this to master for the 1.7 release, and
immediately "git revert" it on a branch so that it can be modified further
without blocking the release.

The first patch does the actual maskna removal; the second and third
rearrange things so that PyArray_ReduceWrapper does not end up in the
public API, for reasons described therein.

All tests pass with Python 2.4, 2.5, 2.6, 2.7, 3.1, 3.2 on 64-bit Ubuntu.
The docs also appear to build. Before I rebased this I also tested against
Scipy, matplotlib, and pandas, and all were fine.

-- Nathaniel
[Numpy-discussion] Are "min", "max" documented for scalars?
Python "max" and "min" have an interesting and _useful_ behavior when
applied to numpy scalars and Python numbers. Here is a piece of pseudo-code:

def max(a, b):
    if b > a:
        return b
    else:
        return a

The larger object is returned unchanged; if the two objects compare equal,
the first is returned unchanged. Is the behavior of "max", "min", "<",
"<=", etc. for numpy scalar objects documented somewhere?

keywords: greater than, less than
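A quick check of the behavior described above with CPython's builtin max: the values are compared, but the winning *object* is returned unchanged, so the result's type depends on which argument won (and on argument order for ties).

```python
import numpy as np

a = np.float64(1.0)
b = 2  # plain Python int

# b compares larger, so the plain int is returned as-is.
r = max(a, b)
print(type(r).__name__)  # int

# On ties, the first argument is returned unchanged.
t = max(np.int64(3), 3)
print(type(t).__name__)  # int64
```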