On Wed, May 4, 2011 at 08:19, Christoph Groth <c...@falma.de> wrote:

> Dear numpy experts,
>
> I have noticed that with Numpy 1.5.1 the operation
>
>     m[::2] += 1.0
>
> takes twice as long as
>
>     t = m[::2]
>     t += 1.0
>
> where "m" is some large matrix.  This is of course because the first
> snippet is equivalent to
>
>     t = m[::2]
>     t += 1.0
>     m[::2] = t
>
> I wonder whether it would not be a good idea to optimize
> ndarray.__setitem__ to not execute an assignment of a slice onto
> itself.  Is there any good reason why this is not being done already?
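[For readers following along, the equivalence Christoph describes can be checked directly. This is a minimal sketch, not from the original thread; the array name `m` and the sizes are illustrative, and the observed timing ratio will vary with NumPy version and array size.]

```python
import numpy as np

m = np.zeros((1000, 1000))

# The augmented assignment on a slice...
m[::2] += 1.0

# ...is equivalent to the expanded three-step form:
t = m[::2]    # a view onto every other row of m, no copy
t += 1.0      # in-place add through the view; m is updated here
m[::2] = t    # redundant: copies the slice back onto itself

# Each even row received both increments; odd rows are untouched.
assert np.all(m[::2] == 2.0)
assert np.all(m[1::2] == 0.0)
```

[The last line is the work Christoph proposes to skip: since `t` is a view whose data already lives inside `m`, the write-back copies memory onto itself.]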
We didn't think of it. If you can write up a patch that works safely and
shows a performance improvement, it's probably worth putting in. It's
probably not *that* common of a bottleneck, though.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion