On 14 November 2016 at 19:57, Nick Timkovich <prometheus...@gmail.com> wrote:
> Currently, Numpy takes advantage of __iadd__ and friends by performing the
> operation in-place; there is no copying or other object created. Numpy is
> very thinly C, for better and worse (which is also likely where the +=
> syntax came from). If you're doing vast amounts of numeric computation, it
> quickly pays off to learn a little about how C likes to store arrays in
> memory and that doing things like creating a whole new array of the same
> size and having to free up the old one for every operation isn't a great
> idea. I strongly dislike the notion that I'd have to use arbitrary function
> calls to add one to an entire array, or multiply it by 2, or add it to
> another array, etc.
>
> As a minor nit, np.vectorize doesn't make in-place functions, it simply
> makes a naive, element-wise function act as though it was vectorized. The
> name is unfortunate, because it does nothing to speed it up, and it usually
> slows it down (because of that layer). Reworking code to avoid copying large
> arrays by doing in-place operations is the preferred method, though not
> always possible.
>
> Nick
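[Editor's note: a minimal sketch of the two behaviors Nick describes, using object identity to show that `+=` mutates the existing array while `A = A + 1` allocates a new one; the array sizes and names are arbitrary illustrations, not from the thread.]

```python
import numpy as np

# In-place augmented assignment: ndarray.__iadd__ updates the existing
# buffer, so the name still points at the same object afterwards.
a = np.arange(1, 6)
a_id = id(a)
a += 1
assert id(a) == a_id          # same array object, modified in place

# Plain addition: __add__ allocates a fresh array, and the assignment
# merely rebinds the name to that new object.
b = np.arange(1, 6)
b_id = id(b)
b = b + 1
assert id(b) != b_id          # a brand-new array replaced the old one

# The np.vectorize nit: it only wraps an element-wise Python function so
# it accepts arrays; it does not operate in place and does not speed
# anything up.
halve = np.vectorize(lambda x: x // 2)
print(halve(np.arange(6)))
```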
I understand you well. But imagine if Numpy allowed you to simply write:

    A = A + 1

and this took you directly to the same internal procedure as A += 1. Currently it does not; why? I have now tested A += 99 against A = A + 99, and there is indeed a ~30% speed difference, so they behave differently. Would you then still want to write +=? I never would. I also think implementing this syntax would be almost trivial: it should just recognize the "A = A" part and do the rest as usual.

And for equally sized arrays:

    A = A + B

should just add B's values to A's values in place. This too currently shows a ~30% speed difference compared to A += B.

Mikhail
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
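[Editor's note: a rough way to reproduce the kind of measurement Mikhail describes, using timeit; the array size and repeat count are arbitrary, and the exact ratio varies by machine and array size, so the ~30% figure should not be taken as universal. The two spellings do produce identical values.]

```python
import timeit

import numpy as np

setup = "import numpy as np; A = np.zeros(10**6)"

# Time the in-place form against the copying form on a large array.
t_inplace = timeit.timeit("A += 1", setup=setup, number=100)
t_copy = timeit.timeit("A = A + 1", setup=setup, number=100)
print(f"A += 1   : {t_inplace:.4f} s")
print(f"A = A + 1: {t_copy:.4f} s")

# Whatever the timing gap, the numerical result is the same either way.
A = np.zeros(4)
A += 1
B = np.zeros(4)
B = B + 1
assert np.array_equal(A, B)
```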