On 11/14/2016 03:42 PM, Mikhail V wrote:
> On 14 November 2016 at 19:57, Nick Timkovich <[email protected]> wrote:
>
> I can understand you well. But imagine if NumPy would allow you to
> simply write:
>
>     A = A + 1
>
> and have that go directly to the same internal procedure as A += 1.
> Currently it does not; why?
>
> I have now tested A += 99 against A = A + 99 and there is indeed a 30%
> speed difference, so the two do behave differently. Would you then still
> want to write += ? I never would. Also, I think implementing this syntax
> would be almost trivial: the compiler would just need to recognise the
> "A = A" part and do the rest as usual.
>
> The same goes for equally sized arrays:
>
>     A = A + B
>
> should just add B's values to A's values in place. Right now it also
> gives me a ~30% speed difference compared to A += B.
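(For reference, a timing comparison of the kind described above could be
reproduced with something along these lines; the array size and repetition
count are illustrative choices of mine, not figures from the post.)

----------------------------------------------
# Rough benchmark sketch: the copying spelling "a = a + b" versus the
# in-place spelling "a += b" on NumPy arrays. Assumes NumPy is installed;
# the array size and number of repetitions are arbitrary.
import timeit

setup = "import numpy as np; a = np.zeros(1000000); b = np.ones(1000000)"

copying  = timeit.timeit("a = a + b", setup=setup, number=200)
in_place = timeit.timeit("a += b",    setup=setup, number=200)

print("a = a + b : %.3f s" % copying)
print("a += b    : %.3f s" % in_place)
----------------------------------------------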
If you take this file:
test.py
----------------------------------------------
a = a + 1
a += 1
----------------------------------------------
and look at the bytecode it produces, you get the following:
----------------------------------------------
$ python -m dis test.py
  1           0 LOAD_NAME                0 (a)
              3 LOAD_CONST               0 (1)
              6 BINARY_ADD
              7 STORE_NAME               0 (a)

  2          10 LOAD_NAME                0 (a)
             13 LOAD_CONST               0 (1)
             16 INPLACE_ADD
             17 STORE_NAME               0 (a)
             20 LOAD_CONST               1 (None)
             23 RETURN_VALUE
----------------------------------------------
That shows that the first and second lines are compiled _differently_.
The point is that in the first line, numpy does not have the information
necessary to know that "a + 1" will be assigned back to a. In the second
case it does. This is presumably why NumPy couldn't do the optimization
you desire without large changes to CPython.
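To make the distinction concrete, here is a toy sketch (my own illustration,
not NumPy's actual code) of which hook each spelling reaches: BINARY_ADD
dispatches to __add__, which has to build and return a new object, while
INPLACE_ADD tries __iadd__ first, where the object is free to mutate itself
and return self.

----------------------------------------------
class Tracker(object):
    """Toy stand-in for an array type, showing which special method
    each spelling of "add one" ends up calling."""

    def __add__(self, other):
        # 'x = x + 1' compiles to BINARY_ADD, which calls __add__.
        # Inside __add__ there is no way to know that the result will
        # be bound back to x, so a fresh object must be returned.
        print("__add__: building a new object")
        return Tracker()

    def __iadd__(self, other):
        # 'x += 1' compiles to INPLACE_ADD, which tries __iadd__ first.
        # Here the object knows it may modify itself and return self.
        print("__iadd__: modifying in place")
        return self

x = Tracker()
x = x + 1   # prints "__add__: building a new object"
x += 1      # prints "__iadd__: modifying in place"
----------------------------------------------

With a real ndarray the same mechanism is what lets "a += b" write into a's
existing buffer, while "a = a + b" allocates a new array before rebinding
the name.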
On the second point, personally I prefer writing "a += 1" to "a = a +
1". I think it's clearer and would keep using it even if the two were
equally efficient. But we are all allowed our opinions...
Cheers,
Thomas
_______________________________________________
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/