Serhiy Storchaka <storchaka+cpyt...@gmail.com> added the comment:

I think that in sum(range(1 << 1000, (1 << 1000) + 100)) it is dwarfed by 
the repeated addition of long integers in sum(). I expect the effect of this 
optimization is even less than 13% in this case. In any case, for sum(range(a, 
b)) it is better to use the formula (b-a)*(a+b-1)//2 if it is performance critical 
for you.
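For reference, a small sketch (assuming a <= b; the helper name range_sum is mine) checking that the closed-form formula matches sum(range(a, b)) while using only a constant number of big-integer operations:

```python
def range_sum(a, b):
    """Equivalent to sum(range(a, b)) for a <= b.

    Arithmetic-series formula: the sum of a, a+1, ..., b-1 is
    (b-a)*(a+b-1)//2, avoiding (b - a) repeated long-int additions.
    """
    return (b - a) * (a + b - 1) // 2

a = 1 << 1000
assert range_sum(a, a + 100) == sum(range(a, a + 100))
assert range_sum(3, 10) == sum(range(3, 10))
```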

Iterating a range object is pretty rare in Python (in most cases you iterate a 
sequence itself or an enumerate object), and iterating a range object over large 
integers is especially rare. Also, in real code you do not just iterate for the 
sake of iterating; you execute some code in every iteration. That will make 
the effect of the optimization in this case much smaller, closer to 1% or 0.1% 
than to 13%.

Last week Inada-san played with my old optimization patch for using free lists 
for integers and decided to close the issue. He, I, and other core 
developers agreed that it is not worth it, even though that optimization would help in 
many more cases and had a good effect in microbenchmarks. I also closed my 
other optimization patch a few days ago, and have done so many times in the past. It is 
fun to micro-optimize the code, adding special cases here and there, but it makes 
the code larger and less maintainable, and a lot of such micro-optimizations can have a 
negative cumulative effect on the performance of general code. Sometimes the 
work of a core developer is to say "No" to his own code.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue42147>
_______________________________________