Stefan Behnel added the comment:

It's generally worth running the benchmark suite for this kind of optimisation. 
Being mostly Python code, it should benefit quite clearly from dictionary 
improvements, and it should also give an idea of how much of an improvement 
actual Python code (and not just micro-benchmarks) can show. It can also help 
detect unexpected regressions that would not necessarily be revealed by 
micro-benchmarks.

https://hg.python.org/benchmarks/
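
For concreteness, a run of the suite typically compares a baseline interpreter 
build against a patched one. The build paths below are made up for 
illustration, but the perf.py driver and its -r/-b options come from that 
repository:

    hg clone https://hg.python.org/benchmarks
    cd benchmarks
    # Compare a baseline build against a patched build on the default
    # benchmark set; -r runs more iterations for more reliable numbers.
    python perf.py -r -b default ../cpython-base/python ../cpython-patched/python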

And I'm with Mark: when it comes to performance optimisations, repeating even a 
firm intuition doesn't save us from validating that the intuition actually 
matches reality. Anything that seems obvious at first sight can still be proven 
wrong by benchmarks, and often enough has been in the past.
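
For example, a dict micro-benchmark can be as small as a one-liner with timeit 
(purely illustrative):

    # Times a single dict lookup in isolation; numbers like this are easy
    # to produce but say little about whole-program behaviour.
    python -m timeit -s "d = {'key': 1}" "d['key']"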

----------
nosy: +scoder

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue23601>
_______________________________________