Andrew Lin added the comment:
E[calls] reduces to 2**k / n. I only just realized, from working that out,
that it therefore doesn't vary quite linearly over [2**k, 2**(k+1)), although
by what weight one would want to average I couldn't say.
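That expectation is easy to check empirically: the getrandbits-based rejection loop draws k-bit integers (k = n.bit_length()) until one falls below n, so the draw count is geometric with success probability n / 2**k. A quick simulation sketch (not the attached script):

```python
import random

def mean_calls(n, trials=100_000, seed=12345):
    """Empirical mean number of getrandbits() draws that rejection
    sampling needs to produce a value below n."""
    rng = random.Random(seed)
    k = n.bit_length()                   # draws are k-bit integers
    total = 0
    for _ in range(trials):
        calls = 1
        while rng.getrandbits(k) >= n:   # reject values >= n and redraw
            calls += 1
        total += calls
    return total / trials

# For n = 1500, k = 11, so the expected call count is 2**11 / 1500 ~ 1.365.
print(mean_calls(1500))
```

The simulated mean should land close to 2**n.bit_length() / n for any n.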
Personally if the change adversely impacts
Change by Andrew Lin:
Added file: https://bugs.python.org/file50473/randbelow_timing.py
___
Python tracker
<https://bugs.python.org/issue45976>
___
Andrew Lin added the comment:
I finally cleaned up my benchmark script, which feels like a big hack. I
should really learn to use pyperf or something.
Inspired by your comment about 3.8 on the Mac, I tried my Ubuntu installation's
stock 3.8. I get a small degradation (~0.5%) for powers
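For anyone who wants to reproduce this kind of measurement without my script, a minimal stdlib-only timing sketch (hypothetical bounds and iteration counts, not the attached randbelow_timing.py) would be:

```python
import timeit

def bench(stmt, setup="import random", number=200_000, repeat=5):
    """Return the best observed per-call time for `stmt`, in microseconds."""
    times = timeit.repeat(stmt, setup=setup, number=number, repeat=repeat)
    return min(times) / number * 1e6

# A power of two and values just below it exercise different rejection
# rates in the getrandbits-based _randbelow path.
for bound in (2048, 2047, 1500):
    print(f"randrange({bound}): {bench(f'random.randrange({bound})'):.3f} us")
```

timeit's min-of-repeats is a crude stand-in for pyperf's statistics, but it is enough to see percent-level differences between builds.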
Andrew Lin added the comment:
Timings are unaffected by updating to non-walrus code, so I'll do so in the PR.
Using _randbelow_without_getrandbits() I get a 5% improvement to sample([None] *
2047, 50); 6% to shuffle([None] * 2000); and approximately 6% to randrange()
for any number significantly
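If I understand CPython's dispatch correctly, Random.__init_subclass__ routes subclasses that override random() but not getrandbits() through _randbelow_without_getrandbits(), so that path can be exercised from pure Python with a trivial subclass (illustrative only):

```python
import random

class FloatOnlyRandom(random.Random):
    """Subclass overriding only random(); per my reading of
    Random.__init_subclass__, its integer methods are then routed
    through _randbelow_without_getrandbits()."""
    def random(self):
        # Delegate to the base generator; a real subclass would supply
        # its own float source in [0.0, 1.0).
        return super().random()

rng = FloatOnlyRandom(42)
# These calls now go through the float-based _randbelow path.
picks = rng.sample([None] * 2047, 50)
rng.shuffle([None] * 2000)
value = rng.randrange(2047)
assert 0 <= value < 2047
```

That is how I timed the without_getrandbits numbers above without patching the module.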
New submission from Andrew Lin:
This PR obtains a performance improvement in the random module by removing a
cached attribute lookup in Random._randbelow_with_getrandbits() that costs time
on average. In the best cases (on my machine) I get a 10% improvement for
randrange(), 7% for shuffle
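To make the trade-off concrete, here is a sketch of the two patterns being compared (my own illustration, not the actual CPython diff): hoisting the bound-method lookup pays off only when the rejection loop iterates often, but since the loop body usually runs exactly once, the up-front lookup costs time on average.

```python
import random

def randbelow_cached(rng, n):
    """Variant that hoists the bound-method lookup out of the loop —
    the cached attribute lookup the PR removes."""
    getrandbits = rng.getrandbits    # one attribute lookup, paid up front
    k = n.bit_length()
    r = getrandbits(k)
    while r >= n:                    # usually false on the first draw
        r = getrandbits(k)
    return r

def randbelow_direct(rng, n):
    """Variant that looks the method up at each call site; cheaper on
    average because the loop body rarely repeats."""
    k = n.bit_length()
    r = rng.getrandbits(k)
    while r >= n:
        r = rng.getrandbits(k)
    return r

# Both variants consume the same bit stream, so equal seeds give equal results.
assert randbelow_cached(random.Random(7), 1500) == randbelow_direct(random.Random(7), 1500)
```

The two functions are behaviorally identical; only the lookup placement differs, which is why the timing deltas above are in the single-digit percent range.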