Vincent Davis wrote:

> Excellent Peter!
> I have a question, the times reported don't make sense to me, for example
> $ python3 -m timeit -s 'from debruijn_compat import debruijn_bytes as d'
> 'd(4, 8)'
> 100 loops, best of 3: 10.2 msec per loop
> This took ~4 secs (stop watch), which is much more than 10*.0102. Why is
> this?

Look at the output, it's "100 loops"

100 * 10 msec == 1 sec

together with "best of 3" you are already at 3 sec minimum, as only the 
"best" of the three runs gets reported.

Then, how do you think Python /knows/ that it has to repeat the code 10 
times on my "slow" and 100 times on your "fast" machine? It runs the bench 
once, then 10, then 100, then 1000 times -- until there's a run that takes 
0.2 secs or more. The total expected minimum time without startup overhead 
is then

+----calibration------+   +-measurement--+
(1 + 10 + 100) * 10msec + 3 * 100 * 10msec

or about 4 secs.
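The arithmetic above can be sketched as a small function. This is a simplified model of the timeit CLI's behavior, not its actual source: I'm assuming it tries 1, 10, 100, ... loops, settles on the first count of at least 10 whose run takes 0.2 secs or more, and then times that count three more times.

```python
def timeit_wallclock(per_loop, threshold=0.2, repeat=3):
    """Estimate the wall-clock time of `python -m timeit`, ignoring
    interpreter startup.  Model (an assumption, see above): calibration
    runs of 1, 10, 100, ... loops, each of which is itself timed, then
    `repeat` measured runs at the chosen loop count."""
    loops, calibration = 1, 0.0
    while True:
        calibration += loops * per_loop          # this trial run costs time too
        if loops >= 10 and loops * per_loop >= threshold:
            break                                # loop count settled
        loops *= 10
    measurement = repeat * loops * per_loop
    return loops, calibration + measurement

print(timeit_wallclock(0.0102))  # d(4, 8):  (100, ~4.19) -- the ~4 secs above
print(timeit_wallclock(0.480))   # d(4, 11): (10, 19.68)  -- the ~20 secs below
```

Under this model the d(4, 11) case comes out at exactly .480*(1 + 10 + 3*10) = 19.68 secs.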

> $ python3 -m timeit -s 'from debruijn_compat import debruijn_bytes as d'
> 'd(4, 11)'
> 10 loops, best of 3: 480 msec per loop
> This took ~20 secs vs .480*10

>>> .480*(1 + 10 + 3*10)
19.68

> d(4, 14) takes about 24 seconds (one run)

This is left as an exercise ;)

-- 
https://mail.python.org/mailman/listinfo/python-list
