On 21 Aug 2012, at 19:25, Steven D'Aprano <st...@pearwood.info> wrote:

> On 21/08/12 23:04, Victor Stinner wrote:
>
>> I don't like the timeit module for micro benchmarks, it is really
>> unstable (default settings are not written for micro benchmarks).
> [...]
>> I wrote my own benchmark tool, based on timeit, to have more stable
>> results on micro benchmarks:
>> https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
>
> I am surprised, because the whole purpose of timeit is to time micro
> code snippets.
And when invoked from the command line, it is already time-based: unless -n
is specified, Python guesstimates the number of iterations to be a power of
10 resulting in at least 0.2s per test (the repeat defaults to 3, though).

As a side note, every time I use timeit programmatically, it annoys me that
this behavior is not available and has to be implemented manually.

> If it is as unstable as you suggest, and if you have an alternative
> which is more stable and accurate, I would love to see it in the
> standard library.

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
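For what it's worth, the command-line behavior described above (grow the loop
count by powers of 10 until a single run takes at least 0.2s, then take the
best of 3 repeats) can be reimplemented on top of timeit.Timer in a few lines.
This is just an illustrative sketch; the function name, the 0.2s threshold
argument, and the return convention are my own, not stdlib API:

```python
import timeit

def autorange(stmt="pass", setup="pass", repeat=3, threshold=0.2):
    """Mimic timeit's command-line loop selection (illustrative helper,
    not part of the timeit API): pick the number of iterations as a
    power of 10 so one run takes at least `threshold` seconds, then
    repeat and return (number, best time per iteration)."""
    timer = timeit.Timer(stmt, setup)
    number = 1
    # Grow by powers of 10 until one run is long enough to be measurable.
    while timer.timeit(number) < threshold:
        number *= 10
    # Best of `repeat` runs, as the command line does, to reduce noise.
    best = min(timer.repeat(repeat=repeat, number=number))
    return number, best / number

number, per_loop = autorange("sum(range(100))")
print(number, per_loop)
```

The "best of N" convention matters here: the minimum is the least noisy
estimate, since interference from the rest of the system can only make a
run slower, never faster.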