On Fri, 18 Mar 2011 17:06:50 +1100
Steven D'Aprano <st...@pearwood.info> wrote:
> 
> In contrast, timeit defaults to using time.time() under all operating 
> systems other than Windows, and says:
> 
>      ...on Windows, clock() has microsecond granularity but
>      time()'s granularity is 1/60th of a second; on Unix,
>      clock() has 1/100th of a second granularity and time()
>      is much more precise.
> 
> 
> Should timeit be changed, or the docs, or have I missed something?

Well, time.clock() is less precise (in terms of timing granularity),
but more accurate if the system is not otherwise idle, since it
reports CPU time for the current process rather than total wall-clock
time, removing the direct contribution of other processes.
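
To make the distinction concrete, here is a small sketch (assuming
Unix semantics, where clock() measures CPU time of the current
process):

    import time

    wall_start = time.time()
    cpu_start = time.clock()   # CPU time of this process (Unix semantics)
    time.sleep(1)              # sleeping uses wall-clock time, almost no CPU
    print("wall elapsed: %.3f" % (time.time() - wall_start))   # roughly 1.0
    print("CPU elapsed:  %.3f" % (time.clock() - cpu_start))   # close to 0.0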

That said, benchmarks on a loaded system are inaccurate anyway, because
of other factors (such as cost of context switching, TLB and cache
misses, concurrent memory access from several processes).

I think a rule of thumb can be made:
- for short-running benchmarks (a couple of seconds at most), use
  time.time() which will give increased precision
- for longer-running benchmarks, use time.clock() (or the system's
  "time" command) which will avoid counting the runtime of other
  processes

timeit is in the former case.
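
For illustration, here is a minimal sketch of choosing the timer
explicitly through timeit.Timer's timer argument (the statement being
timed is just a made-up placeholder):

    import time
    import timeit

    stmt = "sum(range(1000))"   # placeholder workload, purely illustrative

    # Short-running benchmark: wall-clock time (timeit's default on Unix).
    t_short = timeit.Timer(stmt, timer=time.time).timeit(number=10000)

    # Longer-running benchmark: CPU time of this process only, so the
    # runtime of other processes is not counted (time.clock() on Unix).
    t_long = timeit.Timer(stmt, timer=time.clock).timeit(number=1000000)

    print(t_short, t_long)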

(this is all about Unix, by the way)

Regards

Antoine.

