Antoine Pitrou wrote:
In CPython, looking for reference cycles is a parasitic task that
interferes with what you are trying to measure. It is not critical in
any way, and you can schedule it much less often if it takes too much
CPU, without any really adverse consequences. timeit takes the safe way
and disables it completely.
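(For reference, Timer.timeit() really does wrap the measured loop in gc.disable()/gc.enable(); roughly, and leaving out the compiled loop template, it behaves like this simplified sketch — time.perf_counter stands in for timeit.default_timer here:)

    import gc
    import itertools
    import time

    def timed_loop(stmt, number, timer=time.perf_counter):
        # Simplified sketch of timeit.Timer.timeit(): the cyclic collector
        # is switched off around the measured loop and restored afterwards,
        # but only if it was enabled to begin with.
        it = itertools.repeat(None, number)
        gcold = gc.isenabled()
        gc.disable()
        try:
            t0 = timer()
            for _ in it:
                stmt()
            return timer() - t0
        finally:
            if gcold:
                gc.enable()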

In PyPy, it doesn't seem like gc.disable() should do anything, since you'd
lose all automatic memory management if the GC were disabled.

It disables finalizers, but that is beside the point. The point is
that people use the timeit module to compute the absolute time it takes
CPython to do things, among other things to compare it with PyPy. While I
do agree that in microbenchmarks you don't lose much by just
disabling it, it does affect larger applications. So answering a
question like "how much time will json encoding take in my
application" should take cyclic GC time into account.

If you are only measuring json encoding of a few select pieces of data,
then it's a microbenchmark.
If you are measuring the whole application (or a significant part of it),
then I'm not sure timeit is the right tool for that.


Perhaps timeit should grow a macro-benchmark tool too? I find myself often using timeit to time macro-benchmarks simply because it's more convenient at the interactive interpreter than the alternatives.

Something like this idea perhaps?

http://preshing.com/20110924/timing-your-code-using-pythons-with-statement
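(Something along the lines of that article, as a minimal sketch; the name `timed` and its label are made up here for illustration:)

    import time
    from contextlib import contextmanager

    @contextmanager
    def timed(label="block"):
        # Time an arbitrary block at the interactive prompt; the cyclic GC
        # stays enabled, so its cost is part of the measurement.
        start = time.perf_counter()
        try:
            yield
        finally:
            print("%s: %.3f s" % (label, time.perf_counter() - start))

    with timed("json round-trip"):
        import json
        json.loads(json.dumps(list(range(100000))))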




--
Steven
