On Mon, Apr 4, 2011 at 7:45 PM, Steven D'Aprano
<steve+comp.lang.pyt...@pearwood.info> wrote:
> I'm writing some tests to check for performance regressions (i.e. you
> change a function, and it becomes much slower) and I was hoping for some
> guidelines or hints.
>
> This is what I have come up with so far:
>
> * The disclaimers about timing code snippets that can be found in the
> timeit module apply. If possible, use timeit rather than roll-your-own
> timers.
Huh. One of the biggest lessons I learned while looking into timing
attacks was *not* to use timeit: the overhead and variance involved in
using it can swallow small changes in behavior in ways that are fairly
opaque until you really take it apart.

> * Put performance tests in a separate test suite, because they're
> logically independent of regression tests and functional tests, and
> therefore you might not want to run them all the time.
>
> * Never compare the speed of a function to some fixed amount of time,
> since that will depend on the hardware you are running on, but compare
> it relative to some other function's running time. E.g.:
>
> # Don't do this:
> time_taken = Timer(my_func).timeit()  # or similar
> assert time_taken <= 10
> # This is bad, since the test is hardware dependent, and a change
> # in environment may cause this to fail even if the function
> # hasn't changed.
>
> # Instead do this:
> time_taken = Timer(my_func).timeit()
> baseline = Timer(simple_func).timeit()
> assert time_taken <= 2*baseline
> # my_func shouldn't be more than twice as expensive as simple_func
> # no matter how fast or slow they are in absolute terms.
>
> Any other lessons or hints I should know?

If you can get on it, emulab is great for doing network performance and
correctness testing, and even if you can't, it might be worth running a
small one at your company. I wish I'd found out about it years ago.

Geremy Condra
--
http://mail.python.org/mailman/listinfo/python-list
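For what it's worth, the relative-baseline pattern from the quoted
sketch can be written out as a runnable test. The function bodies and
names below are made-up placeholders; the one real refinement is using
Timer.repeat() and taking the minimum, which the timeit docs recommend
as the least noisy estimate of the true cost:

```python
import timeit

def simple_func():
    # Hypothetical baseline workload: build a small list.
    return [i * i for i in range(100)]

def my_func():
    # Hypothetical function under test: the same work plus a sum,
    # so it should cost only slightly more than the baseline.
    data = [i * i for i in range(100)]
    return sum(data)

# timeit.Timer accepts a callable directly. Take the minimum of
# several repeats rather than a single run to reduce scheduler noise.
time_taken = min(timeit.Timer(my_func).repeat(repeat=5, number=1000))
baseline = min(timeit.Timer(simple_func).repeat(repeat=5, number=1000))

# Relative assertion, as in the thread: hardware-independent, since
# both measurements come from the same machine under the same load.
assert time_taken <= 2 * baseline, (time_taken, baseline)
```

The 2x factor is arbitrary; in practice you would tune it per function,
keeping in mind Geremy's caveat that timeit's own overhead can mask
changes much smaller than the threshold.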