On Fri, 05 Jul 2013 16:50:41 +0000, Steven D'Aprano wrote:

> On Fri, 05 Jul 2013 16:07:03 +0000, Helmut Jarausch wrote:
>
>> The solution above takes 0.79 seconds (mean of 100 calls) while the
>> following version takes 1.05 seconds (mean of 100 calls):
>
> 1) How are you timing the calls?
I've put the work horse "Solve" into a loop executing 100 times. That's
on an otherwise idle Linux machine.

> 2) Don't use the mean, that's the wrong statistic when you are
> measuring something where the error is always one sided. You want the
> minimum, not the mean.

Here you assume that the time measurement itself is free of error - I
doubt that. If the timing version, which executes the function "Solve"
one hundred times, runs for about 80-100 seconds without significant
variation, then taking the mean is mathematically correct. I can't take
the minimum since I don't measure the time a single call takes.

> When you measure the time taken for a piece of code, the number you
> get is made up of two components:
>
> 1) the actual time the code would have taken, if there were no random
> fluctuations due to other processes, etc.; and
>
> 2) random errors due to switching to other processes, etc.
>
> Both of these are unknown; you only know the total. But obviously the
> random errors are always positive. They can never be negative, and you
> can never measure a time which is less than the fastest your code
> could run.
>
> (If your anti-virus software starts scanning in the middle of the
> trial, it can only make your code take more time to run, never less.)
>
> So the only way to minimize the error is to pick the minimum time, not
> the average. The average just gives you:
>
> - some unknown "true" time, plus some unknown error, somewhere between
>   the smallest error and the biggest error;
>
> whereas the minimum gives you:
>
> - some unknown "true" time, plus the smallest error yet seen.
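For what it's worth, the minimum-based measurement Steven describes
doesn't require timing a single call by hand: timeit.repeat runs the
whole 100-call loop several times and you take the minimum over the
trials. A minimal sketch (the body of Solve here is just a stand-in for
the real solver):

import timeit

def Solve():
    # Hypothetical stand-in for the real "Solve" under discussion.
    return sum(i * i for i in range(10000))

# Each trial runs Solve() `number` times and returns the total time for
# that trial; the minimum over the trials is the total least inflated
# by other processes.
trials = timeit.repeat(Solve, number=100, repeat=5)
print("best per-call time: %.6f s" % (min(trials) / 100))

Dividing the best trial's total by the loop count still gives a
per-call figure, so nothing about the 100-call setup has to change.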