On Wednesday, 19 September 2012 at 08:28:36 UTC, Manu wrote:
On 19 September 2012 01:02, Andrei Alexandrescu <seewebsiteforem...@erdani.org> wrote:
On 9/18/12 5:07 PM, "Øivind" wrote:
* For all tests, the best run is selected, but would it not be reasonable in some cases to get the average value? Maybe excluding the runs that are more than a couple of standard deviations away from the mean value..
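Something like this minimal Haskell sketch of an outlier-trimmed average (the names trimmedMean and the cutoff k are illustrative, not anything in std.benchmark):

-- Arithmetic mean of a non-empty sample.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Population standard deviation of a non-empty sample.
stddev :: [Double] -> Double
stddev xs = sqrt (mean [(x - m) * (x - m) | x <- xs])
  where m = mean xs

-- Mean after dropping runs more than k standard deviations from the mean.
-- With few runs, a single extreme outlier inflates the deviation itself,
-- so a small k (or an iterated trim) may be needed; a real implementation
-- would also guard against trimming away every sample.
trimmedMean :: Double -> [Double] -> Double
trimmedMean k xs = mean [x | x <- xs, abs (x - m) <= k * s]
  where m = mean xs
        s = stddev xs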
After extensive tests with a variety of aggregate functions, I can say firmly that taking the minimum time is by far the best when it comes to assessing the speed of a function.
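A minimal sketch of that min-of-N-runs idea, in Haskell for consistency with the Criterion reference below (timeOnce and minTime are illustrative names; getCPUTime is coarse, and a real harness would prefer a monotonic wall clock):

import Control.Monad (replicateM)
import System.CPUTime (getCPUTime)

-- Time one run of an IO action, in seconds (getCPUTime is in picoseconds).
timeOnce :: IO a -> IO Double
timeOnce act = do
  start <- getCPUTime
  _     <- act
  end   <- getCPUTime
  return (fromIntegral (end - start) / 1e12)

-- Run the action n times and keep only the minimum: the fastest run is
-- the one least disturbed by the OS, the caches, and other noise.
minTime :: Int -> IO a -> IO Double
minTime n act = minimum <$> replicateM n (timeOnce act)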
The fastest execution time is rarely useful to me; I'm almost always much more interested in the slowest execution time.

In realtime software, the slowest time is often the only important factor; everything must be designed to tolerate this possibility.

I can also imagine other situations where multiple workloads are competing for time; the average time may be more useful in that case.
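If the harness keeps every raw sample, all three aggregates are cheap to report side by side; a sketch (the names are illustrative):

-- Summarize per-run timings in seconds. Which statistic matters depends
-- on the question: min for raw speed, max for worst-case latency,
-- mean when many workloads share the machine.
data Summary = Summary { best, worst, avg :: Double }
  deriving Show

summarize :: [Double] -> Summary
summarize xs =
  Summary (minimum xs) (maximum xs) (sum xs / fromIntegral (length xs))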
For comparison's sake, the Criterion benchmarking package for Haskell is worth a look:

http://www.serpentine.com/blog/2009/09/29/criterion-a-new-benchmarking-library-for-haskell/

Criterion accounts for clock-call costs, displays various central tendencies, reports outliers (and their significance: whether the variance is significantly affected by the outliers), etc., etc. It's a very well-conceived benchmarking system, and might well be worth stealing from.
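For reference, a typical Criterion benchmark looks roughly like this (the fib example follows the package's own tutorial; whnf times evaluating the result to weak head normal form, so laziness doesn't hide the work):

import Criterion.Main

fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bgroup "fib"
      [ bench "10" $ whnf fib 10
      , bench "20" $ whnf fib 20
      ]
  ]

Running the resulting executable measures each benchmark repeatedly and reports the estimated mean, standard deviation, and outlier influence.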
Best,
Graham