On 9/18/12 5:07 PM, "Øivind" wrote:
I think std.benchmark is definitely a useful library addition, but
in my mind it is currently a bit too limited.

* All tests are run 1000 times. Depending on the length of the test to
benchmark, this can be too much. In some cases it would be good to be
able to trade the number of runs against accuracy.

It would be a good idea to make that a configurable parameter.

* For all tests, the best run is selected, but would it not be
reasonable in some cases to get the average value? Maybe excluding the
runs that are more than a couple of std. deviations away from the mean value.

After extensive tests with a variety of aggregate functions, I can say firmly that taking the minimum time is by far the best when it comes to assessing the speed of a function. Noise from the OS and other processes can only add to a run's time, never reduce it, so the minimum is the closest estimate of the code's intrinsic cost.
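
To illustrate both points above (a configurable run count and taking the minimum), here is a minimal standalone sketch using StopWatch from std.datetime.stopwatch; the workload and the run count are placeholders, and this is not std.benchmark's API:

import core.time : Duration;
import std.algorithm : min, sort;
import std.array : array;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.range : iota;
import std.stdio : writefln;

// Placeholder workload; substitute the function being benchmarked.
void workload()
{
    auto a = iota(1_000, 0, -1).array;
    sort(a);
}

void main()
{
    enum runs = 1_000;        // repetition count -- the knob one would want configurable
    auto best = Duration.max;

    foreach (_; 0 .. runs)
    {
        auto sw = StopWatch(AutoStart.yes);
        workload();
        sw.stop();
        best = min(best, sw.peek());   // keep only the fastest run
    }

    writefln("best of %s runs: %s usecs", runs, best.total!"usecs");
}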

* Is there a way of specifying a test name other than the function-name
when using the 'mixin(scheduleForBenchmarking)' approach to register
benchmarks?

Not currently. A way to manually register an individual benchmark under a custom name would probably make sense.

* I would also like to be able (if possible) to register the two things
mentioned above (number of runs and result strategy) with the mixin
approach (or similar).

Makes sense.
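
To make that concrete, registration could bundle a display name with per-benchmark options (run count, aggregation strategy). The names and types below are hypothetical, an illustration only and not anything from the std.benchmark proposal:

// Hypothetical aggregation strategies and per-benchmark options.
enum Aggregate { minimum, mean }

struct BenchOptions
{
    uint runs = 1_000;                        // how many repetitions to perform
    Aggregate aggregate = Aggregate.minimum;  // how to summarise them
}

struct Registered
{
    string name;          // display name, independent of the function name
    void function() fun;  // code under test
    BenchOptions options;
}

Registered[] registry;

// Explicit registration with a custom name and options, as an
// alternative to mixin(scheduleForBenchmarking).
void registerBenchmark(string name, void function() fun,
                       BenchOptions options = BenchOptions.init)
{
    registry ~= Registered(name, fun, options);
}

unittest
{
    // Example: 10_000 runs, report the mean instead of the minimum.
    registerBenchmark("sort 1k ints", function() {},
                      BenchOptions(10_000, Aggregate.mean));
}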

* It seems like the baseline for subtraction from subsequent test runs
is taken from a call to the test function, passing 1 to it. Shouldn't 0
be passed for this value?

I'll look into that.
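
For reference, the baseline is meant to capture only the fixed overhead of the call and timing machinery, which then gets subtracted from every measurement. A rough standalone sketch of that idea, with a hypothetical testFun(n) that performs n iterations of work:

import core.time : Duration;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writefln;

// Hypothetical benchmark function taking an iteration count,
// in the style discussed above.
void testFun(uint n)
{
    foreach (_; 0 .. n)
    {
        // real work would go here
    }
}

Duration time(scope void delegate() dg)
{
    auto sw = StopWatch(AutoStart.yes);
    dg();
    sw.stop();
    return sw.peek();
}

void main()
{
    // Passing 0 times only the call and timing overhead;
    // passing 1 would also include one iteration of actual work.
    auto baseline = time(delegate() { testFun(0); });
    auto measured = time(delegate() { testFun(1_000); });
    writefln("net: %s usecs", (measured - baseline).total!"usecs");
}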


Thanks,

Andrei
