I think std.benchmark is definitely a useful library addition, but in my mind it is currently a bit too limited.

* All tests are run 1000 times. Depending on how long the benchmarked function takes, this can be too many runs. In some cases it would be good to be able to trade the number of runs against accuracy.

* For all tests, the best run is selected, but would it not be reasonable in some cases to report the average value instead? Perhaps excluding the runs that are more than a couple of standard deviations away from the mean (see the first sketch after this list).

* Is there a way of specifying a test name other than the function name when using the 'mixin(scheduleForBenchmarking)' approach to register benchmarks?

* I would also like to be able (if possible) to set the two things mentioned above (number of runs and result strategy) through the mixin approach, or something similar.

* It seems like the baseline that is subtracted from subsequent test runs is taken from a call to the test function, passing 1 to it. Shouldn't 0 be passed for this value (see the second sketch below)?
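
To make the averaging point concrete, here is a rough sketch (plain D, not std.benchmark code) of the kind of outlier-filtered average I have in mind: drop the samples that fall more than k standard deviations from the mean, then average the rest. The function name and the choice of k = 2 are just illustrative.

import std.algorithm : filter, map, sum;
import std.array : array;
import std.math : abs, sqrt;

// Average of the samples lying within k standard deviations of the mean.
// Assumes samples is non-empty.
double filteredMean(double[] samples, double k = 2.0)
{
    immutable mean = samples.sum / samples.length;
    immutable dev  = sqrt(samples.map!(x => (x - mean) ^^ 2).sum / samples.length);
    auto kept = samples.filter!(x => abs(x - mean) <= k * dev).array;
    return kept.length ? kept.sum / kept.length : mean;
}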

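And for the baseline point, a minimal sketch of the overhead-subtraction idea I mean, using an ordinary StopWatch rather than std.benchmark's internals. The function myBenchmark and the iteration-count parameter are assumptions for illustration only: the baseline run should do zero iterations of the work, so that only the call/timing overhead is subtracted.

import core.time : Duration;
import std.datetime.stopwatch : StopWatch, AutoStart;

// Time one call of a benchmark function that takes an iteration count.
Duration timeIt(void function(uint) fun, uint n)
{
    auto sw = StopWatch(AutoStart.yes);
    fun(n);
    return sw.peek();
}

// With a hypothetical `void myBenchmark(uint n)`, the baseline should be
// the n == 0 call (pure overhead), not n == 1:
//   auto overhead = timeIt(&myBenchmark, 0);
//   auto total    = timeIt(&myBenchmark, 1000);
//   auto net      = total - overhead;
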
If these points can be addressed, I would like to see it added to the library!
