- It is very strange that the documentation of printBenchmarks uses neither the word "average" nor "minimum", and doesn't say how many trials are done.... I suppose the obvious interpretation is that it only does one trial, but then we wouldn't be having this discussion about averages and minimums, right? Øivind says tests are run 1000 times... but it needs to be configurable per-test (my idea: support a _x1000 suffix in function names, or _for1000ms to run the test for at least 1000 milliseconds; and allow a multiplier when running a group of benchmarks, e.g. a multiplier argument of 0.5 means to run only half as many trials as usual). Also, it is not clear from the documentation what the single parameter to each benchmark is (define "iterations count").
I don't think it's a good idea because the "for 1000 ms"
doesn't say anything except how good the clock resolution was
on the system. I'm as strongly convinced that we shouldn't print useless information as I am that we should print useful information.
I am puzzled about what you think my suggestion meant. I am
suggesting allowing the user to configure how long benchmarking
takes. Some users might want to run their benchmark for an hour
to get stable and reliable numbers; others don't want to wait and
want to see results ASAP. Perhaps the *same* user will want to
run benchmarks quickly while developing them and then do a "final
run" with more trials once their benchmark suite is complete.
Also, some individual benchmark functions will take microseconds to complete; others may take seconds. All I'm suggesting are simple ways to avoid wasting users' time, without making std.benchmark overly complicated.
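To be concrete, all I have in mind is a timing loop with two knobs: a trial cap and a time budget, so that both the impatient developer and the hour-long "final run" are served. The sketch below assumes nothing about std.benchmark's internals; bestOf is a made-up name, and it reports the minimum, though the same shape works for averages:

    import core.time : Duration, msecs;
    import std.datetime.stopwatch : AutoStart, StopWatch;

    // Run fun() at most maxTrials times, stopping early once the time
    // budget is exhausted; report the minimum duration observed.
    Duration bestOf(alias fun)(size_t maxTrials, Duration budget)
    {
        auto total = StopWatch(AutoStart.yes);
        auto best = Duration.max;
        foreach (i; 0 .. maxTrials)
        {
            auto sw = StopWatch(AutoStart.yes);
            fun();
            sw.stop();
            if (sw.peek < best)
                best = sw.peek;
            if (total.peek >= budget)
                break;
        }
        return best;
    }

While developing, you might call bestOf!(myBenchmark)(100, 100.msecs) and get an answer immediately; the final run could use bestOf!(myBenchmark)(1_000_000, 3_600_000.msecs). The _x1000/_for1000ms suffixes and the group multiplier are just per-test and per-run ways of setting those two knobs.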