On 9/21/12 5:36 PM, David Piepgrass wrote:
>>> Some random comments about std.benchmark based on its
>>> documentation:
>>>
>>> - It is very strange that the documentation of printBenchmarks
>>> uses neither of the words "average" or "minimum", and doesn't say
>>> how many trials are done....

>> Because all of those are irrelevant and confusing.

> Huh? It's not nearly as confusing as reading the documentation and
> not having the faintest idea what it will do. The way the benchmarker
> works is somehow 'irrelevant'? The documentation doesn't even
> indicate that the functions are to be run more than once!!

I misunderstood. I agree that it's a good thing to specify how
benchmarking proceeds.

>> I don't think that's a good idea.

> I have never seen you make such vague arguments, Andrei.

I had expanded my point elsewhere. Your suggestion was:

> - It is very strange that the documentation of printBenchmarks uses
> neither of the words "average" or "minimum", and doesn't say how many
> trials are done.... I suppose the obvious interpretation is that it
> only does one trial, but then we wouldn't be having this discussion
> about averages and minimums, right? Øivind says tests are run 1000
> times... but it needs to be configurable per-test (my idea: support a
> _x1000 suffix in function names, or _for1000ms to run the test for at
> least 1000 milliseconds; and allow a multiplier when running a
> group of benchmarks, e.g. a multiplier argument of 0.5 means to only
> run half as many trials as usual.) Also, it is not clear from the
> documentation what the single parameter to each benchmark is (define
> "iterations count".)
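For illustration, the naming convention proposed above could be parsed roughly like this (a Python sketch; the function name, suffixes, and regexes are hypothetical, taken only from the suggestion quoted here, not from anything std.benchmark actually does):

```python
import re

def parse_benchmark_name(name):
    """Extract trial-count hints from a benchmark function's name.

    Hypothetical convention from the suggestion above:
      - a ``_x1000`` suffix requests 1000 trials;
      - a ``_for1000ms`` suffix requests running for at least 1000 ms.
    Everything here is illustrative, not part of std.benchmark.
    """
    m = re.search(r"_x(\d+)$", name)
    if m:
        return ("trials", int(m.group(1)))
    m = re.search(r"_for(\d+)ms$", name)
    if m:
        return ("duration_ms", int(m.group(1)))
    # No suffix: fall back to the harness's default trial count.
    return ("default", None)
```

A group-level multiplier (the 0.5 in the suggestion) would then simply scale the parsed trial count before running.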

I don't think it's a good idea, because a "for 1000 ms" figure says nothing except how good the clock resolution was on the system. I'm as strongly convinced we shouldn't print useless information as I am that we should print useful information.
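On the minimum-versus-average question the thread keeps circling: a common rationale for running many trials and reporting the minimum is that scheduler and cache noise only ever adds time, so the minimum is the best estimate of the code's intrinsic cost. A Python sketch of that idea (illustrative only, not std.benchmark's actual implementation):

```python
import time

def best_of(fun, trials=1000):
    """Time `fun` repeatedly and return the minimum elapsed time.

    Noise from scheduling, interrupts, and cold caches can only
    inflate a measurement, never deflate it, so the minimum over
    many trials approximates the true cost better than the average.
    Sketch only; not how std.benchmark is implemented.
    """
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        fun()
        best = min(best, time.perf_counter() - t0)
    return best
```

This is also why a fixed trial count (or a count derived from observed timings) is more informative to report than "ran for 1000 ms".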


Andrei
