On 31.05.2014 13:25, dennis luehring wrote:
On 31.05.2014 08:36, Russel Winder via Digitalmars-d wrote:
As well as the average (mean), you must provide standard deviation and
degrees of freedom so that a proper error analysis and t-tests are
feasible.

By average I mean the average of the benchmarked times.
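
For the record, computing what Russel asks for is cheap - a minimal sketch in D, assuming the individual run times were already collected into a double[] (all names here are mine, not from any benchmark framework):

import std.algorithm : map, sum;
import std.math : sqrt;

// mean and sample standard deviation of the benchmarked times;
// with n runs, the degrees of freedom for a t-test are n - 1
void runStats(double[] times, out double mean, out double stddev)
{
    immutable n = times.length;
    mean = times.sum / n;
    // Bessel's correction: divide by n - 1, not n
    stddev = sqrt(times.map!(t => (t - mean) * (t - mean)).sum / (n - 1));
}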

The dummy values are only there to keep the compiler from removing
anything it could reduce at compile time - that is what makes benchmarks
comparable. These values do not change the algorithm or the result
quality in any way - it's more like an overflowing second output based on
the result of the original algorithm (and it should be just a simple
addition or subtraction, ignoring overflow etc.).

That's the basis of all non-stupid benchmarking - the next/pro step
is to look at the resulting assembler code.


So the anti-optimizer overflowing second output, aka AOOSO, should be

initialized outside of the test function with a random value - I normally use the pointer to the main args, cast to int,

the AOOSO should be incremented by the needed result of the benchmarked
algorithm - that could be an int-casted float/double value, the varying length of a string, or whatever depends on the result enough to be usable,

and then the AOOSO should be returned as main's return value.

So the original algorithm isn't changed, but the compiler has absolutely nothing that lets it eliminate the usage and the final output of this AOOSO dummy value.
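
Here is a minimal sketch of the whole pattern in D - someAlgorithm is a hypothetical stand-in for whatever is being benchmarked, and the StopWatch part is just my usual timing scaffolding, not part of the trick:

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.math : sqrt;
import std.stdio : writeln;

double someAlgorithm(int n) // hypothetical stand-in for the code under test
{
    return sqrt(cast(double) n);
}

int main(string[] args)
{
    // AOOSO: seeded outside the measured code with a value the
    // compiler cannot know at compile time
    int aooso = cast(int) cast(size_t) args.ptr;

    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. 1_000_000)
        aooso += cast(int) someAlgorithm(i); // cheap overflowing fold of each result
    sw.stop();

    writeln("elapsed: ", sw.peek.total!"msecs", " ms");

    // returning the AOOSO makes every iteration's result observably used,
    // so the compiler cannot fold the loop away
    return aooso;
}

Compile with optimizations on (e.g. dmd -O -release -inline) and the loop still has to run, because the return value depends on every iteration.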

Yes, this ignores that the code size (and with it, cache behavior) is changed by the AOOSO incrementation - that's the reason for sticking to simple casting/overflowing integer stuff here. But if the benchmarking goes that deep, you should take a look at the assembler level anyway.

