On 4/10/12 5:40 AM, Jens Mueller wrote:
How come the times-based relative report and the percentage-based relative report are mixed in one result? And how do I choose which one I'd like to see in the output?
It's in the names. If the name of a benchmark starts with benchmark_relative_, then that benchmark is considered relative to the last non-relative benchmark. Using a naming convention allows complete automation in benchmarking a module.
I figure it's fine that all results appear together because the absence of data in the relative column clarifies which is which.
When benchmarking you can measure different things at the same time. In this regard the current proposal is limited: it measures only wall-clock time. I believe extending StopWatch to measure e.g. user CPU time would be a useful addition.
Generally I fear piling too much on StopWatch because every feature adds its own noise. But there's value in collecting the result of times(). What would be the Windows equivalent?
In general, allowing user-defined measurements would be great, e.g. to measure the time spent in user mode:

  () => { tms t; times(&t); return t.tms_utime; }

Note that this code does not need to be portable; you can also use version() else static assert. Things that come to mind that I'd like to measure:

Time measurements:
* User CPU time
* System CPU time
* Time spent in memory allocations

Count measurements:
* Memory usage
* L1/L2/L3 cache misses
* Number of executed instructions
* Number of memory allocations

Of course wall-clock time is the ultimate measure when benchmarking, but often you need to investigate further (doing more measurements). Do you think adding this is worthwhile?
Absolutely. I just worry about expanding the charter of the framework too much. Let's see:
* Memory usage is, I think, difficult to measure on Windows.
* Estimating cache misses and executed instructions is significant research.
* Counting memory allocations requires instrumenting druntime.

Andrei