On Thu, Sep 06, 2012 at 08:41:09PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-09-06 at 17:46 +0200, Jiri Olsa wrote:
> > The 'perf diff' and 'std/hist' code is now changed to allow computations
> > mentioned in the paper. Two of them are implemented within this patchset:
> > 1) ratio differential profiling
> > 2) weighted differential profiling
>
> Seems like a useful thing indeed, the explanation of the weighted diff
> method doesn't seem to contain a why. I know I could go read the paper
> but... :-)
Or you could ask the author. ;-)

Ratio can be fooled by statistical variations in profiling buckets with few counts. So if you are looking for a 10% difference in execution overhead somewhere in a large program, ratio will unhelpfully sort a bunch of statistical 2x or 3x noise to the top of the list.

You could instead use the difference in bucket counts rather than the ratio, but this has problems when the two runs being compared got different amounts of work done, as is usually the case for timed benchmark runs or throughput-based benchmark runs. In these cases, you use the work done (or the measured throughput, as the case may be) as the weights. The weighted difference will then pinpoint the code that suffered the greatest per-unit-work increase in overhead between the two runs.

But ratio is still the method of choice when you are looking at pure scalability issues, especially when the offending code increased in overhead by a large factor.

							Thanx, Paul
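[A rough sketch of the two comparison methods described above. This is not perf's actual implementation; the bucket names, sample counts, and work-done weights are all hypothetical, chosen only to show how a low-count bucket fools the ratio method while the weighted difference does not.]

```python
# Per-symbol sample counts from two hypothetical profiling runs.
run_a = {"hot_func": 1000, "rare_func": 2}
run_b = {"hot_func": 1100, "rare_func": 6}

# Work done in each run (e.g. transactions completed), used as weights.
work_a, work_b = 500, 550

def ratio_diff(a, b):
    # Ratio method: per-bucket ratio of counts.  Buckets with very few
    # counts, like rare_func, produce noisy 2x-3x ratios that sort to
    # the top of the list.
    return {k: b[k] / a[k] for k in a}

def weighted_diff(a, b, wa, wb):
    # Weighted method: difference in per-unit-work overhead per bucket.
    # This pinpoints code whose overhead per unit of work actually grew.
    return {k: b[k] / wb - a[k] / wa for k in a}

# Ratio flags rare_func with a noisy 3x, even though its absolute
# contribution is tiny; hot_func shows only 1.1x.
print(ratio_diff(run_a, run_b))

# Weighted difference shows hot_func's per-unit-work overhead is
# unchanged (0.0), and rare_func's increase is tiny in absolute terms.
print(weighted_diff(run_a, run_b, work_a, work_b))
```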