On Wed, Apr 09, 2014 at 02:31:00PM +0900, Namhyung Kim wrote:
> On Mon, 24 Mar 2014 15:34:36 -0400, Don Zickus wrote:
> > The cache contention tool needs to keep all the perf records unique in
> > order to properly parse all the data.  Currently add_hist_entry() will
> > combine duplicate records and add the weight/period to the existing
> > record.
> >
> > This throws away the unique data the cache contention tool needs (mainly
> > the data source).  Create a flag to force the records to stay unique.
> 
> No.  This is why I said you need to add 'mem' and 'snoop' sort keys into
> the c2c tool.  This is not how sort works IMHO - if you need to make
> samples unique, let the sort key(s) distinguish them somehow, or you can
> combine identical samples (in terms of sort keys) and use the combined
> entry's stat.nr_events and stat.period or weight.

Ok.  I understand your point.  Perhaps this was due to my not fully
understanding the sorting algorithm when I did this.  I can look into
adding the 'mem' and 'snoop' sort keys; a rough sketch of the idea is
below.
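
For reference, here is a minimal, self-contained sketch of the idea
(the struct and field names are hypothetical, not perf's actual
hist_entry code): once the comparator also covers the data-source
fields, two samples only compare equal - and therefore only get
combined, with their period/weight summed - when the data source
matches as well:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the relevant hist entry fields. */
    struct entry {
            uint64_t ip;      /* sampled instruction address */
            uint64_t mem_lvl; /* memory level (data source)  */
            uint64_t snoop;   /* snoop result (data source)  */
    };

    /*
     * With 'mem' and 'snoop' acting as sort keys, two entries compare
     * equal - and so would be combined - only when the data source
     * matches too, which preserves the information c2c needs.
     */
    static int entry_cmp(const struct entry *a, const struct entry *b)
    {
            if (a->ip != b->ip)
                    return a->ip < b->ip ? -1 : 1;
            if (a->mem_lvl != b->mem_lvl)
                    return a->mem_lvl < b->mem_lvl ? -1 : 1;
            if (a->snoop != b->snoop)
                    return a->snoop < b->snoop ? -1 : 1;
            return 0; /* equal in every key: safe to combine */
    }

    int main(void)
    {
            struct entry a = { 0x400123, 3, 1 };
            struct entry b = { 0x400123, 3, 2 };

            /* Same address, different snoop result: kept separate. */
            printf("cmp = %d\n", entry_cmp(&a, &b));
            return 0;
    }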

One concern I do have is that we were calculating statistics based on the
weight (mean, median, stddev).  I was afraid that combining the entries
would throw off those calculations, since we could no longer determine
them accurately.  Is that true?
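
As far as I can tell, the mean and stddev at least can survive combining:
they can be maintained exactly as running statistics while samples are
folded into one entry.  Here is a minimal sketch using Welford's online
algorithm (the struct and function names are made up for illustration;
this is not perf's code):

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical per-entry stats; not perf's struct he_stat. */
    struct weight_stats {
            unsigned long nr; /* samples folded into this entry      */
            double mean;      /* running mean of the weights         */
            double m2;        /* sum of squared deviations (Welford) */
    };

    /* Fold one sample's weight into the running statistics. */
    static void weight_stats_update(struct weight_stats *ws, double w)
    {
            double delta = w - ws->mean;

            ws->nr++;
            ws->mean += delta / ws->nr;
            ws->m2 += delta * (w - ws->mean);
    }

    static double weight_stats_stddev(const struct weight_stats *ws)
    {
            return ws->nr > 1 ? sqrt(ws->m2 / (ws->nr - 1)) : 0.0;
    }

    int main(void)
    {
            struct weight_stats ws = { 0, 0.0, 0.0 };
            const double weights[] = { 30.0, 42.0, 37.0, 51.0 };

            for (int i = 0; i < 4; i++)
                    weight_stats_update(&ws, weights[i]);

            printf("n=%lu mean=%.2f stddev=%.2f\n",
                   ws.nr, ws.mean, weight_stats_stddev(&ws));
            return 0;
    }

An exact median is a different story, though: it genuinely requires
keeping the individual weights (or settling for an approximation), so
that part of the concern seems valid.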

Cheers,
Don