On Thu, Nov 27, 2014 at 09:56:21AM +0900, Namhyung Kim wrote:
> Hi Milian,
> 
> On Wed, 26 Nov 2014 19:11:01 +0100, Milian Wolff wrote:
> > I tried this on a benchmark of mine:
> >
> > before:
> > [ perf record: Woken up 196 times to write data ]
> > [ perf record: Captured and wrote 48.860 MB perf.data (~2134707 samples) ]
> >
> > after, with dwarf,512
> > [ perf record: Woken up 18 times to write data ]
> > [ perf record: Captured and wrote 4.401 MB perf.data (~192268 samples) ]
> >
> > What confuses me though is the number of samples. When the workload is 
> > equal, 
> > shouldn't the number of samples stay the same? Or what does this mean? The 
> > resulting reports both look similar enough.
> 
> It's bogus - it just calculates the number of samples based on the file
> size (with fixed sample size).  I think we should either show the correct
> number as we post-process samples for build-id detection or simply
> remove it.

Well, since we set up the perf_event_attr ourselves, we could perhaps do a
better job of estimating this... In this case we even know how much stack
dump we will take at each sample, which would be the major culprit for the
current misestimation.

And yes, if we do the post-processing, we can do a precise calculation.

- Arnaldo