> On Fri, Dec 02, 2016 at 04:19:11PM -0500, [email protected] wrote:
> > From: Kan Liang <[email protected]>
> >
> > On x86, NMI handler is the most important part which brings overhead
> > for sampling. Adding a pmu specific overhead type
> > PERF_PMU_SAMPLE_OVERHEAD for it.
> >
> > For other architectures which may not have an NMI, the overhead type
> > can be reused.
> >
> > Signed-off-by: Kan Liang <[email protected]>
> > ---
> >  arch/x86/events/core.c          | 17 ++++++++++++++++-
> >  arch/x86/events/perf_event.h    |  2 ++
> >  include/uapi/linux/perf_event.h |  1 +
> >  3 files changed, 19 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> > index 9d4bf3a..de40f96 100644
> > --- a/arch/x86/events/core.c
> > +++ b/arch/x86/events/core.c
> > @@ -1397,6 +1397,9 @@ static void x86_pmu_del(struct perf_event *event, int flags)
> >
> >     perf_event_update_userpage(event);
> >
> > +   if ((flags & PERF_EF_LOG) && cpuc->nmi_overhead.nr)
> > +		perf_log_overhead(event, PERF_PMU_SAMPLE_OVERHEAD, &cpuc->nmi_overhead);
> > +
> >  do_del:
> >     if (x86_pmu.del) {
> >             /*
> 
> That's not at all mentioned in the changelog, and it clearly isn't
> nmi_overhead.

Here it only logs the already-recorded overhead; it does not calculate it.
The calculation is done in the NMI handler, as below. I will make this
clear in the changelog.

@@ -1492,8 +1505,10 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
        start_clock = sched_clock();
        ret = x86_pmu.handle_irq(regs);
        finish_clock = sched_clock();
+       clock = finish_clock - start_clock;
 
-       perf_sample_event_took(finish_clock - start_clock);
+       perf_calculate_nmi_overhead(clock);
+       perf_sample_event_took(clock);
 
        return ret;
 }

Thanks,
Kan