On Mon, Jul 23, 2018 at 04:59:44PM +0200, Peter Zijlstra wrote:
> On Thu, Mar 08, 2018 at 06:15:41PM -0800, [email protected] wrote:
> > diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> > index ef47a418d819..86149b87cce8 100644
> > --- a/arch/x86/events/intel/core.c
> > +++ b/arch/x86/events/intel/core.c
> > @@ -2280,7 +2280,10 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
> >      * counters from the GLOBAL_STATUS mask and we always process PEBS
> >      * events via drain_pebs().
> >      */
> > -   status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
> > +   if (x86_pmu.flags & PMU_FL_PEBS_ALL)
> > +           status &= ~(cpuc->pebs_enabled & EXTENDED_PEBS_COUNTER_MASK);
> > +   else
> > +           status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
> >  
> >     /*
> >      * PEBS overflow sets bit 62 in the global status register
> 
> Doesn't this re-introduce the problem fixed in commit fd583ad1563be,
> where pebs_enabled:32-34 are PEBS Load Latency, instead of fixed
> counters?
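
A rough sketch of that aliasing, with illustrative bit values rather than
the real mask definitions -- without the PEBS_COUNTER_MASK, a load-latency
enable bit in pebs_enabled would eat the matching fixed-counter overflow
bit in GLOBAL_STATUS:

        /*
         * Illustrative only, not the actual kernel masks: on those parts
         * PEBS_ENABLE bit 32 is the load-latency enable for PMC0, while
         * GLOBAL_STATUS bit 32 is the FIXED_CTR0 overflow bit.
         */
        u64 pebs_enabled = BIT_ULL(0) | BIT_ULL(32); /* PEBS + LL on PMC0     */
        u64 status       = BIT_ULL(32);              /* FIXED_CTR0 overflowed */

        status &= ~pebs_enabled;                     /* overflow silently lost */

        /* vs. the masked form, which keeps the fixed-counter bit: */
        status = BIT_ULL(32);
        status &= ~(pebs_enabled & PEBS_COUNTER_MASK); /* bit 32 survives */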

Also, since they 'fixed' that conflict, the PEBS_ALL version could be:

        status &= ~cpuc->pebs_enabled;

Right?
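
That is, something like the below -- an untested sketch of that
simplification, keeping the legacy mask only for the !PEBS_ALL case:

        if (x86_pmu.flags & PMU_FL_PEBS_ALL)
                status &= ~cpuc->pebs_enabled;
        else
                status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);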
