On Tue, Nov 29, 2016 at 06:30:55PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 29, 2016 at 09:20:10AM -0800, Stephane Eranian wrote:
> > Max period is limited by the number of bits the kernel can write to an MSR.
> > It used to be 31; now it is 47 for the core PMU, as per the patch pointed to by Kan.
> 
> No, I think it sets it to 48 now, which is the problem. It should be 1
> bit less than the total width.
> 
> So something like so.

That looks good.  Kan, can you test it?
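
For reference, a minimal userspace sketch of the arithmetic (assuming a 48-bit
full-width counter, which is what the names cntval_bits/cntval_mask are meant
to mirror here, not the kernel code itself): cntval_mask is all-ones over the
counter width, so shifting it right by one gives a max period that is one bit
narrower than the counter, which is Peter's point above.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const int cntval_bits = 48;                               /* assumed counter width */
	const uint64_t cntval_mask = (1ULL << cntval_bits) - 1;   /* 0x0000ffffffffffff */
	const uint64_t max_period  = cntval_mask >> 1;            /* 0x00007fffffffffff, i.e. 47 bits */

	printf("cntval_mask = %#llx\nmax_period  = %#llx\n",
	       (unsigned long long)cntval_mask,
	       (unsigned long long)max_period);
	return 0;
}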

-Andi
> 
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index a74a2dbc0180..cb8522290e6a 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4034,7 +4034,7 @@ __init int intel_pmu_init(void)
>  
>       /* Support full width counters using alternative MSR range */
>       if (x86_pmu.intel_cap.full_width_write) {
> -             x86_pmu.max_period = x86_pmu.cntval_mask;
> +             x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
>               x86_pmu.perfctr = MSR_IA32_PMC0;
>               pr_cont("full-width counters, ");
>       }
