From: Kan Liang <kan.li...@linux.intel.com>

The unconstrained value depends on the number of GP and fixed counters.
Each hybrid PMU should use its own unconstrained.
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Kan Liang <kan.li...@linux.intel.com>
---
 arch/x86/events/intel/core.c | 5 ++++-
 arch/x86/events/perf_event.h | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 33d26ed..39f57ae 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3147,7 +3147,10 @@ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 		}
 	}
 
-	return &unconstrained;
+	if (!is_hybrid() || !cpuc->pmu)
+		return &unconstrained;
+
+	return &hybrid_pmu(cpuc->pmu)->unconstrained;
 }
 
 static struct event_constraint *
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 993f0de..cfb2da0 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -639,6 +639,7 @@ struct x86_hybrid_pmu {
 	int				max_pebs_events;
 	int				num_counters;
 	int				num_counters_fixed;
+	struct event_constraint		unconstrained;
 };
 
 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
-- 
2.7.4