* Vince Weaver <vincent.wea...@maine.edu> wrote:

> I'm also still getting a lot of 
>       perfevents: irq loop stuck!
> messages, I thought the workaround for that had gone in for 4.2 but I 
> guess not.

Hm, so I was waiting for your feedback regarding the precise period cutoff to 
use, and I guess that's where the patch got lost.

Does the value of 2 below work for you?

Also, I bet we'd need the workaround on a lot more CPU models as well: I 
sometimes see that warning on an early Nehalem prototype, model 26 (Nehalem-EP).

So my guess is that everything Nehalem and later is affected, i.e. NHM, WSM, 
SNB, IVB and HSW:

        case 30: /* 45nm Nehalem    */
        case 26: /* 45nm Nehalem-EP */
        case 46: /* 45nm Nehalem-EX */
        case 37: /* 32nm Westmere    */
        case 44: /* 32nm Westmere-EP */
        case 47: /* 32nm Westmere-EX */
        case 42: /* 32nm SandyBridge         */
        case 45: /* 32nm SandyBridge-E/EN/EP */
        case 58: /* 22nm IvyBridge       */
        case 62: /* 22nm IvyBridge-EP/EX */
        case 60: /* 22nm Haswell Core */
        case 63: /* 22nm Haswell Server */
        case 69: /* 22nm Haswell ULT */
        case 70: /* 22nm Haswell + GT3e (Intel Iris Pro graphics) */

Has anyone ever seen that warning on Broadwell and later Intel CPUs?

Thanks,

        Ingo

Signed-off-by: Ingo Molnar <mi...@kernel.org>

---
 arch/x86/kernel/cpu/perf_event_intel.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 960e85de13fb..26b13ea8299c 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2479,6 +2479,15 @@ hsw_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 
        return c;
 }
+/*
+ * Really short periods might create infinite PMC NMI loops on Haswell,
+ * so don't allow a period of 1. There's no official erratum for this AFAIK.
+ */
+static unsigned int hsw_limit_period(struct perf_event *event, unsigned int left)
+{
+       return max(left, 2U);
+}
+
 
 /*
  * Broadwell:
@@ -2495,7 +2504,7 @@ hsw_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
  * Therefore the effective (average) period matches the requested period,
  * despite coarser hardware granularity.
  */
-static unsigned bdw_limit_period(struct perf_event *event, unsigned left)
+static unsigned int bdw_limit_period(struct perf_event *event, unsigned left)
 {
        if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
                        X86_CONFIG(.event=0xc0, .umask=0x01)) {
@@ -3265,6 +3274,7 @@ __init int intel_pmu_init(void)
                x86_pmu.hw_config = hsw_hw_config;
                x86_pmu.get_event_constraints = hsw_get_event_constraints;
                x86_pmu.cpu_events = hsw_events_attrs;
+               x86_pmu.limit_period = hsw_limit_period;
                x86_pmu.lbr_double_abort = true;
                pr_cont("Haswell events, ");
                break;
--