From: Kan Liang <kan.li...@intel.com>

Some users reported spurious NMI watchdog timeouts.

We now have more and more systems where the Turbo range is wide enough
that the NMI watchdog expires faster than the soft watchdog timer that
updates the interrupt tick the NMI watchdog relies on.

This problem was originally introduced by commit 58687acba592
("lockup_detector: Combine nmi_watchdog and softlockup detector").
Previously the NMI watchdog always checked jiffies, which tick fast
enough. But now the backing timer is quite slow, so the expiry time
becomes more sensitive.

For mainline the right fix is to switch the NMI watchdog to reference
cycles, which always tick at the same rate, independent of Turbo mode.
But that requires some complicated changes in perf, which are too
difficult to backport. For stable we just increase the NMI watchdog
period here to avoid the spurious timeouts. This is not an ideal fix,
because a Turbo range larger than 3x could still fail, but for now
that's not likely.

Signed-off-by: Kan Liang <kan.li...@intel.com>
Cc: sta...@vger.kernel.org
Fixes: 58687acba592 ("lockup_detector: Combine nmi_watchdog and softlockup detector")
---

The right fix for mainline can be found here.
perf/x86/intel: enable CPU ref_cycles for GP counter
perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86
https://patchwork.kernel.org/patch/9779087/
https://patchwork.kernel.org/patch/9779089/

Change since V1:
 - Restrict the period in hw_nmi.c for Intel platform. (Don Zickus)

 arch/x86/kernel/apic/hw_nmi.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index c73c9fb281e1..716d44e986f9 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -20,9 +20,15 @@
 #include <linux/delay.h>
 
 #ifdef CONFIG_HARDLOCKUP_DETECTOR
+/*
+ * The NMI watchdog relies on the PERF_COUNT_HW_CPU_CYCLES event, which
+ * can tick faster than the measured CPU frequency due to Turbo mode.
+ * That can lead to spurious timeouts.
+ * To work around the issue, extend the period by a factor of 3.
+ */
 u64 hw_nmi_get_sample_period(int watchdog_thresh)
 {
-       return (u64)(cpu_khz) * 1000 * watchdog_thresh;
+       return (u64)(cpu_khz) * 1000 * watchdog_thresh * 3;
 }
 #endif
 
-- 
2.11.0
