This patch series addresses increased NMI latency on newer AMD processors, which can result in "unknown NMI" messages when PMC counters are active.
The following fixes are included in this series:

- Resolve a race condition when disabling an overflowed PMC counter, specifically when updating the PMC counter with a new value.
- Resolve handling of active PMC counter overflows in the perf NMI handler and when to report that the NMI is not related to a PMC.
- Remove the earlier workaround for spurious NMIs by re-ordering the PMC stop sequence to disable the PMC first and then remove the PMC bit from the active_mask bitmap. As part of disabling the PMC, the code will wait for an overflow to be reset.

The last patch re-works the order in which the PMC is removed from the active_mask. A comment from long ago said the bit in active_mask had to be cleared before disabling the counter because the perf NMI handler could otherwise re-enable the PMC. Looking at the handler today, I don't see how that is possible, hence the reordering. The open question is whether the Intel PMC support will now have issues; the Intel core.c file still supports using x86_pmu_handle_irq(). Did Intel have any issues with spurious NMIs in the past? Peter Z, any thoughts on this?

Also, I couldn't completely get rid of the "running" bit because it is used by arch/x86/events/intel/p4.c. An old commit message seems to indicate that the p4 code suffered from spurious interrupts: 03e22198d237 ("perf, x86: Handle in flight NMIs on P4 platform"). So maybe that partially answers my previous question...

---

Changes from v2 (based on feedback from Peter Z):
- Simplified the AMD-specific disable_all callback by calling the common x86_pmu_disable_all() function and then checking for, and waiting on the reset of, any overflowed PMCs.
- Removed an erroneous check for no active counters in the NMI latency mitigation patch, which effectively nullified commit 63e6be6d98e1.
- Reworked x86_pmu_stop() in order to remove 63e6be6d98e1.
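To make the reordered stop sequence concrete, here is a minimal user-space simulation of the pattern: disable the counter first, spin until any in-flight overflow has drained, and only then clear the active_mask bit. All names here (pmc_stop(), fake_pmc_ctrl, OVERFLOW_PENDING, the drain model) are illustrative stand-ins, not the kernel's actual symbols or MSR semantics.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative stand-ins for PMC state; not the kernel's symbols. */
#define OVERFLOW_PENDING (1ULL << 47)

static uint64_t fake_pmc_ctrl;   /* simulated control register, bit 0 = enable */
static uint64_t fake_pmc_count;  /* simulated counter register */
static uint64_t active_mask;     /* simulated active-counter bitmap */

/* Simulated hardware: once the enable bit is cleared, a pending
 * overflow drains after a few reads of the counter. */
static int drain_reads;

static uint64_t read_counter(void)
{
	if (!(fake_pmc_ctrl & 1) && (fake_pmc_count & OVERFLOW_PENDING)) {
		if (++drain_reads >= 3)
			fake_pmc_count &= ~OVERFLOW_PENDING;
	}
	return fake_pmc_count;
}

/* The reordered stop sequence described above. */
static void pmc_stop(int idx)
{
	int i;

	fake_pmc_ctrl &= ~1ULL;			/* 1. disable the PMC */

	for (i = 0; i < 50; i++) {		/* 2. bounded wait for the
						 *    overflow to be reset */
		if (!(read_counter() & OVERFLOW_PENDING))
			break;
	}

	active_mask &= ~(1ULL << idx);		/* 3. only now clear the
						 *    active_mask bit */
}
```

The bounded loop matters: if the overflow never drains, the stop path must still make progress rather than hang, mirroring the idea that the wait in the real patch is a mitigation, not an unbounded spin.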
Changes from v1 (based on feedback from Peter Z):
- Created an AMD-specific disable_all callback function to handle the disabling of the counters and resolve the race condition.
- Created an AMD-specific handle_irq callback function that invokes the common x86_pmu_handle_irq() function and then performs the NMI latency mitigation.
- Take into account the possibility of non-perf NMI sources when applying the mitigation.

This patch series is based on the perf/core branch of tip:
  https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
Commit c978b9460fe1 ("Merge tag 'perf-core-for-mingo-5.1-20190225' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core")
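The handle_irq mitigation described above can be sketched as follows: after the common handler services one or more overflows, remember a timestamp; a later NMI that finds no overflowed counter is swallowed only if it arrives within a short window of that timestamp, so genuinely unrelated NMIs still fall through to other handlers. The window length, function name, and timestamp plumbing below are assumptions for illustration, not the actual patch.

```c
#include <stdint.h>

/* Illustrative window; the real mitigation's bound may differ. */
#define NMI_WINDOW_NS 100000000ULL

static uint64_t last_handled_ns;

/* handled: number of overflowed PMCs the common handler serviced.
 * now_ns:  current timestamp.
 * Returns nonzero if this NMI should be claimed as handled. */
static int nmi_latency_mitigation(int handled, uint64_t now_ns)
{
	if (handled) {
		/* Real work was done: start (or restart) the window. */
		last_handled_ns = now_ns;
		return handled;
	}

	/* No counter overflowed: if we serviced one recently, assume
	 * this NMI was raised for an overflow we already handled and
	 * swallow it instead of reporting an unknown NMI. */
	if (now_ns - last_handled_ns < NMI_WINDOW_NS)
		return 1;

	return 0;	/* not ours: let other NMI sources be checked */
}
```

The key design point is the third bullet above: the window only suppresses the "unknown NMI" path for NMIs that closely follow a handled overflow, so non-perf NMI sources outside the window are still delivered normally.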