Re: [PATCH v4 12/17] watchdog/hardlockup: Have the perf hardlockup use __weak functions more cleanly

2023-05-12 Thread Petr Mladek
On Thu 2023-05-04 15:13:44, Douglas Anderson wrote:
> The fact that watchdog_hardlockup_enable(),
> watchdog_hardlockup_disable(), and watchdog_hardlockup_probe() are
> declared __weak means that the configured hardlockup detector can
> define non-weak versions of those functions if it needs to. Instead of
> doing this, the perf hardlockup detector hooked itself into the
> default __weak implementation, which was a bit awkward. Clean this up.
> 
> From the comments, it looks as if the original design was done because the
> __weak functions were expected to be implemented by the architecture and
> not by the configured hardlockup detector. This got awkward when we
> tried to add the buddy lockup detector which was not arch-specific but
> wanted to hook into those same functions.
> 
> This is not expected to have any functional impact.
> 
> Signed-off-by: Douglas Anderson 

I like this change:

Reviewed-by: Petr Mladek 

See a comment below.

> --- a/kernel/watchdog_perf.c
> +++ b/kernel/watchdog_perf.c
> @@ -147,12 +151,16 @@ void hardlockup_detector_perf_enable(void)
>  }
>  
>  /**
> - * hardlockup_detector_perf_disable - Disable the local event
> + * watchdog_hardlockup_disable - Disable the local event
> + *
> + * @cpu: The CPU to disable hard lockup on.
>   */
> -void hardlockup_detector_perf_disable(void)
> +void watchdog_hardlockup_disable(unsigned int cpu)
>  {
>   struct perf_event *event = this_cpu_read(watchdog_ev);
>  
> + WARN_ON_ONCE(cpu != smp_processor_id());
> +

It makes sense. But it just shows how weird the code is:
@cpu is passed as a parameter, yet the code expects to be
running on that very CPU.

It seems that @cpu is passed as a parameter because this is
called from:

  + [CPUHP_AP_WATCHDOG_ONLINE].teardown.single()
    + lockup_detector_offline_cpu()
      + watchdog_disable()

and the CPU hotplug API passes the @cpu parameter.
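
For context, the hotplug state behind this looks roughly like the
following entry in kernel/cpu.c (abbreviated sketch, not a verbatim
quote); the hotplug core hands the CPU number to these callbacks, which
is where @cpu ultimately comes from:

	/* abbreviated sketch of the cpuhp state table entry */
	[CPUHP_AP_WATCHDOG_ONLINE] = {
		.name			= "lockup_detector:online",
		.startup.single		= lockup_detector_online_cpu,
		.teardown.single	= lockup_detector_offline_cpu,
	},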

IMHO, the clean solution would be to use the per_cpu*() API instead
of this_cpu*() everywhere in this code path.

But that is yet another cleanup. It seems to be out of scope for
this patchset.
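
For illustration, the disable path might then look something like the
sketch below. It is only a sketch: whether perf_event_disable() can
safely be called for another CPU's event without running there would
need checking first.

	void watchdog_hardlockup_disable(unsigned int cpu)
	{
		/* Sketch only: address the event via @cpu instead of
		 * assuming we are running on that CPU. watchdog_ev is
		 * the existing per-CPU pointer in kernel/watchdog_perf.c.
		 */
		struct perf_event *event = per_cpu(watchdog_ev, cpu);

		if (event) {
			perf_event_disable(event);
			per_cpu(watchdog_ev, cpu) = NULL;
		}
	}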

>   if (event) {
>   perf_event_disable(event);
>   this_cpu_write(watchdog_ev, NULL);

Best Regards,
Petr


[PATCH v4 12/17] watchdog/hardlockup: Have the perf hardlockup use __weak functions more cleanly

2023-05-04 Thread Douglas Anderson
The fact that watchdog_hardlockup_enable(),
watchdog_hardlockup_disable(), and watchdog_hardlockup_probe() are
declared __weak means that the configured hardlockup detector can
define non-weak versions of those functions if it needs to. Instead of
doing this, the perf hardlockup detector hooked itself into the
default __weak implementation, which was a bit awkward. Clean this up.

From the comments, it looks as if the original design was done because the
__weak functions were expected to be implemented by the architecture and
not by the configured hardlockup detector. This got awkward when we
tried to add the buddy lockup detector which was not arch-specific but
wanted to hook into those same functions.

This is not expected to have any functional impact.

Signed-off-by: Douglas Anderson 
---

Changes in v4:
- ("Have the perf hardlockup use __weak ...") new for v4.

 include/linux/nmi.h    | 10 --
 kernel/watchdog.c      | 30 ++
 kernel/watchdog_perf.c | 20 ++--
 3 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index 4ff48f189ab1..094a0e7ed97d 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -103,21 +103,11 @@ static inline void arch_touch_nmi_watchdog(void) { }
 #if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
 extern void hardlockup_detector_perf_stop(void);
 extern void hardlockup_detector_perf_restart(void);
-extern void hardlockup_detector_perf_disable(void);
-extern void hardlockup_detector_perf_enable(void);
 extern void hardlockup_detector_perf_cleanup(void);
-extern int hardlockup_detector_perf_init(void);
 #else
 static inline void hardlockup_detector_perf_stop(void) { }
 static inline void hardlockup_detector_perf_restart(void) { }
-static inline void hardlockup_detector_perf_disable(void) { }
-static inline void hardlockup_detector_perf_enable(void) { }
 static inline void hardlockup_detector_perf_cleanup(void) { }
-# if !defined(CONFIG_HAVE_NMI_WATCHDOG)
-static inline int hardlockup_detector_perf_init(void) { return -ENODEV; }
-# else
-static inline int hardlockup_detector_perf_init(void) { return 0; }
-# endif
 #endif
 
 void watchdog_hardlockup_stop(void);
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 8e11b2b69e2c..e21896a0a9d5 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -190,27 +190,33 @@ static inline void watchdog_hardlockup_interrupt_count(void) { }
 #endif /* !CONFIG_HARDLOCKUP_DETECTOR_PERF */
 
 /*
- * These functions can be overridden if an architecture implements its
- * own hardlockup detector.
+ * These functions can be overridden based on the configured hardlockup detector.
  *
  * watchdog_hardlockup_enable/disable can be implemented to start and stop when
- * softlockup watchdog start and stop. The arch must select the
+ * softlockup watchdog start and stop. The detector must select the
  * SOFTLOCKUP_DETECTOR Kconfig.
  */
-void __weak watchdog_hardlockup_enable(unsigned int cpu)
-{
-   hardlockup_detector_perf_enable();
-}
+void __weak watchdog_hardlockup_enable(unsigned int cpu) { }
 
-void __weak watchdog_hardlockup_disable(unsigned int cpu)
-{
-   hardlockup_detector_perf_disable();
-}
+void __weak watchdog_hardlockup_disable(unsigned int cpu) { }
 
 /* Return 0, if a hardlockup watchdog is available. Error code otherwise */
 int __weak __init watchdog_hardlockup_probe(void)
 {
-   return hardlockup_detector_perf_init();
+   /*
+* If CONFIG_HAVE_NMI_WATCHDOG is defined then an architecture
+* is assumed to have the hard watchdog available and we return 0.
+*/
+   if (IS_ENABLED(CONFIG_HAVE_NMI_WATCHDOG))
+   return 0;
+
+   /*
+* Hardlockup detectors other than those using CONFIG_HAVE_NMI_WATCHDOG
+* are required to implement a non-weak version of this probe function
+* to tell whether they are available. If they don't override then
+* we'll return -ENODEV.
+*/
+   return -ENODEV;
 }
 
 /**
diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
index aadc52b79f5b..a55a6eab1b3a 100644
--- a/kernel/watchdog_perf.c
+++ b/kernel/watchdog_perf.c
@@ -132,10 +132,14 @@ static int hardlockup_detector_event_create(void)
 }
 
 /**
- * hardlockup_detector_perf_enable - Enable the local event
+ * watchdog_hardlockup_enable - Enable the local event
+ *
+ * @cpu: The CPU to enable hard lockup on.
  */
-void hardlockup_detector_perf_enable(void)
+void watchdog_hardlockup_enable(unsigned int cpu)
 {
+   WARN_ON_ONCE(cpu != smp_processor_id());
+
if (hardlockup_detector_event_create())
return;
 
@@ -147,12 +151,16 @@ void hardlockup_detector_perf_enable(void)
 }
 
 /**
- * hardlockup_detector_perf_disable - Disable the local event
+ * watchdog_hardlockup_disable - Disable the local event
+ *
> + * @cpu: The CPU to disable hard lockup on.
  */
-void hardlockup_detector_perf_disable(void)
+void watchdog_hardlockup_disable(unsigned int cpu)