There are two call-sites where using static_key results in recursing on the
cpu_hotplug_lock: static_key_slow_inc() takes the hotplug lock internally,
and both call-sites are reached with that lock already held.
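
For example, on the software-event path the recursion looks roughly like
this (an illustrative chain, assuming the perf_event_open() syscall path
takes the hotplug lock as elsewhere in this series; intermediate frames
elided, cpus_read_lock() naming per the hotplug locking rework):

  perf_event_open()
    cpus_read_lock()            <- cpu_hotplug_lock, read side
    ...
    perf_swevent_init()
      static_key_slow_inc()
        cpus_read_lock()        <- recursion, can deadlock against a
                                   pending writer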

Use the hotplug-locked version, static_key_slow_inc_cpuslocked(), which
expects the caller to already hold cpu_hotplug_lock instead of taking it
again.
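
A minimal sketch of the resulting calling convention (example_key and
example_enable() are hypothetical; only static_key_slow_inc_cpuslocked()
and the .key member access are taken from this patch):

  #include <linux/cpu.h>         /* cpus_read_lock() */
  #include <linux/jump_label.h>  /* static keys */

  static DEFINE_STATIC_KEY_FALSE(example_key);

  static void example_enable(void)
  {
          cpus_read_lock();       /* cpu_hotplug_lock, read side */

          /*
           * static_key_slow_inc() would take cpu_hotplug_lock again
           * here; the _cpuslocked variant expects the caller to
           * already hold it.
           */
          static_key_slow_inc_cpuslocked(&example_key.key);

          cpus_read_unlock();
  }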

Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: jba...@akamai.com
Cc: bige...@linutronix.de
Cc: rost...@goodmis.org
Link: http://lkml.kernel.org/r/20170418103422.687248...@infradead.org
Signed-off-by: Thomas Gleixner <t...@linutronix.de>

---
 kernel/events/core.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7653,7 +7653,7 @@ static int perf_swevent_init(struct perf
                if (err)
                        return err;
 
-               static_key_slow_inc(&perf_swevent_enabled[event_id]);
+               static_key_slow_inc_cpuslocked(&perf_swevent_enabled[event_id]);
                event->destroy = sw_perf_event_destroy;
        }
 
@@ -9160,7 +9160,7 @@ static void account_event(struct perf_ev
 
                mutex_lock(&perf_sched_mutex);
                if (!atomic_read(&perf_sched_count)) {
-                       static_branch_enable(&perf_sched_events);
+                       static_key_slow_inc_cpuslocked(&perf_sched_events.key);
                        /*
                         * Guarantee that all CPUs observe they key change and
                         * call the perf scheduling hooks before proceeding to

