On Tue, 9 Jan 2018 17:14:00 +0100 Peter Zijlstra <[email protected]> wrote:

> On Tue, Jan 09, 2018 at 04:12:53PM +0100, Peter Zijlstra wrote:
>
> > In any case, I found yet another lockdep splat, trying to figure out wth
> > to do about that.
>
> An of course, testing this one yields yet another lockdep splat..
> onwards to #3 :/
>
> ---
> Subject: perf: Fix another perf,trace,cpuhp lock inversion
> From: Peter Zijlstra <[email protected]>
> Date: Tue Jan 9 17:07:59 CET 2018
>
> Lockdep complained about:
>
>         perf_trace_init()
>  #0       mutex_lock(&event_mutex)
>           perf_trace_event_init()
>             perf_trace_event_reg()
>               tp_event->class->reg() := tracepoint_probe_register
>  #1              mutex_lock(&tracepoints_mutex)
>                    trace_point_add_func()
>  #2                  static_key_enable()
>
>
>  #2     do_cpu_up()
>           perf_event_init_cpu()
>  #3         mutex_lock(&pmus_lock)
>  #4         mutex_lock(&ctx->mutex)
>
>
>         perf_ioctl()
>  #4       ctx = perf_event_ctx_lock()
>           _perf_iotcl()
>             ftrace_profile_set_filter()
>  #0           mutex_lock(&event_mutex)
>
>
> Fudge it for now by noting that the tracepoint state does not depend
> on the event <-> context relation. Ugly though :/

Looking at ftrace_profile_set_filter(), I see it starts with:

        mutex_lock(&event_mutex);

How much of a big deal would it be if we move taking event_mutex into
perf_ioctl(), and then make ftrace_profile_set_filter() not take
event_mutex itself? This is the only place that function is used.

Would that work? Below is a patch (totally untested, not even compiled).

-- Steve

Signed-off-by: Steven Rostedt (VMware) <[email protected]>
---
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4df5b695bf0d..9fac7ac14b32 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4741,9 +4741,11 @@ static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	struct perf_event_context *ctx;
 	long ret;
 
+	mutex_lock(&event_mutex);
 	ctx = perf_event_ctx_lock(event);
 	ret = _perf_ioctl(event, cmd, arg);
 	perf_event_ctx_unlock(event, ctx);
+	mutex_unlock(&event_mutex);
 
 	return ret;
 }
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 61e7f0678d33..46c2e5d20662 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2256,6 +2256,7 @@ static int ftrace_function_set_filter(struct perf_event *event,
 }
 #endif /* CONFIG_FUNCTION_TRACER */
 
+/* Must have event_mutex held */
 int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 			      char *filter_str)
 {
@@ -2263,17 +2264,14 @@ int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 	struct event_filter *filter;
 	struct trace_event_call *call;
 
-	mutex_lock(&event_mutex);
+	lockdep_assert_held(&event_mutex);
 
 	call = event->tp_event;
-
-	err = -EINVAL;
 	if (!call)
-		goto out_unlock;
+		return -EINVAL;
 
-	err = -EEXIST;
 	if (event->filter)
-		goto out_unlock;
+		return -EEXIST;
 
 	err = create_filter(call, filter_str, false, &filter);
 	if (err)
@@ -2288,9 +2286,6 @@ int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 	if (err || ftrace_event_is_function(call))
 		__free_filter(filter);
 
-out_unlock:
-	mutex_unlock(&event_mutex);
-
 	return err;
 }
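
For reference, a minimal sketch of the locking pattern the patch is going
for (not part of the patch above; all names here are made up for
illustration): the outer entry point takes the mutex before the context
lock, and the inner helper only asserts that it is held instead of taking
it itself, so there is one fixed lock order.

#include <linux/mutex.h>
#include <linux/lockdep.h>

static DEFINE_MUTEX(my_event_mutex);
static DEFINE_MUTEX(my_ctx_mutex);

/* Inner helper: documents and checks its locking precondition. */
static int my_filter_work(void)
{
	/* Warns under lockdep if the caller forgot to take the mutex. */
	lockdep_assert_held(&my_event_mutex);

	/* ... work that previously took my_event_mutex itself ... */
	return 0;
}

/* Outer entry point: establishes my_event_mutex -> my_ctx_mutex once. */
static long my_ioctl(void)
{
	long ret;

	mutex_lock(&my_event_mutex);	/* always taken first */
	mutex_lock(&my_ctx_mutex);
	ret = my_filter_work();
	mutex_unlock(&my_ctx_mutex);
	mutex_unlock(&my_event_mutex);

	return ret;
}

Applied to the patch above, perf_ioctl() would always take event_mutex
before perf_event_ctx_lock(), which should remove the #4 -> #0 edge that
completed the cycle in the lockdep report.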

