From: "Steven Rostedt (VMware)" <rost...@goodmis.org>

If an ftrace callback requires RCU to be watching, it sets the
FTRACE_OPS_FL_RCU flag so that it is not called when RCU is not "watching".
But this means the callback is invoked through a trampoline, which slows
down function tracing a tad. By checking rcu_is_watching() from within the
callback, it no longer needs the RCU flag set in the ftrace_ops and it can
be safely called directly.
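
As a rough illustration of the resulting pattern (not taken from this
patch: my_ops and my_callback are made-up names, and the type of the
callback's last argument differs across kernel versions):

  #include <linux/ftrace.h>
  #include <linux/rcupdate.h>

  static void my_callback(unsigned long ip, unsigned long parent_ip,
                          struct ftrace_ops *ops, struct pt_regs *regs)
  {
          /* Open-coded check replaces the FTRACE_OPS_FL_RCU flag. */
          if (!rcu_is_watching())
                  return;

          /* ... tracing work that may rely on RCU ... */
  }

  static struct ftrace_ops my_ops = {
          .func = my_callback,
          /*
           * No FTRACE_OPS_FL_RCU here: the callback checks RCU itself,
           * so ftrace can call it directly rather than going through
           * the RCU-checking trampoline.
           */
  };

Registering such an ops with register_ftrace_function(&my_ops) would then
hook the callback in without the trampoline overhead.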

Link: https://lkml.kernel.org/r/20201028115613.591878...@goodmis.org
Link: https://lkml.kernel.org/r/20201106023547.711035...@goodmis.org

Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Cc: Jiri Kosina <ji...@kernel.org>
Cc: Miroslav Benes <mbe...@suse.cz>
Cc: Petr Mladek <pmla...@suse.com>
Cc: Masami Hiramatsu <mhira...@kernel.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Jiri Olsa <jo...@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rost...@goodmis.org>
---
 kernel/trace/trace_event_perf.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index fd58d83861d8..a2b9fddb8148 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -441,6 +441,9 @@ perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
        int rctx;
        int bit;
 
+       if (!rcu_is_watching())
+               return;
+
        if ((unsigned long)ops->private != smp_processor_id())
                return;
 
@@ -484,7 +487,6 @@ static int perf_ftrace_function_register(struct perf_event *event)
 {
        struct ftrace_ops *ops = &event->ftrace_ops;
 
-       ops->flags   = FTRACE_OPS_FL_RCU;
        ops->func    = perf_ftrace_function_call;
        ops->private = (void *)(unsigned long)nr_cpu_ids;
 
-- 
2.28.0

