On Tue, Aug 05, 2025 at 08:23:12PM +0800, Tao Chen wrote:
> The bpf program should run with migration disabled. kprobe_multi_link_prog_run
> is called all the way from the graph tracer, which disables preemption in
> function_graph_enter_regs, so, as Jiri and Yonghong suggested, there is no
> need to use migrate_disable. As a result, some overhead may be reduced.
> 
> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
> Signed-off-by: Tao Chen <[email protected]>

Acked-by: Jiri Olsa <[email protected]>

thanks,
jirka


> ---
>  kernel/trace/bpf_trace.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 3ae52978cae..1993fc62539 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>               goto out;
>       }
>  
> -     migrate_disable();
> +     /*
> +      * bpf program should run under migration disabled, kprobe_multi_link_prog_run
> +      * called the way from graph tracer, which disables preemption in

nit, s/called the way/called all the way/


> +      * function_graph_enter_regs, so there is no need to use migrate_disable.
> +      * Accessing the above percpu data bpf_prog_active is also safe for the same
> +      * reason.
> +      */
>       rcu_read_lock();
>       regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>       old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>       err = bpf_prog_run(link->link.prog, regs);
>       bpf_reset_run_ctx(old_run_ctx);
>       rcu_read_unlock();
> -     migrate_enable();
>  
>   out:
>       __this_cpu_dec(bpf_prog_active);
> -- 
> 2.48.1
> 
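
For readers of the archive, a minimal sketch of the invariant the change
relies on. This is illustrative pseudocode, not the actual kernel source:
the placement of preempt_disable() follows the changelog's description of
function_graph_enter_regs, the call-path comment is a simplified assumption,
and the body abbreviates kprobe_multi_link_prog_run from the diff above.
The point is that disabling preemption also forbids migration, so both
bpf_prog_run (whose __bpf_prog_run helper asserts cant_migrate on
CONFIG_DEBUG_ATOMIC_SLEEP kernels) and the __this_cpu_*() accounting stay
on one CPU without an extra migrate_disable()/migrate_enable() pair:

	preempt_disable();	/* per the changelog: function_graph_enter_regs */
	...
		/* fprobe eventually reaches kprobe_multi_link_prog_run() */
		if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1))
			goto out;	/* safe: this CPU cannot change under us */

		rcu_read_lock();
		err = bpf_prog_run(link->link.prog, regs);	/* cant_migrate() holds */
		rcu_read_unlock();
	out:
		__this_cpu_dec(bpf_prog_active);
	...
	preempt_enable();	/* back in the graph tracer */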
