bpf syscall map operations call bpf_disable_instrumentation() for the
reason described in the comment above that function, and that description
matches this bug. The function increments the per-CPU variable
bpf_prog_active, but the variable is never checked in the bpf raw
tracepoint path. Fix this by handling bpf_prog_active the same way the
kprobe path does: skip running the eBPF program when instrumentation is
disabled on the current CPU. This slightly degrades bpf tracing, since
some trace events are dropped, but prevents the deadlock they could
otherwise cause.

Reported-by: syzbot+9d95beb2a3c260622...@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=9d95beb2a3c260622518
Link: https://lore.kernel.org/all/000000000000adb08b0614139...@google.com/T/
Signed-off-by: Wojciech Gładysz <wojciech.glad...@infogain.com>
---
 kernel/trace/bpf_trace.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 6249dac61701..8de2e084b162 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2391,7 +2391,9 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
        struct bpf_trace_run_ctx run_ctx;
 
        cant_sleep();
-       if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+
+       /* Bail out if instrumentation is disabled or we recursed on this CPU */
+       if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
                bpf_prog_inc_misses_counter(prog);
                goto out;
        }
@@ -2405,7 +2407,7 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
 
        bpf_reset_run_ctx(old_run_ctx);
 out:
-       this_cpu_dec(*(prog->active));
+       __this_cpu_dec(bpf_prog_active);
 }
 
 #define UNPACK(...)                    __VA_ARGS__
-- 
2.35.3