Add a might_fault() check to validate that the BPF sys_enter/sys_exit
probe callbacks are indeed invoked from a context where page faults
can be handled.
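
With CONFIG_DEBUG_ATOMic_SLEEP=y, might_fault() warns if it is reached
with page faults disabled or from a non-sleepable context, so the check
must precede the preempt_notrace guard. As an illustration only (a
sketch, not part of this patch), the macro roughly expands as follows
for the sys_enter tracepoint, assuming its TP_PROTO of
(struct pt_regs *regs, long id) from include/trace/events/syscalls.h
and that CAST_TO_U64 reduces to per-argument u64 casts:

  static notrace void
  __bpf_trace_sys_enter(void *__data, struct pt_regs *regs, long id)
  {
          might_fault();            /* assert a faultable, sleepable context */
          guard(preempt_notrace)(); /* then disable preemption for the run */
          bpf_trace_run2(__data, (u64)regs, (u64)id);
  }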

Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Acked-by: Andrii Nakryiko <and...@kernel.org>
Tested-by: Andrii Nakryiko <and...@kernel.org> # BPF parts
Cc: Michael Jeanson <mjean...@efficios.com>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Masami Hiramatsu <mhira...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Alexei Starovoitov <a...@kernel.org>
Cc: Yonghong Song <y...@fb.com>
Cc: Paul E. McKenney <paul...@kernel.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Arnaldo Carvalho de Melo <a...@kernel.org>
Cc: Mark Rutland <mark.rutl...@arm.com>
Cc: Alexander Shishkin <alexander.shish...@linux.intel.com>
Cc: Namhyung Kim <namhy...@kernel.org>
Cc: Andrii Nakryiko <andrii.nakry...@gmail.com>
Cc: b...@vger.kernel.org
Cc: Joel Fernandes <j...@joelfernandes.org>
---
 include/trace/bpf_probe.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index 211b98d45fc6..099df5c3e38a 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -57,6 +57,7 @@ __bpf_trace_##call(void *__data, proto)			\
 static notrace void							\
 __bpf_trace_##call(void *__data, proto)				\
 {									\
+	might_fault();							\
 	guard(preempt_notrace)();					\
 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));	\
 }
-- 
2.39.2

