There is no need to prohibit probing of the functions used for preparation. They can be safely probed because they are not invoked from the breakpoint/fault/debug handlers, so there is no chance of causing recursive exceptions.
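For context, __kprobes works by placing the annotated function into a dedicated
text section that the kprobes core treats as a blacklist. A simplified sketch of
the mechanism (the real definitions live in include/linux/compiler.h and the
kprobes core; the helper below is illustrative, not the literal kernel code):

	/* Historically, __kprobes moves a function into .kprobes.text: */
	#ifdef CONFIG_KPROBES
	# define __kprobes	__attribute__((__section__(".kprobes.text")))
	#else
	# define __kprobes
	#endif

	/* The kprobes core rejects probes whose address falls inside the
	 * .kprobes.text section, bounded by these linker symbols.
	 */
	extern char __kprobes_text_start[], __kprobes_text_end[];

	/* Illustrative helper, not actual kernel code: */
	static bool blacklisted(unsigned long addr)
	{
		return addr >= (unsigned long)__kprobes_text_start &&
		       addr <  (unsigned long)__kprobes_text_end;
	}

Dropping the annotation simply moves these functions back into normal .text, so
probe registration no longer refuses their addresses.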
The following functions are now removed from the kprobes blacklist:

  update_bitfield_fetch_param
  free_bitfield_fetch_param
  kprobe_register

Signed-off-by: Masami Hiramatsu <masami.hiramatsu...@hitachi.com>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Frederic Weisbecker <fweis...@gmail.com>
Cc: Ingo Molnar <mi...@redhat.com>
---
 kernel/trace/trace_kprobe.c | 2 +-
 kernel/trace/trace_probe.c  | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 243f683..e0132b4 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1151,7 +1151,7 @@ kretprobe_perf_func(struct trace_probe *tp, struct kretprobe_instance *ri,
  * kprobe_trace_self_tests_init() does enable_trace_probe/disable_trace_probe
  * lockless, but we can't race with this __init function.
  */
-static __kprobes
+static
 int kprobe_register(struct ftrace_event_call *event, enum trace_reg type,
 		    void *data)
 {
diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
index 412e959..43638a2 100644
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -346,7 +346,7 @@ DEFINE_BASIC_FETCH_FUNCS(bitfield)
 #define fetch_bitfield_string NULL
 #define fetch_bitfield_string_size NULL
 
-static __kprobes void
+static void
 update_bitfield_fetch_param(struct bitfield_fetch_param *data)
 {
 	/*
@@ -359,7 +359,7 @@ update_bitfield_fetch_param(struct bitfield_fetch_param *data)
 		update_symbol_cache(data->orig.data);
 }
 
-static __kprobes void
+static void
 free_bitfield_fetch_param(struct bitfield_fetch_param *data)
 {
 	/*
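With the annotation gone, a probe can be attached to these functions like any
other kernel symbol. A minimal test module along these lines (a sketch for
verification, not part of this patch) should now succeed where registration
previously failed:

	#include <linux/module.h>
	#include <linux/kprobes.h>

	/* Pre-handler: runs just before the probed instruction executes. */
	static int handler_pre(struct kprobe *p, struct pt_regs *regs)
	{
		pr_info("kprobe hit: %s at %p\n", p->symbol_name, p->addr);
		return 0;
	}

	/* Probe one of the newly un-blacklisted functions. */
	static struct kprobe kp = {
		.symbol_name = "kprobe_register",
		.pre_handler = handler_pre,
	};

	static int __init test_probe_init(void)
	{
		/* Fails (e.g. -EINVAL) if the symbol is still blacklisted. */
		return register_kprobe(&kp);
	}

	static void __exit test_probe_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(test_probe_init);
	module_exit(test_probe_exit);
	MODULE_LICENSE("GPL");

Equivalently, the dynamic event interface (which trace_kprobe.c itself
implements) can be used from the shell, e.g.
"echo 'p:test kprobe_register' >> /sys/kernel/debug/tracing/kprobe_events".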