On Tue, Dec 18, 2018 at 06:24:35PM +0100, Andrea Righi wrote:
> On Tue, Dec 18, 2018 at 01:50:26PM +0900, Masami Hiramatsu wrote:
> ...
> > > Side question: there are certain symbols in arch/x86/xen that should be
> > > blacklisted explicitly, because they're non-attachable.
> > >
> > > More exactly, all functions defined in arch/x86/xen/spinlock.c,
> > > arch/x86/xen/time.c and arch/x86/xen/irq.c.
> > >
> > > The reason is that these files are compiled without -pg to allow the
> > > usage of ftrace within a Xen domain apparently (from
> > > arch/x86/xen/Makefile):
> > >
> > > ifdef CONFIG_FUNCTION_TRACER
> > > # Do not profile debug and lowlevel utilities
> > > CFLAGS_REMOVE_spinlock.o = -pg
> > > CFLAGS_REMOVE_time.o = -pg
> > > CFLAGS_REMOVE_irq.o = -pg
> > > endif
> >
> > Actually, the reason why you can not probe those functions via
> > tracing/kprobe_events is just a side effect. You can probe it if you
> > write a kprobe module. Since the kprobe_events depends on some ftrace
> > tracing functions, it sometimes cause a recursive call problem. To avoid
> > this issue, I have introduced a CONFIG_KPROBE_EVENTS_ON_NOTRACE, see
> > commit 45408c4f9250 ("tracing: kprobes: Prohibit probing on notrace
> > function").
> >
> > If you set CONFIG_KPROBE_EVENTS_ON_NOTRACE=n, you can continue putting
> > probes on Xen spinlock functions too.
>
> OK.
>
> > > Do you see a nice and clean way to blacklist all these functions
> > > (something like arch_populate_kprobe_blacklist()), or should we just
> > > flag all of them explicitly with NOKPROBE_SYMBOL()?
> >
> > As I pointed, you can probe it via your own kprobe module. Like systemtap,
> > you still can probe it. The blacklist is for "kprobes", not for
> > "kprobe_events".
> > (Those are used to same, but since the above commit, those are different
> > now)
> >
> > I think the most sane solution is, identifying which (combination of)
> > functions in ftrace (kernel/trace/*) causes a problem, marking those
> > NOKPROBE_SYMBOL() and removing CONFIG_KPROBE_EVENTS_ON_NOTRACE.
I'm planning to spend a bit more time on this to see if I can identify
the problematic ftrace functions and eventually drop
CONFIG_KPROBE_EVENTS_ON_NOTRACE, following the sane solution.

In the meantime, however, the following patch has given me a more
reliable kprobes blacklist, and it also shows the notrace functions in
debugfs when CONFIG_KPROBE_EVENTS_ON_NOTRACE is off.

It's probably ugly and inefficient, because it iterates over all symbols
in x86's arch_populate_kprobe_blacklist(), but it works for my specific
use case, so I thought it couldn't hurt to share it, in case anyone else
is interested.

Thanks,

From: Andrea Righi <righi.and...@gmail.com>
Subject: [PATCH] x86: kprobes: automatically blacklist all non-traceable functions

Iterate over all symbols to detect those that are non-traceable and
blacklist them.

Signed-off-by: Andrea Righi <righi.and...@gmail.com>
---
 arch/x86/kernel/kprobes/core.c | 11 +++++++++--
 kernel/kprobes.c               | 22 ++++++++++++++++++++--
 2 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 4ba75afba527..8cc7191ba3f9 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1026,10 +1026,17 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 }
 NOKPROBE_SYMBOL(kprobe_fault_handler);
 
+static int do_kprobes_arch_blacklist(void *data, const char *name,
+				     struct module *mod, unsigned long addr)
+{
+	if (arch_within_kprobe_blacklist(addr))
+		kprobe_add_ksym_blacklist(addr);
+	return 0;
+}
+
 int __init arch_populate_kprobe_blacklist(void)
 {
-	return kprobe_add_area_blacklist((unsigned long)__entry_text_start,
-					 (unsigned long)__entry_text_end);
+	return kallsyms_on_each_symbol(do_kprobes_arch_blacklist, NULL);
 }
 
 int __init arch_init_kprobes(void)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index f4ddfdd2d07e..2e824cd536ba 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1389,11 +1389,29 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
 	return ret;
 }
 
+#if defined(CONFIG_KPROBES_ON_FTRACE) && \
+	!defined(CONFIG_KPROBE_EVENTS_ON_NOTRACE)
+static bool within_notrace(unsigned long addr)
+{
+	unsigned long offset, size;
+
+	if (!kallsyms_lookup_size_offset(addr, &size, &offset))
+		return true;
+	return !ftrace_location_range(addr - offset, addr - offset + size);
+}
+#else
+static bool within_notrace(unsigned long addr)
+{
+	return false;
+}
+#endif
+
 bool __weak arch_within_kprobe_blacklist(unsigned long addr)
 {
 	/* The __kprobes marked functions and entry code must not be probed */
-	return addr >= (unsigned long)__kprobes_text_start &&
-	       addr < (unsigned long)__kprobes_text_end;
+	return (addr >= (unsigned long)__kprobes_text_start &&
+		addr < (unsigned long)__kprobes_text_end) ||
+		within_notrace(addr);
 }
 
 bool within_kprobe_blacklist(unsigned long addr)
-- 
2.17.1