On Thu, 29 Oct 2020 16:58:03 +0900
Masami Hiramatsu <mhira...@kernel.org> wrote:

> Hi Steve,
> 
> On Wed, 28 Oct 2020 07:52:49 -0400
> Steven Rostedt <rost...@goodmis.org> wrote:
> 
> > From: "Steven Rostedt (VMware)" <rost...@goodmis.org>
> > 
> > If a ftrace callback does not supply its own recursion protection and
> > does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
> > make a helper trampoline to do so before calling the callback instead of
> > just calling the callback directly.  
> 
> So in that case the handlers will be called without preempt disabled?
> 
> 
> > The default for ftrace_ops is going to assume recursion protection unless
> > otherwise specified.  
> 
> This seems to skip the entire handler if ftrace finds recursion.
> I would like to increment the missed counter even in that case.

Note, this code does not change the functionality at this point, because
without the FL_RECURSION flag set (which kprobes does not set, even in
this patch), the callback always gets called from the helper function that
does this:

        bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
        if (bit < 0)
                return;

        preempt_disable_notrace();

        op->func(ip, parent_ip, op, regs);

        preempt_enable_notrace();
        trace_clear_recursion(bit);

where kprobe_ftrace_handler() is what gets called as op->func().

In other words, you don't get that count anyway, and I don't think you want
it, because it would mean you traced something that your callback itself
calls.

That bit check is basically a nop: the last patch in this series makes
recursion protection the default for everything, but at this point in the
series the test does this:

        /* A previous recursion check was made */
        if ((val & TRACE_CONTEXT_MASK) > max)
                return 0;

That check always triggers, because this function is called via the helper,
which already did the trace_test_and_set_recursion(); if execution made it
this far, val will always be greater than max.

> 
> [...]
> e.g.
> 
> > diff --git a/arch/csky/kernel/probes/ftrace.c 
> > b/arch/csky/kernel/probes/ftrace.c
> > index 5264763d05be..5eb2604fdf71 100644
> > --- a/arch/csky/kernel/probes/ftrace.c
> > +++ b/arch/csky/kernel/probes/ftrace.c
> > @@ -13,16 +13,21 @@ int arch_check_ftrace_location(struct kprobe *p)
> >  void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> >                        struct ftrace_ops *ops, struct pt_regs *regs)
> >  {
> > +   int bit;
> >     bool lr_saver = false;
> >     struct kprobe *p;
> >     struct kprobe_ctlblk *kcb;
> >  
> > -   /* Preempt is disabled by ftrace */
> > +   bit = ftrace_test_recursion_trylock();  
> 
> > +
> > +   preempt_disable_notrace();
> >     p = get_kprobe((kprobe_opcode_t *)ip);
> >     if (!p) {
> >             p = get_kprobe((kprobe_opcode_t *)(ip - MCOUNT_INSN_SIZE));
> >             if (unlikely(!p) || kprobe_disabled(p))
> > -                   return;
> > +                   goto out;
> >             lr_saver = true;
> >     }  
> 
>       if (bit < 0) {
>               kprobes_inc_nmissed_count(p);
>               goto out;
>       }

If anything called by get_kprobe() or kprobes_inc_nmissed_count() gets
traced here, you have zero recursion protection, and this will crash the
machine, likely with a reboot (triple fault).

Note, the recursion protection handles interrupts and won't stop them.
bit < 0 only happens if you recurse because this function called something
that ends up calling itself. Really, why would you care about missing a
kprobe on the same kprobe?

-- Steve
