On Tue, 1 Dec 2015 14:42:36 +0100
Jiri Olsa <jo...@redhat.com> wrote:

> On Mon, Nov 30, 2015 at 05:36:40PM -0500, Steven Rostedt wrote:
> > 
> > [ Jiri, can you take a look at this. You can also apply it on top of my
> >   branch ftrace/core, and run any specific tests. I just need to nuke
> >   that control structure for further updates with ftrace. ]
> > 
> > 
> > Currently perf has its own list function within the ftrace infrastructure
> > that seems to be used only to allow it to have per-cpu disabling, as well
> > as a check to make sure that it's not called while RCU is not watching. It
> > uses something called the "control_ops", which is used to iterate over the
> > ops under it with the control_list_func().
> > 
> > The problem is that this control_ops and control_list_func unnecessarily
> > complicate the code. By replacing FTRACE_OPS_FL_CONTROL with two new flags
> > (FTRACE_OPS_FL_RCU and FTRACE_OPS_FL_PER_CPU), we can remove all the code
> > that is special to the control ops and add the needed checks within the
> > generic ftrace_list_func().
> 
> Hmm,
> do we also need a change for the trampoline, something like below?
> 
> I needed the attached patch to get the perf ftrace:function
> event to work properly..

Hmm, I thought that I forced the list function when RCU or PER_CPU
was set. Oh wait. I have CONFIG_PREEMPT set, which will change the
logic slightly. I'm guessing you have PREEMPT_VOLUNTARY set. I'll try
that out.

> 
> thanks,
> jirka
> 
> 
> ---
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index c4d881200d1f..2705ac2f3487 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5179,6 +5179,26 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
>       trace_clear_recursion(bit);
>  }
>  
> +static void ftrace_ops_per_cpu_func(unsigned long ip, unsigned long parent_ip,
> +                                struct ftrace_ops *op, struct pt_regs *regs)
> +{
> +     int bit;
> +
> +     bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
> +     if (bit < 0)
> +             return;
> +
> +     preempt_disable_notrace();
> +
> +     if ((!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) &&
> +            (!(op->flags & FTRACE_OPS_FL_PER_CPU) || !ftrace_function_local_disabled(op))) {
> +             op->func(ip, parent_ip, op, regs);
> +     }
> +
> +     preempt_enable_notrace();
> +     trace_clear_recursion(bit);
> +}
> +
>  /**
>   * ftrace_ops_get_func - get the function a trampoline should call
>   * @ops: the ops to get the function for
> @@ -5192,6 +5212,11 @@ static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
>   */
>  ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)
>  {
> +     if (ops->flags & (FTRACE_OPS_FL_PER_CPU|FTRACE_OPS_FL_RCU)) {
> +             printk("used per cpu trampoline function\n");
> +             return ftrace_ops_per_cpu_func;

I have a slightly different idea on how to handle this.

-- Steve

> +     }
> +
>       /*
>        * If the func handles its own recursion, call it directly.
>        * Otherwise call the recursion protected function that
