Re: [PATCH v7 6/8] tracing: Centralize preemptirq tracepoints and unify their usage

2018-05-25 Thread Joel Fernandes
On Fri, May 25, 2018 at 08:43:39PM +0900, Namhyung Kim wrote:
> Hi Joel,
> 
> On Wed, May 23, 2018 at 06:21:55PM -0700, Joel Fernandes wrote:
> > From: "Joel Fernandes (Google)" 
> > 
> > This patch detaches the preemptirq tracepoints from the tracers and
> > keeps it separate.
> > 
> > Advantages:
> > * Lockdep and the irqsoff tracer can now run in parallel since they no
> > longer have their own calls.
> >
> > * This unifies the use case of adding hooks to the irqsoff/irqson and
> > preemptoff/preempton events.
> >   3 users of the events exist:
> >   - Lockdep
> >   - irqsoff and preemptoff tracers
> >   - irqs and preempt trace events
> >
> > The unification cleans up several ifdefs and makes the code in the
> > preempt and irqsoff tracers simpler. It gets rid of all the horrific
> > ifdeffery around PROVE_LOCKING and makes configuration of the different
> > users of the tracepoints easier to understand. It also gets rid of the
> > time_* function calls from the lockdep hooks that were used to call into
> > the preemptirq tracer, which are no longer needed. The negative delta in
> > lines of code in this patch is quite large too.
> > 
> [SNIP]
> >  
> >  #ifdef CONFIG_IRQSOFF_TRACER
> > +/*
> > + * We are only interested in hardirq on/off events:
> > + */
> > +static void tracer_hardirqs_on(void *none, unsigned long a0, unsigned long a1)
> > +{
> > +   if (!preempt_trace() && irq_trace())
> > +   stop_critical_timing(a0, a1);
> > +}
> > +
> > +static void tracer_hardirqs_off(void *none, unsigned long a0, unsigned long a1)
> > +{
> > +   if (!preempt_trace() && irq_trace())
> > +   start_critical_timing(a0, a1);
> > +}
> > +
> >  static int irqsoff_tracer_init(struct trace_array *tr)
> >  {
> > trace_type = TRACER_IRQS_OFF;
> >  
> > +   register_trace_irq_disable(tracer_hardirqs_off, NULL);
> > +   register_trace_irq_enable(tracer_hardirqs_on, NULL);
> > return __irqsoff_tracer_init(tr);
> >  }
> >  
> >  static void irqsoff_tracer_reset(struct trace_array *tr)
> >  {
> > +   unregister_trace_irq_disable(tracer_hardirqs_off, NULL);
> > +   unregister_trace_irq_enable(tracer_hardirqs_on, NULL);
> > __irqsoff_tracer_reset(tr);
> >  }
> >  
> > @@ -692,19 +650,37 @@ static struct tracer irqsoff_tracer __read_mostly =
> >  };
> >  # define register_irqsoff(trace) register_tracer(&trace)
> >  #else
> > +static inline void tracer_hardirqs_on(unsigned long a0, unsigned long a1) { }
> > +static inline void tracer_hardirqs_off(unsigned long a0, unsigned long a1) { }
> 
> Just a nitpick.  These lines seem unnecessary since they're used
> only when CONFIG_IRQSOFF_TRACER is enabled AFAICS.
> 
> 
> >  # define register_irqsoff(trace) do { } while (0)
> > -#endif
> > +#endif /*  CONFIG_IRQSOFF_TRACER */
> >  
> >  #ifdef CONFIG_PREEMPT_TRACER
> > +static void tracer_preempt_on(void *none, unsigned long a0, unsigned long a1)
> > +{
> > +   if (preempt_trace() && !irq_trace())
> > +   stop_critical_timing(a0, a1);
> > +}
> > +
> > +static void tracer_preempt_off(void *none, unsigned long a0, unsigned long a1)
> > +{
> > +   if (preempt_trace() && !irq_trace())
> > +   start_critical_timing(a0, a1);
> > +}
> > +
> >  static int preemptoff_tracer_init(struct trace_array *tr)
> >  {
> > trace_type = TRACER_PREEMPT_OFF;
> >  
> > +   register_trace_preempt_disable(tracer_preempt_off, NULL);
> > +   register_trace_preempt_enable(tracer_preempt_on, NULL);
> > return __irqsoff_tracer_init(tr);
> >  }
> >  
> >  static void preemptoff_tracer_reset(struct trace_array *tr)
> >  {
> > +   unregister_trace_preempt_disable(tracer_preempt_off, NULL);
> > +   unregister_trace_preempt_enable(tracer_preempt_on, NULL);
> > __irqsoff_tracer_reset(tr);
> >  }
> >  
> > @@ -729,21 +705,32 @@ static struct tracer preemptoff_tracer __read_mostly =
> >  };
> >  # define register_preemptoff(trace) register_tracer(&trace)
> >  #else
> > +static inline void tracer_preempt_on(unsigned long a0, unsigned long a1) { }
> > +static inline void tracer_preempt_off(unsigned long a0, unsigned long a1) { }
> 
> Ditto (for CONFIG_PREEMPT_TRACER).
> 
> Thanks,
> Namhyung

Yes you're right, that saves quite a few lines actually! I also inlined the
register_tracer macros; it seems much cleaner. I will fold the below diff in,
but let me know if there's anything else.

Thanks Namhyung!

 - Joel


---8<---

diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index d2d8284088f0..d0bcb51d1a2a 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -648,11 +648,6 @@ static struct tracer irqsoff_tracer __read_mostly =
.allow_instances = true,
.use_max_tr = true,
 };
-# define register_irqsoff(trace) register_tracer(&trace)
-#else
-static inline void tracer_hardirqs_on(unsigned long a0, unsigned long a1) { }
-static inline void tracer_hardirqs_off(unsigned long a0, unsigned long a1) { }

Re: [PATCH v7 6/8] tracing: Centralize preemptirq tracepoints and unify their usage

2018-05-25 Thread Namhyung Kim
Hi Joel,

On Wed, May 23, 2018 at 06:21:55PM -0700, Joel Fernandes wrote:
> From: "Joel Fernandes (Google)" 
> 
> This patch detaches the preemptirq tracepoints from the tracers and
> keeps it separate.
> 
> Advantages:
> * Lockdep and the irqsoff tracer can now run in parallel since they no
> longer have their own calls.
>
> * This unifies the use case of adding hooks to the irqsoff/irqson and
> preemptoff/preempton events.
>   3 users of the events exist:
>   - Lockdep
>   - irqsoff and preemptoff tracers
>   - irqs and preempt trace events
>
> The unification cleans up several ifdefs and makes the code in the
> preempt and irqsoff tracers simpler. It gets rid of all the horrific
> ifdeffery around PROVE_LOCKING and makes configuration of the different
> users of the tracepoints easier to understand. It also gets rid of the
> time_* function calls from the lockdep hooks that were used to call into
> the preemptirq tracer, which are no longer needed. The negative delta in
> lines of code in this patch is quite large too.
> 
> In the patch we introduce a new CONFIG option, PREEMPTIRQ_TRACEPOINTS,
> as a single point for registering probes onto the tracepoints. With
> this, the web of config options for the preempt/irq toggle tracepoints
> and their users becomes:
> 
>  PREEMPT_TRACER   PREEMPTIRQ_EVENTS  IRQSOFF_TRACER PROVE_LOCKING
>        |                 |     \         |           |
>        \    (selects)    /      \        \ (selects) /
>         TRACE_PREEMPT_TOGGLE       ----> TRACE_IRQFLAGS
>                       \                  /
>                        \ (depends on)   /
>                     PREEMPTIRQ_TRACEPOINTS
> 
> One note: I have to check for lockdep recursion in the code that calls
> the trace events API and bail out if we're in lockdep recursion
> protection, to prevent something like the following case: a spin_lock is
> taken. Then lockdep_acquired is called. That does a raw_local_irq_save
> and then sets lockdep_recursion, and then calls __lockdep_acquired. In
> this function, a call to get_lock_stats happens, which calls
> preempt_disable, which calls trace IRQS off somewhere, which enters my
> tracepoint code and sets the tracing_irq_cpu flag to prevent recursion.
> This flag is then never cleared, causing lockdep paths to never be
> entered again, resulting in splats and other bad things.
> 
> Other than the performance tests mentioned in the previous patch, I also
> ran the locking API test suite and verified that all test cases pass.
>
> I also injected issues by not registering the lockdep probes onto the
> tracepoints; the resulting failures confirm that the probes are indeed
> doing their job.
> 
> Without probes:
> 
> [0.00]  hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
> [0.00]  soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
> [0.00]sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
> [0.00]sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
> [0.00]  hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
> [0.00]  soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
> [0.00]  hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
> [0.00]  soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
> [0.00] hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
> [0.00] soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
> 
> With probes:
> 
> [0.00]  hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
> [0.00]  soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
> [0.00]sirq-safe-A => hirqs-on/12:FAILED|FAILED|  ok  |
> [0.00]sirq-safe-A => hirqs-on/21:FAILED|FAILED|  ok  |
> [0.00]  hard-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
> [0.00]  soft-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
> [0.00]  hard-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
> [0.00]  soft-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
> [0.00] hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
> [0.00] soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
> 
> Signed-off-by: Joel Fernandes (Google) 
> ---

[SNIP]
>  
>  #ifdef CONFIG_IRQSOFF_TRACER
> +/*
> + * We are only interested in hardirq on/off events:
> + */
> +static void tracer_hardirqs_on(void *none, unsigned long a0, unsigned long a1)
> +{
> + if (!preempt_trace() && irq_trace())
> + stop_critical_timing(a0, a1);
> +}
> +
> +static void tracer_hardirqs_off(void *none, unsigned long a0, unsigned long a1)
> +{
> + if (!preempt_trace() && irq_trace())
> + start_critical_timing(a0, a1);
> +}
> +
>  static int irqsoff_tracer_init(struct trace_array *tr)
>  {
>   trace_type = TRACER_IRQS_OFF;
>  
> + register_trace_irq_disable(tracer_hardirqs_off, NULL);
> + register_trace_irq_enable(tracer_hardirqs_on, NULL);
>   return __irqsoff_tracer_init(tr);
>  }

[PATCH v7 6/8] tracing: Centralize preemptirq tracepoints and unify their usage

2018-05-23 Thread Joel Fernandes
From: "Joel Fernandes (Google)" 

This patch detaches the preemptirq tracepoints from the tracers and
keeps it separate.

Advantages:
* Lockdep and the irqsoff tracer can now run in parallel since they no
longer have their own calls.

* This unifies the use case of adding hooks to the irqsoff/irqson and
preemptoff/preempton events.
  3 users of the events exist:
  - Lockdep
  - irqsoff and preemptoff tracers
  - irqs and preempt trace events

The unification cleans up several ifdefs and makes the code in the
preempt and irqsoff tracers simpler. It gets rid of all the horrific
ifdeffery around PROVE_LOCKING and makes configuration of the different
users of the tracepoints easier to understand. It also gets rid of the
time_* function calls from the lockdep hooks that were used to call into
the preemptirq tracer, which are no longer needed. The negative delta in
lines of code in this patch is quite large too.

In the patch we introduce a new CONFIG option, PREEMPTIRQ_TRACEPOINTS,
as a single point for registering probes onto the tracepoints. With
this, the web of config options for the preempt/irq toggle tracepoints
and their users becomes:

 PREEMPT_TRACER   PREEMPTIRQ_EVENTS  IRQSOFF_TRACER PROVE_LOCKING
       |                 |     \         |           |
       \    (selects)    /      \        \ (selects) /
        TRACE_PREEMPT_TOGGLE       ----> TRACE_IRQFLAGS
                      \                  /
                       \ (depends on)   /
                    PREEMPTIRQ_TRACEPOINTS
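
For readers less familiar with Kconfig, the web above could be wired up
roughly as follows. This is an illustrative sketch of the select/depends
pattern only, not the literal Kconfig hunk from this patch; everything
except the option names taken from the diagram is an assumption:

```kconfig
# Helper symbols providing the toggle tracepoints; not user-visible,
# they are selected by the features that need them.
config TRACE_PREEMPT_TOGGLE
	bool

config TRACE_IRQFLAGS
	bool

# Users select the helper symbols they rely on.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	select TRACE_IRQFLAGS

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	select TRACE_PREEMPT_TOGGLE

# The single registration point for probes, available whenever any
# of the toggle tracepoints is enabled.
config PREEMPTIRQ_TRACEPOINTS
	bool
	depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
	default y if TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
```

The point of the pattern is that users never enable the tracepoints
directly; they select what they need and the registration point follows.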

One note: I have to check for lockdep recursion in the code that calls
the trace events API and bail out if we're in lockdep recursion
protection, to prevent something like the following case: a spin_lock is
taken. Then lockdep_acquired is called. That does a raw_local_irq_save
and then sets lockdep_recursion, and then calls __lockdep_acquired. In
this function, a call to get_lock_stats happens, which calls
preempt_disable, which calls trace IRQS off somewhere, which enters my
tracepoint code and sets the tracing_irq_cpu flag to prevent recursion.
This flag is then never cleared, causing lockdep paths to never be
entered again, resulting in splats and other bad things.
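
The hazard described above can be simulated in plain userspace C. The
sketch below is not kernel code: `tracing_irq_cpu`, `probe_irq_disable`,
`probe_irq_enable`, and `in_lockdep_recursion` are illustrative stand-ins
for the per-CPU flag, the probes, and the lockdep-recursion check, chosen
to mirror the scenario rather than the real API. It shows why the probes
must bail out early while lockdep is in its recursion protection:

```c
#include <assert.h>

/* Stand-ins for kernel state (illustrative names, not the real API). */
static int tracing_irq_cpu;      /* flag preventing nested irq-off events */
static int in_lockdep_recursion; /* models lockdep_recursion protection   */
static int events_emitted;       /* counts irq-off events actually traced */

/* Probe for "irqs disabled": emit one event per off-section. */
static void probe_irq_disable(void)
{
	if (in_lockdep_recursion)
		return; /* the bail-out this patch adds */
	if (tracing_irq_cpu)
		return; /* already inside an off-section */
	tracing_irq_cpu = 1;
	events_emitted++;
}

/* Probe for "irqs enabled": close the off-section. */
static void probe_irq_enable(void)
{
	if (in_lockdep_recursion)
		return;
	tracing_irq_cpu = 0;
}
```

Without the `in_lockdep_recursion` check, the call arriving from inside
lockdep's protection would set `tracing_irq_cpu` with no matching enable
ever clearing it, so every later off-section would be silently dropped.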

Other than the performance tests mentioned in the previous patch, I also
ran the locking API test suite and verified that all test cases pass.

I also injected issues by not registering the lockdep probes onto the
tracepoints; the resulting failures confirm that the probes are indeed
doing their job.

Without probes:

[0.00]  hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[0.00]  soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[0.00]sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[0.00]sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[0.00]  hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[0.00]  soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[0.00]  hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[0.00]  soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[0.00] hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[0.00] soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |

With probes:

[0.00]  hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[0.00]  soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[0.00]sirq-safe-A => hirqs-on/12:FAILED|FAILED|  ok  |
[0.00]sirq-safe-A => hirqs-on/21:FAILED|FAILED|  ok  |
[0.00]  hard-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
[0.00]  soft-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
[0.00]  hard-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
[0.00]  soft-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
[0.00] hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[0.00] soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |

Signed-off-by: Joel Fernandes (Google) 
---
 include/linux/ftrace.h|  11 +-
 include/linux/irqflags.h  |  11 +-
 include/linux/lockdep.h   |   8 +-
 include/linux/preempt.h   |   2 +-
 include/trace/events/preemptirq.h |  23 ++--
 init/main.c   |   5 +-
 kernel/locking/lockdep.c  |  35 ++---
 kernel/sched/core.c   |   2 +-
 kernel/trace/Kconfig  |  22 ++-
 kernel/trace/Makefile |   2 +-
 kernel/trace/trace_irqsoff.c  | 213 --
 kernel/trace/trace_preemptirq.c   |  71 ++
 12 files changed, 189 insertions(+), 216 deletions(-)
 create mode 100644 kernel/trace/trace_preemptirq.c

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 9c3c9a319e48..5191030af0c0 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -709,16 +709,7 @@ static inline unsigned long get_lock_parent_ip(void)
return CALLER_ADDR2;
 }
 
-#ifdef CONFIG_IRQSOFF_TRACER
-  extern void time_hardirqs_on(unsigne