On Tue, 3 Sep 2013 22:01:15 -0400
Steven Rostedt <rost...@goodmis.org> wrote:

> On Tue, 3 Sep 2013 18:24:04 -0700
> "Paul E. McKenney" <paul...@linux.vnet.ibm.com> wrote:
> 
> 
> > >  static DEFINE_PER_CPU(unsigned long, ftrace_rcu_func);
> > > @@ -588,15 +593,14 @@ static void
> > >  ftrace_unsafe_callback(unsigned long ip, unsigned long parent_ip,
> > >                  struct ftrace_ops *op, struct pt_regs *pt_regs)
> > >  {
> > > - int bit;
> > > -
> > > + /* Make sure we see disabled or not first */
> > > + smp_rmb();
> > 
> >     smp_mb__before_atomic_inc()?
> > 
> 
> Ah, but this is before an atomic_read(), and not an atomic_inc(), thus
> the normal smp_rmb() is still required.
> 

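Just to spell out the reader side: below is a minimal sketch of the
callback with that smp_rmb(). Only the barrier line is taken from the
hunk quoted above; the atomic_read() check and early return are my
assumption about what the rest of the callback does (this lives in
kernel/trace/trace_functions.c, which already has the needed includes):

static atomic_t ftrace_unsafe_rcu_disabled;

static void
ftrace_unsafe_callback(unsigned long ip, unsigned long parent_ip,
                       struct ftrace_ops *op, struct pt_regs *pt_regs)
{
        /* Make sure we see disabled or not first */
        smp_rmb();

        /* Assumed check: bail out while the checker is disabled */
        if (atomic_read(&ftrace_unsafe_rcu_disabled))
                return;

        /* ... the actual RCU-unsafe-function checking would go here ... */
}

The barrier here orders a plain load, so smp_rmb() is the matching
primitive; smp_mb__before_atomic_inc() only applies when the very next
operation is the atomic increment itself.
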
Here are the changes against this one:

diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index cdcf187..9e6902a 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -569,14 +569,14 @@ void ftrace_unsafe_rcu_checker_disable(void)
 {
        atomic_inc(&ftrace_unsafe_rcu_disabled);
        /* Make sure the update is seen immediately */
-       smp_wmb();
+       smp_mb__after_atomic_inc();
 }
 
 void ftrace_unsafe_rcu_checker_enable(void)
 {
        atomic_dec(&ftrace_unsafe_rcu_disabled);
        /* Make sure the update is seen immediately */
-       smp_wmb();
+       smp_mb__after_atomic_dec();
 }
 
 static void



Which is nice, because the smp_mb()s are now in the really slow path.
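
Schematically, the pairing between the two sides then looks like this
(a sketch of the intent only, not code from the patch):

        CPU A (disable/enable, slow path)          CPU B (ftrace callback, fast path)
        ---------------------------------          ----------------------------------
        atomic_inc(&ftrace_unsafe_rcu_disabled);
        smp_mb__after_atomic_inc();                smp_rmb();
                                                   atomic_read(&ftrace_unsafe_rcu_disabled);

The full barrier is only paid when the checker is turned on or off,
while every traced function pays just for the read barrier in the
callback.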

-- Steve