On Thu, Oct 15, 2020 at 11:50:33AM +0200, Peter Zijlstra wrote:
> On Thu, Oct 15, 2020 at 11:49:26AM +0200, Peter Zijlstra wrote:
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1143,13 +1143,15 @@ bool rcu_lockdep_current_cpu_online(void
> >     struct rcu_data *rdp;
> >     struct rcu_node *rnp;
> >     bool ret = false;
> > +   unsigned long seq;
> >  
> >     if (in_nmi() || !rcu_scheduler_fully_active)
> >             return true;
> >     preempt_disable_notrace();
> >     rdp = this_cpu_ptr(&rcu_data);
> >     rnp = rdp->mynode;
> > -   if (rdp->grpmask & rcu_rnp_online_cpus(rnp))
> > +   seq = READ_ONCE(rnp->ofl_seq) & ~0x1;
> > +   if (rdp->grpmask & rcu_rnp_online_cpus(rnp) || seq != READ_ONCE(rnp->ofl_seq))
> >             ret = true;
> >     preempt_enable_notrace();
> >     return ret;
> 
> Also, here, are the two loads important? Wouldn't:
> 
>       || READ_ONCE(rnp->ofl_seq) & 0x1
> 
> be sufficient?

Indeed it would!  Good catch, thank you!!!

                                                        Thanx, Paul
