On Fri, May 22, 2020 at 05:17:05PM +0200, Sebastian A. Siewior wrote:
> On 2020-05-22 16:57:07 [+0200], Peter Zijlstra wrote:
> > > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> > >   if (WARN_ON(!mm_percpu_wq))
> > >           return;
> > >  
> > 
> > > + this_gen = READ_ONCE(lru_drain_gen);
> > > + smp_rmb();
> > 
> >     this_gen = smp_load_acquire(&lru_drain_gen);
> > >  
> > >   mutex_lock(&lock);
> > >  
> > >   /*
> > > +  * (C) Exit the draining operation if a newer generation, from another
> > > +  * lru_add_drain_all(), was already scheduled for draining. Check (A).
> > >    */
> > > + if (unlikely(this_gen != lru_drain_gen))
> > >           goto done;
> > >  
> > 
> > > + WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> > > + smp_wmb();
> > 
> > You can leave this smp_wmb() out and rely on the smp_mb() implied by
> > queue_work_on()'s test_and_set_bit().
> 
> This is to avoid smp_store_release()?

store_release would have the barrier on the other end. If you read the
comments (which I so helpfully cut out) you'll see it wants to order
against later stores, not earlier ones.
