On Thu, Nov 12, 2015 at 04:00:58PM +0100, Oleg Nesterov wrote:
> On 11/12, Boqun Feng wrote:
> >
> > On Wed, Nov 11, 2015 at 08:39:53PM +0100, Oleg Nesterov wrote:
> > >
> > >   object_t *object;
> > >   spinlock_t lock;
> > >
> > >   void update(void)
> > >   {
> > >           object_t *o;
> > >
> > >           spin_lock(&lock);
> > >           o = READ_ONCE(object);
> > >           if (o) {
> > >                   BUG_ON(o->dead);
> > >                   do_something(o);
> > >           }
> > >           spin_unlock(&lock);
> > >   }
> > >
> > >   void destroy(void) // can be called only once, can't race with itself
> > >   {
> > >           object_t *o;
> > >
> > >           o = object;
> > >           object = NULL;
> > >
> > >           /*
> > >            * pairs with lock/ACQUIRE. The next update() must see
> > >            * object == NULL after spin_lock();
> > >            */
> > >           smp_mb();
> > >
> > >           spin_unlock_wait(&lock);
> > >
> > >           /*
> > >            * pairs with unlock/RELEASE. The previous update() has
> > >            * already passed BUG_ON(o->dead).
> > >            *
> > >            * (Yes, yes, in this particular case it is not needed,
> > >            *  we can rely on the control dependency).
> > >            */
> > >           smp_mb();
> > >
> > >           o->dead = true;
> > >   }
> > >
> > > I believe the code above is correct and it needs the barriers on both 
> > > sides.
> > >
> >
> > Hmm.. probably incorrect.. because the ACQUIRE semantics of spin_lock()
> > only guarantee that the memory operations following spin_lock() can't
> > be reordered before the *LOAD* part of spin_lock(), not the *STORE*
> > part, i.e. the case below can happen (assuming spin_lock() is
> > implemented as an ll/sc loop):
> >
> >     spin_lock(&lock):
> >       r1 = *lock; // LL, r1 == 0
> >     o = READ_ONCE(object); // could be reordered here.
> >       *lock = 1; // SC
> >
> > This could happen because of the ACQUIRE semantics of spin_lock(), and
> > the current implementation of spin_lock() on PPC allows it to happen.
> >
> > (Cc PPC maintainers for their opinions on this one)
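
If I understand Boqun's point correctly, in Oleg's example that would
allow the following interleaving (my sketch, assuming an ll/sc lock
whose ACQUIRE barrier orders later accesses against the LL only, so
that the SC store can become visible to other CPUs after the
READ_ONCE()):

	CPU 0 (update)			CPU 1 (destroy)
	--------------			---------------
	LL  *lock (reads 0)
	o = READ_ONCE(object);
	/* reordered into the loop,
	   reads the old pointer */
					o = object;
					object = NULL;
					smp_mb();
					spin_unlock_wait(&lock);
					/* the SC store below is not
					   yet visible, so the lock
					   still appears to be free */
	SC  *lock = 1
					smp_mb();
					o->dead = true;
	BUG_ON(o->dead); /* fires */

That is, destroy() can conclude that no one holds the lock even though
update() has already fetched the then-non-NULL pointer, which is
exactly the failure the barriers were supposed to exclude.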
> 
> In this case the code above is obviously wrong. And I do not understand
> how we can rely on spin_unlock_wait() then.
> 
> And afaics do_exit() is buggy too then, see below.
> 
> > I think it's OK for it to be an ACQUIRE (with a proper barrier), or
> > even just a control dependency, to pair with spin_unlock(); for
> > example, the following snippet in do_exit() is OK, except that the
> > smp_mb() is redundant, unless I'm missing something subtle:
> >
> >     /*
> >      * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
> >      * when the following two conditions become true:
> >      *   - there is a race on mmap_sem (it is acquired by exit_mm()),
> >      *     and
> >      *   - an SMI occurs before try_to_wake_up() sets TASK_RUNNING
> >      *     (or the hypervisor of a virtual machine switches to another
> >      *     guest).
> >      * As a result, we may become TASK_RUNNING after becoming TASK_DEAD.
> >      *
> >      * To avoid this, we have to wait for tsk->pi_lock, which is held
> >      * by try_to_wake_up(), to be released.
> >      */
> >     smp_mb();
> >     raw_spin_unlock_wait(&tsk->pi_lock);
> 
> Perhaps it is me who is missing something, but I don't think we can
> remove this mb(). And at the same time, it can't help on PPC if I
> understand your explanation above correctly.

I cannot resist suggesting that any lock that interacts with
spin_unlock_wait() must have all relevant acquisitions followed by
smp_mb__after_unlock_lock().
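In Oleg's example that would look something like the following sketch
(smp_mb__after_unlock_lock() is a no-op everywhere except PowerPC,
where it is smp_mb(), so it upgrades the lock's ACQUIRE to a full
barrier):

	void update(void)
	{
		object_t *o;

		spin_lock(&lock);
		smp_mb__after_unlock_lock(); /* order the lock's store
						against the load below */
		o = READ_ONCE(object);
		if (o) {
			BUG_ON(o->dead);
			do_something(o);
		}
		spin_unlock(&lock);
	}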

                                                        Thanx, Paul
