On 02/12, Oleg Nesterov wrote:
> On 02/11, Davidlohr Bueso wrote:
> >
> > On Wed, 2015-02-11 at 16:34 -0800, Paul E. McKenney wrote:
> > > Hello!
> > >
> > > Did an earlier-than-usual port of v3.21 patches to post-v3.19, and
> > > hit the following on x86_64.  This happened after about 15 minutes of
> > > rcutorture.  In contrast, I have been doing successful 15-hour runs
> > > on v3.19.  I will check reproducibility and try to narrow it down.
> > > Might this be a duplicate of the bug that Raghavendra posted a fix for?
> > >
> > > Anyway, this was on 3e8c04eb1174 (Merge branch 'for-3.20' of
> > > git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata).
> > >
> > > [  837.287011] BUG: spinlock bad magic on CPU#0, migration/0/9
> > > [  837.287013]  lock: 0xffff88001ea0fe80, .magic: ffffffff, .owner: gî<81>ÿÿÿÿ/0, .owner_cpu: -42
> > > [  837.287013] CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.19.0+ #1
> > > [  837.287013] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> > > [  837.287013]  ffff88001ea0fe80 ffff88001ea0bc78 ffffffff818f6f4b ffffffff810a5a51
> > > [  837.287013]  ffffffff81e500e0 ffff88001ea0bc98 ffffffff818f3755 ffff88001ea0fe80
> > > [  837.287013]  ffffffff81ca4396 ffff88001ea0bcb8 ffffffff818f377b ffff88001ea0fe80
> > > [  837.287013] Call Trace:
> > > [  837.287013]  [<ffffffff818f6f4b>] dump_stack+0x45/0x57
> > > [  837.287013]  [<ffffffff810a5a51>] ? console_unlock+0x1f1/0x4c0
> > > [  837.287013]  [<ffffffff818f3755>] spin_dump+0x8b/0x90
> > > [  837.287013]  [<ffffffff818f377b>] spin_bug+0x21/0x26
> > > [  837.287013]  [<ffffffff8109923c>] do_raw_spin_unlock+0x5c/0xa0
> > > [  837.287013]  [<ffffffff81902587>] _raw_spin_unlock_irqrestore+0x27/0x50
> > > [  837.287013]  [<ffffffff8108f0a1>] complete+0x41/0x50
> >
> > We did have some recent changes in completions:
> >
> > 7c34e318 (sched/completion: Add lock-free checking of the blocking case)
> > de30ec47 (sched/completion: Remove unnecessary ->wait.lock serialization when reading completion state)
> >
> > The second one looks more directly related (although both appear to make
> > sense). Perhaps completion_done() relied on some subtle serialization
> > that disappeared along with the spinlock?
>
> At first glance both changes look suspicious.

No, sorry, only the 2nd one: try_wait_for_completion() still takes
->wait.lock before it reports success, so its new lockless check can only
lead to a harmless early return. completion_done() lost the locking
entirely.

> Unless we at least document how these helpers can safely be used.
>
> Consider this code:
>
>       void xxx(void)
>       {
>               struct completion c;
>
>               init_completion(&c);
>
>               expose_this_completion(&c);
>
>       while (!completion_done(&c))
>                       schedule_timeout_uninterruptible(1);
>       }
>
> Before that change this code was correct: completion_done() took and
> released ->wait.lock, so a "true" return guaranteed that complete() had
> already dropped the lock. Now it is not correct. Hmm, and note that this
> is what stop_machine_from_inactive_cpu() does, although I do not know
> whether this is related or not.
>
> Because completion_done() can now race with complete(), the final
> spin_unlock() in complete() can write to that memory after it has been
> freed or reused. In the example above it can write to xxx()'s stack
> after the function has returned.
>
> Add CC's.
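
To make the race concrete, here is a minimal userspace model of the broken
pattern. This is only a sketch: a pthread mutex stands in for ->wait.lock,
and struct my_completion, completer() and my_completion_done() are made-up
names, not kernel code. The window is tiny, so the program will usually
exit cleanly, but a thread sanitizer (-fsanitize=thread) should flag the
use-after-free on the "mutex":

/* Userspace model: lockless completion_done() vs complete(). */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct my_completion {
	pthread_mutex_t lock;		/* stands in for ->wait.lock */
	int done;			/* stands in for ->done */
};

static struct my_completion *c;

/* Models complete(): mark done under the lock, then unlock. */
static void *completer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&c->lock);
	__atomic_store_n(&c->done, 1, __ATOMIC_RELAXED);
	/*
	 * A lockless my_completion_done() can already observe done != 0
	 * here, before the unlock below has finished writing to *c ...
	 */
	pthread_mutex_unlock(&c->lock);	/* ... so this may touch freed memory */
	return NULL;
}

/* Models the post-de30ec47 completion_done(): a plain lockless read. */
static bool my_completion_done(void)
{
	return __atomic_load_n(&c->done, __ATOMIC_RELAXED) != 0;
}

int main(void)
{
	pthread_t t;

	c = malloc(sizeof(*c));
	pthread_mutex_init(&c->lock, NULL);
	c->done = 0;

	pthread_create(&t, NULL, completer, NULL);

	while (!my_completion_done())
		;			/* like the xxx() loop above */

	free(c);	/* deliberately racy: completer() may still be in unlock */
	pthread_join(t, NULL);
	return 0;
}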

Nicholas, don't we need something like below?

Oleg.


--- x/kernel/sched/completion.c
+++ x/kernel/sched/completion.c
@@ -274,7 +274,7 @@ bool try_wait_for_completion(struct comp
         * first without taking the lock so we can
         * return early in the blocking case.
         */
-       if (!ACCESS_ONCE(x->done))
+       if (!READ_ONCE(x->done))
                return 0;
 
        spin_lock_irqsave(&x->wait.lock, flags);
@@ -297,6 +297,11 @@ EXPORT_SYMBOL(try_wait_for_completion);
  */
 bool completion_done(struct completion *x)
 {
-       return !!ACCESS_ONCE(x->done);
+       if (!READ_ONCE(x->done))
+               return false;
+
+       smp_rmb();
+       spin_unlock_wait(&x->wait.lock);
+       return true;
 }
 EXPORT_SYMBOL(completion_done);
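
If I am reading the fix right, the smp_rmb() keeps the lock-word read in
spin_unlock_wait() from being reordered before the READ_ONCE(x->done), and
spin_unlock_wait() then spins until complete() has actually dropped
->wait.lock:

	CPU 0 (complete)		CPU 1 (completion_done)
	----------------		-----------------------
	spin_lock(&x->wait.lock);
	x->done++;
					READ_ONCE(x->done);	/* != 0 */
					smp_rmb();
	spin_unlock(&x->wait.lock);
					spin_unlock_wait(&x->wait.lock);
					/* spins until the unlock above */
					return true;	/* safe to free *x */

so a "true" return once again guarantees that complete() has finished
touching the completion, just as it did when completion_done() took the
lock.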
