There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore replaces the spin_unlock_wait() call in
completion_done() with spin_lock() followed immediately by spin_unlock().
This should be safe from a performance perspective because the lock
is held only briefly, namely for the duration of the wakeup, which
happens really quickly.
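
For illustration, here is a minimal userspace analogue of the resulting
pattern (a sketch only: "fake_completion", "wait_lock", and "done" are
made-up stand-ins for struct completion's fields, and pthread mutexes
stand in for the spinlock; this is not the kernel code itself):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	struct fake_completion {
		pthread_mutex_t wait_lock;	/* stands in for ->wait.lock */
		atomic_bool done;		/* stands in for ->done */
	};

	/* Analogue of complete(): set ->done and wake waiters under the lock. */
	static void fake_complete(struct fake_completion *x)
	{
		pthread_mutex_lock(&x->wait_lock);
		atomic_store_explicit(&x->done, true, memory_order_relaxed);
		/* ... wake up a waiter while still holding wait_lock ... */
		pthread_mutex_unlock(&x->wait_lock);
	}

	/*
	 * Analogue of completion_done(): once ->done is observed, the
	 * lock/unlock pair guarantees that any fake_complete() that set
	 * ->done has also released wait_lock, so the caller may safely
	 * free the completion afterwards.
	 */
	static bool fake_completion_done(struct fake_completion *x)
	{
		if (!atomic_load_explicit(&x->done, memory_order_relaxed))
			return false;

		pthread_mutex_lock(&x->wait_lock);	/* replaces spin_unlock_wait() */
		pthread_mutex_unlock(&x->wait_lock);
		return true;
	}

The key point is that the acquisition of wait_lock cannot complete until
the completer has released it, so the acquire/release pair provides the
same wait-for-the-current-lock-holder guarantee that spin_unlock_wait()
was being used for.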

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Alan Stern <[email protected]>
Cc: Andrea Parri <[email protected]>
Cc: Linus Torvalds <[email protected]>
[ paulmck: Updated to use irqsave based on 0day Test Robot feedback. ]

diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
index 13fc5ae9bf2f..c9524d2d9316 100644
--- a/kernel/sched/completion.c
+++ b/kernel/sched/completion.c
@@ -300,6 +300,8 @@ EXPORT_SYMBOL(try_wait_for_completion);
  */
 bool completion_done(struct completion *x)
 {
+       unsigned long flags;
+
        if (!READ_ONCE(x->done))
                return false;
 
@@ -307,14 +309,9 @@ bool completion_done(struct completion *x)
         * If ->done, we need to wait for complete() to release ->wait.lock
         * otherwise we can end up freeing the completion before complete()
         * is done referencing it.
-        *
-        * The RMB pairs with complete()'s RELEASE of ->wait.lock and orders
-        * the loads of ->done and ->wait.lock such that we cannot observe
-        * the lock before complete() acquires it while observing the ->done
-        * after it's acquired the lock.
         */
-       smp_rmb();
-       spin_unlock_wait(&x->wait.lock);
+       spin_lock_irqsave(&x->wait.lock, flags);
+       spin_unlock_irqrestore(&x->wait.lock, flags);
        return true;
 }
 EXPORT_SYMBOL(completion_done);
