There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore replaces the spin_unlock_wait() call in
completion_done() with spin_lock_irq() followed immediately by
spin_unlock_irq(), matching the calls actually used in the diff below.
This should be safe from a performance perspective because the lock
will be held only briefly: the wakeup in complete() happens really
quickly.
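
For illustration only, the lock/unlock-pair idiom is sketched below in
userspace C with pthreads; the fake_completion type and the
fake_complete() and fake_completion_done() helpers are hypothetical
stand-ins for the kernel's struct completion, complete(), and
completion_done(), not actual kernel code:

	#include <pthread.h>
	#include <stdbool.h>

	struct fake_completion {
		pthread_mutex_t lock;	/* stands in for ->wait.lock */
		unsigned int done;	/* stands in for ->done */
	};

	static void fake_complete(struct fake_completion *x)
	{
		pthread_mutex_lock(&x->lock);
		x->done++;
		/* ... wake any waiters while still holding the lock ... */
		pthread_mutex_unlock(&x->lock);
	}

	static bool fake_completion_done(struct fake_completion *x)
	{
		if (!__atomic_load_n(&x->done, __ATOMIC_RELAXED))
			return false;

		/*
		 * Acquire and immediately release the lock.  This cannot
		 * return until a concurrent fake_complete() has dropped
		 * the lock, so the caller may then safely free *x.
		 */
		pthread_mutex_lock(&x->lock);
		pthread_mutex_unlock(&x->lock);
		return true;
	}

As in the patch itself, the empty critical section gives the caller
everything spin_unlock_wait() was meant to provide.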

Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Alan Stern <st...@rowland.harvard.edu>
Cc: Andrea Parri <parri.and...@gmail.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
---
 kernel/sched/completion.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
index 53f9558fa925..3e66712e1964 100644
--- a/kernel/sched/completion.c
+++ b/kernel/sched/completion.c
@@ -307,14 +307,9 @@ bool completion_done(struct completion *x)
         * If ->done, we need to wait for complete() to release ->wait.lock
         * otherwise we can end up freeing the completion before complete()
         * is done referencing it.
-        *
-        * The RMB pairs with complete()'s RELEASE of ->wait.lock and orders
-        * the loads of ->done and ->wait.lock such that we cannot observe
-        * the lock before complete() acquires it while observing the ->done
-        * after it's acquired the lock.
         */
-       smp_rmb();
-       spin_unlock_wait(&x->wait.lock);
+       spin_lock_irq(&x->wait.lock);
+       spin_unlock_irq(&x->wait.lock);
        return true;
 }
 EXPORT_SYMBOL(completion_done);
-- 
2.5.2
