* Paul E. McKenney <paul...@linux.vnet.ibm.com> wrote:

> There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> and it appears that all callers could do just as well with a lock/unlock
> pair.  This commit therefore replaces the spin_unlock_wait() call in
> completion_done() with spin_lock() followed immediately by spin_unlock().
> This should be safe from a performance perspective because the lock
> will be held only briefly, as the wakeup happens really quickly.
> 
> Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Will Deacon <will.dea...@arm.com>
> Cc: Alan Stern <st...@rowland.harvard.edu>
> Cc: Andrea Parri <parri.and...@gmail.com>
> Cc: Linus Torvalds <torva...@linux-foundation.org>
> [ paulmck: Updated to use irqsave based on 0day Test Robot feedback. ]
> 
> diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
> index 13fc5ae9bf2f..c9524d2d9316 100644
> --- a/kernel/sched/completion.c
> +++ b/kernel/sched/completion.c
> @@ -300,6 +300,8 @@ EXPORT_SYMBOL(try_wait_for_completion);
>   */
>  bool completion_done(struct completion *x)
>  {
> +     unsigned long flags;
> +
>       if (!READ_ONCE(x->done))
>               return false;
>  
> @@ -307,14 +309,9 @@ bool completion_done(struct completion *x)
>        * If ->done, we need to wait for complete() to release ->wait.lock
>        * otherwise we can end up freeing the completion before complete()
>        * is done referencing it.
> -      *
> -      * The RMB pairs with complete()'s RELEASE of ->wait.lock and orders
> -      * the loads of ->done and ->wait.lock such that we cannot observe
> -      * the lock before complete() acquires it while observing the ->done
> -      * after it's acquired the lock.
>        */
> -     smp_rmb();
> -     spin_unlock_wait(&x->wait.lock);
> +     spin_lock_irqsave(&x->wait.lock, flags);
> +     spin_unlock_irqrestore(&x->wait.lock, flags);
>       return true;
>  }
>  EXPORT_SYMBOL(completion_done);
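
For reference, a minimal sketch of how completion_done() reads with the
patch applied (reconstructed from the hunks above; the surrounding
kernel/sched/completion.c context is assumed unchanged):

bool completion_done(struct completion *x)
{
	unsigned long flags;

	if (!READ_ONCE(x->done))
		return false;

	/*
	 * If ->done, we need to wait for complete() to release ->wait.lock
	 * otherwise we can end up freeing the completion before complete()
	 * is done referencing it.
	 */
	spin_lock_irqsave(&x->wait.lock, flags);
	spin_unlock_irqrestore(&x->wait.lock, flags);
	return true;
}

The empty critical section just waits for any in-flight complete() to
drop ->wait.lock before the caller may free the completion, which is
the guarantee spin_unlock_wait() was meant to provide, without relying
on its ill-defined ordering semantics.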

I'm fine with this patch - as long as there are no performance regression 
reports (which I suspect there won't be).

Would you like to carry this in the RCU tree, due to other changes depending on 
this change - or can I pick this up into the scheduler tree?

Thanks,

        Ingo
