When unlocking, we always want to reach the slowpath with the lock's counter indicating it is unlocked -- either as returned by the asm fastpath call or by explicitly setting it. While doing so, at least in theory, we can optimize by allowing faster lock stealing.
This is not immediately obvious and deserves to be documented.

Signed-off-by: Davidlohr Bueso <davidl...@hp.com>
---
 kernel/locking/mutex.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ad0e333..7a9be39 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -676,7 +676,8 @@ EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
 #endif
 
 /*
- * Release the lock, slowpath:
+ * Release the lock, slowpath.
+ * At this point, the lock counter is 0 or negative.
  */
 static inline void
 __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
@@ -684,9 +685,16 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 	unsigned long flags;
 
 	/*
-	 * some architectures leave the lock unlocked in the fastpath failure
+	 * As a performance measure, release the lock before doing other
+	 * wakeup related duties that follow. This allows other tasks to
+	 * acquire the lock sooner, while still handling cleanups from past
+	 * unlock calls. This can be done as we do not enforce strict
+	 * equivalence between the mutex counter and wait_list.
+	 *
+	 *
+	 * Some architectures leave the lock unlocked in the fastpath failure
 	 * case, others need to leave it locked. In the later case we have to
-	 * unlock it here
+	 * unlock it here.
 	 */
 	if (__mutex_slowpath_needs_to_unlock())
 		atomic_set(&lock->count, 1);
-- 
1.8.1.4
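For reference, with the patch applied the unlock slowpath reads roughly like the sketch below. This is a simplified reading of the counter-based mutex implementation of this era, not verbatim kernel code; the helpers shown (spin_lock_mutex(), debug_mutex_wake_waiter(), etc.) are the internal mutex.c ones, and details may differ across kernel versions:

static inline void
__mutex_unlock_common_slowpath(struct mutex *lock, int nested)
{
	unsigned long flags;

	/*
	 * Release the lock up front: once the counter is set to 1
	 * (unlocked), another task can fastpath-acquire or steal the
	 * mutex while we are still doing the wakeup bookkeeping below.
	 */
	if (__mutex_slowpath_needs_to_unlock())
		atomic_set(&lock->count, 1);

	spin_lock_mutex(&lock->wait_lock, flags);
	mutex_release(&lock->dep_map, nested, _RET_IP_);
	debug_mutex_unlock(lock);

	if (!list_empty(&lock->wait_list)) {
		/* wake up the first entry on the wait-list (FIFO order) */
		struct mutex_waiter *waiter =
			list_entry(lock->wait_list.next,
				   struct mutex_waiter, list);

		debug_mutex_wake_waiter(lock, waiter);
		wake_up_process(waiter->task);
	}

	spin_unlock_mutex(&lock->wait_lock, flags);
}

Note the ordering the new comment is documenting: the counter is released before wait_lock is even taken, which is exactly why the counter and wait_list can briefly disagree -- a woken or spinning waiter may find the lock already stolen and simply block again.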