On a highly contended rwsem, spinlock contention in the slowpath rwsem_wake() call can account for a significant portion of the total CPU cycles used. With writer lock stealing and writer optimistic spinning, there is also a chance that the lock may already have been stolen by the time the wait_lock is acquired.
This patch adds a low-cost check after acquiring the wait_lock to look
for an active writer. The presence of an active writer aborts the
wakeup operation.

Signed-off-by: Waiman Long <waiman.l...@hp.com>
---
 kernel/locking/rwsem-xadd.c | 21 +++++++++++++++++++--
 1 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 2bb25e2..815f0cc 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -399,6 +399,15 @@ static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
 	return osq_is_locked(&sem->osq);
 }
 
+/*
+ * Return true if there is an active writer by checking the owner field which
+ * should be set if there is one.
+ */
+static inline bool rwsem_has_active_writer(struct rw_semaphore *sem)
+{
+	return READ_ONCE(sem->owner) != NULL;
+}
+
 #else
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 {
@@ -409,6 +418,11 @@ static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
 {
 	return false;
 }
+
+static inline bool rwsem_has_active_writer(struct rw_semaphore *sem)
+{
+	return false;	/* Assume it has no active writer */
+}
 #endif
 
 /*
@@ -524,8 +538,11 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
 	raw_spin_lock_irqsave(&sem->wait_lock, flags);
 locked:
 
-	/* do nothing if list empty */
-	if (!list_empty(&sem->wait_list))
+	/*
+	 * Do nothing if list empty or the lock has just been stolen by a
+	 * writer after a possibly long wait in getting the wait_lock.
+	 */
+	if (!list_empty(&sem->wait_list) && !rwsem_has_active_writer(sem))
 		sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);
 
 	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
-- 
1.7.1
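
For illustration, below is a minimal self-contained userspace sketch of
the same check-before-wake idea. All names in it (toy_rwsem, toy_wake(),
wake_waiters(), nwaiters) are made up for this example and are not
kernel API; it shows the pattern, not the kernel implementation:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Toy lock: 'owner' is non-NULL while a writer holds the lock, and
 * 'nwaiters' stands in for the kernel's wait list. Illustrative only.
 */
struct toy_rwsem {
	_Atomic(void *) owner;
	int nwaiters;
};

/*
 * Mirrors rwsem_has_active_writer(): a non-NULL owner means a writer
 * currently holds (or has just stolen) the lock.
 */
static bool has_active_writer(struct toy_rwsem *sem)
{
	return atomic_load_explicit(&sem->owner, memory_order_relaxed) != NULL;
}

/* Stand-in for __rwsem_do_wake(): here it just reports the wakeup. */
static void wake_waiters(struct toy_rwsem *sem)
{
	printf("waking %d waiter(s)\n", sem->nwaiters);
	sem->nwaiters = 0;
}

/*
 * The wakeup path: after (conceptually) taking the wait_lock, re-check
 * for an active writer and bail out early, as the patched rwsem_wake()
 * does, instead of waking waiters that would immediately block again.
 */
static void toy_wake(struct toy_rwsem *sem)
{
	if (sem->nwaiters != 0 && !has_active_writer(sem))
		wake_waiters(sem);
}

int main(void)
{
	static int writer_task;		/* dummy owner token */
	struct toy_rwsem sem = { .nwaiters = 2 };

	atomic_store(&sem.owner, &writer_task);
	toy_wake(&sem);			/* writer active: wakeup skipped */

	atomic_store(&sem.owner, NULL);
	toy_wake(&sem);			/* no writer: waiters get woken  */
	return 0;
}

The point is the early bail-out in toy_wake(): when a writer has already
taken the lock, waking the waiters would only make them block again on
the same lock, wasting the cycles the patch description talks about.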