With the use of wake_q, we can do task wakeups without holding the
wait_lock. There is one exception in the rwsem code, though: when the
writer in the slowpath detects that there are waiters ahead of it but
the rwsem is not held by a writer, it wakes those waiting readers while
still holding the wait_lock. This can lead to a long wait_lock hold
time, especially when a large number of readers are to be woken up.
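For reference, the convention everywhere else is to queue wakeups under
the lock and only issue them after it is dropped. A condensed sketch of
that pattern (not a buildable unit; "waiter_task" stands in for whatever
task a real caller selects):

	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock_irq(&sem->wait_lock);
	/* Select waiters and queue them; no task is actually woken here. */
	wake_q_add(&wake_q, waiter_task);
	raw_spin_unlock_irq(&sem->wait_lock);

	/* The real wakeups happen only after wait_lock has been released. */
	wake_up_q(&wake_q);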
Remediate this situation by releasing the wait_lock before waking up
tasks and re-acquiring it afterward.

Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Waiman Long <long...@redhat.com>
---
 include/linux/sched/wake_q.h |  5 +++++
 kernel/locking/rwsem.c       | 30 +++++++++++++++++++-----------
 2 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched/wake_q.h b/include/linux/sched/wake_q.h
index ad826d2a4557..26a2013ac39c 100644
--- a/include/linux/sched/wake_q.h
+++ b/include/linux/sched/wake_q.h
@@ -51,6 +51,11 @@ static inline void wake_q_init(struct wake_q_head *head)
 	head->lastp = &head->first;
 }
 
+static inline bool wake_q_empty(struct wake_q_head *head)
+{
+	return head->first == WAKE_Q_TAIL;
+}
+
 extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
 extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
 extern void wake_up_q(struct wake_q_head *head);
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index dd90884a80b8..750f407b83cf 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -747,17 +747,25 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 					? RWSEM_WAKE_READERS
 					: RWSEM_WAKE_ANY, &wake_q);
 
-		/*
-		 * The wakeup is normally called _after_ the wait_lock
-		 * is released, but given that we are proactively waking
-		 * readers we can deal with the wake_q overhead as it is
-		 * similar to releasing and taking the wait_lock again
-		 * for attempting rwsem_try_write_lock().
-		 */
-		wake_up_q(&wake_q);
-
-		/* We need wake_q again below, reinitialize */
-		wake_q_init(&wake_q);
+		if (!wake_q_empty(&wake_q)) {
+			/*
+			 * We want to minimize wait_lock hold time especially
+			 * when a large number of readers are to be woken up.
+			 */
+			raw_spin_unlock_irq(&sem->wait_lock);
+			wake_up_q(&wake_q);
+			wake_q_init(&wake_q);	/* Used again, reinit */
+			raw_spin_lock_irq(&sem->wait_lock);
+			/*
+			 * This waiter may have become first in the wait
+			 * list after re-acquiring the wait_lock. The
+			 * rwsem_first_waiter() test in the main while
+			 * loop below will correctly detect that. We do
+			 * need to reload count to perform proper trylock
+			 * and avoid missed wakeup.
+			 */
+			count = atomic_long_read(&sem->count);
+		}
 	} else {
 		count = atomic_long_add_return(RWSEM_FLAG_WAITERS, &sem->count);
 	}
-- 
2.18.1
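For anyone double-checking the new helper: wake_q_init() marks an empty
queue by pointing ->first at WAKE_Q_TAIL, so comparing against
WAKE_Q_TAIL is sufficient to detect emptiness. A minimal userspace mock
of that invariant (the structures here are simplified copies made just
for this illustration, not the kernel headers themselves):

	#include <assert.h>
	#include <stdbool.h>

	/* Simplified copies of the kernel definitions, for illustration. */
	struct wake_q_node { struct wake_q_node *next; };
	struct wake_q_head {
		struct wake_q_node *first;
		struct wake_q_node **lastp;
	};
	#define WAKE_Q_TAIL ((struct wake_q_node *) 0x01)

	static inline void wake_q_init(struct wake_q_head *head)
	{
		head->first = WAKE_Q_TAIL;
		head->lastp = &head->first;
	}

	/* The helper added by this patch. */
	static inline bool wake_q_empty(struct wake_q_head *head)
	{
		return head->first == WAKE_Q_TAIL;
	}

	int main(void)
	{
		struct wake_q_head q;
		struct wake_q_node n;

		wake_q_init(&q);
		assert(wake_q_empty(&q));

		/* Link one node the way __wake_q_add() appends it. */
		n.next = WAKE_Q_TAIL;
		*q.lastp = &n;
		q.lastp = &n.next;
		assert(!wake_q_empty(&q));

		return 0;
	}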