change from v2 to v3
- the way v2 solved it was racy, so I changed the approach entirely:
  just make the semaphore's trylock use the spinlock's trylock.
change from v1 to v2
- remove the unnecessary overhead caused by the redundant spin_(un)lock.

Since I faced an infinite recursive printk() bug, I tried to propose
patches titled "lib/spinlock_debug.c: prevent a recursive cycle in the
debug code". But I noticed, through some discussion thanks to Sergey
and Peter, that the root problem cannot be fixed that way. So I focused
on preventing the deadlock instead.

-----8<-----
From 43e029ca920890ac644e30d873be69cf5d01efdb Mon Sep 17 00:00:00 2001
From: Byungchul Park <[email protected]>
Date: Wed, 17 Feb 2016 17:22:18 +0900
Subject: [PATCH v3] lock/semaphore: Avoid an unnecessary deadlock within up()

down_trylock(), one of the semaphore acquisition functions, is
implemented using raw_spin_lock_irqsave(&sem->lock) even though
raw_spin_trylock_irqsave() would suffice. Worse, using
raw_spin_lock_irqsave() there can cause an unnecessary deadlock, as
described below. Make the semaphore's trylock use the spinlock's
trylock so that this deadlock cannot happen.
The scenario in which the deadlock can happen is:

printk
  console_trylock
  console_unlock
    up_console_sem
      up
        raw_spin_lock_irqsave(&sem->lock, flags)
        __up
          wake_up_process
            try_to_wake_up
              raw_spin_lock_irqsave(&p->pi_lock)
                __spin_lock_debug
                  spin_dump
                    printk
                      console_trylock
                        raw_spin_lock_irqsave(&sem->lock, flags)

                        >>> DEADLOCK <<<

Signed-off-by: Byungchul Park <[email protected]>
---
 kernel/locking/semaphore.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
index b8120ab..6634b68 100644
--- a/kernel/locking/semaphore.c
+++ b/kernel/locking/semaphore.c
@@ -130,13 +130,14 @@ EXPORT_SYMBOL(down_killable);
 int down_trylock(struct semaphore *sem)
 {
 	unsigned long flags;
-	int count;
+	int count = -1;
 
-	raw_spin_lock_irqsave(&sem->lock, flags);
-	count = sem->count - 1;
-	if (likely(count >= 0))
-		sem->count = count;
-	raw_spin_unlock_irqrestore(&sem->lock, flags);
+	if (raw_spin_trylock_irqsave(&sem->lock, flags)) {
+		count = sem->count - 1;
+		if (likely(count >= 0))
+			sem->count = count;
+		raw_spin_unlock_irqrestore(&sem->lock, flags);
+	}
 	return (count < 0);
 }
--
1.9.1

