On Fri, Jul 10, 2015 at 08:32:46PM +0900, Masami Hiramatsu wrote:
> double unlock:
> ------------[ cut here ]------------
> kernel BUG at /home/mhiramat/ksrc/linux-3/kernel/locking/qspinlock_paravirt.h:137!
> Call Trace:
>  [<ffffffff81114a59>] __raw_callee_save___pv_queued_spin_unlock+0x11/0x1e
>  [<ffffffff81117133>] ? do_raw_spin_unlock+0xfa/0x10c
>  [<ffffffff817cd3f7>] _raw_spin_unlock+0x44/0x64
>  [<ffffffff814603ee>] double_unlock_spin+0x3d/0x46

Cute, but somewhat expected. A double unlock really is a BUG and the PV
spinlock code cannot deal with it.

Do we want to make double unlock non-fatal unconditionally?

---
 kernel/locking/qspinlock_paravirt.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 04ab18151cc8..172deeaf1311 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -286,15 +286,22 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;
 	struct pv_node *node;
+	u8 locked;
 
 	/*
 	 * We must not unlock if SLOW, because in that case we must first
	 * unhash. Otherwise it would be possible to have multiple @lock
 	 * entries, which would be BAD.
 	 */
-	if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
+	locked = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
+	if (likely(locked == _Q_LOCKED_VAL))
 		return;
 
+#ifdef CONFIG_DEBUG_LOCKING_API_SELFTESTS
+	if (unlikely(!locked))
+		return;
+#endif
+
 	/*
 	 * Since the above failed to release, this must be the SLOW path.
 	 * Therefore start by looking up the blocked node and unhashing it.