From: Wanpeng Li <wanpeng...@hotmail.com>

This can be reproduced by running rt-migrate-test:

 WARNING: CPU: 2 PID: 2195 at kernel/locking/lockdep.c:3670 lock_unpin_lock+0x172/0x180
 unpinning an unpinned lock
 CPU: 2 PID: 2195 Comm: rt-migrate-test Tainted: G        W       4.11.0-rc2+ #1
 Call Trace:
  dump_stack+0x85/0xc2
  __warn+0xcb/0xf0
  warn_slowpath_fmt+0x5f/0x80
  lock_unpin_lock+0x172/0x180
  __balance_callback+0x75/0x90
  __schedule+0x83f/0xc00
  ? futex_wait_setup+0x82/0x130
  schedule+0x3d/0x90
  futex_wait_queue_me+0xd4/0x170
  futex_wait+0x119/0x260
  ? __lock_acquire+0x4c8/0x1900
  ? stop_one_cpu+0x94/0xc0
  do_futex+0x2fe/0xc10
  ? sched_setaffinity+0x1c1/0x290
  SyS_futex+0x81/0x190
  ? rcu_read_lock_sched_held+0x72/0x80
  do_syscall_64+0x73/0x1f0
  entry_SYSCALL64_slow_path+0x25/0x25

We utilize balance callbacks to delay the load-balancing operations
{rt,dl}*{push,pull} until we've done all the important work. For safety,
the push/pull operations can unlock/lock the current rq in order to
acquire the src's and dest's rq->locks in a fair way. Since it is safe
to drop the rq lock there, unpin the lock before invoking the callback
and repin it afterwards to avoid the splat.

Reported-by: Fengguang Wu <fengguang...@intel.com>
Cc: Mike Galbraith <efa...@gmx.de>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@kernel.org>
Signed-off-by: Wanpeng Li <wanpeng...@hotmail.com>
---
 kernel/sched/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c762f62..cd901f6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2787,7 +2787,9 @@ static void __balance_callback(struct rq *rq)
 		head->next = NULL;
 		head = next;
 
+		rq_unpin_lock(rq, &rf);
 		func(rq);
+		rq_repin_lock(rq, &rf);
 	}
 	rq_unlock_irqrestore(rq, &rf);
 }
-- 
2.7.4