On 20.01.2017 20:05, Eric W. Biederman wrote:
> Nikolay Borisov <[email protected]> writes:
> 
>> On 20.01.2017 15:07, Dmitry Vyukov wrote:
>>> Hello,
>>>
>>> I've got the following deadlock report while running syzkaller fuzzer
>>> on eec0d3d065bfcdf9cd5f56dd2a36b94d12d32297 of linux-next (on odroid
>>> device if it matters):
> 
> I am puzzled I thought we had fixed this with:
>   add7c65ca426 ("pid: fix lockdep deadlock warning due to ucount_lock")
> But apparently not.  We  just moved it from hardirq to softirq context.  Bah.
> 
> Thank you very much for the report.
> 
> Nikolay can you make your change use spinlock_irq?  And have put_ucounts
> do spin_lock_irqsave?  That way we just don't care where we call this.
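
For reference, a minimal sketch of the two locking variants being
discussed (ucounts_lock is the real lock; the two wrapper functions are
purely illustrative). spin_lock_irq()/spin_unlock_irq() unconditionally
disable and re-enable local interrupts, while
spin_lock_irqsave()/spin_unlock_irqrestore() save and restore the
previous interrupt state, which is why the latter pair does not care
what context the caller runs in:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(ucounts_lock);

/* Only safe where interrupts are known to be enabled on entry:
 * spin_unlock_irq() re-enables local interrupts unconditionally. */
static void illustrative_irq_variant(void)
{
	spin_lock_irq(&ucounts_lock);
	/* ... critical section ... */
	spin_unlock_irq(&ucounts_lock);
}

/* Safe from any context (process, softirq, hardirq): the previous
 * interrupt state is saved in 'flags' and restored on unlock. */
static void illustrative_irqsave_variant(void)
{
	unsigned long flags;

	spin_lock_irqsave(&ucounts_lock, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&ucounts_lock, flags);
}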

Like the one attached? I haven't really taken a careful look at whether
the callers of the functions that now use the _irq variants fiddle with
the irq state themselves, since unconditionally re-enabling interrupts
there might cause a problem.
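
To make that concern concrete, here is a hedged sketch with hypothetical
caller/callee names (not code from the patch below): if a caller already
runs with interrupts disabled and the callee uses the plain _irq variant,
the unlock re-enables interrupts behind the caller's back.

#include <linux/irqflags.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(ucounts_lock);

/* Hypothetical callee that uses the plain _irq variant. */
static void callee_using_irq_variant(void)
{
	spin_lock_irq(&ucounts_lock);
	/* ... critical section ... */
	spin_unlock_irq(&ucounts_lock);	/* unconditionally re-enables irqs */
}

/* Hypothetical caller that already runs with interrupts disabled. */
static void caller_with_irqs_off(void)
{
	local_irq_disable();
	callee_using_irq_variant();
	/*
	 * Interrupts are enabled again at this point, even though the
	 * caller expected them to stay off until local_irq_enable().
	 */
	local_irq_enable();
}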

> 
> I am tired of being clever.
> 
> Eric
> 
> 
From 0aa66c85afdac0cd07fabdf899c173c6dca2b6e7 Mon Sep 17 00:00:00 2001
From: Nikolay Borisov <[email protected]>
Date: Fri, 20 Jan 2017 15:21:35 +0200
Subject: [PATCH] userns: Make ucounts lock softirq-safe

The ucounts_lock is used to protect the ucounts lifecycle management
functions. However, those functions can also be invoked while a pidns
is being freed from an RCU callback (i.e. in softirq context), which
can lead to deadlocks. An earlier attempt to prevent such a deadlock,
add7c65ca426 ("pid: fix lockdep deadlock warning due to ucount_lock"),
merely moved the problem from hardirq to softirq context. Fix this
once and for all by explicitly disabling irqs while the lock is held.

Fixes: add7c65ca426 ("pid: fix lockdep deadlock warning due to ucount_lock")
Link: https://www.spinics.net/lists/kernel/msg2426637.html
Signed-off-by: Nikolay Borisov <[email protected]>
---
 kernel/ucount.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/ucount.c b/kernel/ucount.c
index b4aaee935b3e..68716403b261 100644
--- a/kernel/ucount.c
+++ b/kernel/ucount.c
@@ -132,10 +132,10 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
 	struct ucounts *ucounts, *new;
 
-	spin_lock(&ucounts_lock);
+	spin_lock_irq(&ucounts_lock);
 	ucounts = find_ucounts(ns, uid, hashent);
 	if (!ucounts) {
-		spin_unlock(&ucounts_lock);
+		spin_unlock_irq(&ucounts_lock);
 
 		new = kzalloc(sizeof(*new), GFP_KERNEL);
 		if (!new)
@@ -145,7 +145,7 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
 		new->uid = uid;
 		atomic_set(&new->count, 0);
 
-		spin_lock(&ucounts_lock);
+		spin_lock_irq(&ucounts_lock);
 		ucounts = find_ucounts(ns, uid, hashent);
 		if (ucounts) {
 			kfree(new);
@@ -156,16 +156,18 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
 	}
 	if (!atomic_add_unless(&ucounts->count, 1, INT_MAX))
 		ucounts = NULL;
-	spin_unlock(&ucounts_lock);
+	spin_unlock_irq(&ucounts_lock);
 	return ucounts;
 }
 
 static void put_ucounts(struct ucounts *ucounts)
 {
+	unsigned long flags;
+
 	if (atomic_dec_and_test(&ucounts->count)) {
-		spin_lock(&ucounts_lock);
+		spin_lock_irqsave(&ucounts_lock, flags);
 		hlist_del_init(&ucounts->node);
-		spin_unlock(&ucounts_lock);
+		spin_unlock_irqrestore(&ucounts_lock, flags);
 
 		kfree(ucounts);
 	}
-- 
2.7.4
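
For completeness, a sketch of what the change buys. The struct and
callback names below are hypothetical, and put_ucounts() is file-local
to kernel/ucount.c (in the real kernel it is reached from the pidns
RCU callback through the ucount helpers), so this is illustrative only:
with the patch applied, put_ucounts() saves and restores the interrupt
state and is therefore safe from softirq context, while get_ucounts()
still allocates with GFP_KERNEL and uses spin_lock_irq(), so it remains
process-context only.

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/user_namespace.h>

/* Hypothetical object holding a ucounts reference, freed via RCU. */
struct foo {
	struct ucounts *ucounts;
	struct rcu_head rcu;
};

/* Runs in softirq context as an RCU callback. */
static void foo_free_rcu(struct rcu_head *head)
{
	struct foo *f = container_of(head, struct foo, rcu);

	/*
	 * Dropping the ucounts reference here is fine now: with the
	 * patch, ucounts_lock is taken with spin_lock_irqsave(), so
	 * the softirq context of this callback no longer matters.
	 */
	put_ucounts(f->ucounts);
	kfree(f);
}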
