----- Original Message -----
> Hello,
> 
> I've got the following report while running syzkaller fuzzer on mmotm
> (git://git.kernel.org/pub/scm/linux/kernel/git/mhocko/mm.git)
> remotes/mmotm/auto-latest ee4ba7533626ba7bf2f8b992266467ac9fdc045e:
> 

[...]

> 
> other info that might help us debug this:
> 
>  Possible interrupt unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(&(&r->consumer_lock)->rlock);
>                                local_irq_disable();
>                                lock(&(&r->producer_lock)->rlock);
>                                lock(&(&r->consumer_lock)->rlock);
>   <Interrupt>
>     lock(&(&r->producer_lock)->rlock);
> 

Thanks a lot for the testing.

Looks like we could address this by using skb_array_consume_bh() instead.

Could you please verify whether the following patch works?

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 8a7d6b9..a97c00d 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -520,7 +520,7 @@ static void tun_queue_purge(struct tun_file *tfile)
 {
        struct sk_buff *skb;
 
-       while ((skb = skb_array_consume(&tfile->tx_array)) != NULL)
+       while ((skb = skb_array_consume_bh(&tfile->tx_array)) != NULL)
                kfree_skb(skb);
 
        skb_queue_purge(&tfile->sk.sk_write_queue);
@@ -1458,7 +1458,7 @@ static struct sk_buff *tun_ring_recv(struct tun_file *tfile, int noblock,
        struct sk_buff *skb = NULL;
        int error = 0;
 
-       skb = skb_array_consume(&tfile->tx_array);
+       skb = skb_array_consume_bh(&tfile->tx_array);
        if (skb)
                goto out;
        if (noblock) {
@@ -1470,7 +1470,7 @@ static struct sk_buff *tun_ring_recv(struct tun_file *tfile, int noblock,
        current->state = TASK_INTERRUPTIBLE;
 
        while (1) {
-               skb = skb_array_consume(&tfile->tx_array);
+               skb = skb_array_consume_bh(&tfile->tx_array);
                if (skb)
                        break;
                if (signal_pending(current)) {
