3.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Jason Wang <[email protected]>

[ Upstream commit 6cd4ce0099da7702f885b6fa9ebb49e3831d90b4 ]

During restore, try_fill_recv() was called with neither the napi lock
held nor napi disabled. This could lead to two try_fill_recv() calls
running concurrently on the same receive queue. Fix this by refilling
before trying to enable napi.
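
For illustration, a minimal sketch of the two orderings, using
simplified stand-in names (rq, refill) rather than the real driver
structures in drivers/net/virtio_net.c:

    /*
     * Simplified illustration only, not the driver code itself.
     * try_fill_recv() posts buffers to a receive queue and must not
     * run twice concurrently on the same queue. Once napi is enabled,
     * the poll path may call try_fill_recv() on its own, so the
     * restore path has to finish its refill while napi is still
     * disabled.
     */

    /* Old, racy ordering: */
    virtnet_napi_enable(rq);                   /* poll path may refill now */
    try_fill_recv(rq, GFP_KERNEL);             /* ... racing with the poll */

    /* Fixed ordering (this patch): */
    if (!try_fill_recv(rq, GFP_KERNEL))        /* napi still disabled, so  */
            schedule_delayed_work(refill, 0);  /* no concurrent caller     */
    virtnet_napi_enable(rq);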

Fixes: 0741bcb5584f ("virtio: net: Add freeze, restore handlers to support S4")

Cc: Amit Shah <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
Cc: Eric Dumazet <[email protected]>
Signed-off-by: Jason Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 drivers/net/virtio_net.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1772,16 +1772,17 @@ static int virtnet_restore(struct virtio
        if (err)
                return err;
 
-       if (netif_running(vi->dev))
+       if (netif_running(vi->dev)) {
+               for (i = 0; i < vi->curr_queue_pairs; i++)
+                       if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
+                               schedule_delayed_work(&vi->refill, 0);
+
                for (i = 0; i < vi->max_queue_pairs; i++)
                        virtnet_napi_enable(&vi->rq[i]);
+       }
 
        netif_device_attach(vi->dev);
 
-       for (i = 0; i < vi->curr_queue_pairs; i++)
-               if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
-                       schedule_delayed_work(&vi->refill, 0);
-
        mutex_lock(&vi->config_lock);
        vi->config_enable = true;
        mutex_unlock(&vi->config_lock);

