On 12/2/25 13:03, Jason Wang wrote:
On Mon, Dec 1, 2025 at 11:04 PM Bui Quang Minh <[email protected]> wrote:
On 11/28/25 09:20, Jason Wang wrote:
On Fri, Nov 28, 2025 at 1:47 AM Bui Quang Minh <[email protected]> wrote:
I think the requeue in refill_work is not the problem here. In
virtnet_rx_pause[_all](), we use cancel_work_sync(), which is safe to
use "even if the work re-queues itself". AFAICS, cancel_work_sync()
will disable the work -> flush the work -> enable it again. So if the
work re-queues itself while being flushed, the re-queue fails because
the work is already disabled.
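
A minimal sketch of that pause-side ordering, assuming the vi->refill,
vi->rq[] and max_queue_pairs names from my reading of
drivers/net/virtio_net.c (they may differ between kernel versions; and
since vi->refill is a delayed_work, the sketch uses
cancel_delayed_work_sync(), which gives the same "safe against self
re-queueing" guarantee):

/* Sketch only, not the upstream code. */
static void virtnet_rx_pause_all_sketch(struct virtnet_info *vi)
{
        int i;

        /* cancel_delayed_work_sync() marks the work as cancelling
         * before flushing it, so a schedule_delayed_work(&vi->refill, ...)
         * issued from inside the running refill_work() is rejected.
         * When it returns, the refill work is neither running nor
         * pending.
         */
        cancel_delayed_work_sync(&vi->refill);

        /* Now the NAPIs can be disabled without refill_work()
         * re-enabling them behind our back.
         */
        for (i = 0; i < vi->max_queue_pairs; i++)
                napi_disable(&vi->rq[i].napi);
}
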
Right.

I think what triggers the deadlock here is a bug in
virtnet_rx_resume_all(). virtnet_rx_resume_all() calls
__virtnet_rx_resume(), which calls napi_enable() and may schedule the
refill work. It schedules the refill work right after napi_enable() on
the first receive queue. The correct way is to napi_enable() all
receive queues before scheduling the refill work.
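
Something like the ordering below is what I mean (just an illustrative
sketch, not a patch; try_fill_recv(), vi->refill and the queue-pair
fields are taken from my reading of drivers/net/virtio_net.c, and the
real __virtnet_rx_resume()/virtnet_rx_resume_all() split is more
involved than this):

static void virtnet_rx_resume_all_sketch(struct virtnet_info *vi)
{
        bool need_refill = false;
        int i;

        /* Step 1: bring every receive queue's NAPI back up first. */
        for (i = 0; i < vi->max_queue_pairs; i++) {
                struct receive_queue *rq = &vi->rq[i];

                napi_enable(&rq->napi);
                if (i < vi->curr_queue_pairs &&
                    !try_fill_recv(vi, rq, GFP_KERNEL))
                        need_refill = true;
        }

        /* Step 2: only now let the refill work run.  If it is
         * scheduled right after the first napi_enable() (what the code
         * does today), refill_work() can run and napi_disable() a
         * queue that this loop has not enabled yet.
         */
        if (need_refill)
                schedule_delayed_work(&vi->refill, 0);
}
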
So what you mean is that napi_disable() is called for a queue whose
NAPI is already disabled?

cpu0] enable_delayed_refill()
cpu0] napi_enable(queue0)
cpu0] schedule_delayed_work(&vi->refill)
cpu1] napi_disable(queue0)
cpu1] napi_enable(queue0)
cpu1] napi_disable(queue1)

In this case cpu1 waits forever while holding the netdev lock. This
looks like a bug that has been there since the netdev_lock commit
413f0271f3966 ("net: protect NAPI enablement with netdev_lock()")?
Yes, I've tried to fix it in 4bc12818b363 ("virtio-net: disable delayed
refill when pausing rx"), but it has flaws.
I wonder if a simplified version is just restoring the behaviour
before 413f0271f3966 by using napi_enable_locked(), but maybe I'm
missing something.

As far as I understand, before 413f0271f3966 ("net: protect NAPI
enablement with netdev_lock()"), NAPI was protected by rtnl_lock().
But refill_work does not acquire rtnl_lock(), so it seems we already
had a race condition even before 413f0271f3966 ("net: protect NAPI
enablement with netdev_lock()").
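
For reference, refill_work() before the netdev_lock conversion looked
roughly like this (paraphrased from memory of drivers/net/virtio_net.c,
so the details may be slightly off); it flips NAPI state from workqueue
context with no rtnl_lock() anywhere:

static void refill_work(struct work_struct *work)
{
        struct virtnet_info *vi =
                container_of(work, struct virtnet_info, refill.work);
        bool still_empty;
        int i;

        for (i = 0; i < vi->curr_queue_pairs; i++) {
                struct receive_queue *rq = &vi->rq[i];

                /* NAPI is toggled here without rtnl_lock(), so
                 * rtnl_lock() alone never serialized this against the
                 * pause/resume paths.
                 */
                napi_disable(&rq->napi);
                still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
                virtnet_napi_enable(rq->vq, &rq->napi);

                /* If nothing could be refilled, try again later. */
                if (still_empty)
                        schedule_delayed_work(&vi->refill, HZ / 2);
        }
}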

Thanks,
Quang Minh.
