On Mon, Apr 13, 2026 at 03:45:20 -0400, "Michael S. Tsirkin" wrote:
> GFP_ATOMIC allocations can and will fail. If using them, one must
> retry, not just propagate failures.
> Or just switch admin_vq->lock to a mutex?

Hi Michael,

Thank you for the review.

Regarding the suggestion to switch admin_vq->lock to a mutex:

The virtqueue callback vp_modern_avq_done() takes admin_vq->lock from
interrupt context, where a sleeping lock such as a mutex is not allowed,
so the spinlock cannot simply be replaced with a mutex.
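For context, the completion path looks roughly like the sketch below (a
paraphrase, not the exact upstream code; to_admin_vq() and
vp_modern_admin_cmd_finalize() stand in for however the driver recovers the
admin queue and finishes a command):

/*
 * Rough paraphrase of vp_modern_avq_done(): it runs from the virtqueue
 * interrupt callback, so it may only take a spinning lock (irqsave
 * variant); a sleeping lock such as a mutex is not allowed here.
 */
static void vp_modern_avq_done(struct virtqueue *vq)
{
        struct virtio_pci_admin_vq *admin_vq = to_admin_vq(vq); /* hypothetical helper */
        struct virtio_admin_cmd *cmd;
        unsigned long flags;
        unsigned int len;

        spin_lock_irqsave(&admin_vq->lock, flags);
        while ((cmd = virtqueue_get_buf(vq, &len)))
                vp_modern_admin_cmd_finalize(cmd, admin_vq); /* per-command completion work */
        spin_unlock_irqrestore(&admin_vq->lock, flags);
}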

I considered deferring the completion to a workqueue so that a mutex could
be used safely, but since this is a bug fix destined for the stable trees,
doing so would introduce significant code churn (INIT_WORK() at setup,
cancel_work_sync() during cleanup, and so on) and make backporting riskier;
a rough sketch of that churn follows.
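To make the churn concrete, the deferral would need roughly the following
plumbing (purely illustrative; avq_work handling and to_admin_vq() are
hypothetical, and the struct is trimmed to what the sketch needs):

struct virtio_pci_admin_vq {
        struct virtqueue *vq;
        struct mutex lock;              /* would replace the spinlock */
        struct work_struct work;        /* new: deferred completion */
        /* remaining fields trimmed for the sketch */
};

static void vp_modern_avq_work(struct work_struct *work)
{
        struct virtio_pci_admin_vq *admin_vq =
                container_of(work, struct virtio_pci_admin_vq, work);

        mutex_lock(&admin_vq->lock);
        /* drain completed commands with virtqueue_get_buf() here */
        mutex_unlock(&admin_vq->lock);
}

static void vp_modern_avq_done(struct virtqueue *vq)
{
        /* interrupt context: just kick the work item, sleep later */
        schedule_work(&to_admin_vq(vq)->work);
}

/* plus INIT_WORK() at setup and cancel_work_sync() at teardown */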

Therefore, using GFP_ATOMIC with the existing spinlock seems to be the most
minimal and safest approach for a fix.

However, just replacing GFP_KERNEL with GFP_ATOMIC isn't entirely safe,
because of how virtqueue_add_sgs() handles allocation failures. If the
GFP_ATOMIC allocation of the indirect descriptor table fails under memory
pressure, the virtio core falls back to using direct descriptors from the
ring; if there are not enough free direct descriptors, it ultimately
returns -ENOSPC.
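In other words, the core behaves roughly like this (a heavily simplified
paraphrase of the descriptor accounting in virtqueue_add_split(), not the
exact code; add_descs_sketch() is just an illustrative name):

/*
 * If the kmalloc of the indirect descriptor table fails, the request
 * suddenly needs total_sg direct descriptors instead of one, and when
 * the ring does not have that many free, the caller sees -ENOSPC even
 * though the queue is not really "full".
 */
static int add_descs_sketch(unsigned int num_free, unsigned int total_sg, gfp_t gfp)
{
        struct vring_desc *indir = kmalloc_array(total_sg, sizeof(*indir), gfp);
        unsigned int descs_used = indir ? 1 : total_sg;
        int ret = num_free < descs_used ? -ENOSPC : 0;

        kfree(indir);
        return ret;
}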

In the current code, -ENOSPC is handled with a busy loop:

if (ret == -ENOSPC) {
        spin_unlock_irqrestore(&admin_vq->lock, flags);
        cpu_relax();
        goto again;
}

If the -ENOSPC is actually caused by a GFP_ATOMIC allocation failure under
memory pressure, this cpu_relax() loop never yields the CPU, so memory
reclaim (kswapd, for example) gets no chance to make progress and the retry
can spin indefinitely, potentially leading to a soft lockup.

To properly handle both a genuinely full queue and a GFP_ATOMIC allocation
failure, I propose replacing cpu_relax() with a short sleep, e.g.
usleep_range(10, 100). The lock has already been dropped at that point and
the command is submitted from process context, so sleeping there should be
safe and gives memory reclaim a chance to run while we wait.

I plan to send out a v2 patch with this modification:

--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -101,11 +101,11 @@ static int virtqueue_exec_admin_cmd(struct virtio_pci_admin_vq *admin_vq,
                return -EIO;

        spin_lock_irqsave(&admin_vq->lock, flags);
-       ret = virtqueue_add_sgs(vq, sgs, out_num, in_num, cmd, GFP_KERNEL);
+       ret = virtqueue_add_sgs(vq, sgs, out_num, in_num, cmd, GFP_ATOMIC);
        if (ret < 0) {
                if (ret == -ENOSPC) {
                        spin_unlock_irqrestore(&admin_vq->lock, flags);
-                       cpu_relax();
+                       usleep_range(10, 100);
                        goto again;
                }
                goto unlock_err;

Does this approach align with your expectations for the fix?

Thanks,
Jinhui
