Previously this returned an error. Instead, simply update the corresponding field so QEMU can send it in the migration data.
Signed-off-by: Eugenio Pérez <epere...@redhat.com>
---
 hw/net/virtio-net.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index dd0d056fde..63a8332cd0 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1412,19 +1412,14 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }
 
-    /* Avoid changing the number of queue_pairs for vdpa device in
-     * userspace handler. A future fix is needed to handle the mq
-     * change in userspace handler with vhost-vdpa. Let's disable
-     * the mq handling from userspace for now and only allow get
-     * done through the kernel. Ripples may be seen when falling
-     * back to userspace, but without doing it qemu process would
-     * crash on a recursive entry to virtio_net_set_status().
-     */
+    n->curr_queue_pairs = queue_pairs;
     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
-        return VIRTIO_NET_ERR;
+        /*
+         * Avoid updating the backend for a vdpa device: We're only interested
+         * in updating the device model queues.
+         */
+        return VIRTIO_NET_OK;
     }
-
-    n->curr_queue_pairs = queue_pairs;
     /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
     virtio_net_set_status(vdev, vdev->status);
-- 
2.31.1