Public bug reported:

While addressing https://bugs.launchpad.net/nova/+bug/1847367 ("Images with
hw:vif_multiqueue_enabled can be limited to 8 queues even if more are
supported") in https://review.opendev.org/#/c/695118/,
I noticed that we currently have no way of reporting per-host networking
config options such as
rx_queue_size (https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.rx_queue_size)
and tx_queue_size
(https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.tx_queue_size)
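For reference, both options are per-host settings in each compute node's
nova.conf, so nothing stops the source and destination of a migration from
being configured differently. A minimal sketch (illustrative values, assuming
virtio interfaces):

```ini
# nova.conf on a compute node -- these values are host-local and may
# differ between the live-migration source and destination
[libvirt]
# virtio rx/tx ring sizes; valid values are powers of 2 between 256 and 1024
rx_queue_size = 1024
tx_queue_size = 1024
```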

This means that on live migration the source node's values are used on the
destination, where they may be invalid. https://review.opendev.org/#/c/695118/ adds a
new [libvirt]/max_queue option that could similarly
differ per host. At this time it is not clear whether libvirt would allow the
number of queues or the queue length to be changed as part of a live migration;
as such it is not clear whether the existing behaviour is correct and nova should
select a host with a matching value, or whether the value can
be updated.
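For context, these options end up as attributes on the interface's driver
element in the libvirt domain XML that nova generates on the source host, and
that XML travels with the guest on live migration. An illustrative fragment
(values are examples, not defaults):

```xml
<!-- generated on the source host; queues comes from the
     hw:vif_multiqueue_enabled flavor/image handling, the ring sizes
     from the source host's [libvirt]rx_queue_size/tx_queue_size -->
<interface type='bridge'>
  <model type='virtio'/>
  <driver name='vhost' queues='4' rx_queue_size='1024' tx_queue_size='1024'/>
</interface>
```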

Cold migration can be used as a workaround today, as can shelve/unshelve.
Where live migration is used today and succeeds, a hard reboot will
result in the
correct values for rx/tx_queue_size and max_queues being used. As such I am
triaging this as Low,
given this is a latent issue that has not been reported for several releases
since the introduction
of rx/tx queue size in Rocky:
https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/libvirt-virtio-set-queue-sizes.html
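The workarounds above can be sketched with the OpenStack client (server name
is a placeholder); each of these rebuilds the domain XML using the
destination host's configured values:

```console
# cold migration (confirm on the destination), or shelve/unshelve
openstack server migrate my-server
openstack server resize confirm my-server
openstack server shelve my-server
openstack server unshelve my-server

# after a live migration has already happened, a hard reboot
# regenerates the XML in place on the current host
openstack server reboot --hard my-server
```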

** Affects: nova
     Importance: Low
     Assignee: sean mooney (sean-k-mooney)
         Status: Triaged


** Tags: libvirt live-migration



Title:
  libvirt: tx/rx queue length and max queues are not updated on live
  migration

Status in OpenStack Compute (nova):
  Triaged

