Public bug reported: Instances miss neutron QoS on their ports after unrescue and soft reboot
Description
===========
After some operations on an instance, such as unrescue and soft reboot, the libvirt domain is recreated, but neutron does not re-apply QoS on the VM's ports. A user can therefore bypass the per-port QoS limitation and utilise the host's full bandwidth.

This does not happen after live migration, migration, hard reboot, rescue, or shutdown followed by start.

The problem does not occur for operations which end up calling _create_domain_and_network():
https://github.com/openstack/nova/blob/stable/pike/nova/virt/libvirt/driver.py#L5392

For unrescue and soft reboot the libvirt driver calls _create_domain() directly and does not execute plug_vifs():
https://github.com/openstack/nova/blob/stable/pike/nova/virt/libvirt/driver.py#L2547

Steps to reproduce
==================
1. Create an instance with a port in a neutron network.

2. Create a QoS policy in neutron:
$ neutron qos-policy-create limited_1000mbps
$ neutron qos-bandwidth-limit-rule-create limited_1000mbps --max-kbps 1000000 --max-burst-kbps 160000

3. Update the instance's port to assign the policy:
$ neutron port-update --qos-policy limited_1000mbps PORT_UUID

4. Ensure that the QoS rule is applied to the port:
$ /sbin/tc -s qdisc show dev tap47eaf544-39
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 383621004 bytes 262469 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc ingress ffff: parent ffff:fff1 ----------------
 Sent 173850 bytes 1515 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

5. Either:
   1) Execute nova rescue and then nova unrescue for the instance, or
   2) Execute nova reboot (without the --hard parameter).

6. Observe that after the tap interface is recreated during libvirt domain start, the QoS rules are gone:
$ /sbin/tc -s qdisc show dev tap47eaf544-39
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 1537 bytes 19 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

Expected result
===============
QoS rules are applied to the port, as in step 4.
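To make the code-path difference concrete, here is a minimal, self-contained sketch (not actual nova code; the method names mirror nova's libvirt driver, but the bodies are stand-ins) of why QoS survives a hard reboot but not a soft reboot: only the `_create_domain_and_network()` path re-plugs VIFs, which is what lets neutron re-install per-port QoS on the recreated tap device.

```python
class FakeLibvirtDriver:
    """Toy stand-in for nova's libvirt driver, for illustration only."""

    def __init__(self):
        self.vifs_plugged = False  # stands in for neutron re-applying QoS

    def plug_vifs(self, instance, network_info):
        # In real nova this triggers VIF plugging on the host, which is
        # where the neutron L2 agent re-installs QoS rules on the tap.
        self.vifs_plugged = True

    def _create_domain(self, instance):
        # Starts the libvirt domain only; libvirt recreates the tap
        # device, but nothing asks neutron to re-apply QoS.
        return "domain"

    def _create_domain_and_network(self, instance, network_info):
        # Hard reboot / migration path: VIFs are plugged first.
        self.plug_vifs(instance, network_info)
        return self._create_domain(instance)

    def soft_reboot(self, instance):
        # Buggy path from this report: _create_domain() is called
        # directly, so plug_vifs() (and QoS re-application) is skipped.
        return self._create_domain(instance)


hard = FakeLibvirtDriver()
hard._create_domain_and_network("vm", network_info=[])
print(hard.vifs_plugged)  # True: QoS would be re-applied

soft = FakeLibvirtDriver()
soft.soft_reboot("vm")
print(soft.vifs_plugged)  # False: tap comes back without QoS
```

This suggests the fix direction: route unrescue and soft reboot through the same VIF-plugging path (or call plug_vifs() before _create_domain()).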
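For checking steps 4 and 6 programmatically, the following hedged helper sketch (the function name `has_qos_qdisc` is hypothetical, not part of any library) inspects `tc -s qdisc show` output: in this report a port with QoS shows an extra `qdisc ingress` entry beside the default `pfifo_fast` root, while after a soft reboot only `pfifo_fast` remains.

```python
def has_qos_qdisc(tc_output: str) -> bool:
    """Return True if any qdisc beyond the default pfifo_fast root
    (or noqueue) is present, e.g. the ingress qdisc used for policing."""
    for line in tc_output.splitlines():
        if line.startswith("qdisc") and not line.startswith(
                ("qdisc pfifo_fast", "qdisc noqueue")):
            return True
    return False


# Output from step 4 of the report (QoS applied):
with_qos = """\
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 383621004 bytes 262469 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc ingress ffff: parent ffff:fff1 ----------------
 Sent 173850 bytes 1515 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
"""

# Output from step 6 of the report (after soft reboot):
after_reboot = """\
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 1537 bytes 19 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
"""

print(has_qos_qdisc(with_qos))      # True
print(has_qos_qdisc(after_reboot))  # False
```

Note this only detects qdiscs visible on the tap device; with Open vSwitch, some QoS mechanisms are applied elsewhere, so treat it as a heuristic for this specific setup.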
Actual result
=============
QoS rules are gone; the tap interface is not limited:
$ /sbin/tc -s qdisc show dev tap47eaf544-39
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 1537 bytes 19 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

Environment
===========
1. Exact version of OpenStack:
OpenStack Pike
nova 16.1.4
neutron 11.0.5

2. Which networking type did you use?
Neutron with Open vSwitch

** Affects: nova
   Importance: Undecided
       Status: New

** Tags: libvirt
** Summary changed:

- Instances misses neutron QoS on their ports after unrescue and soft reboot
+ Instances miss neutron QoS on their ports after unrescue and soft reboot

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784006

Title:
  Instances miss neutron QoS on their ports after unrescue and soft reboot

Status in OpenStack Compute (nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784006/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp