[Yahoo-eng-team] [Bug 1960758] Re: UEFI libvirt servers can't boot on Ubuntu 20.04 hypervisors with Ussuri/Victoria
This bug was fixed in the package nova - 2:21.2.4-0ubuntu2.6

---
nova (2:21.2.4-0ubuntu2.6) focal; urgency=medium

  * d/p/lp1960758-ubuntu-uefi-loader-path.patch:
    add config option 'ubuntu_libvirt_uefi_loader_path' to restrict
    UEFI loaders to only those shipped/supported in Ubuntu/Ussuri.
    (LP: #1960758)

 -- Mauricio Faria de Oliveira  Tue, 25 Jul 2023 17:34:00 -0300

** Changed in: nova (Ubuntu Focal)
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1960758

Title:
  UEFI libvirt servers can't boot on Ubuntu 20.04 hypervisors with
  Ussuri/Victoria

Status in Ubuntu Cloud Archive: Invalid
Status in Ubuntu Cloud Archive ussuri series: Fix Committed
Status in Ubuntu Cloud Archive victoria series: Fix Released
Status in OpenStack Compute (nova): Invalid
Status in OpenStack Compute (nova) ussuri series: Invalid
Status in OpenStack Compute (nova) victoria series: Invalid
Status in nova package in Ubuntu: Invalid
Status in nova source package in Focal: Fix Released

Bug description:

  Impact:
  ===

  Currently, setting `hw_firmware_type=uefi` may create _unbootable_
  servers on 20.04 hypervisors with Ussuri and Victoria (Wallaby and
  later are OK).

  We should not use the Secure Boot firmware on the 'pc' machine type,
  as 'q35' is _required_ by OVMF firmware if the SMM feature is built in
  (usually the case, to actually secure the SB feature).
  [See comment #6 for research and #7 for test evidence.]

  We should not use the Secure Boot firmware on the 'q35' machine type
  _either_, as it might not work regardless: other libvirt XML options
  such as SMM and S3/S4 disable may be needed for Secure Boot to work,
  but are _not_ configured by OpenStack Ussuri (no SB support).

  Approach:
  ===

  Considering how long Focal/Ussuri have been out there (and may have
  worked with UEFI enabled in some cases), add a config option to
  _opt in_ to actually supported UEFI loaders for nova/libvirt. This
  seems to benefit downstream/Ubuntu more (although other distros might
  be affected).

  Add the config option "ubuntu_libvirt_uefi_loader_path" (disabled by
  default) in the DEFAULT libvirt config section (so it can be set in
  the nova-compute charm's 'config-flags' option).

  Test Plan:
  ===

  $ openstack image set --property hw_firmware_type=uefi $IMAGE
  $ openstack server create --image $IMAGE --flavor $FLAVOR --network $NETWORK uefi-server

  (with patched packages:)

  Set `ubuntu_libvirt_uefi_loader_path = true` in `[DEFAULT]` in
  /etc/nova/nova.conf
  (e.g. `juju config nova-compute config-flags='ubuntu_libvirt_uefi_loader_path=true'`)

  $ openstack server stop uefi-server
  $ openstack server start uefi-server

  - Expected Result:

  The server's libvirt XML uses UEFI _without_ Secure Boot:
  /usr/share/OVMF/OVMF_CODE.fd

  The guest boots, and the console log confirms UEFI mode:

  $ openstack console log show srv | grep -i -e efi -e bios
  ...
  Creating boot entry "Boot0003" with label "ubuntu" for file "\EFI\ubuntu\shimx64.efi"
  ...
  [0.00] efi: EFI v2.70 by EDK II
  [0.00] efi: SMBIOS=0x7fbcd000 ACPI=0x7fbfa000 ACPI 2.0=0x7fbfa014 MEMATTR=0x7eb30018
  [0.00] SMBIOS 2.8 present.
  [0.00] DMI: OpenStack Foundation OpenStack Nova, BIOS 0.0.0 02/06/2015
  ...

  - Actual Result:

  The server's libvirt XML uses UEFI _with_ Secure Boot:
  /usr/share/OVMF/OVMF_CODE.secboot.fd

  The guest doesn't boot; empty console log; qemu-kvm looping at 100% CPU.

  $ openstack console log show srv | grep -i -e efi -e bios
  $ openstack console log show srv | wc -l
  0

  $ juju run --app nova-compute 'top -b -d1 -n5 | grep qemu'
  67205 libvirt+ ... 100.0 1.4 1:18.35 qemu-sy+
  67205 libvirt+ ... 100.0 1.4 1:19.36 qemu-sy+
  67205 libvirt+ ...  99.0 1.4 1:20.36 qemu-sy+
  67205 libvirt+ ... 101.0 1.4 1:21.37 qemu-sy+
  67205 libvirt+ ... 100.0 1.4 1:22.38 qemu-sy+

  Where problems could occur:
  ===

  The changes are opt-in with `ubuntu_libvirt_uefi_loader_path=true`,
  so users are not affected by default.

  Theoretically, regressions would more likely manifest and be contained
  in nova's libvirt driver, when `hw_firmware_type=uefi` is set (not the
  default).

  The expected symptoms of regressions are boot failures (the server
  starts from the OpenStack perspective, but doesn't boot to the
  operating system).

  Other Info:
  ===

  - Hypervisor running Ubuntu 20.04 LTS (Focal)
  - Nova packages from Ussuri (Ubuntu Archive) or Victoria (Cloud Archive).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1960758/+subscriptions
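The opt-in loader selection described above can be sketched roughly as follows. This is an illustrative assumption, not the actual patch: the loader lists, the helper name, and the option handling are made up for the sketch; only the two OVMF paths come from the bug report.

```python
# Loaders a pre-Wallaby nova might try, in order (sketch; not nova's code).
DEFAULT_UEFI_LOADERS = [
    "/usr/share/OVMF/OVMF_CODE.secboot.fd",  # Secure Boot build; needs q35 + SMM
    "/usr/share/OVMF/OVMF_CODE.fd",          # plain UEFI build
]

# With the opt-in restriction, only the loader that Ussuri can actually
# configure correctly (no SMM/S3/S4 handling) remains a candidate.
UBUNTU_SUPPORTED_UEFI_LOADERS = [
    "/usr/share/OVMF/OVMF_CODE.fd",
]


def pick_uefi_loader(ubuntu_libvirt_uefi_loader_path=False):
    """Return the first candidate loader, honoring the opt-in restriction."""
    candidates = (UBUNTU_SUPPORTED_UEFI_LOADERS
                  if ubuntu_libvirt_uefi_loader_path
                  else DEFAULT_UEFI_LOADERS)
    return candidates[0]
```

With the option disabled (the default) the Secure Boot build is still picked first, reproducing the hang; enabling it yields the plain UEFI loader, matching the expected result in the test plan.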
[Yahoo-eng-team] [Bug 1948466] Re: [OVN] Mech driver fails to delete DHCP options during subnet deletion
This bug was fixed in the package neutron - 2:16.4.2-0ubuntu6.3

---
neutron (2:16.4.2-0ubuntu6.3) focal; urgency=medium

  * d/p/check-subnet-in-remove-subnet-dhcp-options.patch:
    Ensure dhcp_options subnet check handles dictionary correctly
    (LP: #1948466).
  * d/p/ovn-fix-untrusted-port-security-enabled-check.patch:
    Fix logic for check that wraps adding of port to drop port group
    (LP: #1939723).

 -- Corey Bryant  Mon, 21 Aug 2023 15:29:46 -0400

** Changed in: neutron (Ubuntu Focal)
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1948466

Title:
  [OVN] Mech driver fails to delete DHCP options during subnet deletion

Status in Ubuntu Cloud Archive: Fix Released
Status in Ubuntu Cloud Archive ussuri series: Fix Committed
Status in neutron: Fix Released
Status in neutron package in Ubuntu: Fix Released
Status in neutron source package in Focal: Fix Released

Bug description:

  == Original Bug Description ==

  Snippet: https://paste.opendev.org/show/810168/

  I can't provide a link to a CI execution; I saw this error in an
  internal CI. I'm still investigating when this could happen.

  == Ubuntu SRU Details ==

  [Impact]
  During subnet deletion, the check in _remove_subnet_dhcp_options()
  results in the following traceback (taken from the pastebin above, in
  case it disappears) if dhcp_options['subnet'] is an empty dictionary:

  ExternalNetworksRBACTestJSON-2078932943-project-admin] Mechanism driver 'ovn' failed in delete_subnet_postcommit: KeyError: 'uuid'
  Traceback (most recent call last):
    File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 482, in _call_on_drivers
      getattr(driver.obj, method_name)(context)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 637, in delete_subnet_postcommit
      self._ovn_client.delete_subnet(context._plugin_context,
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py", line 2103, in delete_subnet
      self._remove_subnet_dhcp_options(subnet_id, txn)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py", line 1971, in _remove_subnet_dhcp_options
      dhcp_options['subnet']['uuid']))
  KeyError: 'uuid'

  The fix ensures this check handles a dictionary correctly.

  [Test Case]
  In case we don't have a recreate for this:
  1) lxc launch ubuntu-daily:focal f1 && lxc exec f1 /bin/bash
  2) sudo add-apt-repository -p proposed
  3) sudo apt install python3-neutron
  4) cd /usr/lib/python3/dist-packages
  5) python3 -m unittest neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriver.test_remove_subnet_dhcp_options_in_ovn_ipv4
  6) re-run the test in step #5 after adding 'pdb.set_trace()' to the
     line before the check in
     neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py;
     this way we can see what dhcp_options['subnet'] is set to, ensure
     the check behaves correctly, and try another run with
     dhcp_options['subnet'] = {}
  7) sudo add-apt-repository -r -p proposed

  [Regression Potential]
  This is a minimal change that is backward compatible with the previous
  check. The new check still handles 'not None' in addition to handling
  an empty dictionary correctly. This has been fixed in Ubuntu Victoria
  packages (and above) since 2022-01-12, and in the upstream
  stable/ussuri branch since 2021-10-25.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948466/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
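The failure mode described in the SRU can be illustrated with a minimal sketch. The function and dict shapes below are illustrative assumptions, not neutron's actual code; only the `dhcp_options['subnet']['uuid']` access pattern comes from the traceback.

```python
def remove_subnet_dhcp_options(dhcp_options):
    """Sketch of the check (illustrative; not the real neutron function).

    Buggy form:  "if dhcp_options['subnet'] is not None:" lets an empty
    dict {} through, and dhcp_options['subnet']['uuid'] then raises
    KeyError: 'uuid' -- exactly the traceback in the bug.

    Fixed form:  a plain truthiness check rejects both None and {},
    which is what "backward compatible with the previous check" means.
    """
    removed = []
    if dhcp_options['subnet']:  # falsy for both None and {}
        removed.append(dhcp_options['subnet']['uuid'])
    return removed
```

The truthiness check is strictly broader than `is not None`, so every input the old check rejected is still rejected, plus the empty-dict case.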
[Yahoo-eng-team] [Bug 1939723] Re: [sru] neutron-ovn-db-sync generates insufficient flow
This bug was fixed in the package neutron - 2:16.4.2-0ubuntu6.3

---
neutron (2:16.4.2-0ubuntu6.3) focal; urgency=medium

  * d/p/check-subnet-in-remove-subnet-dhcp-options.patch:
    Ensure dhcp_options subnet check handles dictionary correctly
    (LP: #1948466).
  * d/p/ovn-fix-untrusted-port-security-enabled-check.patch:
    Fix logic for check that wraps adding of port to drop port group
    (LP: #1939723).

 -- Corey Bryant  Mon, 21 Aug 2023 15:29:46 -0400

** Changed in: neutron (Ubuntu Focal)
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939723

Title:
  [sru] neutron-ovn-db-sync generates insufficient flow

Status in Ubuntu Cloud Archive: Fix Released
Status in Ubuntu Cloud Archive ussuri series: Fix Committed
Status in Ubuntu Cloud Archive victoria series: Fix Released
Status in Ubuntu Cloud Archive wallaby series: Fix Released
Status in Ubuntu Cloud Archive xena series: Fix Released
Status in Ubuntu Cloud Archive yoga series: Fix Released
Status in Ubuntu Cloud Archive zed series: Fix Released
Status in neutron: Fix Released
Status in neutron package in Ubuntu: Fix Released
Status in neutron source package in Focal: Fix Released

Bug description:

  = Original bug description =

  In OpenStack version Victoria, neutron-ovn-db-sync generates an
  insufficient flow for a port with no security group or with port
  security disabled. As a result, the port is not connected to the
  network.

  = Ubuntu SRU details =

  [Impact]
  The neutron-ovn-db-sync tool is used to sync neutron networks and
  ports with the OVN databases. When the tool is run, ports with port
  security disabled are incorrectly added to the drop port group,
  causing all of their traffic to be dropped by default.

  [Test Case]
  - Create a VM
  - Disable port security
  - Remove the NB & SB DBs
  - Run neutron-ovn-db-sync-util to resync from neutron to the NB database:
    neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair
  - Restart ovn-controller
  - Without the fix, the VM with port security disabled loses connectivity

  [Regression Potential]
  This is a simple patch that fixes the logic of an if statement. This
  has been fixed in the victoria+ Ubuntu package versions since
  2022-01-12, and in the upstream stable/ussuri branch since 2021-11-11.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1939723/+subscriptions
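The corrected if-statement logic can be sketched as follows. The helper name and port-dict shape are hypothetical; the sketch only captures the intent stated in the SRU: ports with port security disabled must be kept out of the default-drop port group.

```python
def should_add_to_drop_port_group(port):
    """Sketch of the intended membership rule (not neutron's actual code).

    Only ports WITH port security enabled belong in the drop port group,
    whose ACLs drop traffic by default until security-group ACLs allow
    it. Adding a port-security-disabled port here is the bug: all of its
    traffic gets dropped with no allow rules to rescue it.
    """
    return bool(port.get('port_security_enabled', True))
```

The inverted (or otherwise wrong) condition in the pre-fix code had the effect of including such ports, which is why the resynced VM lost connectivity in the test case above.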
[Yahoo-eng-team] [Bug 2036705] [NEW] A port that is disabled and bound is still ACTIVE with ML2/OVN
Public bug reported:

Issue originally reported to the Octavia project:
https://bugs.launchpad.net/octavia/+bug/2033392

During the failover of a loadbalancer, Octavia disables a port and
waits for its status to be DOWN, but that never happens: the port stays
ACTIVE (this impacts the duration of the failover in Octavia, but also
the availability of the loadbalancer).

When a bound port is disabled, its status is expected to switch to
DOWN. But with ML2/OVN, the port remains ACTIVE.

$ openstack server create --image cirros-0.5.2-x86_64-disk --flavor m1.nano --network public server1
[..]
| id | 7e392799-7a25-4ec6-a0ff-e479b3c37cc6 |
[..]

$ openstack port list --device-id 7e392799-7a25-4ec6-a0ff-e479b3c37cc6
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                           | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| 208c473c-4161-4c3a-ab9e-8444d7bc375f |      | fa:16:3e:85:bc:ac | ip_address='172.24.4.251', subnet_id='9441b590-d9d4-4f8f-b4aa-838736070222'  | ACTIVE |
|                                      |      |                   | ip_address='2001:db8::322', subnet_id='813adce0-21de-44c9-958a-6967441b8623' |        |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+

$ openstack port show -c admin_state_up -c status 208c473c-4161-4c3a-ab9e-8444d7bc375f
+----------------+--------+
| Field          | Value  |
+----------------+--------+
| admin_state_up | UP     |
| status         | ACTIVE |
+----------------+--------+

# Disabling the port
$ openstack port set --disable 208c473c-4161-4c3a-ab9e-8444d7bc375f

$ openstack port show -c admin_state_up -c status 208c473c-4161-4c3a-ab9e-8444d7bc375f
+----------------+--------+
| Field          | Value  |
+----------------+--------+
| admin_state_up | DOWN   |
| status         | ACTIVE |
+----------------+--------+

Folks on #openstack-neutron confirmed that with ML2/OVS, the status is
DOWN when the port is disabled.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036705

Title:
  A port that is disabled and bound is still ACTIVE with ML2/OVN

Status in neutron: New

Bug description:
  Issue originally reported to the Octavia project:
  https://bugs.launchpad.net/octavia/+bug/2033392

  During the failover of a loadbalancer, Octavia disables a port and
  waits for its status to be DOWN, but that never happens: the port
  stays ACTIVE (this impacts the duration of the failover in Octavia,
  but also the availability of the loadbalancer).

  When a bound port is disabled, its status is expected to switch to
  DOWN. But with ML2/OVN, the port remains ACTIVE.
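The status semantics the reporter expects (and which ML2/OVS reportedly implements) can be sketched with a hypothetical helper; this is not neutron code, just the rule the bug describes.

```python
def expected_port_status(admin_state_up, bound):
    """Sketch of the expected semantics (illustrative, not neutron code):
    an administratively disabled port should report DOWN even while it
    is bound. The observed ML2/OVN behavior leaves it ACTIVE instead,
    which is what stalls Octavia's failover wait loop."""
    if not admin_state_up:
        return 'DOWN'
    return 'ACTIVE' if bound else 'DOWN'
```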
[Yahoo-eng-team] [Bug 2036709] [NEW] Extend the OVN plugins to support remote_address_group_id for security group rules
Public bug reported:

Neutron provides address groups that can be used by security group
rules. OVN has address sets, which store sets of IP addresses for use
in ACLs. In OVN scenarios, security group rules are implemented through
ACLs, so an address group in neutron can be mapped to an address set in
OVN. This way, the remote_address_group_id parameter can be supported
by security group rules in OVN scenarios.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036709

Title:
  Extend the OVN plugins to support remote_address_group_id for
  security group rules

Status in neutron: New

Bug description:
  Neutron provides address groups that can be used by security group
  rules. OVN has address sets, which store sets of IP addresses for use
  in ACLs. In OVN scenarios, security group rules are implemented
  through ACLs, so an address group in neutron can be mapped to an
  address set in OVN. This way, the remote_address_group_id parameter
  can be supported by security group rules in OVN scenarios.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036709/+subscriptions
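The proposed mapping could look roughly like the sketch below. The naming scheme and the per-address-family split are assumptions for illustration, not the actual implementation; OVN address sets are single-family, so one neutron address group would plausibly map to one set per family.

```python
def address_group_to_address_sets(address_group):
    """Sketch: map a neutron address group to per-family OVN address
    sets (illustrative names; not the real plugin code)."""
    sets = {'ip4': [], 'ip6': []}
    for cidr in address_group['addresses']:
        # Crude family detection on the address part of the CIDR.
        family = 'ip6' if ':' in cidr.split('/')[0] else 'ip4'
        sets[family].append(cidr)
    return {
        'ovn_ag_%s_ip4' % address_group['id']: sets['ip4'],
        'ovn_ag_%s_ip6' % address_group['id']: sets['ip6'],
    }
```

An ACL generated for a rule with `remote_address_group_id` would then reference the set by name (e.g. `ip4.src == $ovn_ag_<id>_ip4`) instead of expanding every address inline.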
[Yahoo-eng-team] [Bug 2033980] Re: Neutron fails to respawn radvd due to corrupt pid file
This is not a bug in kolla-ansible

** Changed in: kolla-ansible
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033980

Title:
  Neutron fails to respawn radvd due to corrupt pid file

Status in kolla-ansible: Invalid
Status in neutron: In Progress

Bug description:

  **Bug Report**

  What happened:

  I have had issues periodically where radvd seems to die and neutron is
  not able to respawn it. I'm not sure why it dies. In my
  neutron-l3-agent.log, the following error occurs once per minute:

  ```
  2023-09-03 14:37:07.514 16 ERROR neutron.agent.linux.utils [-] Unable to convert value in /var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd
  2023-09-03 14:37:07.514 16 ERROR neutron.agent.linux.external_process [-] radvd for router with uuid ea759c71-0f4d-4be9-a761-83843ce04d9a not found. The process should not have died
  2023-09-03 14:37:07.514 16 WARNING neutron.agent.linux.external_process [-] Respawning radvd for uuid ea759c71-0f4d-4be9-a761-83843ce04d9a
  2023-09-03 14:37:07.514 16 ERROR neutron.agent.linux.utils [-] Unable to convert value in /var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd
  2023-09-03 14:37:07.762 16 ERROR neutron.agent.linux.utils [-] Exit code: 255; Cmd: ['ip', 'netns', 'exec', 'qrouter-ea759c71-0f4d-4be9-a761-83843ce04d9a', 'env', 'PROCESS_TAG=radvd-ea759c71-0f4d-4be9-a761-83843ce04d9a', 'radvd', '-C', '/var/lib/neutron/ra/ea759c71-0f4d-4be9-a761-83843ce04d9a.radvd.conf', '-p', '/var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd', '-m', 'syslog', '-u', 'neutron']; Stdin: ; Stdout: ; Stderr:
  ```

  Inspecting the pid file, it appears to have 2 pids, one on each line:

  ```
  $ docker exec -it neutron_l3_agent cat /var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd
  853
  1161
  ```

  Deleting the file then properly respawns radvd:

  ```
  2023-09-03 14:38:07.515 16 ERROR neutron.agent.linux.external_process [-] radvd for router with uuid ea759c71-0f4d-4be9-a761-83843ce04d9a not found. The process should not have died
  2023-09-03 14:38:07.516 16 WARNING neutron.agent.linux.external_process [-] Respawning radvd for uuid ea759c71-0f4d-4be9-a761-83843ce04d9a
  ```

  What you expected to happen:

  Radvd is respawned without needing manual intervention. Likely what is
  meant to happen is that neutron should overwrite the pid file, whereas
  instead it appends to it. I'm not sure if this is a kolla issue or a
  neutron issue.

  How to reproduce it (minimal and precise):

  Unsure; I'm not sure how radvd ends up dying in the first place. You
  could likely reproduce this by deploying kolla-ansible and then
  manually killing radvd.

  **Environment**:

  * OS (e.g. from /etc/os-release):
    NAME="Rocky Linux"
    VERSION="9.2 (Blue Onyx)"
    ID="rocky"
    ID_LIKE="rhel centos fedora"
    VERSION_ID="9.2"
    PLATFORM_ID="platform:el9"
    PRETTY_NAME="Rocky Linux 9.2 (Blue Onyx)"
    ANSI_COLOR="0;32"
    LOGO="fedora-logo-icon"
    CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
    HOME_URL="https://rockylinux.org/"
    BUG_REPORT_URL="https://bugs.rockylinux.org/"
    SUPPORT_END="2032-05-31"
    ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
    ROCKY_SUPPORT_PRODUCT_VERSION="9.2"
    REDHAT_SUPPORT_PRODUCT="Rocky Linux"
    REDHAT_SUPPORT_PRODUCT_VERSION="9.2"
  * Kernel (e.g. `uname -a`):
    Linux lon1 5.14.0-284.25.1.el9_2.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Aug 2 14:53:30 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  * Docker version if applicable (e.g. `docker version`):
    Client: Docker Engine - Community
      Version: 24.0.5
      API version: 1.43
      Go version: go1.20.6
      Git commit: ced0996
      Built: Fri Jul 21 20:36:54 2023
      OS/Arch: linux/amd64
      Context: default
    Server: Docker Engine - Community
      Engine:
        Version: 24.0.5
        API version: 1.43 (minimum version 1.12)
        Go version: go1.20.6
        Git commit: a61e2b4
        Built: Fri Jul 21 20:35:17 2023
        OS/Arch: linux/amd64
        Experimental: false
      containerd:
        Version: 1.6.22
        GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
      runc:
        Version: 1.1.8
        GitCommit: v1.1.8-0-g82f18fe
      docker-init:
        Version: 0.19.0
        GitCommit: de40ad0
  * Kolla-Ansible version (e.g. `git head or tag or stable branch` or pip package version if using release): 16.1.0 (stable/2023.1)
  * Docker image Install type (source/binary): Default installed by kolla-ansible
  * Docker image distribution: rocky
  * Are you using official images from Docker Hub or self built? official
  * If self built - Kolla version and environment used to build: not applicable
  * Share your inventory file, globals.yml and other configuration files if
[Yahoo-eng-team] [Bug 2036734] [NEW] Add support for other metadef tag operations
Public bug reported:

Add support for other metadef tag operations.

The Image Metadata Tag is not supported in the SDK currently. In this
patch, I added operations for image metadata tags.

- Add functional test
- Add unit test

Co-authored-by: Chaehee Kang

** Affects: glance
   Importance: Undecided
   Assignee: Chaehee Kang (ellin0817)
   Status: New

** Changed in: glance
   Assignee: (unassigned) => Chaehee Kang (ellin0817)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2036734

Title:
  Add support for other metadef tag operations

Status in Glance: New

Bug description:
  Add support for other metadef tag operations.

  The Image Metadata Tag is not supported in the SDK currently. In this
  patch, I added operations for image metadata tags.

  - Add functional test
  - Add unit test

  Co-authored-by: Chaehee Kang

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2036734/+subscriptions
[Yahoo-eng-team] [Bug 1948466] Re: [OVN] Mech driver fails to delete DHCP options during subnet deletion
This bug was fixed in the package neutron - 2:16.4.2-0ubuntu6.3~cloud0

---
neutron (2:16.4.2-0ubuntu6.3~cloud0) bionic-ussuri; urgency=medium

  * New update for the Ubuntu Cloud Archive.

neutron (2:16.4.2-0ubuntu6.3) focal; urgency=medium

  * d/p/check-subnet-in-remove-subnet-dhcp-options.patch:
    Ensure dhcp_options subnet check handles dictionary correctly
    (LP: #1948466).
  * d/p/ovn-fix-untrusted-port-security-enabled-check.patch:
    Fix logic for check that wraps adding of port to drop port group
    (LP: #1939723).

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1948466

Title:
  [OVN] Mech driver fails to delete DHCP options during subnet deletion

Status in Ubuntu Cloud Archive: Fix Released
Status in Ubuntu Cloud Archive ussuri series: Fix Released
Status in neutron: Fix Released
Status in neutron package in Ubuntu: Fix Released
Status in neutron source package in Focal: Fix Released

Bug description:

  == Original Bug Description ==

  Snippet: https://paste.opendev.org/show/810168/

  I can't provide a link to a CI execution; I saw this error in an
  internal CI. I'm still investigating when this could happen.

  == Ubuntu SRU Details ==

  [Impact]
  During subnet deletion, the check in _remove_subnet_dhcp_options()
  results in the following traceback (taken from the pastebin above, in
  case it disappears) if dhcp_options['subnet'] is an empty dictionary:

  ExternalNetworksRBACTestJSON-2078932943-project-admin] Mechanism driver 'ovn' failed in delete_subnet_postcommit: KeyError: 'uuid'
  Traceback (most recent call last):
    File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 482, in _call_on_drivers
      getattr(driver.obj, method_name)(context)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 637, in delete_subnet_postcommit
      self._ovn_client.delete_subnet(context._plugin_context,
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py", line 2103, in delete_subnet
      self._remove_subnet_dhcp_options(subnet_id, txn)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py", line 1971, in _remove_subnet_dhcp_options
      dhcp_options['subnet']['uuid']))
  KeyError: 'uuid'

  The fix ensures this check handles a dictionary correctly.

  [Test Case]
  In case we don't have a recreate for this:
  1) lxc launch ubuntu-daily:focal f1 && lxc exec f1 /bin/bash
  2) sudo add-apt-repository -p proposed
  3) sudo apt install python3-neutron
  4) cd /usr/lib/python3/dist-packages
  5) python3 -m unittest neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriver.test_remove_subnet_dhcp_options_in_ovn_ipv4
  6) re-run the test in step #5 after adding 'pdb.set_trace()' to the
     line before the check in
     neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py;
     this way we can see what dhcp_options['subnet'] is set to, ensure
     the check behaves correctly, and try another run with
     dhcp_options['subnet'] = {}
  7) sudo add-apt-repository -r -p proposed

  [Regression Potential]
  This is a minimal change that is backward compatible with the previous
  check. The new check still handles 'not None' in addition to handling
  an empty dictionary correctly. This has been fixed in Ubuntu Victoria
  packages (and above) since 2022-01-12, and in the upstream
  stable/ussuri branch since 2021-10-25.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1948466/+subscriptions
[Yahoo-eng-team] [Bug 1939723] Re: [sru] neutron-ovn-db-sync generates insufficient flow
This bug was fixed in the package neutron - 2:16.4.2-0ubuntu6.3~cloud0

---
neutron (2:16.4.2-0ubuntu6.3~cloud0) bionic-ussuri; urgency=medium

  * New update for the Ubuntu Cloud Archive.

neutron (2:16.4.2-0ubuntu6.3) focal; urgency=medium

  * d/p/check-subnet-in-remove-subnet-dhcp-options.patch:
    Ensure dhcp_options subnet check handles dictionary correctly
    (LP: #1948466).
  * d/p/ovn-fix-untrusted-port-security-enabled-check.patch:
    Fix logic for check that wraps adding of port to drop port group
    (LP: #1939723).

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939723

Title:
  [sru] neutron-ovn-db-sync generates insufficient flow

Status in Ubuntu Cloud Archive: Fix Released
Status in Ubuntu Cloud Archive ussuri series: Fix Released
Status in Ubuntu Cloud Archive victoria series: Fix Released
Status in Ubuntu Cloud Archive wallaby series: Fix Released
Status in Ubuntu Cloud Archive xena series: Fix Released
Status in Ubuntu Cloud Archive yoga series: Fix Released
Status in Ubuntu Cloud Archive zed series: Fix Released
Status in neutron: Fix Released
Status in neutron package in Ubuntu: Fix Released
Status in neutron source package in Focal: Fix Released

Bug description:

  = Original bug description =

  In OpenStack version Victoria, neutron-ovn-db-sync generates an
  insufficient flow for a port with no security group or with port
  security disabled. As a result, the port is not connected to the
  network.

  = Ubuntu SRU details =

  [Impact]
  The neutron-ovn-db-sync tool is used to sync neutron networks and
  ports with the OVN databases. When the tool is run, ports with port
  security disabled are incorrectly added to the drop port group,
  causing all of their traffic to be dropped by default.

  [Test Case]
  - Create a VM
  - Disable port security
  - Remove the NB & SB DBs
  - Run neutron-ovn-db-sync-util to resync from neutron to the NB database:
    neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair
  - Restart ovn-controller
  - Without the fix, the VM with port security disabled loses connectivity

  [Regression Potential]
  This is a simple patch that fixes the logic of an if statement. This
  has been fixed in the victoria+ Ubuntu package versions since
  2022-01-12, and in the upstream stable/ussuri branch since 2021-11-11.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1939723/+subscriptions
[Yahoo-eng-team] [Bug 2036763] [NEW] [pep8] Pylint error W0105 (pointless-string-statement) in random CI executions
Public bug reported:

This error has been seen in [1].

Logs: https://zuul.opendev.org/t/openstack/build/1c542d5ac7b1433e82e84e52737461b2
Snippet: https://paste.opendev.org/show/bLkR97YQEzEKBeCmUxkt/

[1] https://review.opendev.org/c/openstack/neutron/+/882832

** Affects: neutron
   Importance: Medium
   Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036763

Title:
  [pep8] Pylint error W0105 (pointless-string-statement) in random CI
  executions

Status in neutron: New

Bug description:
  This error has been seen in [1].

  Logs: https://zuul.opendev.org/t/openstack/build/1c542d5ac7b1433e82e84e52737461b2
  Snippet: https://paste.opendev.org/show/bLkR97YQEzEKBeCmUxkt/

  [1] https://review.opendev.org/c/openstack/neutron/+/882832

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036763/+subscriptions
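For context, pylint's W0105 fires on a bare string expression that it does not recognize as a docstring. A minimal example of code that triggers it (and of the accepted docstring form):

```python
CONST = 1
"""This string is meant as documentation for CONST, but to pylint it is
just an expression statement with no effect, so it raises W0105
(pointless-string-statement)."""


def documented():
    """A real docstring in the first-statement position is fine."""
    return CONST
```

Both forms execute identically at runtime; the warning is purely stylistic, which is why it can come and go with linter-version or plugin-loading differences across CI runs.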
[Yahoo-eng-team] [Bug 2036867] [NEW] refactor test: use project id as constant variable in all places
Public bug reported:

This is not a bug as such: the same PROJECT_ID constant is defined in
many places, e.g.:

  fixtures/nova.py:75:PROJECT_ID = '6f70656e737461636b20342065766572'
  functional/api_samples_test_base.py:25:PROJECT_ID = "6f70656e737461636b20342065766572"

For the full list, grep the tests for 6f70656e737461636b20342065766572.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: low-hanging-fruit testing

** Tags added: low-hanging-fruit testing

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2036867

Title:
  refactor test: use project id as constant variable in all places

Status in OpenStack Compute (nova): New

Bug description:
  This is not a bug as such: the same PROJECT_ID constant is defined in
  many places, e.g.:

    fixtures/nova.py:75:PROJECT_ID = '6f70656e737461636b20342065766572'
    functional/api_samples_test_base.py:25:PROJECT_ID = "6f70656e737461636b20342065766572"

  For the full list, grep the tests for 6f70656e737461636b20342065766572.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2036867/+subscriptions
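The suggested refactor amounts to defining the constant once in a shared module and importing it everywhere else. The module path below is hypothetical; only the literal itself comes from the report.

```python
# nova/tests/constants.py (hypothetical location): define once...
PROJECT_ID = '6f70656e737461636b20342065766572'

# ...then other test modules would do:
#   from nova.tests.constants import PROJECT_ID
# instead of re-defining the literal in each file.

# Incidentally, the literal is the hex encoding of an ASCII easter egg:
DECODED = bytes.fromhex(PROJECT_ID).decode('ascii')  # 'openstack 4 ever'
```

Centralizing it means a future change (or a second project id for multi-tenant tests) touches one file instead of every grep hit.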