[Yahoo-eng-team] [Bug 2035168] [NEW] Remaining db migrations for unmaintained Nuage plugin

2023-09-11 Thread Takashi Kajinami
Public bug reported:

(This is not a functional bug but is a potential cleanup opportunity)

The latest master still contains database migration code for tables used
by the Nuage plugin.

https://github.com/openstack/neutron/tree/8cba9a2ee86cb3b65645674ef315c14cfb261143/neutron/db/migration/alembic_migrations
 -> nuage_init_opts.py

However, I noticed the Nuage plugin is no longer maintained.

https://github.com/nuagenetworks/nuage-openstack-neutron/tree/master

AFAIU we can't remove these tables, because plugins split out from the
neutron repo early on rely on tables/databases created by neutron, but it's
no longer useful to maintain them when the plugin itself is unmaintained.

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Remaining db migrations for Nuage plugin 
+ Remaining db migrations for unmaintained Nuage plugin

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035168

Title:
  Remaining db migrations for unmaintained Nuage plugin

Status in neutron:
  New

Bug description:
  (This is not a functional bug but is a potential cleanup opportunity)

  The latest master still contains database migration code for tables
  used by the Nuage plugin.

  
https://github.com/openstack/neutron/tree/8cba9a2ee86cb3b65645674ef315c14cfb261143/neutron/db/migration/alembic_migrations
   -> nuage_init_opts.py

  However, I noticed the Nuage plugin is no longer maintained.

  https://github.com/nuagenetworks/nuage-openstack-neutron/tree/master

  AFAIU we can't remove these tables, because plugins split out from the
  neutron repo early on rely on tables/databases created by neutron, but it's
  no longer useful to maintain them when the plugin itself is unmaintained.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035168/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1960758] Re: UEFI libvirt servers can't boot on Ubuntu 20.04 hypervisors with Ussuri/Victoria

2023-09-11 Thread Corey Bryant
This bug was fixed in the package nova - 2:22.4.0-0ubuntu1~cloud5
---

 nova (2:22.4.0-0ubuntu1~cloud5) focal-victoria; urgency=medium
 .
   * d/p/lp1960758-ubuntu-uefi-loader-path.patch: add config option
 'ubuntu_libvirt_uefi_loader_path' to restrict UEFI loaders to
 only those shipped/supported in Ubuntu/Ussuri. (LP: #1960758)


** Changed in: cloud-archive/victoria
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1960758

Title:
  UEFI libvirt servers can't boot on Ubuntu 20.04 hypervisors with
  Ussuri/Victoria

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive ussuri series:
  Fix Committed
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) ussuri series:
  Invalid
Status in OpenStack Compute (nova) victoria series:
  Invalid
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Focal:
  Fix Committed

Bug description:
  Impact:
  ===

  Currently, setting `hw_firmware_type=uefi` may create
  _unbootable_ servers on 20.04 hypervisors with Ussuri
  and Victoria (Wallaby and later are OK).

  We should not use the Secure Boot firmware on the 'pc'
  machine type, as 'q35' is _required_ by the OVMF firmware
  if the SMM feature is built in (usually the case, to actually
  secure the SB feature).
  [See comment #6 for research and #7 for test evidence.]

  We should not use the Secure Boot firmware on the 'q35'
  machine type _either_, as it might not work regardless:
  other libvirt XML options, such as SMM and S3/S4 disable,
  may be needed for Secure Boot to work, but are _not_
  configured by OpenStack Ussuri (which has no SB support).
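The intended opt-in behaviour can be sketched roughly as follows. This is a minimal illustration only: `pick_uefi_loader`, its signature, and the hard-coded loader paths are hypothetical stand-ins, not nova's actual code (the real change is the distro patch named in the changelog above).

```python
# Hypothetical sketch of the loader choice, NOT nova's implementation.
SECBOOT_LOADER = "/usr/share/OVMF/OVMF_CODE.secboot.fd"
PLAIN_LOADER = "/usr/share/OVMF/OVMF_CODE.fd"


def pick_uefi_loader(machine_type: str, restrict_to_ubuntu_paths: bool) -> str:
    """Choose an OVMF loader for a UEFI guest.

    With the opt-in flag enabled, always avoid the Secure Boot build:
    it requires 'q35' plus SMM/S3-S4 libvirt XML options that Ussuri
    never emits, so it cannot boot on either 'pc' or 'q35' here.
    """
    if restrict_to_ubuntu_paths:
        return PLAIN_LOADER
    # Legacy behaviour: the first loader found wins, which on Focal is
    # the Secure Boot build and produces an unbootable guest.
    return SECBOOT_LOADER
```

With the flag on, both machine types get the plain (non-Secure-Boot) loader, matching the "Expected Result" in the test plan below.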

  
  Approach:
  ===

  Considering how long Focal/Ussuri have been out there
  (and may have worked with UEFI enabled in some cases),
  add a config option to _opt in_ to the actually supported
  UEFI loaders for nova/libvirt.

  Since this seems to benefit downstream/Ubuntu the most
  (although other distros might be affected), add the config
  option "ubuntu_libvirt_uefi_loader_path" (disabled by default)
  in the DEFAULT libvirt config section (so it can be set
  in the nova-compute charm's 'config-flags' option).

  
  Test Plan:
  ===

  $ openstack image set --property hw_firmware_type=uefi $IMAGE
  $ openstack server create --image $IMAGE --flavor $FLAVOR --network $NETWORK uefi-server

  (with patched packages:)
  Set `ubuntu_libvirt_uefi_loader_path = true` in `[DEFAULT]` in /etc/nova/nova.conf
  (e.g. `juju config nova-compute config-flags='ubuntu_libvirt_uefi_loader_path=true'`)
  $ openstack server stop uefi-server
  $ openstack server start uefi-server

  - Expected Result:

  The server's libvirt XML uses UEFI _without_ Secure Boot.

  /usr/share/OVMF/OVMF_CODE.fd

  The guest boots, and console log confirms UEFI mode:

  $ openstack console log show srv | grep -i -e efi -e bios
  ...
  Creating boot entry "Boot0003" with label "ubuntu" for file "\EFI\ubuntu\shimx64.efi"
  ...
  [0.00] efi: EFI v2.70 by EDK II
  [0.00] efi:  SMBIOS=0x7fbcd000  ACPI=0x7fbfa000  ACPI 2.0=0x7fbfa014  MEMATTR=0x7eb30018
  [0.00] SMBIOS 2.8 present.
  [0.00] DMI: OpenStack Foundation OpenStack Nova, BIOS 0.0.0 02/06/2015
  ...

  - Actual Result:

  The server's libvirt XML uses UEFI _with_ Secure Boot.

  /usr/share/OVMF/OVMF_CODE.secboot.fd

  The guest doesn't boot; empty console log; qemu-kvm looping at 100%
  CPU.

  $ openstack console log show srv | grep -i -e efi -e bios
  $ openstack console log show srv | wc -l
  0

  $ juju run --app nova-compute 'top -b -d1 -n5 | grep qemu'
    67205 libvirt+  ... 100.0   1.4   1:18.35 qemu-sy+
    67205 libvirt+  ... 100.0   1.4   1:19.36 qemu-sy+
    67205 libvirt+  ...  99.0   1.4   1:20.36 qemu-sy+
    67205 libvirt+  ... 101.0   1.4   1:21.37 qemu-sy+
    67205 libvirt+  ... 100.0   1.4   1:22.38 qemu-sy+

  
  Where problems could occur:
  ===

  The changes are opt-in with `ubuntu_libvirt_uefi_loader_path=true`,
  so users are not affected by default.

  Theoretically, regressions would most likely manifest, and be contained,
  in nova's libvirt driver, when `hw_firmware_type=uefi` is set (not the default).

  The expected symptoms of regressions are boot failures (server starts
  from openstack perspective, but doesn't boot to the operating system).

  
  Other Info:
  ===

  - Hypervisor running Ubuntu 20.04 LTS (Focal)
  - Nova packages from Ussuri (Ubuntu Archive) or Victoria (Cloud Archive).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1960758/+subscriptions



[Yahoo-eng-team] [Bug 1983863] Re: Can't log within tpool.execute

2023-09-11 Thread sean mooney
Adding nova, as the change to fix this is breaking our unit tests;
https://review.opendev.org/c/openstack/nova/+/894538 corrects this.
Setting this as Critical, as it is blocking the bump of upper constraints to
include oslo.log 5.3.0.

I don't think there is any real-world impact beyond that.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1983863

Title:
  Can't log within tpool.execute

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.log:
  Fix Released

Bug description:
  There is a bug in eventlet where logging within a native thread can
  lead to a deadlock situation:
  https://github.com/eventlet/eventlet/issues/432

  When encountering this issue, some OpenStack projects using oslo.log,
  e.g. Cinder, resolve it by removing any logging within native threads.

  There is actually a better approach.  The Swift team came up with a
  solution a long time ago, and it would be great if oslo.log could use
  this workaround automatically:
  
https://opendev.org/openstack/swift/commit/69c715c505cf9e5df29dc1dff2fa1a4847471cb6
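The gist of Swift's workaround is to replace the handler's lock with a mutex that blocks on a pipe read, which eventlet's hub can schedule around, instead of a plain `threading.Lock` that a native thread can hold across a green switch. The sketch below is a heavily simplified stand-in, not Swift's or oslo.log's actual `PipeMutex`:

```python
# Simplified sketch of a pipe-based mutex (assumption: illustrative only,
# not the real Swift/oslo.log implementation).
import logging
import os


class PipeMutex:
    """Mutex backed by a pipe: one token byte present == unlocked.

    Acquiring consumes the token with a blocking read; releasing writes
    it back. Under eventlet, the read can be made hub-friendly, avoiding
    the deadlock a plain Lock causes when held by a native thread.
    """

    def __init__(self):
        self.rfd, self.wfd = os.pipe()
        os.write(self.wfd, b".")          # start unlocked

    def acquire(self, blocking=True, timeout=-1):
        os.read(self.rfd, 1)              # take the token; blocks if held
        return True

    def release(self):
        os.write(self.wfd, b".")          # put the token back

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()


# Swap the sketch in for a handler's default threading.RLock:
handler = logging.StreamHandler()
handler.lock = PipeMutex()
```

`logging.Handler.acquire()` simply calls `self.lock.acquire()`, so any object with `acquire`/`release` can be substituted this way.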

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1983863/+subscriptions




[Yahoo-eng-team] [Bug 2033683] Re: openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore', '-n']

2023-09-11 Thread Takashi Kajinami
We are facing this issue in Puppet OpenStack CI, which uses RDO stable/yoga and
c8s, so this looks like a legit bug in iptables.
I don't think this is related to TripleO either, so I'll close it there as Invalid.

** Changed in: tripleo
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033683

Title:
  openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore',
  '-n']

Status in neutron:
  Invalid
Status in tripleo:
  Invalid

Bug description:
  Description
  ===
  Wallaby deployment via undercloud/overcloud recently started to fail during
  overcloud node provisioning.
  Neutron constantly reports that it is unable to update iptables, which in
  turn causes the bare metal node to fail to boot from PXE.
  From review, it seems that setting /usr/bin/update-alternatives to legacy
  fails, since the neutron user doesn't have sudo rights for that path.
  The sudoers info shows the neutron user is allowed to run the following
  subset of commands:
  ...
  (root) NOPASSWD: /usr/bin/update-alternatives --set iptables 
/usr/sbin/iptables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --set ip6tables 
/usr/sbin/ip6tables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --auto iptables
  (root) NOPASSWD: /usr/bin/update-alternatives --auto ip6tables

  But the issue is that the command isn't found, as it was moved to
  /usr/sbin/update-alternatives.
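The mismatch can be illustrated with a small shell sketch. Both paths are hard-coded here for illustration; on a real node you would compare the sudoers entries against `command -v update-alternatives`:

```shell
# Sketch: sudoers permits one path, but the distro ships another.
ALLOWED="/usr/bin/update-alternatives"   # path the sudoers entries permit
ACTUAL="/usr/sbin/update-alternatives"   # path on CentOS 9 Stream (assumed)
if [ "$ALLOWED" != "$ACTUAL" ]; then
    echo "sudoers path mismatch: allowed $ALLOWED, binary at $ACTUAL"
fi
```

Because sudoers matches on the full command path, the rule for /usr/bin never applies to the binary at /usr/sbin, so sudo denies the call.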

  Steps to reproduce
  ==
  1. Deploy undercloud
  2. Deploy networks and VIP
  3. Add and introspect a node
  4. Execute overcloud node provision ..., which will time out

  Expected result
  ===
  Successful overcloud node baremetal provisioning

  Logs & Configs
  ==
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-18d52177-9c93-401c-b97d-0334e488a257 - - - - -] Error while processing VIF 
ports: neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: 
['iptables-restore', '-n']; Stdin: # Generated by iptables_manager

  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent COMMIT
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent # Completed by 
iptables_manager
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ; Stdout: ; 
Stderr: iptables-restore: line 23 failed

  Environment
  ===
  Centos 9 Stream and undercloud deployment tool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033683/+subscriptions




[Yahoo-eng-team] [Bug 2035095] [NEW] CI:test_live_migration_with_trunk failing frequently on nova-live-migration job

2023-09-11 Thread Amit Uniyal
Public bug reported:

tests:
https://9d880f4dac5b6d1509a3-d490a441310dc4e25f1212d07e075dda.ssl.cf1.rackcdn.com/893744/1/check/nova-live-migration/8e97128/testr_results.html
https://291f4451bebc670e507b-a999ae1d5baedde86711d4f3bf719537.ssl.cf1.rackcdn.com/873648/23/check/nova-live-migration/8634c7c/testr_results.html
https://d6c736fcc9a860f59461-fbb3a5107d50e8d0a9c9940ac7f8a1de.ssl.cf5.rackcdn.com/894288/1/check/nova-live-migration/acaf4a4/testr_results.html
https://f0b27972d169a4e6104a-40416aec901d1e1b0fbe6fedfed92f1f.ssl.cf5.rackcdn.com/877446/22/check/nova-live-migration/975e3fc/testr_results.html
https://e19c202f51d149771e8a-51988972a6d6f0f30aafba2bfab9c470.ssl.cf2.rackcdn.com/891289/3/check/nova-live-migration/9812cc6/testr_results.html


Error backtrace:

` 
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in 
wrapper
return func(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
return f(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", 
line 292, in test_live_migration_with_trunk
self.assertTrue(
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
`

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2035095

Title:
  CI:test_live_migration_with_trunk  failing frequently on nova-live-
  migration job

Status in OpenStack Compute (nova):
  New

Bug description:
  tests:
  
https://9d880f4dac5b6d1509a3-d490a441310dc4e25f1212d07e075dda.ssl.cf1.rackcdn.com/893744/1/check/nova-live-migration/8e97128/testr_results.html
  
https://291f4451bebc670e507b-a999ae1d5baedde86711d4f3bf719537.ssl.cf1.rackcdn.com/873648/23/check/nova-live-migration/8634c7c/testr_results.html
  
https://d6c736fcc9a860f59461-fbb3a5107d50e8d0a9c9940ac7f8a1de.ssl.cf5.rackcdn.com/894288/1/check/nova-live-migration/acaf4a4/testr_results.html
  
https://f0b27972d169a4e6104a-40416aec901d1e1b0fbe6fedfed92f1f.ssl.cf5.rackcdn.com/877446/22/check/nova-live-migration/975e3fc/testr_results.html
  
https://e19c202f51d149771e8a-51988972a6d6f0f30aafba2bfab9c470.ssl.cf2.rackcdn.com/891289/3/check/nova-live-migration/9812cc6/testr_results.html

  
  Error backtrace:

  ` 
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in 
wrapper
  return func(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", 
line 292, in test_live_migration_with_trunk
  self.assertTrue(
File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true
  `

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2035095/+subscriptions




[Yahoo-eng-team] [Bug 2034703] Re: Kolla Ansible- Global.yaml file

2023-09-11 Thread Akihiro Motoki
This is a bug tracker for horizon. It is not a place to ask questions to 
kolla-ansible.
Marking this as Invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2034703

Title:
  Kolla Ansible- Global.yaml file

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Hi. Can anyone guide me on how to configure the kolla-ansible global.yaml
  file correctly, so that all the essential services of OpenStack are
  installed and in a running state?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2034703/+subscriptions




[Yahoo-eng-team] [Bug 1928764] Re: Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent

2023-09-11 Thread Slawek Kaplonski
This issue is still not resolved definitively. It happens pretty often in
the CI jobs, see
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%20%22bug%201928764%22'),sort:!())

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928764

Title:
  Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing
  often with LB agent

Status in neutron:
  Confirmed

Bug description:
  It seems that test
  
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart
  in various LB scenarios (flat, vxlan network) are failing recently
  pretty often.

  Examples of failures:

  
https://09f8e4e92bfb8d2ac89d-b41143eab52d80358d8555f964e9341b.ssl.cf5.rackcdn.com/670611/13/check/neutron-fullstack-with-uwsgi/8f51833/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
  
https://0603beb4ddbd36de1165-42644bdefd5590a8f7e4e2e8a8a4112f.ssl.cf5.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/7640987/testr_results.html
  
https://e978bdcfc0235dcd9417-6560bc3b6382c1d289b358872777ca09.ssl.cf1.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/779913e/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0cb/789648/5/check/neutron-fullstack-with-uwsgi/0cb6d65/testr_results.html

  Stacktrace:

  ft1.1: 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(LB,Flat
 network)testtools.testresult.real._StringException: Traceback (most recent 
call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_connectivity.py",
 line 236, in test_l2_agent_restart
  self._assert_ping_during_agents_restart(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/base.py", 
line 123, in _assert_ping_during_agents_restart
  common_utils.wait_until_true(
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
  next(self.gen)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 147, in async_ping
  f.result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
  return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in 
__get_result
  raise self._exception
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
  result = self.fn(*self.args, **self.kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 128, in assert_async_ping
  ns_ip_wrapper.netns.execute(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 718, in execute
  return utils.execute(cmd, check_exit_code=check_exit_code,
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/utils.py", 
line 156, in execute
  raise exceptions.ProcessExecutionError(msg,
  neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: ['ip', 
'netns', 'exec', 'test-af70cf3a-c531-4fdf-ab4c-31cc69cc2c56', 'ping', '-W', 2, 
'-c', '1', '20.0.0.212']; Stdin: ; Stdout: PING 20.0.0.212 (20.0.0.212) 56(84) 
bytes of data.

  --- 20.0.0.212 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  ; Stderr:
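The pattern that raises here — poll a condition until a timeout, with the ping run asynchronously — can be sketched roughly as below. These are simplified stand-ins for neutron's actual `common_utils.wait_until_true` and `assert_async_ping` helpers, not their real implementations:

```python
# Simplified stand-ins for the fullstack test helpers (illustrative only).
import subprocess
import time


def wait_until_true(predicate, timeout=10.0, sleep=0.5):
    """Poll predicate until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(sleep)
    raise TimeoutError("condition not met within %s seconds" % timeout)


def ping_ok(ip):
    """Single ping, mirroring the 'ping -W 2 -c 1 <ip>' from the failure."""
    return subprocess.run(
        ["ping", "-W", "2", "-c", "1", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0
```

In the fullstack test, a failed ping inside the polled window surfaces as the `ProcessExecutionError` shown above rather than a plain timeout.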


  I checked linuxbridge-agent logs (2 cases) and I found there error
  like below:

  2021-05-13 15:46:07.721 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, ()) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
  2021-05-13 15:46:07.725 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, None) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
  2021-05-13 15:46:07.728 96421 DEBUG oslo.privsep.daemon [-] privsep: 
Exception during request[139960964907248]: Network interface brqa235fa8c-09 not 
found in namespace None. _process_cmd