[Yahoo-eng-team] [Bug 1966192] Re: cloud-init imports python-babel-localedata as dependency

2022-05-27 Thread Launchpad Bug Tracker
[Expired for cloud-init because there has been no activity for 60 days.]

** Changed in: cloud-init
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1966192

Title:
  cloud-init imports python-babel-localedata as dependency

Status in cloud-init:
  Expired

Bug description:
  cloud-init pulls in python-babel-localedata as a dependency in jammy.
  This is a relatively big dependency (5 MB compressed, 27 MB
  uncompressed) that impacts the size of the core22 snap. The package
  is pulled in by the jinja module, apparently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1966192/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1966565] Re: cloud-init warning using Linode

2022-05-27 Thread Launchpad Bug Tracker
[Expired for cloud-init because there has been no activity for 60 days.]

** Changed in: cloud-init
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1966565

Title:
  cloud-init warning using Linode

Status in cloud-init:
  Expired

Bug description:
  I got this when logging in to my newly created VM from linode.com

  **************************************************************************
  # A new feature in cloud-init identified possible datasources for
  # this system as:
  #   []
  # However, the datasource used was: None
  #
  # In the future, cloud-init will only attempt to use datasources that
  # are identified or specifically configured.
  # For more information see
  #   https://bugs.launchpad.net/bugs/1669675
  #
  # If you are seeing this message, please file a bug against
  # cloud-init at
  #    https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
  # Make sure to include the cloud provider your instance is
  # running on.
  #
  # After you have filed a bug, you can disable this warning by launching
  # your instance with the cloud-config below, or putting that content
  # into /etc/cloud/cloud.cfg.d/99-warnings.cfg
  #
  # #cloud-config
  # warnings:
  #   dsid_missing_source: off
  **************************************************************************
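
  For reference, the cloud-config embedded in the box above, as it would
  be written to /etc/cloud/cloud.cfg.d/99-warnings.cfg:

  #cloud-config
  warnings:
    dsid_missing_source: off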

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1966565/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973487] Re: [RFE] Allow setting --dst-port for all port based protocols at once

2022-05-27 Thread Lajos Katona
We discussed the proposal today on the drivers meeting, see the logs:
https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-05-27-14.00.log.html#l-14

The decision was to keep this functionality on the client side, as
implementing it in Neutron could introduce complications; iptables adds
such rules one by one anyway.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973487

Title:
  [RFE] Allow setting --dst-port for all port based protocols at once

Status in neutron:
  Won't Fix

Bug description:
  Currently, creating a security group rule[0] with a --dst-port argument
  requires specifying a protocol that supports ports [1]. If a user
  wants to set a security rule for another protocol in this group, the
  same command has to be issued again. This RFE is a simple "ask":
  would it be worth adding a new --protocol argument value that applies
  to all L4 protocols at once? For example, a CLI command could look
  something like this:

  openstack security group rule create --ingress --dst-port 53:53
  --protocol all_L4_protocols 

  Side note: specifying "--protocol any" does not work, but that is
  expected.

  The only benefit of this RFE would be to reduce the number of commands
  needed to open up ports across different L4 protocols.
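
  Given the drivers-meeting decision above to keep this on the client
  side, here is a minimal client-side sketch of the same effect, assuming
  openstacksdk; the security group name is illustrative and the protocol
  list mirrors the port-based protocols referenced in [1]:

  ```
  # Hedged sketch: open port 53 for every port-based L4 protocol in one
  # loop, instead of a hypothetical --protocol all_L4_protocols flag.
  import openstack

  conn = openstack.connect()  # credentials from clouds.yaml / OS_* env vars
  sg = conn.network.find_security_group("my-secgroup")  # illustrative name

  for proto in ("tcp", "udp", "dccp", "sctp", "udplite"):
      conn.network.create_security_group_rule(
          security_group_id=sg.id,
          direction="ingress",
          protocol=proto,
          port_range_min=53,
          port_range_max=53,
      )
  ```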

  
  [0] 
https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/security-group-rule.html#security-group-rule-create
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/common/_constants.py#L23-L29

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973487/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1965297] Re: l3ha don't set backup qg ports down

2022-05-27 Thread Corey Bryant
** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/xena
   Status: In Progress => Triaged

** Changed in: cloud-archive/wallaby
   Importance: Undecided => High

** Changed in: cloud-archive/wallaby
   Status: New => Triaged

** Changed in: cloud-archive/victoria
   Importance: Undecided => High

** Changed in: cloud-archive/victoria
   Status: New => Triaged

** Changed in: cloud-archive/ussuri
   Importance: Undecided => High

** Changed in: cloud-archive/ussuri
   Status: New => Triaged

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Focal)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Focal)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1965297

Title:
  l3ha don't set backup qg ports down

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive ussuri series:
  Triaged
Status in Ubuntu Cloud Archive victoria series:
  Triaged
Status in Ubuntu Cloud Archive wallaby series:
  Triaged
Status in Ubuntu Cloud Archive xena series:
  Triaged
Status in Ubuntu Cloud Archive yoga series:
  Triaged
Status in Ubuntu Cloud Archive zed series:
  Triaged
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron source package in Impish:
  Triaged
Status in neutron source package in Jammy:
  Triaged
Status in neutron source package in Kinetic:
  Triaged

Bug description:
  The history of this request is as follows: bug 1916024 fixed an issue
  but subsequently had to be reverted due to a regression it introduced
  (see bug 1927868). As a result, the original issue can once again
  present itself: keepalived is unable to send GARP on the qg port until
  the port is marked as UP by neutron, which in loaded environments can
  sometimes take longer than keepalived will wait (e.g. when an l3-agent
  is restarted on a host that has hundreds of routers). The reason qg-
  ports are marked as DOWN is the patch landed as part of bug 1859832;
  as I understand it, there is now consensus upstream [1] to revert that
  patch as well, and a better solution is needed to fix that particular
  issue. I have not yet found an open bug for the revert, hence this
  one.

  [1]
  
https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-03-04-14.03.log.txt

  

  [Impact]
  Please see the LP bug description for full details; in short, this patch
is a revert of a patch that has shown instability in the field for users of
Neutron L3HA.

  [Test Plan]
* Deploy OpenStack with Neutron L3 HA enabled
* Create a number of HA routers
* Check all qrouter namespaces and ensure that the qg- port is UP in all of
  them (a sketch of this check follows)
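
  A minimal sketch of that check, assuming root access on a network node
  and the usual l3-agent namespace naming; this is illustrative, not part
  of any test suite:

  ```
  # Hedged sketch: list qrouter- namespaces and report whether each
  # qg- interface carries the UP flag. Assumes the iproute2 'ip' tool.
  import subprocess

  out = subprocess.run(["ip", "netns", "list"],
                       capture_output=True, text=True, check=True).stdout

  for ns in (tok for tok in out.split() if tok.startswith("qrouter-")):
      links = subprocess.run(
          ["ip", "netns", "exec", ns, "ip", "-o", "link", "show"],
          capture_output=True, text=True, check=True).stdout
      for line in links.splitlines():
          name = line.split(":")[1].strip().split("@")[0]
          if name.startswith("qg-"):
              flags = line.split("<", 1)[1].split(">", 1)[0].split(",")
              print(ns, name, "UP" if "UP" in flags else "DOWN")
  ```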

  [Regression Potential]
  Since the original patch was intended to address issues with MLDv2 it is 
possible that reverting it will re-introduce those issues and a new patch will 
need to be proposed to address that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1965297/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1965297] Re: l3ha don't set backup qg ports down

2022-05-27 Thread Corey Bryant
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Kinetic)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Impish)
   Status: New => Triaged

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Triaged

** Changed in: neutron (Ubuntu Impish)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Jammy)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Kinetic)
   Status: New => Triaged

** Changed in: neutron (Ubuntu Kinetic)
   Importance: Undecided => High

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/xena
   Status: New => Triaged

** Changed in: cloud-archive/yoga
   Status: New => Triaged

** Changed in: cloud-archive/zed
   Status: New => Triaged

** Changed in: cloud-archive/yoga
   Importance: Undecided => High

** Changed in: cloud-archive/xena
   Importance: Undecided => High

** Changed in: cloud-archive/zed
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1965297

Title:
  l3ha don't set backup qg ports down

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive xena series:
  Triaged
Status in Ubuntu Cloud Archive yoga series:
  Triaged
Status in Ubuntu Cloud Archive zed series:
  Triaged
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Triaged
Status in neutron source package in Impish:
  Triaged
Status in neutron source package in Jammy:
  Triaged
Status in neutron source package in Kinetic:
  Triaged

Bug description:
  The history of this request is as follows: bug 1916024 fixed an issue
  but subsequently had to be reverted due to a regression it introduced
  (see bug 1927868). As a result, the original issue can once again
  present itself: keepalived is unable to send GARP on the qg port until
  the port is marked as UP by neutron, which in loaded environments can
  sometimes take longer than keepalived will wait (e.g. when an l3-agent
  is restarted on a host that has hundreds of routers). The reason qg-
  ports are marked as DOWN is the patch landed as part of bug 1859832;
  as I understand it, there is now consensus upstream [1] to revert that
  patch as well, and a better solution is needed to fix that particular
  issue. I have not yet found an open bug for the revert, hence this
  one.

  [1]
  
https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-03-04-14.03.log.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1965297/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1975837] Re: ``ovn_revision_bumbers_db._ensure_revision_row_exist`` is mixing DB contexts

2022-05-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/843478
Committed: 
https://opendev.org/openstack/neutron/commit/39d751a33265e8780828b3aca10a781726d0a300
Submitter: "Zuul (22348)"
Branch: master

commit 39d751a33265e8780828b3aca10a781726d0a300
Author: Rodolfo Alonso Hernandez 
Date:   Sun May 15 01:28:32 2022 +

Refactor the OVN revision module to access the DB correctly

Method ``_ensure_revision_row_exist`` creates a DB reader context
when called from ``bump_revision``. This call is always done from
inside a DB write context. This patch removes the unneeded reader
context.

Closes-Bug: #1975837
Change-Id: Ifb500eef5513e930bf3a22d99183ca348e5fc427


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975837

Title:
  ``ovn_revision_bumbers_db._ensure_revision_row_exist`` is mixing DB
  contexts

Status in neutron:
  Fix Released

Bug description:
  The method ``ovn_revision_bumbers_db._ensure_revision_row_exist``
  creates a DB reader decorator to check if the "OVNRevisionNumbers"
  register exists, based on the resource ID. If it doesn't, the method
  calls ``create_initial_revision``, that creates a DB writer context
  inside the reader one.

  In older versions (networking-ovn Train, for example), the error is
  even worse, as described in [1]. Inside the DB reader context we catch
  a DB exception without ending this transaction or rolling back. Then
  we call the ``create_initial_revision`` method, opening a new write
  context inside the reader one.
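
  A hedged sketch of the before/after shape of the fix, using the
  neutron-lib enginefacade helpers; the two helper functions below are
  stand-ins for the actual lookup/insert code, not neutron's real
  symbols:

  ```
  from neutron_lib.db import api as db_api

  def _get_revision_row(context, resource, resource_type):
      ...  # stand-in: SELECT the OVNRevisionNumbers register

  def create_initial_revision(context, resource, resource_type):
      ...  # stand-in: INSERT the initial revision row

  # Before: a reader transaction is opened even though the caller,
  # bump_revision(), already runs inside a writer transaction.
  def _ensure_revision_row_exist_before(context, resource, resource_type):
      with db_api.CONTEXT_READER.using(context):
          row = _get_revision_row(context, resource, resource_type)
      if row is None:
          # Opens a writer context nested inside the reader one.
          create_initial_revision(context, resource, resource_type)

  # After: no nested reader; the caller's writer transaction covers
  # both the lookup and the conditional insert.
  def _ensure_revision_row_exist_after(context, resource, resource_type):
      row = _get_revision_row(context, resource, resource_type)
      if row is None:
          create_initial_revision(context, resource, resource_type)
  ```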

  Related Bugzilla: [1]

  [1]https://bugzilla.redhat.com/show_bug.cgi?id=2090757

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1975837/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967157] Re: Fails to extend in-use (non LUKS v1) encrypted volumes

2022-05-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/836064
Committed: 
https://opendev.org/openstack/nova/commit/8fbaeba11f445bcf6c6be1f5f7b7aeeb6995c9cd
Submitter: "Zuul (22348)"
Branch: master

commit 8fbaeba11f445bcf6c6be1f5f7b7aeeb6995c9cd
Author: Gorka Eguileor 
Date:   Wed Mar 30 19:49:18 2022 +0200

Fix extending non LUKSv1 encrypted volumes

Patch fixing bug #1861071 resolved the issue of extending LUKS v1
volumes when nova connects them via libvirt instead of through os-brick,
but nova side still fails to extend in-use volumes when they don't go
through libvirt (i.e., LUKS v2).

The logs will show a very similar error, but the user won't know that
this has happened and Cinder will show the new size:

 libvirt.libvirtError: internal error: unable to execute QEMU command
 'block_resize': Cannot grow device files

There are 2 parts to this problem:

- The device mapper device is not automatically extended.
- Nova tries to use the encrypted block device size as the size of the
  decrypted device.

This patch leverages the "extend_volume" method in os-brick connectors
to extend the device mapper device, after the encrypted device has been
extended, and use the size of the decrypted volume for the block_resize
operation.

Related change: I351f1a7769c9f915e4cd280f05a8b8b87f40df84
Closes-Bug: #1967157
Change-Id: Ia1411f11ec4bf44af6a42d5f96c8a0903846ed66
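
A hedged sketch of the two steps the commit describes; the function name
is illustrative, not nova's actual symbol, and "connector" stands for an
os-brick connector object:

```
# Hedged sketch: grow the dm-crypt device via os-brick, then resize the
# guest block device using the *decrypted* device's size.
import os

def example_extend_attached_encrypted_volume(connector, connection_info,
                                             decrypted_device_path,
                                             guest_block_device):
    # Step 1: Cinder has already grown the encrypted backing device;
    # ask the os-brick connector to extend the device-mapper device.
    connector.extend_volume(connection_info['data'])

    # Step 2: the LUKS header makes the decrypted device smaller than
    # the encrypted one, so measure the decrypted device directly.
    with open(decrypted_device_path, 'rb') as dev:
        new_size = dev.seek(0, os.SEEK_END)

    # block_resize with the decrypted size avoids "Cannot grow device
    # files" on the guest side.
    guest_block_device.resize(new_size)
```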


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967157

Title:
  Fails to extend in-use (non LUKS v1) encrypted volumes

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Patch fixing bug #1861071 resolved the issue of extending LUKS v1
  volumes when nova connects them via libvirt instead of through os-
  brick, but nova side still fails to extend in-use volumes when they
  don't go through libvirt (i.e., LUKS v2).

  The logs will show a very similar error, but the user won't know that
  this has happened and Cinder will show the new size:

  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [req-100471fa-c198-40ac-b713-adc395e480f1 
req-3a1ea13e-916b-4851-be67-6d849bf4aa3a service nova] [instance: 
3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] resizing block device failed.: 
libvirt.libvirtError: internal error: unable to execut>
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] 
Traceback (most recent call last):
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2809, in extend_volume
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] 
connection_info, encryption)
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2763, in 
_resize_attached_encrypted_volume
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] 
decrypted_device_new_size, block_device, instance)
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2712, in 
_resize_attached_volume
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] 
block_device.resize(new_size)
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9]   
File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 789, in resize
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] 
self._guest._domain.blockResize(self._disk, size, flags=flags)
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9]   
File "/usr/local/lib/python3.6/site-packages/eventlet/tpool.py", line 193, in 
doit
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9] 
result = proxy_call(self._autowrap, f, *args, **kwargs)
  Mar 29 21:25:39 ssmc.localdomain nova-compute[1376242]: ERROR 
nova.virt.libvirt.driver [instance: 3f206ec4-fad5-48b8-9cb2-c3e6f00f30c9]   
File 

[Yahoo-eng-team] [Bug 1971672] Re: [FT] Error in "test_virtual_port_host_update"

2022-05-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/841298
Committed: 
https://opendev.org/openstack/neutron/commit/494c477b21ebbaa441b55d63755af72cc24244af
Submitter: "Zuul (22348)"
Branch: master

commit 494c477b21ebbaa441b55d63755af72cc24244af
Author: Rodolfo Alonso Hernandez 
Date:   Sat May 7 02:15:16 2022 +

[OVN][FT] Wait until virtual parents are written

In "test_virtual_port_host_update", wait until VIP virtual parents
have been updated in the SB "Port_Binding" register. Then, the
test should check if "update_virtual_port_host" has been called or
not.

Closes-Bug: #1971672
Change-Id: Ifa04bb59f4b9acd308299cfa44f2316394d14505
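
A hedged sketch of the waiting pattern the commit introduces, using
neutron's functional-test polling helper; "sb_api" and "vip_port" are
assumed names and the lookup details are illustrative:

```
# Hedged sketch: poll until the SB "Port_Binding" register for the VIP
# reports its virtual parents, then assert on update_virtual_port_host.
from neutron.common import utils as n_utils

def _virtual_parents_written(sb_api, vip_logical_port):
    rows = sb_api.db_find(
        'Port_Binding',
        ('logical_port', '=', vip_logical_port)).execute(check_error=True)
    return bool(rows and rows[0].get('virtual_parent'))

n_utils.wait_until_true(
    lambda: _virtual_parents_written(sb_api, vip_port['id']),
    timeout=10)
```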


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1971672

Title:
  [FT] Error in "test_virtual_port_host_update"

Status in neutron:
  Fix Released

Bug description:
  Error executing
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverSsl.test_virtual_port_host_update

  Log:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_207/840448/2/check/neutron-
  functional-with-uwsgi/2074ad0/testr_results.html

  Snippet: https://paste.opendev.org/show/bVvDh4svozwjBpLokYre/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1971672/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1975542] Re: Open vSwitch agent - does not report to the segment plugin

2022-05-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/843294
Committed: 
https://opendev.org/openstack/neutron/commit/d89d7bd5e6477a1ad24165831dbc5f1e2fc357b5
Submitter: "Zuul (22348)"
Branch:master

commit d89d7bd5e6477a1ad24165831dbc5f1e2fc357b5
Author: Rodolfo Alonso Hernandez 
Date:   Sat May 14 16:55:33 2022 +

Remove session active check in "_add_segment_host_mapping_for_segment"

Method ``_add_segment_host_mapping_for_segment`` is called by the event
(resources.SEGMENT, events.PRECOMMIT_CREATE), from
``SegmentDbMixin._create_segment_db``, and is called inside a database
writer context. That means it is irrelevant to check whether the session
is active (it always is).

Closes-Bug: #1975542
Change-Id: Ib19dacf886486876237ed1157fb95ae157ed430e
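
A hedged sketch of the callback wiring the commit message refers to; the
handler body is elided and the signature follows neutron-lib's
payload-style callbacks:

```
# Hedged sketch: PRECOMMIT_CREATE subscribers for SEGMENT run inside the
# writer transaction opened by SegmentDbMixin._create_segment_db, so the
# session is active by construction.
from neutron_lib.callbacks import events, registry, resources

@registry.receives(resources.SEGMENT, [events.PRECOMMIT_CREATE])
def _add_segment_host_mapping_for_segment(resource, event, trigger,
                                          payload=None):
    context = payload.context
    # No need to check context.session.is_active here; the caller's
    # writer transaction guarantees it.
    ...
```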


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975542

Title:
  Open vSwitch agent - does not report to the segment plugin

Status in neutron:
  Fix Released

Bug description:
  The atteched devstack configuration `local.conf` can be used to
  reproduce this issue.

  In networking-baremetal CI we are seeing an issue where the DHCP agent
  does not create a namespace for subnets on a routed provider network.
  No DHCP namespace is created because this test[1] `if (any(s for s in
  network.subnets if s.enable_dhcp)` returns false.

  The OVS agent does have the correct configuration, with mappings to
  the physical_network "mynetwork" on bridge "brbm".

  
  $ openstack network agent show ef7ca33a-de9c-4a2b-9af5-e1c9cb029a25 -f yaml
  admin_state_up: true
  agent_type: Open vSwitch agent
  alive: true
  availability_zone: null
  binary: neutron-openvswitch-agent
  configuration:
    arp_responder_enabled: false
    baremetal_smartnic: false
    bridge_mappings:
      mynetwork: brbm
      public: br-ex

  But looking in the database, there are only `segment host mappings`
  for baremetal nodes.

  mysql> select * from segmenthostmappings;
  +--------------------------------------+------+
  | segment_id                           | host |

[Yahoo-eng-team] [Bug 1975743] Re: ML2 OVN - Creating an instance with hardware offloaded port is broken

2022-05-27 Thread Frode Nordahl
** Changed in: neutron
   Status: New => Confirmed

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1975743

Title:
  ML2 OVN - Creating an instance with hardware offloaded port is broken

Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  OpenStack Release: Yoga
  Platform: Ubuntu focal

  Creating an instance with a vnic-type ‘direct’ port and a ‘switchdev’
binding-profile is failing with the following validation error:
  ```
  2022-05-25 19:13:40.331 125269 DEBUG neutron.api.v2.base 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Request 
body: {'port': {'device_id': 'd46aef48-e42e-49c8-af9f-a83768747b4f', 
'device_owner': 'compute:nova', 'binding:profile': {'capabilities': 
['switchdev'], 'pci_vendor_info': '15b3:101e', 'pci_slot': ':08:03.2', 
'physical_network': None, 'card_serial_number': 'MT2034X11488', 
'pf_mac_address': '04:3f:72:9e:0b:a1', 'vf_num': 7}, 'binding:host_id': 
'node3.maas', 'dns_name': 'vm1'}} prepare_request_body 
/usr/lib/python3/dist-packages/neutron/api/v2/base.py:729

  
  2022-05-25 19:13:40.429 125269 DEBUG neutron_lib.callbacks.manager 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Publish 
callbacks 
['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-1311372',
 'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin._port_update-8735219071964'] 
for port (0f1e4e9c-68ef-4b38-a3bc-68e624bca6c7), before_update _notify_loop 
/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py:176
  2022-05-25 19:13:41.221 125269 DEBUG neutron.notifiers.nova 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Ignoring 
state change previous_port_status: DOWN current_port_status: DOWN port_id 
0f1e4e9c-68ef-4b38-a3bc-68e624bca6c7 record_port_status_changed 
/usr/lib/python3/dist-packages/neutron/notifiers/nova.py:233
  2022-05-25 19:13:41.229 125269 DEBUG neutron_lib.callbacks.manager 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Publish 
callbacks [] for port (0f1e4e9c-68ef-4b38-a3bc-68e624bca6c7), precommit_update 
_notify_loop /usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py:176

  
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Mechanism 
driver 'ovn' failed in update_port_precommit: 
neutron_lib.exceptions.InvalidInput: Invalid input for operation: Invalid 
binding:profile. too many parameters.
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py", line 482, in 
_call_on_drivers
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 792, in update_port_precommit
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers 
ovn_utils.validate_and_get_data_from_binding_profile(port)
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python3/dist-packages/neutron/common/ovn/utils.py", line 266, in 
validate_and_get_data_from_binding_profile
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers raise 
n_exc.InvalidInput(error_message=msg)
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers 
neutron_lib.exceptions.InvalidInput: Invalid input for operation: Invalid 
binding:profile. too many parameters.
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers 
  ```

  Seems like the issue is related to the commit from: 
  https://review.opendev.org/c/openstack/neutron/+/818420

  
  To reproduce:
  
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ovn.html

  1. Prepare a setup with SR-IOV adjusted for OVN HW Offload
  2. Create a port with switchdev capabilities

  $ openstack port create direct_overlay2 --vnic-type=direct --network
  gen_data --binding-profile '{"capabilities":["switchdev"]}'
  --security-group my_policy

  3. Create an instance

  $ openstack server create --key-name bastion --flavor d1.demo 

[Yahoo-eng-team] [Bug 1964940] Re: Compute tests are failing with failed to reach ACTIVE status and task state "None" within the required time.

2022-05-27 Thread Lajos Katona
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964940

Title:
  Compute tests are failing with failed to reach ACTIVE status and task
  state "None" within the required time.

Status in neutron:
  In Progress
Status in tripleo:
  In Progress

Bug description:
  On Fs001 CentOS Stream 9 wallaby, multiple compute server tempest tests are
failing with the following error [1][2]:
  ```
  {1} 
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server
 [335.060967s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File 
"/usr/lib/python3.9/site-packages/tempest/api/compute/images/test_images.py", 
line 99, in test_create_image_from_paused_server
  server = self.create_test_server(wait_until='ACTIVE')
    File "/usr/lib/python3.9/site-packages/tempest/api/compute/base.py", 
line 270, in create_test_server
  body, servers = compute.create_test_server(
    File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 
267, in create_test_server
  LOG.exception('Server %s failed to delete in time',
    File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__
  self.force_reraise()
    File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
200, in force_reraise
  raise self.value
    File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 
237, in create_test_server
  waiters.wait_for_server_status(
    File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 
100, in wait_for_server_status
  raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: (ImagesTestJSON:test_create_image_from_paused_server) Server 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1 failed to reach ACTIVE status and task 
state "None" within the required time (300 s). Server boot request ID: 
req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b. Current status: BUILD. Current task 
state: spawning.
  ```

  Below is the list of other tempest tests failing on the same job.[2]
  ```
  
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server[id-71bcb732-0261-11e7-9086-fa163e4fa634]
  
tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494]
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume[id-d0f3f0d6-d9b6-4a32-8da4-23015dcab23c,volume]
  
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesV270Test.test_create_get_list_interfaces[id-2853f095-8277-4067-92bd-9f10bd4f8e0c,network]
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_shelved_state[id-bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad]
  setUpClass 
(tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON)
  
tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest_v242.test_tagged_boot_devices[id-a2e65a6c-66f1-4442-aaa8-498c31778d96,image,network,slow,volume]
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_suspended_state[id-1f82ebd3-8253-4f4e-b93f-de9b7df56d8b]
  
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces_by_network_port[id-73fe8f02-590d-4bf1-b184-e9ca81065051,network]
  setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSONUnderV235)
  ```

  Here is the traceback from nova-compute logs [3],
  ```
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager 
[req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b d5ea6c724785473b8ea1104d70fb0d14 
64c7d31d84284a28bc9aaa4eaad2b9fb - default default] [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Instance failed to spawn: 
nova.exception.VirtualInterfaceCreateException: Virtual Interface creation 
failed
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File 
"/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7231, in 
_create_guest_with_network
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] guest = self._create_guest(
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1973349] Re: Slow queries after upgrade to Xena

2022-05-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/841761
Committed: 
https://opendev.org/openstack/neutron/commit/db2ae854cf65b59eb0f7b0eef1b20e404c2214cb
Submitter: "Zuul (22348)"
Branch: master

commit db2ae854cf65b59eb0f7b0eef1b20e404c2214cb
Author: Rodolfo Alonso Hernandez 
Date:   Thu May 12 13:08:48 2022 +

Create an index for "ports.network_id"

The method ``_port_filter_hook``, that is added in any "Port" SELECT
command, filters the database "Port" registers by "network_id", using
an exact match. This query speed will improve if this column is
indexed in the database engine.

Closes-Bug: #1973349
Change-Id: Ia20f96dc78ea04bb0ab4665e6d47a6365789d2c9
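
For illustration, this is the shape of alembic migration such a change
typically ships as; the revision identifiers and index name below are
placeholders, not the ones actually merged:

```
# Hedged sketch of an alembic migration adding the index; see the
# linked review for the real migration.
from alembic import op

# revision identifiers, used by Alembic (placeholders).
revision = 'abcdef123456'
down_revision = '123456abcdef'

def upgrade():
    # Index ports.network_id so the exact-match filter added by
    # _port_filter_hook can use an index scan instead of a table scan.
    op.create_index(op.f('ix_ports_network_id'), 'ports', ['network_id'])
```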


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973349

Title:
  Slow queries after upgrade to Xena

Status in neutron:
  Fix Released

Bug description:
  After upgrading to Xena we started noticing slow queries recorded in
the MySQL slow log.
  Most of them include the following subquery:
  SELECT DISTINCT ports.id AS ports_id FROM ports, networks WHERE 
ports.project_id = '' OR ports.network_id = networks.id AND 
networks.project_id = ''.

  So for example, when issuing `openstack project list` this subquery appears 
several times:
  ```
  SELECT allowedaddresspairs.port_id AS allowedaddresspairs_port_id, 
allowedaddresspairs.mac_address AS allowedaddresspairs_mac_address, 
allowedaddresspairs.ip_address AS allowedaddresspairs_ip_address, 
anon_1.ports_id AS anon_1_ports_id \nFROM (SELECT DISTINCT ports.id AS ports_id 
\nFROM ports, networks \nWHERE ports.project_id = '' OR 
ports.network_id = networks.id AND networks.project_id = '') AS anon_1 
INNER JOIN allowedaddresspairs ON anon_1.ports_id = allowedaddresspairs.port_id

  SELECT extradhcpopts.id AS extradhcpopts_id, extradhcpopts.port_id AS
  extradhcpopts_port_id, extradhcpopts.opt_name AS
  extradhcpopts_opt_name, extradhcpopts.opt_value AS
  extradhcpopts_opt_value, extradhcpopts.ip_version AS
  extradhcpopts_ip_version, anon_1.ports_id AS anon_1_ports_id \nFROM
  (SELECT DISTINCT ports.id AS ports_id \nFROM ports, networks \nWHERE
  ports.project_id = '' OR ports.network_id = networks.id AND
  networks.project_id = '') AS anon_1 INNER JOIN extradhcpopts
  ON anon_1.ports_id = extradhcpopts.port_id

  SELECT ipallocations.port_id AS ipallocations_port_id, 
ipallocations.ip_address AS ipallocations_ip_address, ipallocations.subnet_id 
AS ipallocations_subnet_id, ipallocations.network_id AS 
ipallocations_network_id, anon_1.ports_id AS anon_1_ports_id \nFROM (SELECT 
DISTINCT ports.id AS ports_id \nFROM ports, networks \nWHERE ports.project_id = 
'' OR ports.network_id = networks.id AND networks.project_id = 
'') AS anon_1 INNER JOIN ipallocations ON anon_1.ports_id = 
ipallocations.port_id ORDER BY ipallocations.ip_address, ipallocations.subnet_id
  ```

  Another interesting thing is the difference in execution time between
admin and non-admin calls:
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/admin.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real0m5,401s
  user0m1,565s
  sys 0m0,086s
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list | wc -l
  2142

  real2m38,101s
  user0m1,626s
  sys 0m0,083s
  (openstack) dmitriy@6BT6XT2:~$
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real1m17,029s
  user0m1,541s
  sys 0m0,085s
  (openstack) dmitriy@6BT6XT2:~$

  So basically, if you provide the tenant_id to the query, it will execute
  twice as fast. But it won't look through networks owned by the tenant
  (which would kind of explain the difference in speed).

  Environment:
  Neutron SHA: 97180b01837638bd0476c28bdda2340eccd649af
  Backend: ovs
  OS: Ubuntu 20.04
  Mariadb: 10.6.5
  SQLalchemy: 1.4.23
  Backend: openvswitch
  Plugins: router vpnaas metering 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973349/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1975907] [NEW] cloud-init devel net-convert crash when --debug is enabled

2022-05-27 Thread Benjamin Hesmans
Public bug reported:

Since 22.2 enabling "--debug" for "cloud-init devel net-convert" will
make cloud-init crash.

Probably linked to 3e5938c6ae22b9f158f1404f41e3e43738cadff0 and the use
of the safe dumper.

The stack trace shows:
Traceback (most recent call last):
  File "/xyz/git/cloud-init/bin/cloud-init", line 8, in <module>
    sys.exit(main())
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/cloudinit/cmd/main.py", line 1059, in main
    retval = util.log_time(
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/cloudinit/util.py", line 2637, in log_time
    ret = func(*args, **kwargs)
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/cloudinit/cmd/devel/net_convert.py", line 136, in handle_args
    "\n".join(["", "Internal State", safeyaml.dumps(ns, noalias=True), ""])
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/cloudinit/safeyaml.py", line 161, in dumps
    return yaml.dump(
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/yaml/__init__.py", line 253, in dump
    return dump_all([data], stream, Dumper=Dumper, **kwds)
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/yaml/__init__.py", line 241, in dump_all
    dumper.represent(data)
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/xyz/git/cloud-init/lib/python3.8/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', )

I tried replacing the dumper with the unsafe version and it worked
again.
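
A minimal reproduction of the underlying PyYAML behaviour, assuming
nothing about cloud-init internals beyond what the traceback shows:

```
# Hedged sketch: yaml.safe_dump (what cloudinit.safeyaml builds on)
# refuses arbitrary Python objects, while the default dumper does not.
import yaml

class Example:
    pass

print(yaml.dump(Example()))  # works, emits a !!python/object tag
yaml.safe_dump(Example())    # raises yaml.representer.RepresenterError
```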

** Affects: cloud-init
 Importance: Undecided
  

[Yahoo-eng-team] [Bug 1975743] Re: ML2 OVN - Creating an instance with hardware offloaded port is broken

2022-05-27 Thread Frode Nordahl
Itai, thank you for reporting this bug.

The Neutron OVN driver does strict validation of the binding profile. As
part of adding support for SmartNIC DPUs the validation was extended to
handle both the existing hardware offload vnic-type direct +
capabilities switchdev workflow as well as the new SmartNIC DPU vnic-
type remote-managed workflow.

What's happening here is that Neutron does not expect Nova to provide
the 'card_serial_number', 'pf_mac_address' and 'vf_num' keys in the
binding profile for the vnic-type direct + capabilities switchdev
workflow, and rejects the request.

The key/value pairs appear to be added whenever a VF from a card with a
serial number in the VPD is used; if the card does not have a serial in
the VPD, the key/value pairs are not provided.

This is problematic because there exist cards that do not provide this
information, and cards that do provide the information depending on
which firmware version is in use.

The Neutron validation code does not have a concept of an optional key
in the binding profile, and since the information is not required for
the vnic direct + capabilities switchdev workflow, I'm inclined to
think Nova should refrain from providing it in this case.

If this would be hard for Nova to do, we would at the very least need
it to always provide the keys and just not populate the values when
they are not there; with that, the Neutron validation code could make
validation of the values optional for the vnic direct + capabilities
switchdev workflow.
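
A hedged sketch of that optional-key approach; the constant and function
below are illustrative, not Neutron's actual validation code:

```
# Hedged sketch: tolerate the VPD-derived keys for the vnic-type
# direct + capabilities switchdev workflow.
VPD_OPTIONAL_KEYS = frozenset(
    ('card_serial_number', 'pf_mac_address', 'vf_num'))

def example_validate_switchdev_profile(profile, expected_keys):
    unexpected = set(profile) - set(expected_keys) - VPD_OPTIONAL_KEYS
    if unexpected:
        # Mirrors the "Invalid binding:profile. too many parameters."
        # rejection from the bug, while tolerating the optional keys.
        raise ValueError('Invalid binding:profile. too many parameters: %s'
                         % ', '.join(sorted(unexpected)))
```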

To unblock you while we figure out how to solve this properly you could
apply this patch [0] to your neutron-api units.

0: https://pastebin.ubuntu.com/p/3dsHX4rHdT/

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1975743

Title:
  ML2 OVN - Creating an instance with hardware offloaded port is broken

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  OpenStack Release: Yoga
  Platform: Ubuntu focal

  Creating an instance with a vnic-type ‘direct’ port and a ‘switchdev’
binding-profile is failing with the following validation error:
  ```
  2022-05-25 19:13:40.331 125269 DEBUG neutron.api.v2.base 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Request 
body: {'port': {'device_id': 'd46aef48-e42e-49c8-af9f-a83768747b4f', 
'device_owner': 'compute:nova', 'binding:profile': {'capabilities': 
['switchdev'], 'pci_vendor_info': '15b3:101e', 'pci_slot': ':08:03.2', 
'physical_network': None, 'card_serial_number': 'MT2034X11488', 
'pf_mac_address': '04:3f:72:9e:0b:a1', 'vf_num': 7}, 'binding:host_id': 
'node3.maas', 'dns_name': 'vm1'}} prepare_request_body 
/usr/lib/python3/dist-packages/neutron/api/v2/base.py:729

  
  2022-05-25 19:13:40.429 125269 DEBUG neutron_lib.callbacks.manager 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Publish 
callbacks 
['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-1311372',
 'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin._port_update-8735219071964'] 
for port (0f1e4e9c-68ef-4b38-a3bc-68e624bca6c7), before_update _notify_loop 
/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py:176
  2022-05-25 19:13:41.221 125269 DEBUG neutron.notifiers.nova 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Ignoring 
state change previous_port_status: DOWN current_port_status: DOWN port_id 
0f1e4e9c-68ef-4b38-a3bc-68e624bca6c7 record_port_status_changed 
/usr/lib/python3/dist-packages/neutron/notifiers/nova.py:233
  2022-05-25 19:13:41.229 125269 DEBUG neutron_lib.callbacks.manager 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Publish 
callbacks [] for port (0f1e4e9c-68ef-4b38-a3bc-68e624bca6c7), precommit_update 
_notify_loop /usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py:176

  
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers 
[req-504a0204-6f1a-46ae-8b95-dcfdf2692f91 b2a31335e63b4dd391cc3e6bf4600fe1 - - 
654b9b803e6a4a68b31676c16973e3cc 654b9b803e6a4a68b31676c16973e3cc] Mechanism 
driver 'ovn' failed in update_port_precommit: 
neutron_lib.exceptions.InvalidInput: Invalid input for operation: Invalid 
binding:profile. too many parameters.
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2022-05-25 19:13:41.229 125269 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/managers.py", line 482, in