[Yahoo-eng-team] [Bug 2025202] [NEW] Executing neutron-ovn-db-sync-util reports a TypeError

2023-06-27 Thread ZhouHeng
Public bug reported:

A TypeError was thrown while running the synchronization command
(neutron-ovn-db-sync-util). The error call stack shows that it happened
while creating QoS options. After analysis: a port existed in the Neutron
database but not in the OVN NB database. When UpdateLSwitchQosOptionsCommand
was executed to create the logical port and update its QoS, the port_id it
received was None. That port_id is the result of the preceding
AddLSwitchPortCommand, so that command must not have set it correctly.
Inspecting AddLSwitchPortCommand shows that if the port already exists, no
result is set.
This seems a bit contradictory: it was determined earlier that the port does
not exist, but later it does. I think this can happen when the
synchronization command runs while the API that creates the port is called
concurrently. That operation is not very reasonable.

But I think the AddLSwitchPortCommand command should return consistent
results either way. This issue should be fixed.
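
A minimal sketch of the kind of fix I have in mind, simplified from the
upstream commands.py (the creation path is abridged; only the early-return
branch changes):

    from ovsdbapp.backend.ovs_idl import command
    from ovsdbapp.backend.ovs_idl import idlutils

    class AddLSwitchPortCommand(command.BaseCommand):
        def run_idl(self, txn):
            port = idlutils.row_by_value(
                self.api.idl, 'Logical_Switch_Port', 'name',
                self.lport, None)
            if port:
                if not self.may_exist:
                    raise RuntimeError('Port %s exists' % self.lport)
                # Sketch of the fix: record the existing row as the
                # command result instead of returning with self.result
                # unset, so dependent commands get a usable port_id.
                self.result = port.uuid
                return
            # ... original creation path, which already sets
            # self.result for newly created ports ...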


ERROR Message:

2023-06-26 11:06:24.385 345 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync [None req-01d7864c-e3a6-409a-a852-2f6ea869fdae - - - - -] Port found in Neutron but not in OVN DB, port_id=ae5a8d95-e59f-465a-833d-28b3d0fabb2d
2023-06-26 11:06:24.440 345 ERROR ovsdbapp.backend.ovs_idl.transaction [None req-01d7864c-e3a6-409a-a852-2f6ea869fdae - - - - -] Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 131, in run
    txn.results.put(txn.do_commit())
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 92, in do_commit
    command.run_idl(txn)
  File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/commands.py", line 216, in run_idl
    port = self.api.lookup('Logical_Switch_Port', port_id)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py", line 181, in lookup
    return self._lookup(table, record)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py", line 203, in _lookup
    uuid_ = uuid.UUID(record)
  File "/usr/lib64/python3.6/uuid.py", line 134, in __init__
    raise TypeError('one of the hex, bytes, bytes_le, fields, '
TypeError: one of the hex, bytes, bytes_le, fields, or int arguments must be given
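
For reference, that final TypeError is exactly what Python's uuid.UUID
raises when it is given no usable argument, which is what happens when the
unset port_id (None) reaches the lookup. A minimal reproduction:

    import uuid

    try:
        # What _lookup() effectively does when port_id is None.
        uuid.UUID(None)
    except TypeError as exc:
        # Prints: one of the hex, bytes, bytes_le, fields, or int
        # arguments must be given
        print(exc)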

2023-06-26 11:06:24.441 345 CRITICAL neutron_ovn_db_sync_util [None req-01d7864c-e3a6-409a-a852-2f6ea869fdae - - - - -] Unhandled error: TypeError: one of the hex, bytes, bytes_le, fields, or int arguments must be given
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/var/lib/kolla/venv/bin/neutron-ovn-db-sync-util", line 8, in <module>
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     sys.exit(main())
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", line 231, in main
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     synchronizer.do_sync()
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py", line 104, in do_sync
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     self.sync_networks_ports_and_dhcp_opts(ctx)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py", line 999, in sync_networks_ports_and_dhcp_opts
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     for port_id, port in db_ports.items():
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py", line 120, in _create_port_in_ovn
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     self._ovn_client.create_port(ctx, port)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py", line 427, in create_port
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     self._qos_driver.create_port(txn, port, port_cmd)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     next(self.gen)
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/root/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 262, in transaction
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util     yield t
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util   File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
2023-06-26 11:06:24.441 345 ERROR neutron_ovn_db_sync_util 

[Yahoo-eng-team] [Bug 2024510] Re: Address on SNAT port won't be advertised by BGP speaker

2023-06-27 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron-dynamic-routing (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024510

Title:
  Address on SNAT port won't be advertised by BGP speaker

Status in neutron:
  Invalid
Status in neutron-dynamic-routing package in Ubuntu:
  Incomplete

Bug description:
  ENV:
  Zed

  FYI:
  1. All IP addresses mentioned are in the same address scope.
  2. The IP addresses are only examples; this bug is not related to any
  specific IP address range.

  Description:
  1. When a DVR floating IP is associated with a VM, BGP advertises the
  FIP to the provider router successfully.

  2. But when a private IP address is used for the VM and the FIP is used
  for an SNAT forwarding port, the FIP on that DVR port with forwarding
  rules won't be advertised by BGP.

  E.g., a DVR port with floating IP 123.0.0.20/24 and a forwarding rule
  (internal_ip 10.10.10.10, internal_port , external_port 64000), and a
  private IP (10.10.10.10/24) assigned to a VM: the floating IP
  123.0.0.20 won’t be advertised through BGP.


  Additions:
  1. This is a basic DVR + Floating IP + BGP dynamic routing environment,
  additionally tested with a shared IP.
  2. The port_forwarding rule makes the port act in an SNAT role,
  forwarding any packets that reach it with destination 123.0.0.20:64000
  to the private IP 10.10.10.10/24.
  3. The IP address is reachable within the Neutron network.
  4. The PE IP address, CE IP address, and floating IP gateway use the
  same subnet A and subnet pool (192.168.123.0/24), while the floating IP
  belongs to subnet B and subnet pool (123.0.0.0/24); both subnets belong
  to the provider network.
  5. Only a floating IP assigned to a VM is advertised to the PE through
  BGP.
  6. A floating IP assigned to the DVR port won’t be advertised, even if
  the IP is activated and reachable internally (see the sketch below).
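
A hypothetical sketch of the suspected cause (names and structure are
illustrative, not the actual neutron-dynamic-routing code): if the BGP
speaker's route lookup only considers floating IPs bound to a fixed port,
then a FIP that only carries port-forwarding rules (and therefore has no
fixed_port_id) is silently skipped:

    # Illustrative only: a filter like this would drop port-forwarding
    # FIPs, because their fixed_port_id is None.
    def advertised_fip_prefixes(floating_ips):
        for fip in floating_ips:
            if fip.get('fixed_port_id') is None:
                continue  # port-forwarding-only FIP: never advertised
            yield '%s/32' % fip['floating_ip_address']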

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024510/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025146] [NEW] Confusion in precedence between clouds.yml parameters and exported environment variables

2023-06-27 Thread Milana Levy
Public bug reported:

Description of problem:
In the past (OSP earlier than 17.1, with SRBAC configuration)
we were able to use only the rc file if we wanted to source the user,
without the need to update clouds.yml and with no obligation to put another
value in the OS_CLOUD field.

Version-Release number of selected component (if applicable):
RHOS-17.1-RHEL-9-20230607.n.2 +SRBAC configuration

How reproducible:
Every time, in the same session.

Steps to Reproduce:
Prerequisites:
1. Create a project under the default domain (for example "project2").
2. Create the user:
(overcloud) [stack@undercloud-0 ~]$ openstack user create admin3 --password 12345678
3. Add a role to the user:
(overcloud) [stack@undercloud-0 ~]$ openstack role add --user admin3 --project project2 admin
4. Make sure the user is assigned the right role:
(overcloud) [stack@undercloud-0 ~]$ openstack role assignment list --user admin3 --names
5. Copy the overcloudrc file and give it a new name like "admin3rc".
6. Change the file’s fields to be compatible with the new user.
7. ***At this point, if we don't change the clouds.yml file, this user will
not be able to send requests.*** This is new behavior; is it part of the bug?
8. Change clouds.yml: add the new user and make sure the user's name is
right above auth: in the clouds.yml file, and that the same name is used in
the "OS_CLOUD=" field of the admin3rc file, i.e. "OS_CLOUD=admin3" (a
minimal clouds.yml sketch follows these steps).
9. Sourcing admin3rc and sending regular commands in a new session, when
all env variables are empty, will end successfully:
[stack@undercloud-0 ~]$ . admin3rc 
(admin3) [stack@undercloud-0 ~]$ openstack user list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 1735403404384416876f68c51c0388c7 | admin    |
| 2b82835faf1b4f479cd0395036dcd677 | barbican |
...
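
A minimal clouds.yml entry matching step 8 (all values here are
placeholders, not taken from this environment):

    clouds:
      admin3:
        auth:
          auth_url: https://overcloud.example.com:13000
          username: admin3
          password: 12345678
          project_name: project2
          project_domain_name: Default
          user_domain_name: Default
        identity_api_version: 3
        region_name: regionOne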

Leaving "overcloud" in the OS_CLOUD field (e.g. export OS_CLOUD=overcloud
in the admin3rc file) causes confusion in this scenario:
Steps:
1. [stack@undercloud-0 ~]$ . admin3rc 
2. (overcloud) [stack@undercloud-0 ~]$ openstack user list
The request you have made requires authentication. (HTTP 401) (Request-ID: 
req-fa114d20-5492-4e61-bfc9-6f9586b8aafe)


Actual results:
There is confusion in precedence between when clouds.yml parameters are
used and when the exported variables are used. We get the error "The
request you have made requires authentication. (HTTP 401)".

Expected results:
The action should end successfully.
If the behavior has not changed, we need to be able to use only the rc file
if we want to, without the need to update clouds.yml and with no obligation
to put another value in the OS_CLOUD field.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2025146

Title:
  Confusion in precedence between clouds.yml parameters and exported
  environment variables

Status in OpenStack Identity (keystone):
  New

Bug description:
  Description of problem:
  In the past (OSP earlier than 17.1, with SRBAC configuration)
  we were able to use only the rc file if we wanted to source the user,
  without the need to update clouds.yml and with no obligation to put
  another value in the OS_CLOUD field.

  Version-Release number of selected component (if applicable):
  RHOS-17.1-RHEL-9-20230607.n.2 +SRBAC configuration

  How reproducible:
  Every time, in the same session.

  Steps to Reproduce:
  Prerequisites:
  1. Create a project under the default domain (for example "project2").
  2. Create the user:
  (overcloud) [stack@undercloud-0 ~]$ openstack user create admin3 --password 12345678
  3. Add a role to the user:
  (overcloud) [stack@undercloud-0 ~]$ openstack role add --user admin3 --project project2 admin
  4. Make sure the user is assigned the right role:
  (overcloud) [stack@undercloud-0 ~]$ openstack role assignment list --user admin3 --names
  5. Copy the overcloudrc file and give it a new name like "admin3rc".
  6. Change the file’s fields to be compatible with the new user.
  7. ***At this point, if we don't change the clouds.yml file, this user
  will not be able to send requests.*** This is new behavior; is it part
  of the bug?
  8. Change clouds.yml: add the new user and make sure the user's name is
  right above auth: in the clouds.yml file, and that the same name is used
  in the "OS_CLOUD=" field of the admin3rc file, i.e. "OS_CLOUD=admin3"
  9. Sourcing admin3rc and sending regular

[Yahoo-eng-team] [Bug 2025144] [NEW] [OVN] ``update_floatingip`` should handle the case when only the QoS policy is updated

2023-06-27 Thread Rodolfo Alonso
Public bug reported:

The ``OVNClient.update_floatingip`` method deletes and recreates the
OVN NAT rules when a FIP is updated. However, this process is not
necessary if only the QoS policy is updated; only the QoS driver call is
needed. That speeds up the FIP update and avoids ``FIPAddDeleteEvent``
being triggered twice, when the NAT register is first deleted and then
added (if there is a fixed port associated with the FIP).
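
A minimal sketch of the proposed short-circuit (the helper name and the
field comparison are illustrative, not the actual OVNClient code):

    # Illustrative only: skip the NAT delete/create cycle when the QoS
    # policy is the only field that changed on the floating IP.
    def _only_qos_changed(old_fip, new_fip):
        changed = {k for k in set(old_fip) | set(new_fip)
                   if old_fip.get(k) != new_fip.get(k)}
        return changed == {'qos_policy_id'}

    # In update_floatingip: if _only_qos_changed(old, new), call the QoS
    # driver directly and return, leaving the NAT rules untouched.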

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025144

Title:
  [OVN] ``update_floatingip`` should handle the case when only the QoS
  policy is updated

Status in neutron:
  New

Bug description:
  The ``OVNClient.update_floatingip`` method deletes and recreates the
  OVN NAT rules when a FIP is updated. However, this process is not
  necessary if only the QoS policy is updated; only the QoS driver call
  is needed. That speeds up the FIP update and avoids ``FIPAddDeleteEvent``
  being triggered twice, when the NAT register is first deleted and then
  added (if there is a fixed port associated with the FIP).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025144/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2024510] Re: Address on SNAT port won't be advertised by BGP speaker

2023-06-27 Thread Slawek Kaplonski
** Also affects: neutron-dynamic-routing (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024510

Title:
  Address on SNAT port won't be advertised by BGP speaker

Status in neutron:
  New
Status in neutron-dynamic-routing package in Ubuntu:
  New

Bug description:
  ENV:
  Zed

  FYI:
  1. All IP addresses mentioned are in the same address scope.
  2. The IP addresses are only examples; this bug is not related to any
  specific IP address range.

  Description:
  1. When a DVR floating IP is associated with a VM, BGP advertises the
  FIP to the provider router successfully.

  2. But when a private IP address is used for the VM and the FIP is used
  for an SNAT forwarding port, the FIP on that DVR port with forwarding
  rules won't be advertised by BGP.

  E.g., a DVR port with floating IP 123.0.0.20/24 and a forwarding rule
  (internal_ip 10.10.10.10, internal_port , external_port 64000), and a
  private IP (10.10.10.10/24) assigned to a VM: the floating IP
  123.0.0.20 won’t be advertised through BGP.


  Additions:
  1. This is a basic DVR + Floating IP + BGP dynamic routing environment,
  additionally tested with a shared IP.
  2. The port_forwarding rule makes the port act in an SNAT role,
  forwarding any packets that reach it with destination 123.0.0.20:64000
  to the private IP 10.10.10.10/24.
  3. The IP address is reachable within the Neutron network.
  4. The PE IP address, CE IP address, and floating IP gateway use the
  same subnet A and subnet pool (192.168.123.0/24), while the floating IP
  belongs to subnet B and subnet pool (123.0.0.0/24); both subnets belong
  to the provider network.
  5. Only a floating IP assigned to a VM is advertised to the PE through
  BGP.
  6. A floating IP assigned to the DVR port won’t be advertised, even if
  the IP is activated and reachable internally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024510/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025129] [NEW] DvrLocalRouter init references namespace before it is created

2023-06-27 Thread Adam Oswick
Public bug reported:

Description
---

When the DvrLocalRouter object is instantiated, it calls the
_load_used_fip_information() function. In some cases this function will
try to add ip rules in a specific network namespace; however, that
namespace may not exist at the time. This results in
neutron.privileged.agent.linux.ip_lib.NetworkNamespaceNotFound being
thrown.
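
A minimal sketch of a defensive guard, assuming neutron's
ip_lib.network_namespace_exists helper (whether to skip here or to defer
the load until the namespace is created is of course up for discussion):

    from neutron.agent.linux import ip_lib

    def _load_used_fip_information(self):
        # Illustrative only: skip restoring fip-priorities rules when
        # the qrouter namespace has not been created yet (e.g. right
        # after a host reboot).
        if not ip_lib.network_namespace_exists(self.ns_name):
            return
        # ... original rule-restoring logic ...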


Pre-conditions
--

- DVR is in use and the created router is distributed and HA
- The state file 'fip-priorities' is missing some entries, which results in
https://opendev.org/openstack/neutron/src/commit/0c5d4b872899497437d1399c845be756103a46d3/neutron/agent/l3/dvr_local_router.py#L76
being skipped
- The qrouter network namespace does not exist (possibly due to a reboot of
the host or something similar)


Step-by-step reproduction steps
---

- Set up OpenStack with DVR enabled
- Create an HA router with an external subnet attached so we can use the
IPs as FIPs
- Create a VM with a FIP attached from the aforementioned router
- SSH to the host running the aforementioned VM and:
  - Delete the qrouter namespace associated with this router
  - Remove the entry for the FIP from the fip-priorities state file in the
Neutron state directory
  - Restart the Neutron L3 agent


Expected output
---

Neutron L3 agent should restart without any errors.


Actual output
-

Neutron L3 agent throws a NetworkNamespaceNotFound exception for each
missing FIP in the fip-priorities state file, fails to set up the router
and then retries. Note that if there are more than 5 missing FIP entries
in the fip-priorities file then the router setup fails completely as it
hits the retry limit specified in
https://opendev.org/openstack/neutron/src/commit/0c5d4b872899497437d1399c845be756103a46d3/neutron/agent/l3/agent.py#L730-L733.
This leaves the router completely broken and not set up on the node,
resulting in broken networking for all VMs using that router on a
particular host.


Version
---
- OpenStack version - master/zed
- Linux distro - AlmaLinux9
- Deployed via Kolla Ansible

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025129

Title:
  DvrLocalRouter init references namespace before it is created

Status in neutron:
  New

Bug description:
  Description
  ---

  When the DvrLocalRouter object is instantiated, it calls the
  _load_used_fip_information() function. In some cases this function
  will try to add ip rules in a specific network namespace; however,
  that namespace may not exist at the time. This results in
  neutron.privileged.agent.linux.ip_lib.NetworkNamespaceNotFound being
  thrown.

  
  Pre-conditions
  --

  - DVR is in use and the created router is distributed and HA
  - The state file 'fip-priorities' is missing some entries, which results in
https://opendev.org/openstack/neutron/src/commit/0c5d4b872899497437d1399c845be756103a46d3/neutron/agent/l3/dvr_local_router.py#L76
  being skipped
  - The qrouter network namespace does not exist (possibly due to a reboot
  of the host or something similar)

  
  Step-by-step reproduction steps
  ---

  - Set up OpenStack with DVR enabled
  - Create an HA router with an external subnet attached so we can use the
  IPs as FIPs
  - Create a VM with a FIP attached from the aforementioned router
  - SSH to the host running the aforementioned VM and:
    - Delete the qrouter namespace associated with this router
    - Remove the entry for the FIP from the fip-priorities state file in
  the Neutron state directory
    - Restart the Neutron L3 agent

  
  Expected output
  ---

  Neutron L3 agent should restart without any errors.

  
  Actual output
  -

  Neutron L3 agent throws a NetworkNamespaceNotFound exception for each
  missing FIP in the fip-priorities state file, fails to set up the
  router and then retries. Note that if there are more than 5 missing
  FIP entries in the fip-priorities file then the router setup fails
  completely as it hits the retry limit specified in
  https://opendev.org/openstack/neutron/src/commit/0c5d4b872899497437d1399c845be756103a46d3/neutron/agent/l3/agent.py#L730-L733.
  This leaves the router completely broken and not set up on the node,
  resulting in broken networking for all VMs using that router on a
  particular host.

  
  Version
  ---
  - OpenStack version - master/zed
  - Linux distro - AlmaLinux9
  - Deployed via Kolla Ansible

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025129/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025126] [NEW] Functional MySQL sync test failing with oslo/sqlalchemy master branch

2023-06-27 Thread Slawek Kaplonski
Public bug reported:

For some time, the periodic jobs neutron-functional-with-oslo-master and
neutron-functional-with-sqlalchemy-master have been failing with one test
failure:
neutron.tests.functional.db.test_migrations.TestModelsMigrationsMySQL.test_models_sync

Stacktrace:

ft1.4: neutron.tests.functional.db.test_migrations.TestModelsMigrationsMySQL.test_models_sync
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, in func
    return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/db/test_migrations.py", line 382, in test_models_sync
    super(TestModelsMigrationsMySQL, self).test_models_sync()
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/oslo_db/sqlalchemy/test_migrations.py", line 284, in test_models_sync
    self.fail(
  File "/usr/lib/python3.10/unittest/case.py", line 675, in fail
    raise self.failureException(msg)
AssertionError: Models and migration scripts aren't in sync:
[ [ ( 'modify_default',
  None,
  'ml2_distributed_port_bindings',
  'vif_details',
  { 'existing_comment': None,
'existing_nullable': False,
'existing_type': VARCHAR(length=4095)},
  DefaultClause(, for_update=False),
  DefaultClause(, for_update=False))],
  [ ( 'modify_default',
  None,
  'ml2_distributed_port_bindings',
  'profile',
  { 'existing_comment': None,
'existing_nullable': False,
'existing_type': VARCHAR(length=4095)},
  DefaultClause(, for_update=False),
  DefaultClause(, for_update=False))],
  [ ( 'modify_default',
  None,
  'ml2_port_bindings',
  'profile',
  { 'existing_comment': None,
'existing_nullable': False,
'existing_type': VARCHAR(length=4095)},
  DefaultClause(, for_update=False),
  DefaultClause(, for_update=False))],
  [ ( 'modify_default',
  None,
  'ml2_port_bindings',
  'vif_details',
  { 'existing_comment': None,
'existing_nullable': False,
'existing_type': VARCHAR(length=4095)},
  DefaultClause(, for_update=False),
  DefaultClause(, for_update=False))],
  [ ( 'modify_default',
  None,
  'portdnses',
  'dns_domain',
  { 'existing_comment': None,
'existing_nullable': False,
'existing_type': VARCHAR(length=255)},
  DefaultClause(, for_update=False),
  DefaultClause(, for_update=False))],
  [ ( 'modify_default',
  None,
  'subnetpools',
  'hash',
  { 'existing_comment': None,
'existing_nullable': False,
'existing_type': VARCHAR(length=36)},
  DefaultClause(, for_update=False),

Failure examples:
* https://zuul.openstack.org/build/55a065238b784ac28e91469d2acce3da
* https://zuul.openstack.org/build/2d8d000b62a1448d984eab7059d677a7
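
For context, each 'modify_default' entry above means oslo.db's
model-vs-migration comparison found server defaults that differ between the
declared models and the schema the migrations produce on MySQL. A
hypothetical illustration of such a mismatch (not the actual neutron model
definition):

    from sqlalchemy import Column, MetaData, String, Table

    metadata = MetaData()
    # If the model declares a server_default that the migrations never
    # added (or vice versa), the comparison reports 'modify_default'
    # for that column.
    Table('portdnses', metadata,
          Column('dns_domain', String(255), nullable=False,
                 server_default=''))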

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: db functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025126

Title:
  Functional MySQL sync test failing with oslo/sqlalchemy master branch

Status in neutron:
  Confirmed

Bug description:
  For some time, the periodic jobs neutron-functional-with-oslo-master and
  neutron-functional-with-sqlalchemy-master have been failing with one
  test failure:

  
neutron.tests.functional.db.test_migrations.TestModelsMigrationsMySQL.test_models_sync

  Stacktrace:

  ft1.4: neutron.tests.functional.db.test_migrations.TestModelsMigrationsMySQL.test_models_sync
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/db/test_migrations.py", line 382, in test_models_sync
      super(TestModelsMigrationsMySQL, self).test_models_sync()
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/oslo_db/sqlalchemy/test_migrations.py", line 284, in test_models_sync
      self.fail(
    File "/usr/lib/python3.10/unittest/case.py", line 675, in fail
      raise self.failureException(msg)
  AssertionError: Models and migration scripts aren't in sync:
  [ [ ( 'modify_default',
None,
'ml2_distributed_port_bindings',
'vif_details',
{ 'existing_comment': None,
  'existing_nullable': False,
  'existing_type': VARCHAR(length=4095)},
DefaultClause(, for_update=False),
DefaultClause(, for_update=False))],
[ ( 'modify_default',
None,
'ml2_distributed_port_bindings',
'profile',
{ 'existing_comment': None,
  'existing_nullable': False,