[Yahoo-eng-team] [Bug 2062965] Re: octavia/ovn: missed healthmon port cleanup

2024-04-22 Thread Gregory Thiemonge
moving to neutron, the ovn-octavia-provider is a neutron project

** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2062965

Title:
  octavia/ovn: missed healthmon port cleanup

Status in neutron:
  New

Bug description:
  When creating an Octavia load balancer with the ovn provider, then adding a health-monitor and members, Octavia creates a neutron hm port in each subnet where a member was added.
  When the members are removed again, these hm ports are not cleaned up. Removing the health-monitor afterwards cleans up only one of the hm ports, the one in the subnet where the VIP happens to be. The others are left behind and never get cleaned up by Octavia. This causes problems later, when subnets cannot be deleted because they are still populated by the orphaned ports.
  The cleanup logic simply does not match the hm port creation logic.

  Mitigating factors:
  * openstack loadbalancer delete --cascade does clean up all hm ports.
  * Deleting the health mon before removing the members also avoids the issue.
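
  For reference, a minimal openstacksdk sketch (not part of the original report) that lists leftover health-monitor ports and the subnets they sit in; the "ovn-lb-hm-" name prefix is an assumption about how the ovn-octavia-provider names these ports, so adjust the filter for your deployment:

  # List health-monitor ports that were left behind after member removal.
  # Assumption: the ovn-octavia-provider names them "ovn-lb-hm-<subnet_id>".
  import openstack

  conn = openstack.connect(cloud="devstack-admin-demo")

  for port in conn.network.ports():
      if port.name and port.name.startswith("ovn-lb-hm-"):
          subnets = [ip["subnet_id"] for ip in port.fixed_ips]
          print(f"leftover hm port {port.id} in subnets {subnets}")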

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2062965/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2063043] Re: octavia/ovn: filedescriptor out of range in select()

2024-04-22 Thread Gregory Thiemonge
moving to neutron, the ovn-octavia-provider is a neutron project

** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2063043

Title:
  octavia/ovn: filedescriptor out of range in select()

Status in neutron:
  New

Bug description:
  Running octavia-api in a container (kolla-ansible), where a health-monitoring process creates OVN provider loadbalancers with listeners, pools, members and health-monitors, tests them, and cleans them up again.
  On each resource creation, the octavia-api process creates `[eventpoll]` file descriptors (with the `epoll_create1()` system call) which it never closes. Once it hits 1024 open file descriptors, it errors out with
  ```
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn [None req-eefd1e9d-8bfe-473d-9ff1-5a8b0d4ab5d2 - 2767ef0256804b92ae0e51e3a99f809a - - 93db18cb391748009639058c52577527 93db18cb391748009639058c52577527] OVS database connection to OVN_Northbound failed with error: 'filedescriptor out of range in select()'. Verify that the OVS and OVN services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are correct.: ValueError: filedescriptor out of range in select()
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn Traceback (most recent call last):
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   File "/var/lib/kolla/venv/lib/python3.10/site-packages/ovn_octavia_provider/ovsdb/impl_idl_ovn.py", line 65, in start_connection
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn     self.ovsdb_connection.start()
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   File "/var/lib/kolla/venv/lib/python3.10/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 83, in start
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn     idlutils.wait_for_change(self.idl, self.timeout)
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   File "/var/lib/kolla/venv/lib/python3.10/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 252, in wait_for_change
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn     ovs_poller.block()
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   File "/usr/lib/python3/dist-packages/ovs/poller.py", line 231, in block
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn     events = self.poll.poll(self.timeout)
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   File "/usr/lib/python3/dist-packages/ovs/poller.py", line 137, in poll
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn     rlist, wlist, xlist = select.select(self.rlist,
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn ValueError: filedescriptor out of range in select()
  2024-04-09 03:00:12.130 732 ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
  2024-04-09 03:00:12.132 732 ERROR octavia.api.drivers.driver_factory [None req-eefd1e9d-8bfe-473d-9ff1-5a8b0d4ab5d2 - 2767ef0256804b92ae0e51e3a99f809a - - 93db18cb391748009639058c52577527 93db18cb391748009639058c52577527] Unable to load provider driver ovn due to: OVS database connection to OVN_Northbound failed with error: 'filedescriptor out of range in select()'. Verify that the OVS and OVN services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are correct.: ovn_octavia_provider.ovsdb.impl_idl_ovn.OvsdbConnectionUnavailable: OVS database connection to OVN_Northbound failed with error: 'filedescriptor out of range in select()'. Verify that the OVS and OVN services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are correct.
  2024-04-09 03:00:12.134 732 ERROR wsme.api [None req-eefd1e9d-8bfe-473d-9ff1-5a8b0d4ab5d2 - 2767ef0256804b92ae0e51e3a99f809a - - 93db18cb391748009639058c52577527 93db18cb391748009639058c52577527] Server-side error: "Provider 'ovn' was not found.". Detail:
  Traceback (most recent call last):
  [...]
  ```
  Subsequently, the ovn provider is no longer registered from the octavia-api perspective, and the container goes into an unhealthy state and needs to be restarted.
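
  As a diagnostic aid (an assumption about how to confirm the leak, not part of the original report), the growth can be watched by counting eventpoll descriptors in /proc; select() cannot handle descriptors numbered 1024 or higher, which is why the error appears once the leak crosses that threshold:

  # Count [eventpoll] file descriptors held by a process (e.g. the octavia-api
  # worker, PID 732 in the log above). Run it inside the container.
  import os
  import sys

  def count_eventpoll_fds(pid):
      fd_dir = f"/proc/{pid}/fd"
      count = 0
      for fd in os.listdir(fd_dir):
          try:
              target = os.readlink(os.path.join(fd_dir, fd))
          except OSError:
              continue  # the fd was closed while we were iterating
          if "eventpoll" in target:
              count += 1
      return count

  if __name__ == "__main__":
      pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
      print(f"pid {pid} holds {count_eventpoll_fds(pid)} [eventpoll] fds")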

  This was observed on octavia from OpenStack 2023.2 (Bobcat) installed
  via kolla-ansible (OSISM).

  Original bug report at
  https://github.com/osism/issues/issues/959

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2063043/+subscriptions




[Yahoo-eng-team] [Bug 2043673] [NEW] Creating flavors with --id auto

2023-11-16 Thread Gregory Thiemonge
Public bug reported:

The doc on flavors
(https://docs.openstack.org/nova/latest/user/flavors.html) is
inaccurate.

It mentions that Flavor ID is:

| Unique ID (integer or UUID) for the new flavor. This property is
required. If specifying ‘auto’, a UUID will be automatically generated.


But it seems that the "auto" keyword is no longer supported.

The Octavia devstack plugin uses it:

https://opendev.org/openstack/octavia/src/branch/master/devstack/plugin.sh#L610

but it generates a flavor with the id "auto":

$ openstack flavor show m1.amphora
+----------------------------+---------------------------------+
| Field                      | Value                           |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled   | False                           |
| OS-FLV-EXT-DATA:ephemeral  | 0                               |
| access_project_ids         | []                              |
| description                | None                            |
| disk                       | 3                               |
| id                         | auto                            |
| name                       | m1.amphora                      |
| os-flavor-access:is_public | False                           |
| properties                 | disk='5', hw_rng:allowed='True' |
| ram                        | 1024                            |
| rxtx_factor                | 1.0                             |
| swap                       | 0                               |
| vcpus                      | 1                               |
+----------------------------+---------------------------------+


Can you clarify this? Is it a doc bug, or a bug in the client?
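
In the meantime, a quick openstacksdk check (not from the original report) that tells whether the created flavor got a generated UUID or the literal string "auto":

# Check whether the flavor id is a real UUID or the literal "auto".
import uuid

import openstack

conn = openstack.connect(cloud="devstack-admin-demo")
flavor = conn.compute.find_flavor("m1.amphora")

try:
    uuid.UUID(flavor.id)
    print(f"flavor id {flavor.id} is a generated UUID")
except ValueError:
    print(f"flavor id {flavor.id!r} was stored verbatim, 'auto' was not expanded")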

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2043673

Title:
  Creating flavors with --id auto

Status in OpenStack Compute (nova):
  New

Bug description:
  The doc on flavors
  (https://docs.openstack.org/nova/latest/user/flavors.html) is
  inaccurate.

  It mentions that Flavor ID is:

  | Unique ID (integer or UUID) for the new flavor. This property is
  required. If specifying ‘auto’, a UUID will be automatically
  generated.

  
  But it seems that the "auto" keyword is no longer supported.

  The Octavia devstack plugin uses it:

  
https://opendev.org/openstack/octavia/src/branch/master/devstack/plugin.sh#L610

  but it generates a flavor with the id "auto":

  $ openstack flavor show m1.amphora
  +----------------------------+---------------------------------+
  | Field                      | Value                           |
  +----------------------------+---------------------------------+
  | OS-FLV-DISABLED:disabled   | False                           |
  | OS-FLV-EXT-DATA:ephemeral  | 0                               |
  | access_project_ids         | []                              |
  | description                | None                            |
  | disk                       | 3                               |
  | id                         | auto                            |
  | name                       | m1.amphora                      |
  | os-flavor-access:is_public | False                           |
  | properties                 | disk='5', hw_rng:allowed='True' |
  | ram                        | 1024                            |
  | rxtx_factor                | 1.0                             |
  | swap                       | 0                               |
  | vcpus                      | 1                               |
  +----------------------------+---------------------------------+

  
  Can you clarify that? is it a doc bug? or a bug in the client?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2043673/+subscriptions




[Yahoo-eng-team] [Bug 2026345] Re: Sphinx raises 'ImageDraw' object has no attribute 'textsize' error

2023-10-06 Thread Gregory Thiemonge
** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
   Importance: Undecided => Low

** Changed in: octavia
   Status: New => Confirmed

** Changed in: octavia
 Assignee: (unassigned) => Gregory Thiemonge (gthiemonge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2026345

Title:
  Sphinx raises 'ImageDraw' object has no attribute 'textsize' error

Status in Designate:
  Fix Released
Status in Ironic:
  New
Status in OpenStack Identity (keystone):
  New
Status in OpenStack Compute (nova):
  Confirmed
Status in octavia:
  Confirmed
Status in tacker:
  New

Bug description:
  With Pillow version 10.0 or higher, Sphinx raises an error.

  '''
   'ImageDraw' object has no attribute 'textsize'
  '''

  
  Tacker specs use Sphinx and Pillow to build some diagrams in .rst files.
  Pillow removed ImageDraw.textsize() in version 10.0 [1],
  but Sphinx still uses ImageDraw.textsize().


  [1]
  https://pillow.readthedocs.io/en/stable/releasenotes/10.0.0.html#font-
  size-and-offset-methods
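
  For context, a minimal sketch of the replacement measurement call that Pillow 10 expects (ImageDraw.textsize() is gone; textbbox()/textlength() are the documented substitutes):

  # Measure text with Pillow >= 10, where ImageDraw.textsize() no longer exists.
  from PIL import Image, ImageDraw

  img = Image.new("RGB", (200, 50))
  draw = ImageDraw.Draw(img)

  # Old (removed in Pillow 10): width, height = draw.textsize("hello")
  left, top, right, bottom = draw.textbbox((0, 0), "hello")
  width, height = right - left, bottom - top
  print(width, height)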

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/2026345/+subscriptions




[Yahoo-eng-team] [Bug 2036705] [NEW] A port that is disabled and bound is still ACTIVE with ML2/OVN

2023-09-20 Thread Gregory Thiemonge
Public bug reported:

Issue originally reported to the Octavia project: 
https://bugs.launchpad.net/octavia/+bug/2033392
During the failover of a load balancer, Octavia disables a port and waits for its status to become DOWN, but that never happens: the port stays ACTIVE (this impacts the duration of the failover in Octavia, and also the availability of the load balancer).


When a bound port is disabled, its status is expected to be switched to DOWN.
But with ML2/OVN, the port remains ACTIVE.


$ openstack server create --image cirros-0.5.2-x86_64-disk --flavor m1.nano --network public server1
[..]
| id | 7e392799-7a25-4ec6-a0ff-e479b3c37cc6 |
[..]

$ openstack port list --device-id 7e392799-7a25-4ec6-a0ff-e479b3c37cc6
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                            | Status |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| 208c473c-4161-4c3a-ab9e-8444d7bc375f |      | fa:16:3e:85:bc:ac | ip_address='172.24.4.251', subnet_id='9441b590-d9d4-4f8f-b4aa-838736070222'   | ACTIVE |
|                                      |      |                   | ip_address='2001:db8::322', subnet_id='813adce0-21de-44c9-958a-6967441b8623'  |        |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+


$ openstack port show -c admin_state_up -c status 208c473c-4161-4c3a-ab9e-8444d7bc375f
+----------------+--------+
| Field          | Value  |
+----------------+--------+
| admin_state_up | UP     |
| status         | ACTIVE |
+----------------+--------+


# Disabling the port
$ openstack port set --disable 208c473c-4161-4c3a-ab9e-8444d7bc375f


$ openstack port show -c admin_state_up -c status 208c473c-4161-4c3a-ab9e-8444d7bc375f
+----------------+--------+
| Field          | Value  |
+----------------+--------+
| admin_state_up | DOWN   |
| status         | ACTIVE |
+----------------+--------+

Folks on #openstack-neutron confirmed that with ML2/OVS, the status is
DOWN when the port is disabled.
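
A hedged sketch of the kind of wait loop Octavia performs (the names and the timeout here are illustrative, not Octavia's actual code); with ML2/OVN it always times out because the status never leaves ACTIVE:

# Disable a bound port and wait for its status to become DOWN, with a timeout.
import time

import openstack

conn = openstack.connect(cloud="devstack-admin-demo")
PORT_ID = "208c473c-4161-4c3a-ab9e-8444d7bc375f"  # port from the example above

conn.network.update_port(PORT_ID, is_admin_state_up=False)

deadline = time.monotonic() + 60
while time.monotonic() < deadline:
    port = conn.network.get_port(PORT_ID)
    if port.status == "DOWN":
        print("port went DOWN")
        break
    time.sleep(2)
else:
    print(f"timed out, port status is still {port.status}")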

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036705

Title:
  A port that is disabled and bound is still ACTIVE with ML2/OVN

Status in neutron:
  New

Bug description:
  Issue originally reported to the Octavia project: 
https://bugs.launchpad.net/octavia/+bug/2033392
  During the failover of a load balancer, Octavia disables a port and waits for its status to become DOWN, but that never happens: the port stays ACTIVE (this impacts the duration of the failover in Octavia, and also the availability of the load balancer).

  
  When a bound port is disabled, its status is expected to be switched to DOWN.
  But with ML2/OVN, the port remains ACTIVE.

  
  $ openstack server create --image cirros-0.5.2-x86_64-disk --flavor m1.nano --network public server1
  [..]
  | id | 7e392799-7a25-4ec6-a0ff-e479b3c37cc6 |
  [..]

  $ openstack port list --device-id 7e392799-7a25-4ec6-a0ff-e479b3c37cc6
  +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
  | ID                                   | Name | MAC Address       | Fixed IP Addresses                                                            | Status |
  +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
  | 208c473c-4161-4c3a-ab9e-8444d7bc375f |      | fa:16:3e:85:bc:ac | ip_address='172.24.4.251', subnet_id='9441b590-d9d4-4f8f-b4aa-838736070222'   | ACTIVE |
  |                                      |      |                   | ip_address='2001:db8::322', subnet_id='813adce0-21de-44c9-958a-6967441b8623'  |        |
  +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+


  $ openstack port show -c admin_state_up -c status 208c473c-4161-4c3a-ab9e-8444d7bc375f
  +----------------+--------+
  | Field          | Value  |
  +----------------+--------+
  | admin_state_up | UP     |
  | status         | ACTIVE |
  +----------------+--------+


  # Disabling the port
  $ openstack port set --disable 208c473c-4161-4c3a-ab9e-8444d7bc375f


  $ openstack port show -c admin_state_up -c status 208c473c-4161-4c3a-ab9e-8444d7bc375f

[Yahoo-eng-team] [Bug 2028651] [NEW] IPv6 VIPs broken with ML2/OVN

2023-07-25 Thread Gregory Thiemonge
Public bug reported:

Originally reported in the Octavia launchpad:
https://bugs.launchpad.net/octavia/+bug/2028524

The commit https://review.opendev.org/c/openstack/neutron/+/882588
introduced a regression in Octavia

It adds a validate_port_binding_and_virtual_port function that raises an 
exception when a port:
- has non-empty binding:host_id
- has fixed_ips/subnets
- has VIRTUAL type (in ovn)
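
A simplified restatement of that check as a sketch (this is not the neutron code, just the three conditions above expressed as a predicate):

# Illustrative predicate mirroring the three conditions listed above.
def update_would_be_rejected(binding_host_id, fixed_ips, ovn_port_type):
    """Return True when the validation described above raises an exception."""
    return bool(binding_host_id) and bool(fixed_ips) and ovn_port_type == "virtual"

# The Octavia VIP port shown below matches all three conditions.
print(update_would_be_rejected(
    "gthiemon-devstack",
    [{"ip_address": "2001:db8::b1"}],
    "virtual"))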


When we create a load balancer in Octavia (with an IPv6 VIP)

$ openstack loadbalancer create --vip-subnet ipv6-public-subnet --name lb1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| availability_zone   | None                                 |
| created_at          | 2023-07-25T07:11:25                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | 75cf51d2-4576-4878-8bfe-ad55584a7d76 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | 86f57e2e56874381a0d586263fc8d900     |
| provider            | amphora                              |
| provisioning_status | PENDING_CREATE                       |
| updated_at          | None                                 |
| vip_address         | 2001:db8::b1                         |
| vip_network_id      | 2d16ac53-8438-435d-a787-e5ceb4b783be |
| vip_port_id         | 83e51017-8f02-4916-bcd2-ebe0475b1ce6 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 813adce0-21de-44c9-958a-6967441b8623 |
| tags                |                                      |
| additional_vips     | []                                   |
+---------------------+--------------------------------------+


The VIP port contains:

$ openstack port show 83e51017-8f02-4916-bcd2-ebe0475b1ce6
+-----------------------+---------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                   |
+-----------------------+---------------------------------------------------------------------------------------------------------+
| admin_state_up        | DOWN                                                                                                    |
| allowed_address_pairs |                                                                                                         |
| binding_host_id       | gthiemon-devstack                                                                                       |
| binding_profile       |                                                                                                         |
| binding_vif_details   |                                                                                                         |
| binding_vif_type      | unbound                                                                                                 |
| binding_vnic_type     | normal                                                                                                  |
| created_at            | 2023-07-25T07:11:25Z                                                                                    |
| data_plane_status     | None                                                                                                    |
| description           |                                                                                                         |
| device_id             | lb-75cf51d2-4576-4878-8bfe-ad55584a7d76                                                                 |
| device_owner          | Octavia                                                                                                 |
| device_profile        | None                                                                                                    |
| dns_assignment        | fqdn='host-2001-db8--b1.openstackgate.local.', hostname='host-2001-db8--b1', ip_address='2001:db8::b1'  |
| dns_domain            |                                                                                                         |
| dns_name              |                                                                                                         |
| extra_dhcp_opts       |                                                                                                         |
| fixed_ips             | ip_address='2001:db8::b1', subnet_id='813adce0-21de-44c9-958a-6967441b8623'                             |

[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: New => Invalid

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Committed
Status in Glance:
  Invalid
Status in OpenStack Heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kolla:
  Expired
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Invalid
Status in oslo.config:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Rally:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
     oslo_config provides the method CONF.set_override [1]; developers usually use it to change a config option's value in tests, which is convenient.
     By default the parameter enforce_type=False, so it does not check the type or value of the override. If enforce_type=True is set, it checks the
     override's type and value. In production (runtime) code, oslo_config always checks a config option's value.
     In short, we test and run code in different ways, so there is a gap: a config option with the wrong type or an invalid value can pass tests when
     enforce_type=False in consuming projects. That means some invalid or wrong tests are in our code base.

     [1]
  https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173

  2. Proposal
     1) Fix violations when enforce_type=True in each project.

    2) Make method CONF.set_override with  enforce_type=True by default
  in oslo_config

   You can find more details and comments  in
  https://etherpad.openstack.org/p/enforce_type_true_by_default

  3. How to find violations in your projects.

     1. Run tox -e py27

     2. then modify oslo.config with enforce_type=True
    cd .tox/py27/lib64/python2.7/site-packages/oslo_config
    edit cfg.py with enforce_type=True

  -def set_override(self, name, override, group=None, enforce_type=False):
  +def set_override(self, name, override, group=None, enforce_type=True):

    3. Run tox -e py27 again, you will find violations.

  
  The current state is that oslo.config makes enforce_type True by default and deprecates the parameter; it will be removed in the future. The current work
  is to remove the usage of enforce_type in consuming projects. We can list the
  usage of it in 
http://codesearch.openstack.org/?q=enforce_type=nope==
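
  For reference, a small example with a recent oslo.config (where type enforcement is always on; the enforce_type parameter itself has since been removed) showing the behaviour this bug asked for, an override with the wrong type being rejected. The exact exception text may differ between releases:

  # set_override() now validates the value against the option type,
  # which is what enforce_type=True used to opt into.
  from oslo_config import cfg

  CONF = cfg.ConfigOpts()
  CONF.register_opt(cfg.IntOpt("workers", default=1))

  CONF.set_override("workers", 4)            # accepted
  try:
      CONF.set_override("workers", "not-an-int")
  except ValueError as exc:
      print(f"rejected: {exc}")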

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1517839/+subscriptions




[Yahoo-eng-team] [Bug 1615502] Re: LBAAS - housekeeping service does not clean up stale amphora VMs

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: In Progress => Invalid

** Changed in: octavia
 Assignee: Ravikumar (ravikumar-vallabhu) => (unassigned)

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615502

Title:
  LBAAS - housekeeping service does not clean up stale amphora VMs

Status in neutron:
  Invalid
Status in octavia:
  Invalid

Bug description:
  1. Initially there were no spare VMs, since “spare_amphora_pool_size = 0”:

   [house_keeping]
  # Pool size for the spare pool
  spare_amphora_pool_size = 0

  
  stack@hlm:~/scratch/ansible/next/hos/ansible$ nova list --all
  WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
  +--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+
  | ID                                   | Name | Tenant ID                        | Status | Task State | Power State | Networks   |
  +--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+
  | 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1  | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5 |
  | 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2  | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6 |
  +--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+

  2. Change the spare pool size to 1 and restart the octavia-housekeeping
  service. A spare amphora VM gets created, as shown below.

  
  stack@hlm:~/scratch/ansible/next/hos/ansible$  nova list --all
  WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                      |
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | amphora-18f4d90f-fe6e-4085-851e-7571cba0c65a | a5e6e87d402847e7b4210e035a0fceec | ACTIVE | -          | Running     | OCTAVIA-MGMT-NET=100.74.25.13 |
  | 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5                    |
  | 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6                    |
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+

  3. Now change the spare pool size to 0 and restart the octavia-housekeeping
  service. The spare amphora VM does not get deleted.

  stack@hlm:~/scratch/ansible/next/hos/ansible$  nova list --all
  WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                      |
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | amphora-18f4d90f-fe6e-4085-851e-7571cba0c65a | a5e6e87d402847e7b4210e035a0fceec | ACTIVE | -          | Running     | OCTAVIA-MGMT-NET=100.74.25.13 |
  | 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1   | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     |

[Yahoo-eng-team] [Bug 1548774] Re: LBaas V2: operating_status of 'dead' member is always online with Healthmonitor

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: In Progress => Invalid

** Changed in: octavia
 Assignee: Carlos Goncalves (cgoncalves) => (unassigned)

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548774

Title:
  LBaas V2: operating_status of 'dead' member is always online with
  Healthmonitor

Status in neutron:
  Won't Fix
Status in octavia:
  Invalid
Status in senlin:
  New

Bug description:
  Expectation:
  The LBaaS v2 healthmonitor should update the status of a "bad" member just as it does with v1. However, the operating_status of pool members never changes, whether the member is healthy or not.

  ENV:
  My devstack runs on a single Ubuntu 14.04 node and uses master branch code, MySQL and RabbitMQ. The tenant name is 'demo', the username is 'demo'. I am using private-subnet for the loadbalancer and the member VM, with the octavia provider.

  Steps to reproduce:
  Create a VM from the cirros-0.3.4-x86_64-uec image and create one member for it in a loadbalancer pool with a healthmonitor. Then curl to get the statuses of the loadbalancer; the member status is ONLINE. Then nova stop the member's VM and curl again and again. The member's operating_status stays 'online' instead of 'error'.

  Below is the curl response. There is no difference before and after the pool
  member VM turns SHUTOFF, since no status change ever happens.
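
  A small helper (not from the original report) to pull each member's operating_status out of that response; pipe the JSON below into it:

  # Walk the loadbalancer status tree and print each member's operating_status.
  import json
  import sys

  def member_statuses(statuses_doc):
      lb = statuses_doc["statuses"]["loadbalancer"]
      for listener in lb["listeners"]:
          for pool in listener["pools"]:
              for member in pool["members"]:
                  yield member["address"], member["operating_status"]

  doc = json.load(sys.stdin)
  for address, status in member_statuses(doc):
      # With this bug the status stays ONLINE even after the member VM is stopped.
      print(address, status)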

  {"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools":
  [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor":
  {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name":
  "", "provisioning_status": "ACTIVE"}, "members": [{"name": "",
  "provisioning_status": "ACTIVE", "address": "10.0.0.13",
  "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0",
  "operating_status": "ONLINE"}], "id":
  "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status":
  "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65",
  "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id":
  "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE",
  "provisioning_status": "ACTIVE"}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548774/+subscriptions




[Yahoo-eng-team] [Bug 1670585] Re: lbaas-agent: 'ascii' codec can't encode characters

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: New => Invalid

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670585

Title:
  lbaas-agent: 'ascii' codec can't encode characters

Status in neutron:
  Invalid
Status in octavia:
  Invalid

Bug description:
  version: liberty

  1) When Chinese characters are used as the load balancer name, the following error occurs:
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
[req-4a3f6b62-c449-4d88-82d1-96b8b96c7307 18295a4db5364daaa9f27e1169b96926 
65fe786567a341829aa05751b2b7360f - - -] Create listener 
75fef462-fe18-46a3-9722-6db2cf0be8ea failed on device driver haproxy_ns
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
300, in create_listener
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
driver.listener.create(listener)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 405, in create
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(listener.loadbalancer)e
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 369, in refresh
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager if 
(not self.driver.deploy_instance(loadbalancer) and
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254, in 
inner
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 174, in deploy_instance
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self.create(loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 202, in create
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 352, in _spawn
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 90, in save_config
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
utils.replace_file(conf_path, config_str)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 192, in 
replace_file
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
tmp_file.write(data)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib64/python2.7/socket.py", line 316, in write
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
data = str(data) # XXX Should really reject non-string non-buffers
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
UnicodeEncodeError: 'ascii' codec can't encode characters in position 20-43: 
ordinal not in range(128)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager
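
  For illustration, the failure boils down to implicitly encoding a non-ASCII load balancer name as ASCII when the haproxy config file is written; a minimal sketch of the failure mode and of the usual fix (an explicit UTF-8 encoding), not the actual neutron-lbaas code:

  # Minimal reproduction of the error class, plus the usual fix.
  import tempfile

  name = "负载均衡器-1"  # a load balancer name containing Chinese characters

  try:
      name.encode("ascii")
  except UnicodeEncodeError as exc:
      print(f"same failure mode as the agent: {exc}")

  # Writing the rendered config with an explicit UTF-8 encoding avoids it.
  with tempfile.NamedTemporaryFile(mode="w", encoding="utf-8", delete=False) as f:
      f.write(f"# listener for {name}\n")
      print(f"wrote {f.name} without errors")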

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670585/+subscriptions




[Yahoo-eng-team] [Bug 1827746] Re: Port detach fails when compute host is unreachable

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: New => Invalid

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1827746

Title:
  Port detach fails when compute host is unreachable

Status in OpenStack Compute (nova):
  Confirmed
Status in octavia:
  Invalid

Bug description:
  When a compute host is unreachable, a port detach for a VM on that
  host will not complete until the host is reachable again. In some
  cases, this may last for an extended period or even indefinitely (for
  example, when a host is powered down for hardware maintenance, and possibly
  needs to be removed from the fleet entirely). This is problematic for
  multiple reasons:

  1) The port should not be deleted in this state (it can be, but for reasons 
outside the scope of this bug, that is not recommended). Thus, the quota cannot 
be reclaimed by the project.
  2) The port cannot be reassigned to another VM. This means that for projects 
that rely heavily on maintaining a published IP (or possibly even a published 
port ID), there is no way to proceed. For example, if Octavia wanted to allow 
failing over from one VM to another in a VM down event (as would happen if the 
host was powered off) without using AAP, it would be unable to do so, leading 
to an extended downtime.

  Nova will supposedly clean up such resources after the host has been
  powered up, but that could take hours or possibly never happen. So,
  there should be a way to force the port to detach regardless of
  ability to reach the compute host, and simply allow the cleanup to
  happen on that host in the future (if possible) but immediately
  release the port for delete or rebinding.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1827746/+subscriptions




[Yahoo-eng-team] [Bug 1996033] [NEW] glance CLI always shows hw_vif_multiqueue_enabled='True'

2022-11-09 Thread Gregory Thiemonge
Public bug reported:

"openstack image show" reports incorrect properties when setting
hw_vif_multiqueue_enabled

in devstack master (also reproduced on wallaby), I have an image with no
hw_vif_multiqueue_enabled property

$ openstack image show -c properties -f value amphora-x64-haproxy   
   
{'os_hidden': False, 'os_hash_algo': 'sha512', 'os_hash_value': 
'fddf81f46b53ec0b3e1760cc0c1baa64578357003a00440ac4e0257af2af3556bb460f5fc38eab88e5aea43f997f59f75156bfdba92d9726f80bafba4e6a0911',
 'owner_specified.openstack.md5': '', 'owner_specified.openstack.sha256': '', 
'owner_specified.openstack.object': 'images/amphora-x64-haproxy', 
'hw_architecture': 'x86_64', 'hw_rng_model': 'virtio'}


Set the hw_vif_multiqueue_enabled property to True, it is ok:

$ openstack image set --property hw_vif_multiqueue_enabled=True 
amphora-x64-haproxy
$ openstack image show -c properties -f value amphora-x64-haproxy   
   
{'os_hidden': False, 'os_hash_algo': 'sha512', 'os_hash_value': 
'fddf81f46b53ec0b3e1760cc0c1baa64578357003a00440ac4e0257af2af3556bb460f5fc38eab88e5aea43f997f59f75156bfdba92d9726f80bafba4e6a0911',
 'owner_specified.openstack.md5': '', 'owner_specified.openstack.sha256': '', 
'owner_specified.openstack.object': 'images/amphora-x64-haproxy', 
'hw_architecture': 'x86_64', 'hw_rng_model': 'virtio', 
'hw_vif_multiqueue_enabled': True}


Setting the property to False (or false) does not work; image show still returns "True":

$ openstack image set --property hw_vif_multiqueue_enabled=False 
amphora-x64-haproxy
$ openstack image show -c properties -f value amphora-x64-haproxy   

{'os_hidden': False, 'os_hash_algo': 'sha512', 'os_hash_value': 
'fddf81f46b53ec0b3e1760cc0c1baa64578357003a00440ac4e0257af2af3556bb460f5fc38eab88e5aea43f997f59f75156bfdba92d9726f80bafba4e6a0911',
 'owner_specified.openstack.md5': '', 'owner_specified.openstack.sha256': '', 
'owner_specified.openstack.object': 'images/amphora-x64-haproxy', 
'hw_architecture': 'x86_64', 'hw_rng_model': 'virtio', 
'hw_vif_multiqueue_enabled': True}


The value is False (but it is a string not a boolean) in the DB:

$ mysql -u root glance -e "select * from image_properties where image_id = 
'394ec5e4-3aab-47e8-a36f-dfdba732994b' and name = 'hw_vif_multiqueue_enabled' 
\G"
*** 1. row ***
id: 15
  image_id: 394ec5e4-3aab-47e8-a36f-dfdba732994b
  name: hw_vif_multiqueue_enabled
 value: False
created_at: 2022-11-09 07:42:24
updated_at: 2022-11-09 07:43:01
deleted_at: NULL
   deleted: 0


Unsetting the property doesn't work:

$ openstack image unset --property hw_vif_multiqueue_enabled 
amphora-x64-haproxy 
property unset failed, 'hw_vif_multiqueue_enabled' is a nonexistent property 
Failed to unset 1 of 1 properties.


Note: this is an issue only with the CLI; the property is correctly read by nova when creating the VM.
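
The pitfall behind the display is most likely plain truthiness of the stored string; a hedged sketch (an assumption about the client behaviour, not traced code):

# Why the string "False" from the DB can be displayed as True: any non-empty
# string is truthy. Boolean-ish strings need explicit parsing.
from oslo_utils import strutils

stored = "False"  # value as stored in image_properties (see the query above)
print(bool(stored))                       # True  -- the misleading coercion
print(strutils.bool_from_string(stored))  # False -- explicit parsing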

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1996033

Title:
  glance CLI always shows hw_vif_multiqueue_enabled='True'

Status in Glance:
  New

Bug description:
  "openstack image show" reports incorrect properties when setting
  hw_vif_multiqueue_enabled

  in devstack master (also reproduced on wallaby), I have an image with
  no hw_vif_multiqueue_enabled property

  $ openstack image show -c properties -f value amphora-x64-haproxy 
 
  {'os_hidden': False, 'os_hash_algo': 'sha512', 'os_hash_value': 
'fddf81f46b53ec0b3e1760cc0c1baa64578357003a00440ac4e0257af2af3556bb460f5fc38eab88e5aea43f997f59f75156bfdba92d9726f80bafba4e6a0911',
 'owner_specified.openstack.md5': '', 'owner_specified.openstack.sha256': '', 
'owner_specified.openstack.object': 'images/amphora-x64-haproxy', 
'hw_architecture': 'x86_64', 'hw_rng_model': 'virtio'}

  
  Set the hw_vif_multiqueue_enabled property to True, it is ok:

  $ openstack image set --property hw_vif_multiqueue_enabled=True 
amphora-x64-haproxy
  $ openstack image show -c properties -f value amphora-x64-haproxy 
 
  {'os_hidden': False, 'os_hash_algo': 'sha512', 'os_hash_value': 
'fddf81f46b53ec0b3e1760cc0c1baa64578357003a00440ac4e0257af2af3556bb460f5fc38eab88e5aea43f997f59f75156bfdba92d9726f80bafba4e6a0911',
 'owner_specified.openstack.md5': '', 'owner_specified.openstack.sha256': '', 
'owner_specified.openstack.object': 'images/amphora-x64-haproxy', 
'hw_architecture': 'x86_64', 'hw_rng_model': 'virtio', 
'hw_vif_multiqueue_enabled': True}

  
  Set the property to False (or false), it is not ok, image show still returns 
"True"

  $ openstack image set --property hw_vif_multiqueue_enabled=False 
amphora-x64-haproxy
  $ openstack image show -c properties -f value amphora-x64-haproxy 
  
  {'os_hidden': False, 

[Yahoo-eng-team] [Bug 1973276] [NEW] OVN port loses its virtual type after port update

2022-05-13 Thread Gregory Thiemonge
Public bug reported:

Bug found in Octavia (master)

Octavia creates at least 2 ports for each load balancer:
- the VIP port, it is down, it keeps/stores the IP address of the LB
- the VRRP port, plugged into a VM, it has the VIP address in the 
allowed-address list (and the VIP address is configured on the interface in the 
VM)

When sending an ARP request for the VIP address, the VRRP port should
reply with its mac-address.

In OVN the VIP port is marked as "type: virtual".

But when the VIP port is updated, it loses its "type: virtual" status,
and that breaks the ARP resolution (OVN replies to the ARP request with
the mac-address of the VIP port - which is not used/down).

Quick reproducer that simulates the Octavia behavior:


===

import subprocess
import time

import openstack

conn = openstack.connect(cloud="devstack-admin-demo")

network = conn.network.find_network("public")

sg = conn.network.find_security_group('sg')
if not sg:
    sg = conn.network.create_security_group(name='sg')

vip_port = conn.network.create_port(
    name="lb-vip",
    network_id=network.id,
    device_id="lb-1",
    device_owner="me",
    is_admin_state_up=False)

vip_address = [
    fixed_ip['ip_address']
    for fixed_ip in vip_port.fixed_ips
    if '.' in fixed_ip['ip_address']][0]

vrrp_port = conn.network.create_port(
    name="lb-vrrp",
    device_id="vrrp",
    device_owner="vm",
    network_id=network.id)
vrrp_port = conn.network.update_port(
    vrrp_port,
    allowed_address_pairs=[
        {"ip_address": vip_address,
         "mac_address": vrrp_port.mac_address}])

time.sleep(1)

output = subprocess.check_output(
    f"sudo ovn-nbctl show | grep -A2 'port {vip_port.id}'",
    shell=True)
output = output.decode('utf-8')

if 'type: virtual' in output:
    print("Port is virtual, this is ok.")
    print(output)

conn.network.update_port(
    vip_port,
    security_group_ids=[sg.id])

time.sleep(1)

output = subprocess.check_output(
    f"sudo ovn-nbctl show | grep -A2 'port {vip_port.id}'",
    shell=True)
output = output.decode('utf-8')

if 'type: virtual' not in output:
    print("Port is not virtual, this is an issue.")
    print(output)

===


In my env (devstack master on c9s):
$ python3 /mnt/host/virtual_port_issue.py
Port is virtual, this is ok.
port e0fe2894-e306-42d9-8c5e-6e77b77659e2 (aka lb-vip)
type: virtual
addresses: ["fa:16:3e:93:00:8f 172.24.4.111 2001:db8::178"]

Port is not virtual, this is an issue.
port e0fe2894-e306-42d9-8c5e-6e77b77659e2 (aka lb-vip)
addresses: ["fa:16:3e:93:00:8f 172.24.4.111 2001:db8::178"]
port 8ec36278-82b1-436b-bc5e-ea03ef22192f


In Octavia, the "type: virtual" setting _sometimes_ comes back after other updates of the ports, but in some cases the LB stays unreachable.

(and "ovn-nbctl lsp-set-type  virtual" fixes the LB)

** Affects: neutron
 Importance: High
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973276

Title:
  OVN port loses its virtual type after port update

Status in neutron:
  Confirmed

Bug description:
  Bug found in Octavia (master)

  Octavia creates at least 2 ports for each load balancer:
  - the VIP port, it is down, it keeps/stores the IP address of the LB
  - the VRRP port, plugged into a VM, it has the VIP address in the 
allowed-address list (and the VIP address is configured on the interface in the 
VM)

  When sending an ARP request for the VIP address, the VRRP port should
  reply with its mac-address.

  In OVN the VIP port is marked as "type: virtual".

  But when the VIP port is updated, it loses its "type: virtual" status,
  and that breaks the ARP resolution (OVN replies to the ARP request with
  the mac-address of the VIP port - which is not used/down).

  Quick reproducer that simulates the Octavia behavior:

  
  ===

  import subprocess
  import time
   
  import openstack
   
  conn = openstack.connect(cloud="devstack-admin-demo")
   
  network = conn.network.find_network("public")
   
  sg = conn.network.find_security_group('sg')
  if not sg:
  sg = conn.network.create_security_group(name='sg')
   
  vip_port = conn.network.create_port(
  name="lb-vip",
  network_id=network.id,
  device_id="lb-1",
  device_owner="me",
  is_admin_state_up=False)
   
  vip_address = [
  fixed_ip['ip_address']
  for fixed_ip in vip_port.fixed_ips
  if '.' in fixed_ip['ip_address']][0]
   
  vrrp_port = conn.network.create_port(
  name="lb-vrrp",
  device_id="vrrp",
  device_owner="vm",
  network_id=network.id)
  vrrp_port = conn.network.update_port(
  vrrp_port,
  allowed_address_pairs=[
  {"ip_address": vip_address,
   "mac_address": vrrp_port.mac_address}])
   
  time.sleep(1)
   
  output = subprocess.check_output(
  

[Yahoo-eng-team] [Bug 1933638] [NEW] neutronclient returns Conflict on security group rules delete

2021-06-25 Thread Gregory Thiemonge
Public bug reported:

This issue was caught in an Octavia CI job
(https://zuul.opendev.org/t/openstack/build/9cb24aa49cbb47e6abeb580e5d5ec6f0/logs)

During the deletion of a load balancer, Octavia deletes security group
rules in neutron. It seems that Octavia tries to delete the same security
group rule multiple times, and it sometimes receives a Conflict exception
even though the exception message says that the security group rule
doesn't exist:


Jun 22 12:36:00.226969 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88368]: INFO neutron.wsgi [None 
req-e2baf119-3462-4af0-8b08-1e35cf0ba6d2 admin admin] 
199.19.213.147,199.19.213.147 "DELETE 
/v2.0/security-group-rules/ec7d4cb6-a872-4709-854a-efaca7527822 HTTP/1.1" 
status: 204  len: 173 time: 0.0580175
Jun 22 12:36:00.228298 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88367]: INFO neutron.api.v2.resource [None 
req-42ecd2c1-85d8-4ea6-b9ec-b98af458a8aa admin admin] delete failed (client 
error): The resource could not be found.
Jun 22 12:36:00.229361 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88367]: INFO neutron.wsgi [None 
req-42ecd2c1-85d8-4ea6-b9ec-b98af458a8aa admin admin] 
199.19.213.147,199.19.213.147 "DELETE 
/v2.0/security-group-rules/ec7d4cb6-a872-4709-854a-efaca7527822 HTTP/1.1" 
status: 404  len: 361 time: 0.0507255
Jun 22 12:36:00.230639 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88368]: DEBUG neutron.api.rpc.handlers.resources_rpc [None 
req-e2baf119-3462-4af0-8b08-1e35cf0ba6d2 admin admin] Pushing event deleted for 
resources: {'SecurityGroupRule': 
['ID=ec7d4cb6-a872-4709-854a-efaca7527822,revision_number=None']} {{(pid=88368) 
push /opt/stack/neutron/neutron/api/rpc/handlers/resources_rpc.py:237}}
Jun 22 12:36:00.230973 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88367]: DEBUG neutron_lib.callbacks.manager [None 
req-92d0b84a-6fce-40e8-8c6c-f08c4c362481 admin admin] Callback 
neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver._process_sg_rule_notification-750270
 raised Security group rule ec7d4cb6-a872-4709-854a-efaca7527822 does not exist 
{{(pid=88367) _notify_loop 
/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:209}}
Jun 22 12:36:00.231248 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88367]: DEBUG neutron_lib.callbacks.manager [None 
req-92d0b84a-6fce-40e8-8c6c-f08c4c362481 admin admin] Notify callbacks [] for 
security_group_rule, abort_delete {{(pid=88367) _notify_loop 
/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:192}}
Jun 22 12:36:00.231444 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88368]: DEBUG oslo_concurrency.lockutils [None 
req-e2baf119-3462-4af0-8b08-1e35cf0ba6d2 admin admin] Lock "event-dispatch" 
released by "neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.dispatch_events" 
:: held 0.008s {{(pid=88368) inner 
/usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:367}}
Jun 22 12:36:00.231819 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88367]: INFO neutron.api.v2.resource [None 
req-92d0b84a-6fce-40e8-8c6c-f08c4c362481 admin admin] delete failed (client 
error): There was a conflict when trying to complete your request.
Jun 22 12:36:00.232958 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
neutron-server[88367]: INFO neutron.wsgi [None 
req-92d0b84a-6fce-40e8-8c6c-f08c4c362481 admin admin] 
199.19.213.147,199.19.213.147 "DELETE 
/v2.0/security-group-rules/ec7d4cb6-a872-4709-854a-efaca7527822 HTTP/1.1" 
status: 409  len: 588 time: 0.0547035
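
The 404 followed by the 409 for the same rule suggests the delete raced with itself; a hedged openstacksdk sketch (illustrative only, not Octavia's code) of a deletion that tolerates an already-removed rule:

# Delete a security group rule, tolerating "already deleted" races.
import openstack
from openstack import exceptions

conn = openstack.connect(cloud="devstack-admin-demo")
RULE_ID = "ec7d4cb6-a872-4709-854a-efaca7527822"  # rule id from the log above

try:
    # ignore_missing=True swallows the 404 when the rule is already gone.
    conn.network.delete_security_group_rule(RULE_ID, ignore_missing=True)
except exceptions.ConflictException:
    # The 409 seen above; by the time it is raised the rule no longer exists.
    pass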


In the octavia logs, we received:

Jun 22 12:36:00.263676 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
octavia-worker[127003]: ERROR octavia.controller.worker.v1.controller_worker 
Traceback (most recent call last):
Jun 22 12:36:00.263676 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
octavia-worker[127003]: ERROR octavia.controller.worker.v1.controller_worker   
File 
"/usr/local/lib/python3.8/dist-packages/taskflow/engines/action_engine/executor.py",
 line 53, in _execute_task
Jun 22 12:36:00.263676 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
octavia-worker[127003]: ERROR octavia.controller.worker.v1.controller_worker
 result = task.execute(**arguments)
Jun 22 12:36:00.263676 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
octavia-worker[127003]: ERROR octavia.controller.worker.v1.controller_worker   
File "/opt/stack/octavia/octavia/controller/worker/v1/tasks/network_tasks.py", 
line 519, in execute
Jun 22 12:36:00.263676 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
octavia-worker[127003]: ERROR octavia.controller.worker.v1.controller_worker
 self.network_driver.update_vip(loadbalancer, for_delete=True)
Jun 22 12:36:00.263676 nested-virt-ubuntu-focal-vexxhost-ca-ymq-1-0025224760 
octavia-worker[127003]: ERROR