[Yahoo-eng-team] [Bug 2056537] [NEW] [ovn-octavia-provider] gateway chassis not filled on LogicalRouterPort event

2024-03-08 Thread Fernando Royo
Public bug reported:

The neutron-ovn-invalid-chassis gateway previously used for the CR-LRP
gateway_chassis has been removed in [1]. As a result, the logical router
port event received at creation time is treated as a new port attaching
the router to a tenant network, so the LB is added to that LS, which
makes the functional tests fail.

In a real environment, this situation may not occur, except in the
scenario where the gateway_chassis for the LRP arrives in a second
event rather than in the initial creation event.

[1] https://review.opendev.org/c/openstack/neutron/+/909305
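
For illustration only (helper names are hypothetical, not the provider's
actual code), the kind of check involved looks roughly like this; with
gateway_chassis empty on the creation event, the CR-LRP is misclassified
as a plain router-to-tenant-network attachment:

```
# Hypothetical sketch: decide whether an OVN NB Logical_Router_Port row
# is a gateway port (CR-LRP) or a plain attachment to a tenant network.
def handle_lrp_create(lrp_row, add_lb_to_ls):
    if lrp_row.gateway_chassis:
        # Gateway port: the LB must not be added to the tenant LS here.
        return
    # gateway_chassis empty -> treated as a router-to-tenant-network port.
    add_lb_to_ls(lrp_row)
```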

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2056537

Title:
  [ovn-octavia-provider] gateway chassis not filled on LogicalRouterPort
  event

Status in neutron:
  In Progress

Bug description:
  The neutron-ovn-invalid-chassis gateway previously used for the CR-LRP
  gateway_chassis has been removed in [1]. As a result, the logical
  router port event received at creation time is treated as a new port
  attaching the router to a tenant network, so the LB is added to that
  LS, which makes the functional tests fail.

  In a real environment, this situation may not occur, except in the
  scenario where the gateway_chassis for the LRP arrives in a second
  event rather than in the initial creation event.

  [1] https://review.opendev.org/c/openstack/neutron/+/909305

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2056537/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2055876] [NEW] [ovn-octavia-provider] OVN LB health checks for IPv6 not working

2024-03-04 Thread Fernando Royo
6:3eff:fe56:d5a7]:8082"="[fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac]:31602,[fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2]:31602,[fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0]:31602,[fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a]:31602,[fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e]:31602,[fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218]:31602"}

[root@controller-0 /]# ovn-nbctl list load_balancer_health_check
_uuid   : 04b18ea0-0f88-43fa-b759-aba5fde256bf
external_ids: {"octavia:healthmonitor"="195b1c33-cfd4-4994-98cb-240103a0b653", "octavia:pool_id"="3f820089-7769-46ee-92ea-7e1c15f03c98", "octavia:vip"="fd2e:6f44:5dd8:c956:f816:3eff:fe56:d5a7"}
options : {failure_count="3", interval="5", success_count="2", timeout="5"}
vip : "fd2e:6f44:5dd8:c956:f816:3eff:fe56:d5a7:8082"

[root@controller-0 /]# ovn-sbctl --no-leader-only list Service_Monitor
[root@controller-0 /]#

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055876

Title:
  [ovn-octavia-provider] OVN LB health checks for IPv6 not working

Status in neutron:
  New

Bug description:
  When creating a health monitor for an IPv6 load balancer, the members
  are not correctly checked. Upon further analysis, the problem is
  related to there being no entry in the OVN SB database
  (Service_Monitor table) mapping the LB health checks created in the
  OVN NB database.

  [root@controller-0 /]# ovn-nbctl list load_balancer
  _uuid   : b67d67ef-d4b6-4c84-95a4-21f211008525
  external_ids: {enabled=True, listener_23b0368b-4b69-442d-8e7a-118fac8bc3cf="8082:pool_3f820089-7769-46ee-92ea-7e1c15f03c98", lr_ref=neutron-94f17de0-91bc-4b3d-b808-e2cbdf963c66, ls_refs="{\"neutron-eba8acfd-b0e4-4874-b106-fa8542a82c4e\": 7}", "neutron:member_status"="{\"0db4a0e0-23ed-4ee8-8283-2e5784f172ae\": \"ONLINE\", \"8dfc2bdc-193e-4e61-adbf-503e36e3aab9\": \"ONLINE\", \"c1c0b48d-a477-4fe1-965e-60da20e34cc1\": \"ONLINE\", \"6f2b2e6a-18d0-4783-b871-0c424e8397c0\": \"ONLINE\", \"49b28a9f-07b9-4d9f-8c7e-8cf5161be031\": \"ONLINE\", \"54691261-3f18-4afe-8239-ed0b0c6082e2\": \"ONLINE\"}", "neutron:vip"="fd2e:6f44:5dd8:c956:f816:3eff:fe56:d5a7", "neutron:vip_port_id"="489cbe15-de07-4f1e-93db-a8d552380653", "octavia:healthmonitors"="[\"195b1c33-cfd4-4994-98cb-240103a0b653\"]", pool_3f820089-7769-46ee-92ea-7e1c15f03c98="member_0db4a0e0-23ed-4ee8-8283-2e5784f172ae_fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_8dfc2bdc-193e-4e61-adbf-503e36e3aab9_fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_c1c0b48d-a477-4fe1-965e-60da20e34cc1_fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_6f2b2e6a-18d0-4783-b871-0c424e8397c0_fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_49b28a9f-07b9-4d9f-8c7e-8cf5161be031_fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_54691261-3f18-4afe-8239-ed0b0c6082e2_fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6"}
  health_check: [04b18ea0-0f88-43fa-b759-aba5fde256bf]
  ip_port

[Yahoo-eng-team] [Bug 2053227] Re: [ovn-octavia-provider] ovn-octavia-provider-functional-master is broken by ovn build failure

2024-02-20 Thread Fernando Royo
Merged in a previous patch https://review.opendev.org/c/openstack/ovn-octavia-provider/+/908320. Thanks Takashi Kajinami!

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2053227

Title:
  [ovn-octavia-provider] ovn-octavia-provider-functional-master is
  broken by ovn build failure

Status in neutron:
  Fix Released

Bug description:
  The ovn-octavia-provider-functional-master job consistently fails during setup.
  The log indicates the ovn build is failing.

  
  example: 
https://zuul.opendev.org/t/openstack/build/65fafcb26fdb4b9b97d9ce481f70037e
  ```
  ...
  gcc -DHAVE_CONFIG_H -I.   -I ./include  -I ./include -I ./ovn -I ./include -I 
./lib -I ./lib -I /home/zuul/src/opendev.org/openstack/ovs/include -I 
/home/zuul/src/opendev.org/openstack/ovs/include -I 
/home/zuul/src/opendev.org/openstack/ovs/lib -I 
/home/zuul/src/opendev.org/openstack/ovs/lib -I 
/home/zuul/src/opendev.org/openstack/ovs -I 
/home/zuul/src/opendev.org/openstack/ovs-Wstrict-prototypes -Wall -Wextra 
-Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security -Wswitch-enum 
-Wunused-parameter -Wbad-function-cast -Wcast-align -Wstrict-prototypes 
-Wold-style-definition -Wmissing-prototypes -Wmissing-field-initializers 
-fno-strict-aliasing -Wswitch-bool -Wlogical-not-parentheses 
-Wsizeof-array-argument -Wbool-compare -Wshift-negative-value -Wduplicated-cond 
-Wshadow -Wmultistatement-macros -Wcast-align=strict   -g -O2 -MT 
controller/physical.o -MD -MP -MF $depbase.Tpo -c -o controller/physical.o 
controller/physical.c &&\
  mv -f $depbase.Tpo $depbase.Po
  controller/ofctrl.c: In function ‘ofctrl_inject_pkt’:   
  controller/ofctrl.c:3048:5: error: too many arguments to function 
‘flow_compose’
   3048 | flow_compose(, , NULL, 64, false);
| ^~~~
  In file included from 
/home/zuul/src/opendev.org/openstack/ovs/lib/dp-packet.h:34,
   from controller/ofctrl.c:21:
  /home/zuul/src/opendev.org/openstack/ovs/lib/flow.h:129:6: note: declared 
here 
129 | void flow_compose(struct dp_packet *, const struct flow *,
|  ^~~~
  make[1]: *** [Makefile:2369: controller/ofctrl.o] Error 1  
  make[1]: *** Waiting for unfinished jobs
  controller/pinctrl.c: In function ‘pinctrl_ip_mcast_handle_igmp’: 
 
  controller/pinctrl.c:5488:54: error: ‘MCAST_GROUP_IGMPV1’ undeclared 
(first use in this function)
   5488 |   port_key_data, 
MCAST_GROUP_IGMPV1);
|  
^~
  controller/pinctrl.c:5488:54: note: each undeclared identifier is reported 
only once for each function it appears in
  controller/pinctrl.c:5487:13: error: too many arguments to function 
‘mcast_snooping_add_group4’
   5487 | mcast_snooping_add_group4(ip_ms->ms, ip4, IP_MCAST_VLAN,
| ^
  In file included from controller/ip-mcast.h:19,
   from controller/pinctrl.c:64:
  /home/zuul/src/opendev.org/openstack/ovs/lib/mcast-snooping.h:190:6: note: 
declared here
190 | bool mcast_snooping_add_group4(struct mcast_snooping *ms, ovs_be32 
ip4,
|  ^
  controller/pinctrl.c:5493:54: error: ‘MCAST_GROUP_IGMPV2’ undeclared 
(first use in this function)
   5493 |   port_key_data, 
MCAST_GROUP_IGMPV2);
|  
^~
  controller/pinctrl.c:5492:13: error: too many arguments to function 
‘mcast_snooping_add_group4’
   5492 | mcast_snooping_add_group4(ip_ms->ms, ip4, IP_MCAST_VLAN,
| ^
  In file included from controller/ip-mcast.h:19,
   from controller/pinctrl.c:64:
  /home/zuul/src/opendev.org/openstack/ovs/lib/mcast-snooping.h:190:6: note: 
declared here
190 | bool mcast_snooping_add_group4(struct mcast_snooping *ms, ovs_be32 
ip4,
|  ^
  make[1]: *** [Makefile:2369: controller/pinctrl.o] Error 1
 
  make[1]: Leaving directory '/home/zuul/src/opendev.org/openstack/ovn' 
 
  make: *** [Makefile:1528: all] Error 2
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2053227/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052628] [NEW] [stable-only][ovn-octavia-provider] Multiple listener-pool-member on IPv6 LB getting second pool in ERROR

2024-02-07 Thread Fernando Royo
Public bug reported:

When an IPv6 LB is created using a bulk command, or when multiple
listener-pool-member sets are added sequentially, handling of the second
listener-pool fails when adding a member, resulting in the pool ending
up in an ERROR state.

The error traceback is:

2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1883, in member_create
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
self._add_member(member, ovn_lb, pool_key)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1841, in _add_member
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
self._refresh_lb_vips(ovn_lb.uuid, external_ids)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1051, in _refresh_lb_vips
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver vip_ips 
= self._frame_vip_ips(lb_external_ids)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1039, in _frame_vip_ips
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver if 
netaddr.IPNetwork(lb_vip).version == 6:
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/netaddr/ip/__init__.py", line 938, in __init__
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver raise 
AddrFormatError('invalid IPNetwork %s' % addr)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
netaddr.core.AddrFormatError: invalid IPNetwork [fd2e:6f44:5dd8:c956::1a]

So apparently the LB VIP is enclosed in additional brackets [ ] when
adding the member for the second listener-pool-member set.

This issue only occurs in stable branches.
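
For illustration, a minimal sketch (not the actual stable-branch fix) of
why the parsing fails and one possible normalization, stripping the
enclosing brackets before handing the VIP to netaddr:

```
import netaddr

# Value taken from the traceback above: an IPv6 VIP wrapped in brackets.
lb_vip = "[fd2e:6f44:5dd8:c956::1a]"

# netaddr.IPNetwork(lb_vip) raises AddrFormatError on the bracketed form;
# stripping the brackets first makes the version check work as intended.
vip = lb_vip.strip("[]")
assert netaddr.IPNetwork(vip).version == 6
```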

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052628

Title:
  [stable-only][ovn-octavia-provider] Multiple listener-pool-member on
  IPv6 LB getting second pool in ERROR

Status in neutron:
  New

Bug description:
  When an IPv6 LB is created using a bulk command, or when multiple
  listener-pool-member sets are added sequentially, handling of the
  second listener-pool fails when adding a member, resulting in the pool
  ending up in an ERROR state.

  The error traceback is:

  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1883, in member_create
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
self._add_member(member, ovn_lb, pool_key)
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1841, in _add_member
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
self._refresh_lb_vips(ovn_lb.uuid, external_ids)
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1051, in _refresh_lb_vips
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
vip_ips = self._frame_vip_ips(lb_external_ids)
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1039, in _frame_vip_ips
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver if 
netaddr.IPNetwork(lb_vip).version == 6:
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/netaddr/ip/__init__.py", line 938, in __init__
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver raise 
AddrFormatError('invalid IPNetwork %s' % addr)
  2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver 
netaddr.core.AddrFormatError: invalid IPNetwork [fd2e:6f44:5dd8:c956::1a]

  So apparently the LB VIP is enclosed in additional brackets [ ] when
  adding the member for the second listener-pool-member set.

  This issue only occurs in stable branches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052628/+subscriptions

[Yahoo-eng-team] [Bug 2047055] [NEW] [ovn-octavia-provider] Multivips not linking correctly member to ipv4 and ipv6 VIP

2023-12-20 Thread Fernando Royo
Public bug reported:

When an LB is created passing --additional-vips (mixing IPv4 and IPv6),
the addition of members does not take that into account, and the call is
rejected with the following error:

Provider 'ovn' does not support a requested option: OVN provider does
not support mixing IPv4/IPv6 configuration within the same Load
Balancer. (HTTP 501) (Request-ID:
req-1ecd3533-e8fa-4219-a658-c90cd3059fcd)

Additionally, once this error is fixed, the vips field needs to
correlate the IPv4 VIP with IPv4 members, and similarly for the IPv6
VIP.

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2047055

Title:
  [ovn-octavia-provider] Multivips not linking correctly member to ipv4
  and ipv6 VIP

Status in neutron:
  New

Bug description:
  When an LB is created passing --additional-vips (mixing IPv4 and
  IPv6), the addition of members does not take that into account, and
  the call is rejected with the following error:

  Provider 'ovn' does not support a requested option: OVN provider does
  not support mixing IPv4/IPv6 configuration within the same Load
  Balancer. (HTTP 501) (Request-ID:
  req-1ecd3533-e8fa-4219-a658-c90cd3059fcd)

  Additionally, once this error is fixed, the vips field needs to
  correlate the IPv4 VIP with IPv4 members, and similarly for the IPv6
  VIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2047055/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2036620] [NEW] [ovn-octavia-provider] Fix issue when LRP has more than one address

2023-09-19 Thread Fernando Royo
Public bug reported:

When an LB is created or a new backend member is attached, the OVN
provider searches for the LRP attached to the LS where the LB is
created, in order to associate the new LB with the LR. An exception is
triggered if the LRP has more than one address, because the current code
cannot find the port.

e.g. an LRP with this neutron:cidrs value in the external_ids field,
'10.10.10.1/24 fd8b:8a01:ab1d:0:f816:3eff:fe3d:24ab/64', will trigger
this exception:

During handling of the above exception, another exception occurred:


Traceback (most recent call last):

  File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/neutron/tests/base.py",
 line 178, in func
return f(self, *args, **kwargs)

  File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/ovn_octavia_provider/tests/unit/test_helper.py",
 line 2629, in test__find_lr_of_ls_multiple_address
returned_lr = self.helper._find_lr_of_ls(ls, '10.10.10.1')

  File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/ovn_octavia_provider/helper.py",
 line 804, in _find_lr_of_ls
port_cidr = netaddr.IPNetwork(

  File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/netaddr/ip/__init__.py",
 line 942, in __init__
value, prefixlen = parse_ip_network(module, addr,

  File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/netaddr/ip/__init__.py",
 line 818, in parse_ip_network
mask = IPAddress(val2, module.version, flags=INET_PTON)

  File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/netaddr/ip/__init__.py",
 line 278, in __init__
raise ValueError('%s() does not support netmasks or subnet' \

ValueError: IPAddress() does not support netmasks or subnet
prefixes! See documentation for details.
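
A minimal sketch of handling a multi-address neutron:cidrs value (not
necessarily the merged fix): iterate over the space-separated CIDRs
instead of feeding the whole string to netaddr at once:

```
import netaddr

# Example value from the report: two CIDRs separated by a space.
cidrs = "10.10.10.1/24 fd8b:8a01:ab1d:0:f816:3eff:fe3d:24ab/64"
vip_ip = "10.10.10.1"

# Check each CIDR individually instead of parsing the whole string.
matches = any(netaddr.IPAddress(vip_ip) in netaddr.IPNetwork(cidr)
              for cidr in cidrs.split())
assert matches
```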

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036620

Title:
  [ovn-octavia-provider] Fix issue when LRP has more than one address

Status in neutron:
  In Progress

Bug description:
  When an LB is created or a new backend member is attached, the OVN
  provider searches for the LRP attached to the LS where the LB is
  created, in order to associate the new LB with the LR. An exception is
  triggered if the LRP has more than one address, because the current
  code cannot find the port.

  e.g. an LRP with this neutron:cidrs value in the external_ids field,
  '10.10.10.1/24 fd8b:8a01:ab1d:0:f816:3eff:fe3d:24ab/64', will trigger
  this exception:

  During handling of the above exception, another exception occurred:

  
  Traceback (most recent call last):

File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/neutron/tests/base.py",
 line 178, in func
  return f(self, *args, **kwargs)

File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/ovn_octavia_provider/tests/unit/test_helper.py",
 line 2629, in test__find_lr_of_ls_multiple_address
  returned_lr = self.helper._find_lr_of_ls(ls, '10.10.10.1')

File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/ovn_octavia_provider/helper.py",
 line 804, in _find_lr_of_ls
  port_cidr = netaddr.IPNetwork(

File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/netaddr/ip/__init__.py",
 line 942, in __init__
  value, prefixlen = parse_ip_network(module, addr,

File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/netaddr/ip/__init__.py",
 line 818, in parse_ip_network
  mask = IPAddress(val2, module.version, flags=INET_PTON)

File 
"/home/froyo/Documentos/gitprojects/ovn-octavia-provider/.tox/py/lib/python3.10/site-packages/netaddr/ip/__init__.py",
 line 278, in __init__
  raise ValueError('%s() does not support netmasks or subnet' \

  ValueError: IPAddress() does not support netmasks or subnet
  prefixes! See documentation for details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036620/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2034522] [NEW] Fake members operating_status ONLINE

2023-09-06 Thread Fernando Royo
Public bug reported:

I can deploy members with an invalid / invented IP address (no real
servers with that address) and the LB shows that everything is OK with
them: running `openstack loadbalancer status show <lb>` will show that
the members have "operating_status": "ONLINE".

An example: I deployed the following:
{
"loadbalancer": {
"id": "c50b7cb3-6b8f-434b-9a47-a10a27d0a9b5",
"name": "ovn_lb",
"operating_status": "ONLINE",
"provisioning_status": "ACTIVE",
"listeners": [
{
"id": "87bafdda-0ac6-438f-8824-cb75f9e014df",
"name": "tcp_listener",
"operating_status": "ONLINE",
"provisioning_status": "ACTIVE",
"pools": [
{
"id": "aa6ed64c-4d19-448b-969d-6cc686385162",
"name": "tcp_pool",
"provisioning_status": "ACTIVE",
"operating_status": "ONLINE",
"health_monitor": {
"id": "cc72e7eb-722b-49be-b3d2-3857f880346d",
"name": "hm_ovn_provider",
"type": "TCP",
"provisioning_status": "ACTIVE",
"operating_status": "ONLINE"
},
"members": [
{
"id": "648b9d51-115a-4312-b92e-cc59af0d0401",
"name": "fake_member",
"operating_status": "ONLINE",
"provisioning_status": "ACTIVE",
"address": "10.100.0.204",
"protocol_port": 80
},
{
"id": "8dae11a2-e2d5-45f9-9e85-50f61fa07753",
"name": "de3f2a06",
"operating_status": "ONLINE",
"provisioning_status": "ACTIVE",
"address": "10.0.64.34",
"protocol_port": 80
},
{
"id": "9b044180-71b4-4fa6-83df-4d0f99b4a3f7",
"name": "fake_member2",
"operating_status": "ONLINE",
"provisioning_status": "ACTIVE",
"address": "10.100.0.205",
"protocol_port": 80
},
{
"id": "fe9ce8ca-e6b7-4c5b-807c-8e295156df85",
"name": "6c186a80",
"operating_status": "ONLINE",
"provisioning_status": "ACTIVE",
"address": "10.0.64.39",
"protocol_port": 80
}
]
}
]
}
]
}
}

when the existing servers are the following:
+--------------------------------------+-----------------+--------+----------------------------------------------------------+
| ID                                   | Name            | Status | Networks                                                 |
+--------------------------------------+-----------------+--------+----------------------------------------------------------+
| 1e4a4464-4bbf-4107-94e4-974e87c31074 | 8941a208        | ACTIVE | private=10.0.64.34, fd47:e41c:f56e:0:f816:3eff:fe9f:67f4 |
| 1a0de4d2-d9ea-4d60-85ff-018bcc00d285 | tobiko_44801dfe | ACTIVE | private=10.0.64.39, fd47:e41c:f56e:0:f816:3eff:fea2:7af9 |
+--------------------------------------+-----------------+--------+----------------------------------------------------------+

[Yahoo-eng-team] [Bug 2028161] [NEW] [ovn-octavia-provider] FIP not included into LogicalSwitchPortUpdate event handler method

2023-07-19 Thread Fernando Royo
Public bug reported:

When a LogicalSwitchPortUpdate event is triggered after removing a
FIP from the LB VIP, the event received includes the affected port,
but the related FIP is not passed to the handler method.

Including the FIP would help to decide whether it is an association or
a disassociation action, and would also help to find the related
objects to be updated/deleted.
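
For illustration only (function and key names are assumptions, not the
provider's real code), the idea is to extract the FIP from the port row
and pass it along, so the handler can tell association from
disassociation:

```
# Hypothetical sketch: read the FIP hint from the LSP row's external_ids
# (key name assumed) and hand it to the handler together with the port.
def extract_fip(row):
    return getattr(row, "external_ids", {}).get("neutron:port_fip")

def on_lsp_update(row, old_row, handle_vip_fip):
    new_fip, old_fip = extract_fip(row), extract_fip(old_row)
    if new_fip != old_fip:
        # new_fip set -> association; otherwise it is a disassociation.
        handle_vip_fip(port=row, fip=new_fip or old_fip,
                       associate=bool(new_fip))
```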

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028161

Title:
  [ovn-octavia-provider] FIP not included into LogicalSwitchPortUpdate
  event  handler method

Status in neutron:
  New

Bug description:
  When a LogicalSwitchPortUpdate event is triggered after removing a
  FIP from the LB VIP, the event received includes the affected port,
  but the related FIP is not passed to the handler method.

  Including the FIP would help to decide whether it is an association or
  a disassociation action, and would also help to find the related
  objects to be updated/deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2028161/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2024912] [NEW] [ovn-octavia-provider] Updating status on incorrect pool when HM delete

2023-06-23 Thread Fernando Royo
Public bug reported:

When a Health Monitor is deleted from an LB with multiple pools, the HM
is deleted correctly, but sometimes a random pool (not related to the HM
being deleted) stays in PENDING_UPDATE provisioning_status.

It looks like the status update sent to the Octavia API references a
pool not related to the deleted HM.

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024912

Title:
  [ovn-octavia-provider] Updating status on incorrect pool when HM
  delete

Status in neutron:
  New

Bug description:
  When a Health Monitor is deleted from an LB with multiple pools, the
  HM is deleted correctly, but sometimes a random pool (not related to
  the HM being deleted) stays in PENDING_UPDATE provisioning_status.

  It looks like the status update sent to the Octavia API references a
  pool not related to the deleted HM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024912/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2020195] [NEW] [ovn-octavia-provider] functional test intermittently fail with DB error: Cursor needed to be reset because of commit/rollback and can no longer be fetched from

2023-05-19 Thread Fernando Royo
raise()
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
raise self.value
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 181, in wrapped
return f(*dup_args, **dup_kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/neutron/db/l3_db.py",
 line 646, in get_router
router = self._get_router(context, id)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 1022, in wrapper
return fn(*args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/neutron/db/l3_db.py",
 line 205, in _get_router
router = model_query.get_by_id(
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/neutron_lib/db/model_query.py",
 line 169, in get_by_id
return query.filter(model.id == object_id).one()
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/orm/query.py",
 line 2869, in one
return self._iter().one()
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 1476, in one
return self._only_one_row(
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 559, in _only_one_row
row = onerow(hard_close=True)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 1340, in _fetchone_impl
return self._real_result._fetchone_impl(hard_close=hard_close)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 1743, in _fetchone_impl
row = next(self.iterator, _NO_ROW)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/orm/loading.py",
 line 147, in chunks
fetch = cursor._raw_all_rows()
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 392, in _raw_all_rows
rows = self._fetchall_impl()
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/cursor.py",
 line 1805, in _fetchall_impl
return self.cursor_strategy.fetchall(self, self.cursor)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/cursor.py",
 line 981, in fetchall
self.handle_exception(result, dbapi_cursor, e)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/cursor.py",
 line 941, in handle_exception
result.connection._handle_dbapi_exception(
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
 line 2122, in _handle_dbapi_exception
util.raise_(newraise, with_traceback=exc_info[2], from_=e)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/util/compat.py",
 line 208, in raise_
raise exception
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.10/site-packages/sqlalchemy/engine/cursor.py",
 line 977, in fetchall
rows = dbapi_cursor.fetchall()

oslo_db.exception.DBError: (sqlite3.InterfaceError) Cursor needed to be 
reset because of commit/rollback and can no longer be fetched from.
(Background on this error at: https://sqlalche.me/e/14/rvf5)

It happens randomly on functional tests, and is not related to any
specific one.

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Summary changed:

-  functional test intermittently fail with DB error: Cursor needed to be reset 
because of commit/rollback and can no longer be fetched from
+ [ovn-octavia-provider] functional test intermittently fail with DB error: Cursor needed to be reset because of commit/rollback and can no longer be fetched from

[Yahoo-eng-team] [Bug 2017680] [NEW] Tenant user cannot delete a port associated with an FIP belonging to the admin tenant

2023-04-25 Thread Fernando Royo
Public bug reported:

Tenant A creates a port, but an admin associates a FIP to that port. At
this point tenant A cannot delete the port, because of a foreign key
relation with the FIP (500 server error).

2023-04-14 10:01:49.116 277 ERROR neutron.pecan_wsgi.hooks.translation
pymysql.err.IntegrityError: (1451, 'Cannot delete or update a parent
row: a foreign key constraint fails (`ovs_neutron`.`floatingips`,
CONSTRAINT `floatingips_ibfk_1` FOREIGN KEY (`fixed_port_id`) REFERENCES
`ports` (`id`))')

Basically the issue is that the user is not allowed to disassociate the
port from the FIP.
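
As a rough workaround sketch (not a fix of the underlying policy/DB
issue; the cloud name and port id are placeholders), an admin can
disassociate the FIP first so the tenant can then delete the port; with
openstacksdk that could look like:

```
import openstack

# Placeholder cloud/port id; run with admin credentials.
conn = openstack.connect(cloud="admin")
port_id = "PORT_ID"

# Disassociate any floating IP pointing at the port, then the tenant can
# delete the port without hitting the foreign key constraint.
for fip in conn.network.ips(port_id=port_id):
    conn.network.update_ip(fip, port_id=None)
```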

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017680

Title:
  Tenant user cannot delete a port associated with an FIP belonging to
  the admin tenant

Status in neutron:
  New

Bug description:
  Tenant A creates a port, but an admin associates a FIP to that port.
  At this point tenant A cannot delete the port, because of a foreign
  key relation with the FIP (500 server error).

  2023-04-14 10:01:49.116 277 ERROR neutron.pecan_wsgi.hooks.translation
  pymysql.err.IntegrityError: (1451, 'Cannot delete or update a parent
  row: a foreign key constraint fails (`ovs_neutron`.`floatingips`,
  CONSTRAINT `floatingips_ibfk_1` FOREIGN KEY (`fixed_port_id`)
  REFERENCES `ports` (`id`))')

  Basically the issue is that the user is not allowed to disassociate
  the port from the FIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017680/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017216] [NEW] [ovn-octavia-provider] batch member update generates out of sync with Octavia DB

2023-04-21 Thread Fernando Royo
Public bug reported:

When a batch_update_members request [1] sends unsupported options (e.g.
monitor_X) for an already existing member or a new one, the OVN provider
applies the rest of the changes for the other members and skips the
member with the unsupported option. For each action executed
(create/delete/update) an updated status is sent to the Octavia API, and
for the skipped members an UnsupportedOptionError is returned.

The odd thing here is that the Octavia API does not apply the other
member changes as soon as it receives the UnsupportedOptionError update
status for the skipped member, and in this way we generate an
out-of-sync state between the info in the Octavia DB and the OVN NB DB.

A reproducer is logged in
https://paste.openstack.org/show/bU4o2ylJOg7XLXgVXKps/

[1] https://docs.openstack.org/api-ref/load-balancer/v2/?expanded=update-a-member-detail,list-pools-detail,create-pool-detail,batch-update-members-detail#batch-update-members
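
For reference, the rough shape of such a batch body (addresses are
placeholders); monitor_port is one of the monitor_X member options the
OVN provider rejects:

```
# PUT /v2/lbaas/pools/{pool_id}/members with one member carrying an
# unsupported option (monitor_port); addresses are placeholders.
batch_update_body = {
    "members": [
        {"address": "10.0.0.10", "protocol_port": 80},
        {"address": "10.0.0.11", "protocol_port": 80,
         "monitor_port": 8080},  # rejected with UnsupportedOptionError
    ]
}
```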

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017216

Title:
  [ovn-octavia-provider] batch member update generates out of sync with
  Octavia DB

Status in neutron:
  New

Bug description:
  When a batch_update_members request [1] sends unsupported options
  (e.g. monitor_X) for an already existing member or a new one, the OVN
  provider applies the rest of the changes for the other members and
  skips the member with the unsupported option. For each action executed
  (create/delete/update) an updated status is sent to the Octavia API,
  and for the skipped members an UnsupportedOptionError is returned.

  The odd thing here is that the Octavia API does not apply the other
  member changes as soon as it receives the UnsupportedOptionError
  update status for the skipped member, and in this way we generate an
  out-of-sync state between the info in the Octavia DB and the OVN NB
  DB.

  A reproducer is logged in
  https://paste.openstack.org/show/bU4o2ylJOg7XLXgVXKps/

  [1] https://docs.openstack.org/api-ref/load-balancer/v2/?expanded=update-a-member-detail,list-pools-detail,create-pool-detail,batch-update-members-detail#batch-update-members

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017216/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017127] [NEW] [ovn-octavia-provider] Fix member update according to the Octavia API definition

2023-04-20 Thread Fernando Royo
Public bug reported:

Currently, member parameters that cannot be modified through the Octavia
API, such as address, port or subnet_id for an existing member, are
checked on update. According to the definition of member update (and
batch_update), the only parameter supported by the ovn-provider is the
member's admin_state_up, so it seems coherent to clean up the code to
analyze just this one.

In fact, the current checks performed on those attributes (not editable
in an update operation) trigger an error and the update does not finish
correctly.
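
A minimal sketch of the intended behaviour (names are hypothetical, not
the provider's actual code): on member update only admin_state_up is
considered and immutable attributes are ignored:

```
# Hypothetical sketch: compute the only change the ovn-provider can apply.
def member_update_changes(old_member, new_member):
    changes = {}
    if new_member.get("admin_state_up") != old_member.get("admin_state_up"):
        changes["admin_state_up"] = new_member.get("admin_state_up")
    # address, protocol_port, subnet_id, etc. are not editable on update,
    # so they are deliberately not inspected here.
    return changes
```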

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017127

Title:
  [ovn-octavia-provider] Fix member update according to the Octavia API
  definition

Status in neutron:
  New

Bug description:
  Currently, member parameters that cannot be modified through the
  Octavia API, such as address, port or subnet_id for an existing
  member, are checked on update. According to the definition of member
  update (and batch_update), the only parameter supported by the
  ovn-provider is the member's admin_state_up, so it seems coherent to
  clean up the code to analyze just this one.

  In fact, the current checks performed on those attributes (not
  editable in an update operation) trigger an error and the update does
  not finish correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017127/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2016862] [NEW] [ovn-octavia-provider] Admin_state_up ignored on create a new member

2023-04-18 Thread Fernando Royo
Public bug reported:

When a new member is added with admin_state_up set to False, the member
should not take part in the request balancing over the LB VIP. Currently
the API sets the member operating_status correctly, but the member still
receives requests.
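
A minimal sketch of the expected behaviour (helper name is
hypothetical): members created with admin_state_up=False should simply
be excluded from the VIP backends:

```
# Hypothetical sketch: filter out administratively disabled members when
# building the list of backends behind the LB VIP.
def enabled_backends(members):
    return [m for m in members if m.get("admin_state_up", True)]

assert enabled_backends(
    [{"address": "10.0.0.10", "admin_state_up": False},
     {"address": "10.0.0.11"}]) == [{"address": "10.0.0.11"}]
```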

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2016862

Title:
  [ovn-octavia-provider] Admin_state_up ignored on create a new member

Status in neutron:
  In Progress

Bug description:
  When a new member is added with admin_state_up set to False, the
  member should not take part in the request balancing over the LB VIP.
  Currently the API sets the member operating_status correctly, but the
  member still receives requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2016862/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1956035] Re: ovn load balancer member failover not working when accessed from floating ip

2023-03-29 Thread Fernando Royo
*** This bug is a duplicate of bug 1997418 ***
https://bugs.launchpad.net/bugs/1997418

** This bug has been marked a duplicate of bug 1997418
[ovn-octavia-provider] HM not working for FIPs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1956035

Title:
  ovn load balancer member failover not working when accessed from
  floating ip

Status in neutron:
  New

Bug description:
  When the health monitor is enabled, unhealthy members are not excluded
  from the traffic when accessed via the floating IP associated with the
  load balancer. It needs one more Load_Balancer_Health_Check row with
  the vip set to the floating IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1956035/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2011573] [NEW] [ovn-octavia-provider] Job pep8 failing due to bandit new lint rule

2023-03-14 Thread Fernando Royo
Public bug reported:

Pep8 jobs are failing on master and stable branches after the bandit
update to 1.7.5. Basically bandit adds a new lint rule checking that a
timeout is specified in any request.

The rule B113 is marked as a warning, but it is making the job fail.

[1]
https://github.com/PyCQA/bandit/commit/5ff73ff8ff956df7d63fde49c3bd671db8e821eb
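
For reference, B113 (request_without_timeout) flags calls such as
requests.get(url) with no timeout; the usual remedy is an explicit
timeout argument (the URL below is a placeholder):

```
import requests

# Flagged by B113:   requests.get(url)
# Accepted by B113:
resp = requests.get("https://example.com/healthz", timeout=10)
```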

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2011573

Title:
  [ovn-octavia-provider] Job pep8 failing due to bandit new lint rule

Status in neutron:
  In Progress

Bug description:
  Pep8 jobs are failing on master and stable branches after the bandit
  update to 1.7.5. Basically bandit adds a new lint rule checking that a
  timeout is specified in any request.

  The rule B113 is marked as a warning, but it is making the job fail.

  [1]
  
https://github.com/PyCQA/bandit/commit/5ff73ff8ff956df7d63fde49c3bd671db8e821eb

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2011573/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2008695] [NEW] Remove any LB HM references from the external_ids upon deleting an HM

2023-02-27 Thread Fernando Royo
Public bug reported:

HM's uuid is included in the external_ids of the associated LB, like
this:
"octavia:healthmonitors"="["483e8e50-3d0a-4f03-9c4b-42ab315f5a11"]". The
expected behavior is that this entry would be removed from the
external_ids of the LB when the HM is deleted. But currently, this entry
is not removed or cleaned up.
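
A minimal sketch of the missing cleanup (assumption: ovsdbapp's
db_remove can drop a single key from a map column such as external_ids):

```
# Sketch: drop the stale "octavia:healthmonitors" key from the LB's
# external_ids once the HM has been deleted.
def remove_hm_ref(nb_api, lb_uuid):
    nb_api.db_remove('Load_Balancer', lb_uuid, 'external_ids',
                     'octavia:healthmonitors').execute(check_error=True)
```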

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2008695

Title:
  Remove any LB HM references from the external_ids upon deleting an HM

Status in neutron:
  New

Bug description:
  HM's uuid is included in the external_ids of the associated LB, like
  this:
  "octavia:healthmonitors"="["483e8e50-3d0a-4f03-9c4b-42ab315f5a11"]".
  The expected behavior is that this entry would be removed from the
  external_ids of the LB when the HM is deleted. But currently, this
  entry is not removed or cleaned up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2008695/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007985] [NEW] [ovn-octavia-provider] restore the member provisioning status to NO_MONITOR after delete the HM

2023-02-21 Thread Fernando Royo
Public bug reported:

When an HM is created over a pool, the members associated with it change
their provisioning_status to ONLINE, and if the health check packets
detect a communication error it moves to ERROR. When the HM is deleted,
the provisioning status of those members should go back to NO_MONITOR,
but currently it keeps the last status (ONLINE or ERROR).
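
For illustration, the rough shape of the status update the provider
could push back to Octavia on HM delete (octavia-lib status dict; IDs
are placeholders). Note that in that dict NO_MONITOR travels in the
members' operating_status field:

```
# Placeholder IDs; shape of a driver status update sent after deleting
# the health monitor, moving its members back to NO_MONITOR.
status_update = {
    "healthmonitors": [
        {"id": "HM_ID", "provisioning_status": "DELETED"}],
    "members": [
        {"id": "MEMBER_ID",
         "provisioning_status": "ACTIVE",
         "operating_status": "NO_MONITOR"}],
}
```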

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2007985

Title:
  [ovn-octavia-provider] restore the member provisioning status to
  NO_MONITOR after delete the HM

Status in neutron:
  New

Bug description:
  When an HM is created over a pool, the members associated with it
  change their provisioning_status to ONLINE, and if the health check
  packets detect a communication error it moves to ERROR. When the HM is
  deleted, the provisioning status of those members should go back to
  NO_MONITOR, but currently it keeps the last status (ONLINE or ERROR).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2007985/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007835] [NEW] [ovn-octavia-provider] LB ip port mapping clean up on every member change

2023-02-20 Thread Fernando Royo
Public bug reported:

From patch [1], the ip_port_mapping is updated by just adding and
deleting every member after any related operation over the LB-HM; this
operation is done in two steps, a db_clear and a db_set. Taking into
account that ovsdbapp has specific commands to add/del backends in the
ip_port_mapping, it seems more appropriate to use those methods.

[1] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/873426/10/ovn_octavia_provider/helper.py#1219
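
A minimal sketch of the suggested direction (assuming ovsdbapp exposes
lb_add_ip_port_mapping/lb_del_ip_port_mapping with the arguments below;
the exact signatures are an assumption here):

```
# Sketch: touch only the affected backend instead of clearing and
# re-setting the whole ip_port_mappings map.
def add_member_mapping(nb_api, lb_uuid, backend_ip, port_name, source_ip):
    nb_api.lb_add_ip_port_mapping(
        lb_uuid, backend_ip, port_name, source_ip).execute(check_error=True)

def del_member_mapping(nb_api, lb_uuid, backend_ip):
    nb_api.lb_del_ip_port_mapping(
        lb_uuid, backend_ip).execute(check_error=True)
```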

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2007835

Title:
  [ovn-octavia-provider] LB ip port mapping clean up on every member
  change

Status in neutron:
  New

Bug description:
  From patch [1], the ip_port_mapping is updated by just adding and
  deleting every member after any related operation over the LB-HM; this
  operation is done in two steps, a db_clear and a db_set. Taking into
  account that ovsdbapp has specific commands to add/del backends in the
  ip_port_mapping, it seems more appropriate to use those methods.

  [1] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/873426/10/ovn_octavia_provider/helper.py#1219

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2007835/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2002800] [NEW] Allow multiple IPv6 ports on router from same network on ml2/ovs+vxlan+dvr

2023-01-13 Thread Fernando Royo
Public bug reported:

In a recent change [1], some additional checks were added to avoid ports
with overlapping CIDRs on a router. That change also added a check to
not attach more than one IPv6 port from the same network, but this check
needs to allow multiple ports when a deployment is done using
ml2/ovs+vxlan+dvr, because two ports are added:

- one with device_owner as network:router_interface_distributed
- another one with device_owner as network:router_centralized_snat

[1] https://review.opendev.org/c/openstack/neutron/+/859143
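
For illustration only (not the merged patch), the check would need to
treat this DVR pair of device_owner values as a legitimate duplicate:

```
from neutron_lib import constants

# The two router ports a DVR deployment adds for the same network.
DVR_PORT_OWNERS = (
    constants.DEVICE_OWNER_DVR_INTERFACE,  # network:router_interface_distributed
    constants.DEVICE_OWNER_ROUTER_SNAT,    # network:router_centralized_snat
)

def is_allowed_ipv6_duplicate(port):
    return port.get("device_owner") in DVR_PORT_OWNERS
```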

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Description changed:

- After a recent change [1], some additional checks was added to avoid
- ports overlapping cidrs on a router. On this change was also added a
- check to do not attached more than one port IPv6 from same network, but
- this check need to allow multiple ports when a deployment is done using
+ On a recent change [1], some additional checks was added to avoid ports
+ overlapping cidrs on a router. On this change was also added a check to
+ do not attached more than one port IPv6 from same network, but this
+ check need to allow multiple ports when a deployment is done using
  ml2/ovs+xvlan+dvr, because two ports are added:
  
  - one with device_owner as network:router_interface_distributed
  - another one with device_owner as network:router_centralized_snat
  
  [1] https://review.opendev.org/c/openstack/neutron/+/859143

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2002800

Title:
  Allow multiple IPv6 ports on router from same network on
  ml2/ovs+vxlan+dvr

Status in neutron:
  In Progress

Bug description:
  In a recent change [1], some additional checks were added to avoid
  ports with overlapping CIDRs on a router. That change also added a
  check to not attach more than one IPv6 port from the same network, but
  this check needs to allow multiple ports when a deployment is done
  using ml2/ovs+vxlan+dvr, because two ports are added:

  - one with device_owner as network:router_interface_distributed
  - another one with device_owner as network:router_centralized_snat

  [1] https://review.opendev.org/c/openstack/neutron/+/859143

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2002800/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2000071] [NEW] [ovn-octavia-provider] Do not make the status of a newly HM conditional on the status of existing members

2022-12-19 Thread Fernando Royo
Public bug reported:

When a new HM is created for an LB (ovn-provider), the member status is
checked; if any of the members (related ports) is not found, the HM is
created with ERROR provisioning status. This does not make sense if we
take into account that the HM is an independent entity and should not
have its status conditioned by the status of the members it will
monitor.

In fact, this behaviour only occurs if we follow this sequence of steps:

- Creation of a pool (pool1)
- Creation of a member (member1) associated to the previous pool (pool1), which 
starts in ACTIVE
- Creation of a member (member2) associated to the previous pool (pool1), which 
starts in ERROR status for example because we have invented the member address.
- Creation of a HM associated to the pool (pool1)

As output the HM will be in ERROR.

If we do the same steps in a bulk request, the output will be the HM
ACTIVE and the members as expected (member1 ACTIVE, member2 ERROR).

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2000071

Title:
  [ovn-octavia-provider] Do not make the status of a newly HM
  conditional on the status of existing members

Status in neutron:
  New

Bug description:
  When a new HM is created for an LB (ovn-provider), the member status
  is checked; if any of the members (related ports) is not found, the HM
  is created with ERROR provisioning status. This does not make sense if
  we take into account that the HM is an independent entity and should
  not have its status conditioned by the status of the members it will
  monitor.

  In fact, this behaviour only occurs if we follow this sequence of
  steps:

  - Creation of a pool (pool1)
  - Creation of a member (member1) associated to the previous pool (pool1), 
which starts in ACTIVE
  - Creation of a member (member2) associated to the previous pool (pool1), 
which starts in ERROR status for example because we have invented the member 
address.
  - Creation of a HM associated to the pool (pool1)

  As output the HM will be in ERROR.

  If we do the same steps in a bulk request, the output will be the HM
  ACTIVE and the members as expected (member1 ACTIVE, member2 ERROR).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2000071/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999813] [NEW] [ovn-octavia-provider] when a HM is created/deleted the listener remains in PENDING_UPDATE

2022-12-15 Thread Fernando Royo
Public bug reported:

When an HM is created/deleted over a pool of a fully populated LB, the
provisioning status of the listener that owns the pool is put in
PENDING_UPDATE; the operation is done correctly, but the listener stays
stuck in that status.

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999813

Title:
  [ovn-octavia-provider] when a HM is created/deleted  the listener
  remains in PENDING_UPDATE

Status in neutron:
  New

Bug description:
  When an HM is created/deleted over a pool of a fully populated LB, the
  provisioning status of the listener that owns the pool is put in
  PENDING_UPDATE; the operation is done correctly, but the listener
  stays stuck in that status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999813/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1997567] Re: [ovn-octavia-provider] Octavia LB stuck in PENDING_UPDATE after creation

2022-12-12 Thread Fernando Royo
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1997567

Title:
  [ovn-octavia-provider] Octavia LB stuck in PENDING_UPDATE after
  creation

Status in neutron:
  Invalid

Bug description:
  Wallaby OpenStack deployment using OpenStack Kayobe

  Running on Ubuntu Focal

  Relevant package version:

  - octavia-lib: 2.3.1
  - neutron-lib: 2.10.2
  - ovn-octavia-provider: 1.0.2.dev5

  I have encountered a bug where after creating an Octavia load balancer
  it gets stuck and cannot be deleted.

  Attempts in Horizon to delete the load balancer are met with the following
  error: Unable to delete Load Balancer: test_ovn_lb. It also reports
  Provisioning Status: Pending Update.

  When attempting to delete via the openstack client I get this
  response.

  (openstackclient-venv) [~] openstack loadbalancer delete  
64951486-8143-4a17-a88b-9f576688e662
  Validation failure: Cannot delete Load Balancer 
64951486-8143-4a17-a88b-9f576688e662 - it has children (HTTP 400) (Request-ID: 
req-5bf53e03-d33d-4995-88fb-8617060afdf4)

  (openstackclient-venv) [~] openstack loadbalancer delete  
64951486-8143-4a17-a88b-9f576688e662 --cascade
  Invalid state PENDING_UPDATE of loadbalancer resource 
64951486-8143-4a17-a88b-9f576688e662 (HTTP 409) (Request-ID: 
req-cd917d82-67cd-4704-b6d2-032939e08d88)

  In the octavia-api.log the following error message is logged in the
  moments prior to getting stuck in this state.
  https://paste.opendev.org/show/bkKWy2WkjC9fo05kOFE3/

  The only working solution to this problem that I have found is to edit
  the Octavia database and change the current PENDING_UPDATE provisioning
  status to ERROR.

  use octavia
  UPDATE load_balancer SET provisioning_status = 'ERROR' WHERE 
provisioning_status LIKE "PENDING_UPDATE";

  This manual edit of the database then allows for the removal of the
  load balancer via the API:

  openstack loadbalancer delete id-here --cascade

  This bug is not blocking; however, it would be nice to prevent this from
  happening again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1997567/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1997094] [NEW] [ovn-octavia-provider] HM created at fully populated loadbalancer stuck in PENDING_CREATE

2022-11-18 Thread Fernando Royo
Public bug reported:

When trying to create a health monitor on OVN LBs through the fully
populated load balancer creation API, where the pool object includes the
information about the HM to be created, the HM becomes stuck in
PENDING_CREATE and is not functional at all. If I delete it, the LB it was
tied to becomes stuck in PENDING_UPDATE.
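
A hedged example of the kind of request involved (IDs and field values are
placeholders; the healthmonitor definition travels inside the pool object):

curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token: $OS_TOKEN" -d '
{"loadbalancer": {"name": "lb-with-hm", "provider": "ovn",
  "vip_subnet_id": "SUBNET_ID",
  "listeners": [{"name": "listener1", "protocol": "TCP", "protocol_port": 80,
    "default_pool": {"name": "pool1", "protocol": "TCP",
      "lb_algorithm": "SOURCE_IP_PORT",
      "healthmonitor": {"type": "TCP", "delay": 5, "timeout": 5,
        "max_retries": 3}}}]}}' https://API_HOST:9876/v2.0/lbaas/loadbalancers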

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1997094

Title:
  [ovn-octavia-provider] HM created at fully populated loadbalancer
  stuck in PENDING_CREATE

Status in neutron:
  In Progress

Bug description:
  When trying to create a health monitor on OVN LBs through the fully
  populated load balancer creation API, where the pool object includes the
  information about the HM to be created, the HM becomes stuck in
  PENDING_CREATE and is not functional at all. If I delete it, the LB it
  was tied to becomes stuck in PENDING_UPDATE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1997094/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1990129] [NEW] [ovn-octavia-provider] Avoid LB in ERROR status on delete due to LR/LS not found

2022-09-19 Thread Fernando Royo
Public bug reported:

The LB delete operation uses a single transaction to remove the LB
reference from all LS and LR, and also to delete the LB itself.

As it is an atomic transaction, if any command fails, the whole operation
will report the LB in ERROR status.
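
For reference, the rough per-resource equivalent of what that transaction
groups, expressed as ovn-nbctl commands (switch/router/LB identifiers are
placeholders); if any detach step hits a missing LS/LR, the whole atomic
transaction aborts and the LB ends up in ERROR:

ovn-nbctl ls-lb-del neutron-<ls_uuid> <lb_uuid>
ovn-nbctl lr-lb-del neutron-<lr_uuid> <lb_uuid>
ovn-nbctl lb-del <lb_uuid>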

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Tags added: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1990129

Title:
  [ovn-octavia-provider] Avoid LB in ERROR status on delete due to LR/LS
  not found

Status in neutron:
  In Progress

Bug description:
  The LB delete operation uses a single transaction to remove the LB
  reference from all LS and LR, and also to delete the LB itself.

  As it is an atomic transaction, if any command fails, the whole operation
  will report the LB in ERROR status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1990129/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1989460] [NEW] [ovn-octavia-provider] HealthMonitor event received on port deleted

2022-09-13 Thread Fernando Royo
Public bug reported:

When the port associated to a VM is deleted, no event is received by the
driver agent, so the LB keeps reporting a wrong ONLINE operating_status
for the member associated to the affected VM.

As the port associated to the VM can be deleted, that case needs to be
covered.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989460

Title:
  [ovn-octavia-provider] HealthMonitor event received on port deleted

Status in neutron:
  New

Bug description:
  When the port associated to a VM is deleted, no event is received by
  the driver agent, so the LB keeps reporting a wrong ONLINE
  operating_status for the member associated to the affected VM.

  As the port associated to the VM can be deleted, that case needs to be
  covered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989460/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1987666] [NEW] Race condition when adding two subnet with same cidr to router

2022-08-25 Thread Fernando Royo
Public bug reported:

When two subnets with the same cidr are connected to a router, the
second request should fail with an error like this:

BadRequest: resources._ipv4_gateway_interface: Bad router request: Cidr
10.100.130.0/24 of subnet 41626435-77b8-4858-9594-a6709e2de5c5 overlaps
with cidr 10.100.130.0/24 of subnet cd6566de-add9-4129-9f5e-5b99cc57194c

But if those connections are triggered simultaneously, both subnets end up
connected to the router without raising the previous BadRequest.

A simple script like this allows replicating the described situation:

echo "create resources"
openstack router create r0
openstack network create n0-A
openstack subnet create sn0-A --network n0-A --subnet-range 10.100.0.0/24
openstack network create n0-B
openstack subnet create sn0-B --network n0-B --subnet-range 10.100.0.0/24

echo "connect subnets to routers"
openstack router add subnet r0 sn0-A&
openstack router add subnet r0 sn0-B

As a result:

(overcloud) [stack@undercloud-0 ~]$ openstack router show r0 -c interfaces_info 
-f value; done
[{'port_id': '171028ae-3a0d-4690-86fd-09bf3cf9fabe', 'ip_address': 
'10.100.0.1', 'subnet_id': 'b1f1cfb0-3d8d-41ae-b5e4-4839f4c5d7a4'}, {'port_id': 
'46596629-a1bc-49d6-903e-45cd27ba6b22', 'ip_address': '10.100.0.1', 
'subnet_id': '1f463853-487e-4aeb-b0ec-cd43048bf692'}]

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1987666

Title:
  Race condition when adding two subnet with same cidr to router

Status in neutron:
  New

Bug description:
  When two subnets with the same cidr are connected to a router, the
  second request should fail with an error like this:

  BadRequest: resources._ipv4_gateway_interface: Bad router request:
  Cidr 10.100.130.0/24 of subnet 41626435-77b8-4858-9594-a6709e2de5c5
  overlaps with cidr 10.100.130.0/24 of subnet cd6566de-
  add9-4129-9f5e-5b99cc57194c

  But if those connections are triggered simultaneously, both subnets end
  up connected to the router without raising the previous BadRequest.

  A simple script like this allows replicating the described situation:

  echo "create resources"
  openstack router create r0
  openstack network create n0-A
  openstack subnet create sn0-A --network n0-A --subnet-range 10.100.0.0/24
  openstack network create n0-B
  openstack subnet create sn0-B --network n0-B --subnet-range 10.100.0.0/24

  echo "connect subnets to routers"
  openstack router add subnet r0 sn0-A&
  openstack router add subnet r0 sn0-B

  As a result:

  (overcloud) [stack@undercloud-0 ~]$ openstack router show r0 -c 
interfaces_info -f value; done
  [{'port_id': '171028ae-3a0d-4690-86fd-09bf3cf9fabe', 'ip_address': 
'10.100.0.1', 'subnet_id': 'b1f1cfb0-3d8d-41ae-b5e4-4839f4c5d7a4'}, {'port_id': 
'46596629-a1bc-49d6-903e-45cd27ba6b22', 'ip_address': '10.100.0.1', 
'subnet_id': '1f463853-487e-4aeb-b0ec-cd43048bf692'}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1987666/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1987308] [NEW] [ovn-octavia-provider] ovn compilation is failing in "ovn-octavia-provider-tempest-master " job

2022-08-22 Thread Fernando Royo
Public bug reported:

Log example:
https://ce39650b7c9f8fc83e70-7e9148bad8639eb8c7a86b7a975d2d7d.ssl.cf5.rackcdn.com/853681/1/check/ovn-
octavia-provider-tempest-master/39685b9/job-output.txt

Snippet: https://paste.opendev.org/show/bfxM67OLMZcjJgFWpZXL/

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1987308

Title:
  [ovn-octavia-provider] ovn compilation is failing in "ovn-octavia-
  provider-tempest-master " job

Status in neutron:
  In Progress

Bug description:
  Log example:
  
https://ce39650b7c9f8fc83e70-7e9148bad8639eb8c7a86b7a975d2d7d.ssl.cf5.rackcdn.com/853681/1/check/ovn-
  octavia-provider-tempest-master/39685b9/job-output.txt

  Snippet: https://paste.opendev.org/show/bfxM67OLMZcjJgFWpZXL/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1987308/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1986977] [NEW] [ovn-octavia-provider] HealthMonitor affecting several LBs

2022-08-18 Thread Fernando Royo
Public bug reported:

A HealthMonitor over a pool that shares a member (on a different port) with
another pool, or even with a pool of a different LB, ends up updating all
pools involving the member notified in the ServiceMonitorUpdateEvent. In
particular, the information gets mixed up in the NBDB, in the external_ids
field.
E.g

A member 10.0.0.100 on LB1 over pool1 (port 80)
Same member 10.0.0.100 on LB2 over pool2 (port 22)
HM over pool1

If the HM notifies an offline status for the member through pool1 (port
80), both LBs end up including the member in ERROR state in their
external_ids.
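
One way to see the mixed-up state is to inspect both OVN LBs in the NBDB,
assuming ovn-nbctl access on a controller:

ovn-nbctl list Load_Balancer
# both LB1 and LB2 show the 10.0.0.100 member flagged as ERROR in their
# external_ids, even though the HM is only attached to pool1 on LB1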

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Summary changed:

- HealthMonitor affecting several LBs
+ [ovn-octavia-provider] HealthMonitor affecting several LBs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1986977

Title:
  [ovn-octavia-provider] HealthMonitor affecting several LBs

Status in neutron:
  In Progress

Bug description:
  A HealthMonitor over a pool that shares a member (on a different port)
  with another pool, or even with a pool of a different LB, ends up
  updating all pools involving the member notified in the
  ServiceMonitorUpdateEvent. In particular, the information gets mixed up
  in the NBDB, in the external_ids field.

  E.g

  A member 10.0.0.100 on LB1 over pool1 (port 80)
  Same member 10.0.0.100 on LB2 over pool2 (port 22)
  HM over pool1

  If the HM notifies an offline status for the member through pool1 (port
  80), both LBs end up including the member in ERROR state in their
  external_ids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1986977/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1981594] [NEW] [ovn-octavia-provider] LB member in ERROR state when delete using member_update_batch

2022-07-13 Thread Fernando Royo
Public bug reported:

When using the batch member update function to apply several changes to
the members of an OVN LB, a delete operation leaves the member in ERROR
state. The error happens when IPv6 is used for the LB VIP and members.
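
For context, the batch member update is a single PUT on the pool's members
collection, and members omitted from the list are deleted. A hedged sketch
with IPv6 placeholders:

curl -i -X PUT -H "Content-Type: application/json" -H "X-Auth-Token: $OS_TOKEN" -d '
{"members": [{"address": "fd2e:6f44:5dd8:c956::10", "protocol_port": 80}]}' \
  https://API_HOST:9876/v2.0/lbaas/pools/POOL_ID/members
# any existing member not listed (e.g. fd2e:6f44:5dd8:c956::11) should be
# deleted, but it is left in ERROR state instead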

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1981594

Title:
  [ovn-octavia-provider] LB member in ERROR state when delete using
  member_update_batch

Status in neutron:
  New

Bug description:
  When using the batch member update function to apply several changes to
  the members of an OVN LB, a delete operation leaves the member in ERROR
  state. The error happens when IPv6 is used for the LB VIP and members.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1981594/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1965772] Re: ovn-octavia-provider does not report status correctly to octavia

2022-07-13 Thread Fernando Royo
** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1965772

Title:
  ovn-octavia-provider does not report status correctly to octavia

Status in neutron:
  Fix Released

Bug description:
  Hi all,

  The OVN Octavia provider does not report status correctly to Octavia
  due to a few bugs in the health monitoring implementation:

  1) 
https://opendev.org/openstack/ovn-octavia-provider/src/commit/d6adbcef86e32bc7befbd5890a2bc79256b7a8e2/ovn_octavia_provider/helper.py#L2374
 :
  In _get_lb_on_hm_event, the request to the OVN NB API (db_find_rows) is 
incorrect:
  lbs = self.ovn_nbdb_api.db_find_rows(
  'Load_Balancer', (('ip_port_mappings', '=', mappings),
('protocol', '=', row.protocol))).execute()

  Should be :
  lbs = self.ovn_nbdb_api.db_find_rows(
  'Load_Balancer', ('ip_port_mappings', '=', mappings),
('protocol', '=', row.protocol[0])).execute()

  Note the removed extra parenthesis and the protocol string which is
  found in the first element of the protocol[] list.

  2) https://opendev.org/openstack/ovn-octavia-
  
provider/src/commit/d6adbcef86e32bc7befbd5890a2bc79256b7a8e2/ovn_octavia_provider/helper.py#L2426
  :

  There is some confusion with the Pool object returned by (pool =
  self._octavia_driver_lib.get_pool(pool_id)): this object does not
  contain any operating_status attribute, and it seems, given the current
  state of octavia-lib, that it is possible to set and update the status
  of a listener/pool/member but not to retrieve the current status.

  See https://opendev.org/openstack/octavia-
  lib/src/branch/master/octavia_lib/api/drivers/data_models.py for the
  current Pool data model.

  As a result, the computation done by _get_new_operating_statuses
  cannot use the current operating status to set a new operating status.
  It is still possible to set an operating status for the members by
  setting them to "OFFLINE" separately when a HM update event is fired.

  3) The Load_Balancer_Health_Check NB entry creates the Service_Monitor
  SB entries, but there isn't any way to link the Service_Monitor
  entries created with the original NB entry. The result is that health
  monitor events received from the SB and processed by the octavia
  driver agent cannot be accurately matched with the correct octavia
  health monitor entry. If we have for example two load balancer entries
  using the same pool members and the same ports, only the first LB
  returned with db_find_rows would be updated (given the #2 bug is
  fixed). The case for having 2 load balancers with the same members is
  perfectly valid when using separate load balancers for public traffic
  (using a VIP from a public pool) and another one for internal/admin
  traffic (using a VIP from another pool, and with a source range
  whitelist). The code selecting only the first LB in that case is the
  same as bug #1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1965772/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977831] [NEW] Response time increasing on new subnets over same network

2022-06-07 Thread Fernando Royo
Public bug reported:

After some subnets are created over the same network, the response time
for new ones increases linearly; if the number of subnets is high (over
1000), a timeout is triggered.

The issue can be easily reproduced by creating subnets in a loop and
capturing the time each creation takes as the total count increases.
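
A minimal reproduction sketch (network name and CIDR pattern are
arbitrary):

openstack network create n0
for i in $(seq 1 1200); do
  echo "subnet $i:"
  time openstack subnet create sn-$i --network n0 \
    --subnet-range 10.$((i / 250)).$((i % 250)).0/24 > /dev/null
done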

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1977831

Title:
  Response time increasing on new subnets over same network

Status in neutron:
  In Progress

Bug description:
  After some subnets are created over the same network, the response time
  for new ones increases linearly; if the number of subnets is high (over
  1000), a timeout is triggered.

  The issue can be easily reproduced by creating subnets in a loop and
  capturing the time each creation takes as the total count increases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1977831/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1974052] [NEW] Load Balancer remains in PENDING_CREATE

2022-05-18 Thread Fernando Royo
Public bug reported:

While running a heavy load test, where some hundreds of LBs are created
in a short period of time, some of them remain in PENDING_CREATE without
any apparent error.

[stack@undercloud ~]$ openstack loadbalancer list |grep -i pending
| 073b60fa-e41d-4df7-905e-9455b5fbd39a | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-207/cluster-density-207-1   | 
172.30.58.222  | PENDING_CREATE  | ovn  |
| b129391a-ad69-4c0a-b1a3-d8319522cc18 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-157/cluster-density-157-1   | 
172.30.161.89  | PENDING_CREATE  | ovn  |
| 01ac61d0-5c0a-4504-ae2e-07ecfd57a8f9 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-210/cluster-density-210-3   | 
172.30.218.204 | PENDING_CREATE  | ovn  |
| 6244e2c7-2abf-4fc1-a088-4a4145d39177 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-185/cluster-density-185-4   | 
172.30.141.108 | PENDING_CREATE  | ovn  |
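
A rough sketch of the kind of load that triggers this (subnet name and
count are arbitrary):

for i in $(seq 1 200); do
  openstack loadbalancer create --name lb-$i --vip-subnet-id private-subnet --provider ovn &
done
wait
# some LBs never leave PENDING_CREATE
openstack loadbalancer list | grep -c PENDING_CREATE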

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress


** Tags: loadimpact ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Changed in: neutron
   Status: New => In Progress

** Description changed:

  While running a heavy load test, where some hundreds of LBs are created
  in a short period of time, some of them remains in PENDING_CREATE
  without any apparent error.
  
  [stack@undercloud ~]$ openstack loadbalancer list |grep -i pending
  | 073b60fa-e41d-4df7-905e-9455b5fbd39a | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-207/cluster-density-207-1   | 
172.30.58.222  | PENDING_CREATE  | ovn  |
  | b129391a-ad69-4c0a-b1a3-d8319522cc18 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-157/cluster-density-157-1   | 
172.30.161.89  | PENDING_CREATE  | ovn  |
  | 01ac61d0-5c0a-4504-ae2e-07ecfd57a8f9 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-210/cluster-density-210-3   | 
172.30.218.204 | PENDING_CREATE  | ovn  |
  | 6244e2c7-2abf-4fc1-a088-4a4145d39177 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-185/cluster-density-185-4   | 
172.30.141.108 | PENDING_CREATE  | ovn  |
- 
- This patch cover any exception on requests over neutron client (e.g.
- subnet or listing port) during the creation of the LB

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1974052

Title:
  Load Balancer remains in PENDING_CREATE

Status in neutron:
  In Progress

Bug description:
  While running a heavy load test, where some hundreds of LBs are created
  in a short period of time, some of them remain in PENDING_CREATE without
  any apparent error.

  [stack@undercloud ~]$ openstack loadbalancer list |grep -i pending
  | 073b60fa-e41d-4df7-905e-9455b5fbd39a | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-207/cluster-density-207-1   | 
172.30.58.222  | PENDING_CREATE  | ovn  |
  | b129391a-ad69-4c0a-b1a3-d8319522cc18 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-157/cluster-density-157-1   | 
172.30.161.89  | PENDING_CREATE  | ovn  |
  | 01ac61d0-5c0a-4504-ae2e-07ecfd57a8f9 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-210/cluster-density-210-3   | 
172.30.218.204 | PENDING_CREATE  | ovn  |
  | 6244e2c7-2abf-4fc1-a088-4a4145d39177 | 
767b70f3-f5fa-496e-ac28-0819ce9411a5-185/cluster-density-185-4   | 
172.30.141.108 | PENDING_CREATE  | ovn  |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1974052/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973765] [NEW] LB creation failed due to address already in use

2022-05-17 Thread Fernando Royo
Public bug reported:

While running a heavy load test that performs CRUD operations over load
balancers and edits all their child items (listener/pool/member), we are
getting the error "IP_ADDRESS already allocated in subnet". Apparently the
address is still being used by a port created for the load balancer, but
the load balancer itself no longer exists when checked after the error.

e.g. when create loadbalancer is sent we get this error from Octavia
API:

octavia.common.exceptions.ProviderDriverError: Provider 'ovn' reports
error: IP address 172.30.125.1 already allocated in subnet
beb4de56-a2b1-487b-9ab5-e10a0ad0d7ac


[stack@undercloud ~]$ openstack port list | grep 172.30.125.1
| 6a9c24a2-7907-4b5c-b406-f74ba5a05820 | 
ovn-lb-vip-22efa8a7-460c-443f-8f53-ac53929e8637  | 
fa:16:3e:ea:e5:c4 | ip_address='172.30.125.1', 
subnet_id='beb4de56-a2b1-487b-9ab5-e10a0ad0d7ac'   | DOWN   |


[stack@undercloud ~]$ openstack loadbalancer list --vip-port-id 
ovn-lb-vip-22efa8a7-460c-443f-8f53-ac53929e8637  
[stack@undercloud ~]$
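
A possible manual cleanup (not a fix, just to free the VIP address again)
is to delete the leftover VIP port once it is confirmed that no LB owns it:

openstack port delete ovn-lb-vip-22efa8a7-460c-443f-8f53-ac53929e8637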

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973765

Title:
  LB creation failed due to address already in use

Status in neutron:
  In Progress

Bug description:
  While running a heavy load test that performs CRUD operations over load
  balancers and edits all their child items (listener/pool/member), we are
  getting the error "IP_ADDRESS already allocated in subnet". Apparently
  the address is still being used by a port created for the load balancer,
  but the load balancer itself no longer exists when checked after the
  error.

  e.g. when create loadbalancer is sent we get this error from Octavia
  API:

  octavia.common.exceptions.ProviderDriverError: Provider 'ovn' reports
  error: IP address 172.30.125.1 already allocated in subnet
  beb4de56-a2b1-487b-9ab5-e10a0ad0d7ac

  
  [stack@undercloud ~]$ openstack port list | grep 172.30.125.1
  | 6a9c24a2-7907-4b5c-b406-f74ba5a05820 | 
ovn-lb-vip-22efa8a7-460c-443f-8f53-ac53929e8637  | 
fa:16:3e:ea:e5:c4 | ip_address='172.30.125.1', 
subnet_id='beb4de56-a2b1-487b-9ab5-e10a0ad0d7ac'   | DOWN   |

  
  [stack@undercloud ~]$ openstack loadbalancer list --vip-port-id 
ovn-lb-vip-22efa8a7-460c-443f-8f53-ac53929e8637  
  [stack@undercloud ~]$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973765/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1972278] Re: ovn-octavia-provider oslo config options colliding with neutron ones

2022-05-13 Thread Fernando Royo
** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1972278

Title:
  ovn-octavia-provider oslo config options colliding with neutron ones

Status in neutron:
  Fix Released

Bug description:
  Some jobs in zuul are reporting this error:

  Failed to import test module: 
ovn_octavia_provider.tests.functional.test_integration
  Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
  module = self._get_module_from_name(name)
File "/usr/lib/python3.8/unittest/loader.py", line 377, in 
_get_module_from_name
  __import__(name)
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/ovn_octavia_provider/tests/functional/test_integration.py",
 line 18, in 
  from ovn_octavia_provider.tests.functional import base as ovn_base
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/ovn_octavia_provider/tests/functional/base.py",
 line 31, in 
  from neutron.tests.functional import base
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/neutron/tests/functional/base.py",
 line 40, in 
  from neutron.conf.plugins.ml2.drivers.ovn import ovn_conf
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/neutron/conf/plugins/ml2/drivers/ovn/ovn_conf.py",
 line 212, in 
  cfg.CONF.register_opts(ovn_opts, group='ovn')
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_config/cfg.py",
 line 2077, in __inner
  ...
  if _is_opt_registered(self._opts, opt):
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_config/cfg.py",
 line 356, in _is_opt_registered
  raise DuplicateOptError(opt.name)
  oslo_config.cfg.DuplicateOptError: duplicate option: ovn_nb_connection

  Basically, the OVN Octavia provider registers its config opts as soon as
  its modules (driver, agent or helper) are imported, so when the tests run
  setUp they hit a DuplicateOptError because they are based on
  TestOVNFunctionalBase from Neutron, where the same options are loaded.
  The error doesn't appear in a running environment, as neutron and ovn-
  octavia-provider (octavia) run in separate processes, but in zuul jobs
  they collide.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1972278/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1972278] [NEW] ovn-octavia-provider oslo config options colliding with neutron ones

2022-05-09 Thread Fernando Royo
Public bug reported:

Some jobs in zuul are reporting this error:

Failed to import test module: 
ovn_octavia_provider.tests.functional.test_integration
Traceback (most recent call last):
  File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
  File "/usr/lib/python3.8/unittest/loader.py", line 377, in 
_get_module_from_name
__import__(name)
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/ovn_octavia_provider/tests/functional/test_integration.py",
 line 18, in 
from ovn_octavia_provider.tests.functional import base as ovn_base
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/ovn_octavia_provider/tests/functional/base.py",
 line 31, in 
from neutron.tests.functional import base
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/neutron/tests/functional/base.py",
 line 40, in 
from neutron.conf.plugins.ml2.drivers.ovn import ovn_conf
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/neutron/conf/plugins/ml2/drivers/ovn/ovn_conf.py",
 line 212, in 
cfg.CONF.register_opts(ovn_opts, group='ovn')
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_config/cfg.py",
 line 2077, in __inner
...
if _is_opt_registered(self._opts, opt):
  File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_config/cfg.py",
 line 356, in _is_opt_registered
raise DuplicateOptError(opt.name)
oslo_config.cfg.DuplicateOptError: duplicate option: ovn_nb_connection

Basically, the OVN Octavia provider registers its config opts as soon as
its modules (driver, agent or helper) are imported, so when the tests run
setUp they hit a DuplicateOptError because they are based on
TestOVNFunctionalBase from Neutron, where the same options are loaded. The
error doesn't appear in a running environment, as neutron and ovn-octavia-
provider (octavia) run in separate processes, but in zuul jobs they
collide.

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1972278

Title:
  ovn-octavia-provider oslo config options colliding with neutron ones

Status in neutron:
  New

Bug description:
  Some jobs in zuul are reporting this error:

  Failed to import test module: 
ovn_octavia_provider.tests.functional.test_integration
  Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
  module = self._get_module_from_name(name)
File "/usr/lib/python3.8/unittest/loader.py", line 377, in 
_get_module_from_name
  __import__(name)
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/ovn_octavia_provider/tests/functional/test_integration.py",
 line 18, in 
  from ovn_octavia_provider.tests.functional import base as ovn_base
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/ovn_octavia_provider/tests/functional/base.py",
 line 31, in 
  from neutron.tests.functional import base
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/neutron/tests/functional/base.py",
 line 40, in 
  from neutron.conf.plugins.ml2.drivers.ovn import ovn_conf
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/neutron/conf/plugins/ml2/drivers/ovn/ovn_conf.py",
 line 212, in 
  cfg.CONF.register_opts(ovn_opts, group='ovn')
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_config/cfg.py",
 line 2077, in __inner
  ...
  if _is_opt_registered(self._opts, opt):
File 
"/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_config/cfg.py",
 line 356, in _is_opt_registered
  raise DuplicateOptError(opt.name)
  oslo_config.cfg.DuplicateOptError: duplicate option: ovn_nb_connection

  Basically, the OVN Octavia provider registers its config opts as soon as
  its modules (driver, agent or helper) are imported, so when the tests run
  setUp they hit a DuplicateOptError because they are based on
  TestOVNFunctionalBase from Neutron, where the same options are loaded.
  The error doesn't appear in a running environment, as neutron and ovn-
  octavia-provider (octavia) run in separate processes, but in zuul jobs
  they collide.

[Yahoo-eng-team] [Bug 1964817] [NEW] [ovn-octavia-provider] Deleted members remain in ERROR status

2022-03-14 Thread Fernando Royo
Public bug reported:

An OVN provider load balancer allows the creation of members without
specifying the subnet-id. The internal logic lets the OVN provider create
the member by associating it with the subnet-id of the pool to which it
belongs, but the member is stored in the Octavia DB without a reference to
the subnet_id associated with the pool.

Subsequently, when an attempt is made to delete a member that was created
without specifying the subnet_id to which it belongs, it is left in an
ERROR state, while a member that did include that parameter at creation
time is deleted correctly.

Steps to Reproduce:
1. Create a load balancer with OVN as provider
2. Create a listener and a pool associated to LB create in 1.
3. Create a member without parameter subnet-id
4. Try to delete the member created in 3.

Expected result:
Member is deleted correctly

Actual result:
Member remains in ERROR state.
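
A minimal sketch of the failing step, assuming an OVN LB with pool1
already exists (names and address are placeholders):

# created without --subnet-id: the provider infers the pool's subnet,
# but Octavia stores the member with no subnet_id
openstack loadbalancer member create --name member1 --address 10.0.0.10 --protocol-port 80 pool1
openstack loadbalancer member delete pool1 member1
openstack loadbalancer member show pool1 member1 -c provisioning_status  # ERROR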

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964817

Title:
  [ovn-octavia-provider] Deleted members remain in ERROR status

Status in neutron:
  New

Bug description:
  An OVN provider load balancer allows the creation of members without
  specifying the subnet-id. The internal logic lets the OVN provider
  create the member by associating it with the subnet-id of the pool to
  which it belongs, but the member is stored in the Octavia DB without a
  reference to the subnet_id associated with the pool.

  Subsequently, when an attempt is made to delete a member that was
  created without specifying the subnet_id to which it belongs, it is
  left in an ERROR state, while a member that did include that parameter
  at creation time is deleted correctly.

  Steps to Reproduce:
  1. Create a load balancer with OVN as provider
  2. Create a listener and a pool associated to LB create in 1.
  3. Create a member without parameter subnet-id
  4. Try to delete the member created in 3.

  Expected result:
  Member is deleted correctly

  Actual result:
  Member remains in ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1964817/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1958561] Re: [ovn-octavia-provider] Listner/Pool Stuck in PENDING_CREATE Using Fully Populated LB API

2022-03-11 Thread Fernando Royo
The issue looks exactly like the one reported here
https://bugs.launchpad.net/neutron/+bug/1958964. It was fixed in
https://review.opendev.org/c/openstack/ovn-octavia-provider/+/826257


** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958561

Title:
  [ovn-octavia-provider] Listner/Pool Stuck in PENDING_CREATE Using
  Fully Populated LB API

Status in neutron:
  Fix Released

Bug description:
  Description
  ===
  When creating a fully populated LB with ovn as provider, the
  listener/pools will be stuck in PENDING_CREATE state.

  I haven't tested the result for other Octavia providers as I only have
  OVN in my environment.

  Found this issue while using cloud-provider-openstack with K8S to create
  LBs; as it creates the LB with a fully populated request, it ends up
  with listeners and pools in PENDING_CREATE status.

  Steps to reproduce
  ==

  Create a fully populated LB with API (only includes listener in this
  example):

  curl -i -X POST -H "Content-Type: application/json" -H "X-Auth-Token:
  $OS_TOKEN" -d
  
'{"loadbalancer":{"admin_state_up":true,"listeners":[{"name":"http_listener","protocol":"TCP","protocol_port":80}],"vip_subnet_id":"SUBNET_ID","provider":"ovn","name":"best_load_balancer","tags":["test_tag"]}}'
  https://X:9876/v2.0/lbaas/loadbalancers

  Expected Result
  ===

  LB and listener are both created and in ACTIVE state

  Actual Result
  =

  openstack loadbalancer listener show http_listener
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | connection_limit| -1   |
  | created_at  | 2022-01-20T16:37:58  |
  | default_pool_id | None |
  | default_tls_container_ref   | None |
  | description |  |
  | id  | de2bbd23-ca08-445f-b58b-e36c8ea709ad |
  | insert_headers  | None |
  | l7policies  |  |
  | loadbalancers   | 467846cf-7b1a-49f8-ae2f-398da103e6e4 |
  | name| http_listener|
  | operating_status| OFFLINE  |
  | project_id  | 76235f59db4f4b289513b08c3739d2d6 |
  | protocol| TCP  |
  | protocol_port   | 80   |
  | provisioning_status | PENDING_CREATE   |
  | sni_container_refs  | []   |
  | timeout_client_data | 5|
  | timeout_member_connect  | 5000 |
  | timeout_member_data | 5|
  | timeout_tcp_inspect | 0|
  | updated_at  | None |
  | client_ca_tls_container_ref | None |
  | client_authentication   | NONE |
  | client_crl_container_ref| None |
  | allowed_cidrs   | None |
  | tls_ciphers | None |
  | tls_versions| None |
  | alpn_protocols  | None |
  | tags|  |
  +-+--+

  The listener will remain in PENDING_CREATE state forever.

  Environment
  ===

  Deployment method: Kolla-Ansible

  Octavia version:
  octavia-api --version
  9.0.1.dev8

  OpenStack Version: stabe/xena

  octavia.conf is generated with Kolla-Ansible with no modification.

  Logs
  

  2022-01-20 15:52:57.991 24 DEBUG ovn_octavia_provider.helper [-] Handling 
request lb_create with info {'id': '5b743d3f-3c66-4735-89d3-f02f26cd63cc', 
'vip_address': '10.112.7.174', 'vip_network_id': 
'a5bb4f81-fbe7-4a3c-b6d6-b25d22e48ec0', 'admin_state_up': True} reques

[Yahoo-eng-team] [Bug 1963921] [NEW] Load balancer creation often failing due to logical switch not found

2022-03-07 Thread Fernando Royo
ecord)
2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 
130, in row_by_value
2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver raise 
RowNotFound(table=table, col=column, match=match)
2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Logical_Switch with 
name=neutron-6fa06cae-0145-4571-9919-0541f0bea93a

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1963921

Title:
  Load balancer creation often failing due to logical switch not found

Status in neutron:
  In Progress

Bug description:
  It seems possible that a load-balancer creation was triggered while
  multiple Subnets were being deleted, causing an exception of logical
  switch not found and moving the load-balancer to ERROR state.

  2022-02-10 14:48:49.115 16 ERROR ovsdbapp.backend.ovs_idl.transaction [-] 
Traceback (most recent call last):
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver [-] 
Exception occurred during creation of loadbalancer: 
ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Logical_Switch with 
name=neutron-6fa06cae-0145-4571-9919-0541f0bea93a
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/api.py", line 111, in transaction
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver yield 
self._nested_txns_map[cur_thread_id]
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver KeyError: 
139860903978752
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver During 
handling of the above exception, another exception occurred:
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1033, in lb_create
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
self._execute_commands(commands)
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
626, in _execute_commands
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
txn.add(command)
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
next(self.gen)
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/ovsdb/impl_idl_ovn.py", line 
252, in transaction
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver yield t
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
next(self.gen)
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/api.py", line 119, in transaction
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver del 
self._nested_txns_map[cur_thread_id]
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/api.py", line 69, in __exit__
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
self.result = self.commit()
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 62, in commit
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver raise 
result.ex
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
128, in run
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver 
txn.results.put(txn.do_commit())
  2022-02-10 14:48:49.115 16 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/bac