[Yahoo-eng-team] [Bug 2048745] [NEW] [ovn] FIP not working when mixing vlan and geneve tenant networks

2024-01-08 Thread Luis Tomas
Public bug reported:

The flag redirect-type=bridge can only be used when there is no mix of
geneve and vlan networks in the same router, as handled here [1].

When there is such a mix, the flag reside-on-redirect-chassis is used instead,
but it does not work for all cases:
- Either you centralize the traffic, which makes it work for VMs with FIPs
(but means no DVR)
- Or you distribute the traffic, which makes it work for VMs without FIPs
(enabling DVR but breaking traffic for VMs with FIPs, as SNAT is not performed
on the outgoing traffic)

Due to this, we should block the option to mix geneve and vlan networks
in the same router so that "redirect-type=bridge" can be used and we
can have DVR + vlan tenant networks + NAT.


[1] https://bugs.launchpad.net/neutron/+bug/2012712

[2] https://issues.redhat.com/browse/FDP-209
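
A quick way to see which of the two mechanisms is in effect on a given router
is to inspect the options column of its logical router ports in the OVN NB DB;
a minimal sketch (no specific port name assumed):

  # redirect-type and reside-on-redirect-chassis, when set, show up in the
  # options column of the corresponding Logical_Router_Port rows:
  ovn-nbctl --columns=name,options list Logical_Router_Port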

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2048745

Title:
  [ovn] FIP not working when mixing vlan and geneve tenant networks

Status in neutron:
  New

Bug description:
  The flag redirect-type=bridge can only be used when there is no mix of
  geneve and vlan networks in the same router, as handled here [1].

  When there is such a mix, the flag reside-on-redirect-chassis is used
  instead, but it does not work for all cases:
  - Either you centralize the traffic, which makes it work for VMs with FIPs
  (but means no DVR)
  - Or you distribute the traffic, which makes it work for VMs without FIPs
  (enabling DVR but breaking traffic for VMs with FIPs, as SNAT is not
  performed on the outgoing traffic)

  Due to this, we should block the option to mix geneve and vlan
  networks in the same router so that "redirect-type=bridge" can be
  used and we can have DVR + vlan tenant networks + NAT.

  
  [1] https://bugs.launchpad.net/neutron/+bug/2012712

  [2] https://issues.redhat.com/browse/FDP-209

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2048745/+subscriptions




[Yahoo-eng-team] [Bug 2035325] [NEW] FDB entries grows indefinitely

2023-09-13 Thread Luis Tomas
Public bug reported:

With the added support for learning FDB entries [1] there is a problem:
the FDB table can grow indefinitely, leading to performance/scale
issues. New options have been added to OVN [2] to tackle this problem, and
neutron should make use of them.


[1] https://review.opendev.org/c/openstack/neutron/+/877675  
[2] 
https://github.com/ovn-org/ovn/commit/ae9a5488824c49e25215b02e7e81a62eb4d0bd53
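
For reference, the learned entries live in the Southbound FDB table, and the
new knob from [2] is expected to be an aging threshold on the logical switch.
A sketch under that assumption (the key name fdb_age_threshold and the value
are illustrative, not confirmed by this report):

  # Inspect the currently learned MAC entries:
  ovn-sbctl list FDB

  # Hypothetical per-switch aging knob, in seconds:
  ovn-nbctl set Logical_Switch <switch-name> other_config:fdb_age_threshold=300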

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas (luis5tb)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas (luis5tb)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035325

Title:
  FDB entries grows indefinitely

Status in neutron:
  In Progress

Bug description:
  With the added support for learning FDB entries [1] there is a problem:
  the FDB table can grow indefinitely, leading to performance/scale
  issues. New options have been added to OVN [2] to tackle this problem,
  and neutron should make use of them.

  
  [1] https://review.opendev.org/c/openstack/neutron/+/877675  
  [2] 
https://github.com/ovn-org/ovn/commit/ae9a5488824c49e25215b02e7e81a62eb4d0bd53

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035325/+subscriptions




[Yahoo-eng-team] [Bug 2025264] Re: [ovn][DVR]FIP traffic centralized in DVR environments

2023-07-04 Thread Luis Tomas
Fix: https://review.opendev.org/c/openstack/neutron/+/886626

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025264

Title:
  [ovn][DVR]FIP traffic centralized in DVR environments

Status in neutron:
  Fix Released

Bug description:
  When a port is down, the FIP associated with it gets centralized
  (external_mac removed from the NAT table entry) despite DVR being enabled.
  This also happens when deleting a VM with an associated FIP, where for
  some period of time the FIP gets centralized -- the time between the
  removal of the external_mac from the NAT table entry and the deletion of
  the NAT table entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025264/+subscriptions




[Yahoo-eng-team] [Bug 2025637] [NEW] [ovn-octavia-provider] FIP traffic not distributed for members' FIP after lb cascade deletion

2023-07-03 Thread Luis Tomas
Public bug reported:

When a member of a loadbalancer has a FIP, the FIP gets centralized (MAC info
removed from the NAT table) due to requirements on the OVN side. When a member
is deleted from a loadbalancer, the MAC information is updated in the NAT
table and the traffic to the FIP is distributed again. However, if the whole
loadbalancer is deleted with the cascade option, the traffic to the member FIP
is left centralized (the NAT table is not updated with the MAC again).
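
Whether a member FIP is distributed again after the cascade deletion can be
checked on its dnat_and_snat entry in the NB DB; the FIP address below is a
placeholder:

  # external_mac (and logical_port) should be set again once the LB is gone;
  # an empty external_mac means the FIP is still centralized:
  ovn-nbctl --columns=external_ip,external_mac,logical_port \
      find NAT type=dnat_and_snat external_ip=<member-fip>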

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas (luis5tb)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas (luis5tb)

** Changed in: neutron
   Status: New => In Progress

** Summary changed:

- [ovn-octavia-provider] DVR traffic not reenabled for members' FIP upon 
cascade deletion
+ [ovn-octavia-provider] FIP traffic not distributed for members' FIP after lb 
cascade deletion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025637

Title:
  [ovn-octavia-provider] FIP traffic not distributed for members' FIP
  after lb cascade deletion

Status in neutron:
  In Progress

Bug description:
  When a member of a loadbalancer has a FIP, the FIP gets centralized (MAC
  info removed from the NAT table) due to requirements on the OVN side. When
  a member is deleted from a loadbalancer, the MAC information is updated in
  the NAT table and the traffic to the FIP is distributed again. However, if
  the whole loadbalancer is deleted with the cascade option, the traffic to
  the member FIP is left centralized (the NAT table is not updated with the
  MAC again).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025637/+subscriptions




[Yahoo-eng-team] [Bug 2025264] [NEW] [ovn][DVR]FIP traffic centralized in DVR environments

2023-06-28 Thread Luis Tomas Bolivar
Public bug reported:

When a port is down, the FIP associated with it gets centralized
(external_mac removed from the NAT table entry) despite DVR being enabled.
This also happens when deleting a VM with an associated FIP, where for some
period of time the FIP gets centralized -- the time between the removal of
the external_mac from the NAT table entry and the deletion of the NAT table
entry.
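
The centralization can be observed directly on the router's NAT entries; a
minimal check, with the router name as a placeholder:

  # Distributed FIPs show an external MAC and logical port in this output;
  # a centralized FIP has those fields empty while the entry still exists:
  ovn-nbctl lr-nat-list neutron-<router-uuid>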

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025264

Title:
  [ovn][DVR]FIP traffic centralized in DVR environments

Status in neutron:
  New

Bug description:
  When a port is down, the FIP associated with it gets centralized
  (external_mac removed from the NAT table entry) despite DVR being enabled.
  This also happens when deleting a VM with an associated FIP, where for
  some period of time the FIP gets centralized -- the time between the
  removal of the external_mac from the NAT table entry and the deletion of
  the NAT table entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025264/+subscriptions




[Yahoo-eng-team] [Bug 1949230] Re: OVN Octavia provider driver should implement allowed_cidrs to enforce security groups on LB ports

2023-06-20 Thread Luis Tomas
There is no need to support this, as the security groups enforced are those
of the members, since the source IP does not change.
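
Since the backends see the original client IP, the same restriction can be
achieved with regular security group rules on the member ports; a sketch with
placeholder names and CIDR:

  # Allow only clients from 203.0.113.0/24 to reach the members on port 80:
  openstack security group rule create --ingress --protocol tcp \
      --dst-port 80 --remote-ip 203.0.113.0/24 <members-security-group>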

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1949230

Title:
  OVN Octavia provider driver should implement allowed_cidrs to enforce
  security groups on LB ports

Status in neutron:
  Invalid

Bug description:
  Octavia can use OVN as a provider driver using it's driver framework.
  The OVN Octavia provider driver, part of ML2/OVN, does not implement
  all of the functionality of the Octavia API [1].  One feature that
  should be supported is allowed_cidrs.

  The Octavia allowed_cidrs functionality allows Octavia to manage and
  communicate the CIDR blocks allowed to address an Octavia load
  balancer.  Implementing this in the OVN provider driver would allow
  load balancers to be only accessible from specific CIDR blocks, a
  requirement for customer security in a number of scenarios.

  [1] https://docs.openstack.org/octavia/latest/user/feature-
  classification/index.html#listener-api-features

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1949230/+subscriptions




[Yahoo-eng-team] [Bug 2012712] [NEW] [ovn] N/S traffic for VMs without FIPs not working

2023-03-24 Thread Luis Tomas
Public bug reported:

The N/S traffic for VMs without FIPs is not working due to the redirect-
type=bridge option being set on the cr-lrp port for routers that have geneve
tenant networks connected to them. It seems this flag should only be used for
vlan networks, not for geneve.

This option was recently added as part of [1]

[1] https://review.opendev.org/c/openstack/neutron/+/875644
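
To confirm which case a deployment falls into, the segmentation type of the
tenant networks attached to the router can be checked before relying on the
option; the network name is a placeholder:

  # Prints e.g. "geneve" or "vlan" for the tenant network in question:
  openstack network show <tenant-network> -c provider:network_type -f value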

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas (luis5tb)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas (luis5tb)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2012712

Title:
  [ovn] N/S traffic for VMs without FIPs not working

Status in neutron:
  In Progress

Bug description:
  The N/S traffic for VMs without FIPs is not working due to the redirect-
  type=bridge option being set on the cr-lrp port for routers that have
  geneve tenant networks connected to them. It seems this flag should only
  be used for vlan networks, not for geneve.

  This option was recently added as part of [1]

  [1] https://review.opendev.org/c/openstack/neutron/+/875644

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2012712/+subscriptions




[Yahoo-eng-team] [Bug 2012069] [NEW] [OVN] Flooding issue on provider networks with disabled port security

2023-03-17 Thread Luis Tomas Bolivar
Public bug reported:

When VMs associated with a provider network that has port security disabled
try to reach IPs on the provider network not known by OpenStack, there is a
flooding issue because the FDB table does not learn MACs. There is an option
in OVN [1] to address this issue, but it is not used by OpenStack.

[1] https://github.com/ovn-
org/ovn/commit/93514df0d4c8fe7986dc5f287d7011f420d1be6d
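
The OVN option added in [1] is set per localnet port; a hypothetical example
of enabling it manually (the option name localnet_learn_fdb is taken from [1]
but should be treated as an assumption, and the port name is a placeholder):

  # Enable MAC learning for traffic entering through the localnet port:
  ovn-nbctl set Logical_Switch_Port <localnet-port-name> \
      options:localnet_learn_fdb=true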

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2012069

Title:
  [OVN] Flooding issue on provider networks with disabled port security

Status in neutron:
  New

Bug description:
  When VMs associated with a provider network that has port security
  disabled try to reach IPs on the provider network not known by OpenStack,
  there is a flooding issue because the FDB table does not learn MACs. There
  is an option in OVN [1] to address this issue, but it is not used by
  OpenStack.

  [1] https://github.com/ovn-
  org/ovn/commit/93514df0d4c8fe7986dc5f287d7011f420d1be6d

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2012069/+subscriptions




[Yahoo-eng-team] [Bug 2003997] [NEW] [ovn-octavia-provider] ovn-lb with VIP on provider network not working

2023-01-27 Thread Luis Tomas Bolivar
Public bug reported:

In core OVN, LBs on switches with localnet ports (i.e., neutron
provider networks) don't work if traffic comes in from the localnet [1].

In order to force NAT to happen at the virtual router instead
of at the LS level, when the VIP of the LoadBalancer is associated
with a provider network we should avoid adding the LB to the
LS associated with the provider network.
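
Whether a given OVN load balancer ended up attached to the provider logical
switch (and to the router) can be verified with the LB listing helpers; switch
and router names are placeholders:

  # The LB should appear on the router but not on the provider switch:
  ovn-nbctl ls-lb-list <provider-logical-switch>
  ovn-nbctl lr-lb-list <logical-router>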

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2003997

Title:
  [ovn-octavia-provider] ovn-lb with VIP on provider network not working

Status in neutron:
  New

Bug description:
  In core OVN, LBs on switches with localnet ports (i.e., neutron
  provider networks) don't work if traffic comes in from the localnet [1].

  In order to force NAT to happen at the virtual router instead
  of at the LS level, when the VIP of the LoadBalancer is associated
  with a provider network we should avoid adding the LB to the
  LS associated with the provider network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2003997/+subscriptions




[Yahoo-eng-team] [Bug 2003455] [NEW] [ovn] MTU issues due to centralized vlan provider networks

2023-01-20 Thread Luis Tomas Bolivar
Public bug reported:

After this change was added [1], the traffic gets centralized not only
for vlan tenant networks but also for vlan provider networks. This
means that an extra reduction of the MTU is needed to account
for the geneve encapsulation, since the traffic goes through the networker
node instead of leaving directly from the compute node.


[1] 
https://opendev.org/openstack/networking-ovn/commit/1440207c0d568068a37a306a7f03a81ad58e468f
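
As a workaround, the affected networks can be given a smaller MTU so that the
geneve encapsulation fits; the value below assumes a 1500-byte underlay and is
only illustrative:

  # Lower the network MTU to leave room for the tunnel headers:
  openstack network set --mtu 1442 <vlan-provider-network>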

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2003455

Title:
  [ovn] MTU issues due to centralized vlan provider networks

Status in neutron:
  New

Bug description:
  After this change was added [1], the traffic gets centralized not only
  for vlan tenant networks but also for vlan provider networks. This
  means that an extra reduction of the MTU is needed to account
  for the geneve encapsulation, since the traffic goes through the
  networker node instead of leaving directly from the compute node.

  
  [1] 
https://opendev.org/openstack/networking-ovn/commit/1440207c0d568068a37a306a7f03a81ad58e468f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2003455/+subscriptions




[Yahoo-eng-team] [Bug 1997418] [NEW] [ovn-octavia-provider] HM not working for FIPs

2022-11-22 Thread Luis Tomas Bolivar
Public bug reported:

When an OVN Load Balancer has HealthMonitors associated with its
pool/members, no traffic is sent to a member that is detected as being in
error. However, if the OVN LoadBalancer has a FIP associated with the VIP,
when the FIP is used to access the Load Balancer the traffic is also load-
balanced to the members that are in error status.
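
The health-check state OVN keeps per VIP can be inspected directly, which
helps confirm whether the FIP is covered by the checks at all; a minimal
sketch:

  # Health checks configured on the NB load balancers (keyed by VIP):
  ovn-nbctl list Load_Balancer_Health_Check

  # Per-member status as seen by ovn-controller:
  ovn-sbctl list Service_Monitor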

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1997418

Title:
   [ovn-octavia-provider] HM not working for FIPs

Status in neutron:
  New

Bug description:
  When an OVN Load Balancer has HealthMonitors associated with its
  pool/members, no traffic is sent to a member that is detected as being in
  error. However, if the OVN LoadBalancer has a FIP associated with the
  VIP, when the FIP is used to access the Load Balancer the traffic is also
  load-balanced to the members that are in error status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1997418/+subscriptions




[Yahoo-eng-team] [Bug 1997416] [NEW] [ovn-octavia-provider] HM updates lost previous values

2022-11-22 Thread Luis Tomas Bolivar
Public bug reported:

When one of the parameters of the HealthMonitor is changed, the other
options are incorrectly set to "Unset", e.g.:

options : {failure_count=Unset, interval=Unset,
success_count=Unset, timeout="60"}

If only one value is updated, the others should keep their previous values.
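
The issue can be reproduced by updating a single attribute and then looking at
the corresponding OVN health-check row; the health monitor ID is a
placeholder:

  # Update only the timeout on an existing health monitor:
  openstack loadbalancer healthmonitor set --timeout 60 <healthmonitor-id>

  # interval, timeout, success_count and failure_count should all still carry
  # real values here, not "Unset":
  ovn-nbctl --columns=options list Load_Balancer_Health_Check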

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1997416

Title:
   [ovn-octavia-provider] HM updates lost previous values

Status in neutron:
  In Progress

Bug description:
  When one of the parameters of the HealthMonitor is changed, the other
  options are incorrectly set to "Unset", e.g.:

  options : {failure_count=Unset, interval=Unset,
  success_count=Unset, timeout="60"}

  If only one value is updated, the others should keep their previous values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1997416/+subscriptions




[Yahoo-eng-team] [Bug 1992363] [NEW] [ovn-octavia-provider] Detach OVN-LB LS from the LR breaks OVN-LB connectivity

2022-10-10 Thread Luis Tomas Bolivar
Public bug reported:

If an OVN-LB is on an LS connected to an LR, upon removal of the LS from
the LR the loadbalancer should be removed from the LR but should be kept
on the LS, and still provide connectivity within that LS.

However, this is not the case: the loadbalancer is also removed from
the LS, leading to the removal of the LB from the OVN SB DB, and
consequently breaking the connectivity, as the flows for the LB are not
installed.

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1992363

Title:
  [ovn-octavia-provider] Detach OVN-LB LS from the LR breaks OVN-LB
  connectivity

Status in neutron:
  New

Bug description:
  If an OVN-LB is on an LS connected to an LR, upon removal of the LS from
  the LR the loadbalancer should be removed from the LR but should be
  kept on the LS, and still provide connectivity within that LS.

  However, this is not the case: the loadbalancer is also removed
  from the LS, leading to the removal of the LB from the OVN SB DB, and
  consequently breaking the connectivity, as the flows for the LB are not
  installed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1992363/+subscriptions




[Yahoo-eng-team] [Bug 1992356] [NEW] [ovn-octavia-provider] LS has LB leftover after being removed from the router

2022-10-10 Thread Luis Tomas Bolivar
Public bug reported:

In order to have an ovn-lb properly configured, the loadbalancer needs to be
included in all the logical_switches connected to the logical_router that the
VIP and the members are connected to.

With that, in the following scenario:
- LR1
- LS1 and LS2 connected to LR1
- OVN-LB1 with the VIP and the members on LS1

The LB is added to LR1 as well as to LS1 and LS2. However, if LS2 is
detached from LR1, OVN-LB1 remains attached to LS2 while it
should be removed.
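
A reproduction sketch with the OpenStack CLI, assuming an OVN-provider load
balancer and placeholder names:

  # LS2 corresponds to subnet2; detach it from the router:
  openstack router remove subnet router1 subnet2

  # The leftover shows up as the LB still listed on LS2's logical switch:
  ovn-nbctl ls-lb-list neutron-<network2-uuid>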

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1992356

Title:
   [ovn-octavia-provider] LS has LB leftover after being removed from
  the router

Status in neutron:
  In Progress

Bug description:
  In order to have an ovn-lb properly configured, the loadbalancer needs to
  be included in all the logical_switches connected to the logical_router
  that the VIP and the members are connected to.

  With that, in the following scenario:
  - LR1
  - LS1 and LS2 connected to LR1
  - OVN-LB1 with the VIP and the members on LS1

  The LB is added to LR1 as well as to LS1 and LS2. However, if LS2 is
  detached from LR1, OVN-LB1 remains attached to LS2 while it
  should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1992356/+subscriptions




[Yahoo-eng-team] [Bug 1991509] [NEW] [ovn-octavia-provider] router gateway unset + set breaks ovn lb connectivity

2022-10-03 Thread Luis Tomas Bolivar
Public bug reported:

The LogicalRouterPortEvent for gateway_chassis ports is skipped [1];
however, if the OVN LB VIPs are on a provider network, the create event
needs to be handled so that the loadbalancer gets properly configured and
added to the router.


[1] 
https://opendev.org/openstack/ovn-octavia-provider/src/commit/acbf6e7f3e223c088582390475c84464bc27227d/ovn_octavia_provider/event.py#L39
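
The triggering sequence can be reproduced from the CLI; router, network and
LB identifiers are placeholders:

  # Remove and re-add the router's external gateway while an OVN LB with a
  # VIP on the provider network exists:
  openstack router unset --external-gateway router1
  openstack router set --external-gateway public router1

  # After the gateway is re-added, the LB should be back on the router:
  ovn-nbctl lr-lb-list neutron-<router1-uuid>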

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1991509

Title:
  [ovn-octavia-provider] router gateway unset + set breaks ovn lb
  connectivity

Status in neutron:
  In Progress

Bug description:
  The LogicalRouterPortEvent for gateway_chassis ports is skipped [1];
  however, if the OVN LB VIPs are on a provider network, the create event
  needs to be handled so that the loadbalancer gets properly configured
  and added to the router.

  
  [1] 
https://opendev.org/openstack/ovn-octavia-provider/src/commit/acbf6e7f3e223c088582390475c84464bc27227d/ovn_octavia_provider/event.py#L39

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1991509/+subscriptions




[Yahoo-eng-team] [Bug 1988793] Re: OVN as a Provider Driver for Octavia in ovn-octavia-provider

2022-09-14 Thread Luis Tomas Bolivar
This is not a limitation. The failover action is already properly handled
to state that it is not supported. But this is not due to a limitation in
the ovn-octavia driver; it is because this functionality is not needed at
all (I would say this is an improvement). In the amphora case you have a VM
that needs to be recovered (failed over) on certain occasions. In ovn-octavia
there is no VM doing the loadbalancing (with its pros and cons), and the
flows are already distributed across all the nodes, so there is no need to
failover/recover anything.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988793

Title:
  OVN as a Provider Driver for Octavia in ovn-octavia-provider

Status in neutron:
  Invalid

Bug description:
  - [X] This is a doc addition request.

  Under Limitations of the OVN Provider Driver[1] I believe we should
  add that manual failover is not supported as per [2]

  
  This also should be updated imo [3]

  [1] https://docs.openstack.org/ovn-octavia-
  provider/latest/admin/driver.html#limitations-of-the-ovn-provider-
  driver

  [2] https://bugs.launchpad.net/neutron/+bug/1901936

  [3] https://docs.openstack.org/ovn-octavia-
  provider/latest/contributor/loadbalancer.html#limitations

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988793/+subscriptions




[Yahoo-eng-team] [Bug 1982111] [NEW] [ovn-octavia-provider] members without subnet wrongly associated to VIP subnet

2022-07-19 Thread Luis Tomas Bolivar
Public bug reported:

When members are added without subnet_id information, the ovn-octavia
provider uses the VIP subnet as the subnet_id. However, if the member
does not belong to the same subnet as the VIP subnet (i.e., a different
CIDR), the API does not return any error but there is no connectivity to
the member (as it does not belong to the assumed subnet).

An extra check to ensure the VIP CIDR includes the member IP should
be done.
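
Until such a check exists, the ambiguity can be avoided on the caller side by
always passing the member subnet explicitly; a sketch with placeholder values:

  # Passing --subnet-id avoids the provider silently assuming the VIP subnet:
  openstack loadbalancer member create --address 192.168.122.18 \
      --protocol-port 8080 --subnet-id <member-subnet-id> pool1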

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1982111

Title:
   [ovn-octavia-provider] members without subnet wrongly associated to
  VIP subnet

Status in neutron:
  In Progress

Bug description:
  When members are added without subnet_id information, the ovn-octavia
  provider uses the VIP subnet as the subnet_id. However, if the member
  does not belong to the same subnet as the VIP subnet (i.e., a different
  CIDR), the API does not return any error but there is no connectivity
  to the member (as it does not belong to the assumed subnet).

  An extra check to ensure the VIP CIDR includes the member IP should
  be done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1982111/+subscriptions




[Yahoo-eng-team] [Bug 1965732] [NEW] loadbalancer stuck in PENDING_X if delete_vip_port fails

2022-03-21 Thread Luis Tomas Bolivar
Public bug reported:

Load balancers get stuck in PENDING_X status if the delete_vip_port function
fails with an error other than PortNotFound when:
- deleting a loadbalancer
- a loadbalancer creation has failed

The problem comes from the proper status update not being sent back to
Octavia.
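
The stuck state is visible from the Octavia API; a quick check with a
placeholder load balancer ID:

  # A finished delete/create should not stay in a PENDING_* state:
  openstack loadbalancer show <lb-id> -c provisioning_status -f value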

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1965732

Title:
  loadbalancer stuck in PENDING_X if delete_vip_port fails

Status in neutron:
  In Progress

Bug description:
  Load balancers get stuck in PENDING_X status if the delete_vip_port
  function fails with an error other than PortNotFound when:
  - deleting a loadbalancer
  - a loadbalancer creation has failed

  The problem comes from the proper status update not being sent back to
  Octavia.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1965732/+subscriptions




[Yahoo-eng-team] [Bug 1964901] [NEW] Wrong addition of VIPs to all logical router pods leading to triggering GARP on different locations

2022-03-15 Thread Luis Tomas Bolivar
Public bug reported:

When a loadbalancer is created in an OSP tenant network (VIP and
members), and that tenant network is connected to a router, which in
turn is connected to the provider network, the ovn loadbalancer gets
associated with the ovn logical router. This also includes the cr-lrp port
(the patch port connecting the router and the provider network; in OSP
terms, the router gateway port), and it can be seen in the nat_addresses
entry of that port, which includes the VIP of the loadbalancer.

This may cause problems, as it means ovn-controller will send GARPs for
that (internal, tenant network) IP. There is nothing blocking different
tenants in OSP from creating a subnet with the same CIDR and then a
loadbalancer with the same VIP. If that is the case, there may be
several ovn-controllers generating GARPs on the provider network for the
same IP, each one with the MAC of the logical router port belonging to
each user. This could be a problem for the physical network
infrastructure.


Steps to Reproduce:
1. Create a router in OSP and attach it to the provider network
2. Create a tenant network/subnet and connect it to the router
3. Create a Load Balancer in OSP, with the VIP in  that tenant network

Actual results:
Check that the VIP of the loadbalancer appears in the OVN SB Port_Binding
table, in the nat_addresses of the patch port connecting the router to the
provider network:

datapath: e3a0a334-9a02-41c7-a64d-6ea747839808
external_ids: {"neutron:cidrs"="172.24.100.181/24 
2001:db8::f816:3eff:fe77:7f9c/64", 
"neutron:device_id"="335cd008-216f-4571-a685-b0de5a7ffe50", 
"neutron:device_owner"="network:router_gateway", 
"neutron:network_name"=neutron-d923b3db-500d-4241-95be-c3869c72b36a, 
"neutron:port_name"="", "neutron:project_id"="", "neutron:revision_number"="6", 
"neutron:security_group_ids"=""}

logical_port: "add962d2-21ab-4733-b6ef-35538eff25a8"
mac : [router]
nat_addresses   : ["fa:16:3e:77:7f:9c 172.24.100.181 
is_chassis_resident(\"cr-lrp-add962d2-21ab-4733-b6ef-35538eff25a8\")", 
"fa:16:3e:77:7f:9c 172.24.100.229 *20.0.0.98* 172.24.100.112 
is_chassis_resident(\"cr-lrp-add962d2-21ab-4733-b6ef-35538eff25a8\")"]
options : {peer=lrp-add962d2-21ab-4733-b6ef-35538eff25a8}
parent_port : []
tag : []
tunnel_key  : 4
type: patch
up  : false
virtual_parent  : []


Expected results:
In the example, IP 20.0.0.98 should not be there, as it belongs to a tenant
network and should not be advertised (GARP) on the provider network.
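
The output above can be obtained with a filtered Port_Binding query; a minimal
sketch:

  # Show the addresses each router patch port announces via GARP:
  ovn-sbctl --columns=logical_port,nat_addresses find Port_Binding type=patch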

** Affects: neutron
 Importance: Undecided
 Status: Fix Committed

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964901

Title:
  Wrong addition of VIPs to all logical router pods leading to
  triggering GARP on different locations

Status in neutron:
  Fix Committed

Bug description:
  When a loadbalancer is created in an OSP tenant network (VIP and
  members), and that tenant network is connected to a router, which in
  turn is connected to the provider network, the ovn loadbalancer gets
  associated with the ovn logical router. This also includes the cr-lrp
  port (the patch port connecting the router and the provider network; in
  OSP terms, the router gateway port), and it can be seen in the
  nat_addresses entry of that port, which includes the VIP of the
  loadbalancer.

  This may cause problems, as it means ovn-controller will send GARPs
  for that (internal, tenant network) IP. There is nothing blocking
  different tenants in OSP from creating a subnet with the same CIDR and
  then a loadbalancer with the same VIP. If that is the case, there may
  be several ovn-controllers generating GARPs on the provider network
  for the same IP, each one with the MAC of the logical router port
  belonging to each user. This could be a problem for the physical
  network infrastructure.

  
  Steps to Reproduce:
  1. Create a router in OSP and attach it to the provider network
  2. Create a tenant network/subnet and connect it to the router
  3. Create a Load Balancer in OSP, with the VIP in  that tenant network

  Actual results:
  Check that the VIP of the loadbalancer appears in the OVN SB Port_Binding
  table, in the nat_addresses of the patch port connecting the router to the
  provider network:

  datapath: e3a0a334-9a02-41c7-a64d-6ea747839808
  external_ids: {"neutron:cidrs"="172.24.100.181/

[Yahoo-eng-team] [Bug 1962713] Re: Race between loadbalancer creation and FIP association with ovn-octavia provider

2022-03-02 Thread Luis Tomas Bolivar
Yes, this is what I did in this (partial) backport:
https://review.opendev.org/c/openstack/networking-ovn/+/831349/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962713

Title:
  Race between loadbalancer creation and FIP association with ovn-
  octavia provider

Status in neutron:
  Fix Released

Bug description:
  With Kuryr, when a service of LoadBalancer type is created in kubernetes,
  the process is the following:
  - Create a load balancer
  - Associate a FIP with the load balancer VIP

  In busy environments, with HA, there may be a race condition where
  the method that associates the FIP with the loadbalancer fails to find
  the recently created loadbalancer, and therefore does not do the FIP-to-
  VIP association in the OVN NB DB. This breaks the connectivity to the
  LoadBalancer FIP (the k8s external-ip associated with the loadbalancer)
  until there is a modification of the service (for instance, adding
  a new member/endpoint) and the FIP-to-VIP association is reconfigured.

  This problem only happens in stable/train, as the fix was released as part
  of this code reshape:
  https://opendev.org/openstack/ovn-octavia-provider/commit/c6cee9207349a12e499cbc81fe0e5d4d5bfa015c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1962713/+subscriptions




[Yahoo-eng-team] [Bug 1962713] [NEW] Race between loadbalancer creation and FIP association with ovn-octavia provider

2022-03-02 Thread Luis Tomas Bolivar
Public bug reported:

With Kuryr, when a service of LoadBalancer type is created in kubernetes,
the process is the following:
- Create a load balancer
- Associate a FIP with the load balancer VIP

In busy environments, with HA, there may be a race condition where
the method that associates the FIP with the loadbalancer fails to find
the recently created loadbalancer, and therefore does not do the FIP-to-
VIP association in the OVN NB DB. This breaks the connectivity to the
LoadBalancer FIP (the k8s external-ip associated with the loadbalancer)
until there is a modification of the service (for instance, adding
a new member/endpoint) and the FIP-to-VIP association is reconfigured.

This problem only happens in stable/train, as the fix was released as part
of this code reshape:
https://opendev.org/openstack/ovn-octavia-provider/commit/c6cee9207349a12e499cbc81fe0e5d4d5bfa015c
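
Whether the association actually landed in the NB DB can be checked on the
load balancer's vips map, where the FIP should appear as an additional VIP
key; a minimal sketch:

  # The vips column of the affected row should contain both the internal VIP
  # and the FIP once the association has been written:
  ovn-nbctl --columns=name,vips list Load_Balancer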

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962713

Title:
  Race between loadbalancer creation and FIP association with ovn-
  octavia provider

Status in neutron:
  New

Bug description:
  With Kuryr, when a service of LoadBalancer type is created in kubernetes,
  the process is the following:
  - Create a load balancer
  - Associate a FIP with the load balancer VIP

  In busy environments, with HA, there may be a race condition where
  the method that associates the FIP with the loadbalancer fails to find
  the recently created loadbalancer, and therefore does not do the FIP-to-
  VIP association in the OVN NB DB. This breaks the connectivity to the
  LoadBalancer FIP (the k8s external-ip associated with the loadbalancer)
  until there is a modification of the service (for instance, adding
  a new member/endpoint) and the FIP-to-VIP association is reconfigured.

  This problem only happens in stable/train, as the fix was released as part
  of this code reshape:
  https://opendev.org/openstack/ovn-octavia-provider/commit/c6cee9207349a12e499cbc81fe0e5d4d5bfa015c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1962713/+subscriptions




[Yahoo-eng-team] [Bug 1958961] Re: [ovn-octavia-provider] lb create failing with with ValueError: invalid literal for int() with base 10: '24 2001:db8::131/64'

2022-02-16 Thread Luis Tomas Bolivar
This bug is the same as https://bugs.launchpad.net/neutron/+bug/1959903
(or a subset of it).

The fix at https://review.opendev.org/c/openstack/ovn-octavia-
provider/+/827670 also solves this problem

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958961

Title:
  [ovn-octavia-provider] lb create failing with with ValueError: invalid
  literal for int() with base 10: '24 2001:db8::131/64'

Status in neutron:
  Fix Released

Bug description:
  When deployed with the octavia-ovn-provider using the local.conf below,
  loadbalancer create (openstack loadbalancer create --vip-network-id
  public --provider ovn) goes into ERROR state.

  From o-api logs:-
  ERROR ovn_octavia_provider.helper Traceback (most recent call last):
  ERROR ovn_octavia_provider.helper   File 
"/usr/local/lib/python3.8/dist-packages/netaddr/ip/__init__.py", line 811, in 
parse_ip_network
  ERROR ovn_octavia_provider.helper prefixlen = int(val2)
  ERROR ovn_octavia_provider.helper ValueError: invalid literal for int() with 
base 10: '24 2001:db8::131/64'

  Seems to be a regression caused by
  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/816868.

  # Logical switch ports output
  sudo ovn-nbctl find logical_switch_port  type=router 
  _uuid   : 4865f50c-a2cd-4a5c-ae4a-bbc911985fb2
  addresses   : [router]
  dhcpv4_options  : []
  dhcpv6_options  : []
  dynamic_addresses   : []
  enabled : true
  external_ids: {"neutron:cidrs"="172.24.4.149/24 2001:db8::131/64", 
"neutron:device_id"="31a0e24f-6278-4714-b543-cba735a6c49d", 
"neutron:device_owner"="network:router_gateway", 
"neutron:network_name"=neutron-4708e992-cff8-4438-8142-1cc2ac7010db, 
"neutron:port_name"="", "neutron:project_id"="", "neutron:revision_number"="6", 
"neutron:security_group_ids"=""}
  ha_chassis_group: []
  name: "c18869b9--49a8-bc8a-5d2c51db5b6e"
  options : {mcast_flood_reports="true", nat-addresses=router, 
requested-chassis=ykarel-devstack, 
router-port=lrp-c18869b9--49a8-bc8a-5d2c51db5b6e}
  parent_name : []
  port_security   : []
  tag : []
  tag_request : []
  type: router
  up  : true

  _uuid   : f0ed6566-a942-4e2d-94f5-64ccd6bed568
  addresses   : [router]
  dhcpv4_options  : []
  dhcpv6_options  : []
  dynamic_addresses   : []
  enabled : true
  external_ids: {"neutron:cidrs"="fd25:38d5:1d9::1/64", 
"neutron:device_id"="31a0e24f-6278-4714-b543-cba735a6c49d", 
"neutron:device_owner"="network:router_interface", 
"neutron:network_name"=neutron-591d2b8c-3501-49b1-822c-731f2cc9b305, 
"neutron:port_name"="", "neutron:project_id"=f4c9948020024e13a1a091bd09d1fbba, 
"neutron:revision_number"="3", "neutron:security_group_ids"=""}
  ha_chassis_group: []
  name: "e778ac75-a15b-441b-b334-6a7579f851fa"
  options : {router-port=lrp-e778ac75-a15b-441b-b334-6a7579f851fa}
  parent_name : []
  port_security   : []
  tag : []
  tag_request : []
  type: router
  up  : true

  _uuid   : 9c2f3327-ac94-4881-a9c5-a6da87acf6a3
  addresses   : [router]
  dhcpv4_options  : []
  dhcpv6_options  : []
  dynamic_addresses   : []
  enabled : true
  external_ids: {"neutron:cidrs"="10.0.0.1/26", 
"neutron:device_id"="31a0e24f-6278-4714-b543-cba735a6c49d", 
"neutron:device_owner"="network:router_interface", 
"neutron:network_name"=neutron-591d2b8c-3501-49b1-822c-731f2cc9b305, 
"neutron:port_name"="", "neutron:project_id"=f4c9948020024e13a1a091bd09d1fbba, 
"neutron:revision_number"="3", "neutron:security_group_ids"=""}
  ha_chassis_group: []
  name: "d728e2a3-f9fd-4fff-8a6f-0c55a26bc55c"
  options : {router-port=lrp-d728e2a3-f9fd-4fff-8a6f-0c55a26bc55c}
  parent_name : []
  port_security   : []
  tag : []
  tag_request : []
  type: router
  up  : true

  
  local.conf
  ==

  [[local|localrc]]
  RECLONE=yes
  DATABASE_PASSWORD=password
  RABBIT_PASSWORD=password
  SERVICE_PASSWORD=password
  SERVICE_TOKEN=password
  ADMIN_PASSWORD=password
  Q_AGENT=ovn
  Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovn,logger
  Q_ML2_PLUGIN_TYPE_DRIVERS=local,flat,vlan,geneve
  Q_ML2_TENANT_NETWORK_TYPE="geneve"
  OVN_BRANCH="v21.06.0"
  OVN_BUILD_FROM_SOURCE="True"
  OVS_BRANCH="branch-2.15"
  OVS_SYSCONFDIR="/usr/local/etc/openvswitch"
  OVN_L3_CREATE_PUBLIC_NETWORK=True
  OCTAVIA_NODE="api"
  DISABLE_AMP_IMAGE_BUILD=True
  enable_plugin barbican https://opendev.org/openstack/barbican
  enable_plugin octavia https://opendev.org/openstack/octavia
  enable_plugin octavia-dashboard 

[Yahoo-eng-team] [Bug 1958964] [NEW] Fully Populated Load Balancer creation with OVN provider leaves pools as PENDING_CREATE

2022-01-25 Thread Luis Tomas Bolivar
Public bug reported:

Creating a Fully Populated Load Balancer with OVN as provider
successfully creates the LB but the pool stays in PENDING_CREATE state
forever.

How reproducible:
Send the fully populated request to the API in JSON format following 
https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-pool-detail,create-a-load-balancer-detail#creating-a-fully-populated-load-balancer


Steps to Reproduce:
1.
#!/bin/sh
 
set -e
 
subnet_id=$(openstack subnet show private-subnet -c id -f value)
 
TOKEN=$(openstack token issue -f value -c id)
OCTAVIA_BASE_URL=$(openstack endpoint list --service octavia --interface public 
-c URL -f value)
 
cat <<EOF > tree.json
{
"loadbalancer": {
"name": "lb1",
"vip_subnet_id": "$subnet_id",
"provider": "ovn",
"listeners": [
{
"name": "listener1",
"protocol": "TCP",
"protocol_port": 80,
"default_pool": {
"name": "pool1",
"protocol": "TCP",
"lb_algorithm": "SOURCE_IP_PORT",
"members": [
{
"address": "192.168.122.18",
"protocol_port": 8080
}, {
"address": "192.168.122.19",
"protocol_port": 8080
}
]
}
}
]
}
}
EOF
 
curl -X POST \
-H "Content-Type: application/json" \
-H "X-Auth-Token: $TOKEN" \
-d @tree.json \
${OCTAVIA_BASE_URL}/v2.0/lbaas/loadbalancers


2. openstack loadbalancer pool list


The LB is created but the listeners, pool and members stay in PENDING_CREATE status

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958964

Title:
  Fully Populated Load Balancer creation with OVN provider leaves pools
  as PENDING_CREATE

Status in neutron:
  In Progress

Bug description:
  Creating a Fully Populated Load Balancer with OVN as provider
  successfully creates the LB but the pool stays in PENDING_CREATE state
  forever.

  How reproducible:
  Send the fully populated request to the API in JSON format following 
https://docs.openstack.org/api-ref/load-balancer/v2/index.html?expanded=create-pool-detail,create-a-load-balancer-detail#creating-a-fully-populated-load-balancer

  
  Steps to Reproduce:
  1.
  #!/bin/sh
   
  set -e
   
  subnet_id=$(openstack subnet show private-subnet -c id -f value)
   
  TOKEN=$(openstack token issue -f value -c id)
  OCTAVIA_BASE_URL=$(openstack endpoint list --service octavia --interface 
public -c URL -f value)
   
  cat <<EOF > tree.json
  {
  "loadbalancer": {
  "name": "lb1",
  "vip_subnet_id": "$subnet_id",
  "provider": "ovn",
  "listeners": [
  {
  "name": "listener1",
  "protocol": "TCP",
  "protocol_port": 80,
  "default_pool": {
  "name": "pool1",
  "protocol": "TCP",
  "lb_algorithm": "SOURCE_IP_PORT",
  "members": [
  {
  "address": "192.168.122.18",
  "protocol_port": 8080
  }, {
  "address": "192.168.122.19",
  "protocol_port": 8080
  }
  ]
  }
  }
  ]
  }
  }
  EOF
   
  curl -X POST \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $TOKEN" \
  -d @tree.json \
  ${OCTAVIA_BASE_URL}/v2.0/lbaas/loadbalancers

  
  2. openstack loadbalancer pool list

  
  The LB is created but the listeners, pool and members stay in PENDING_CREATE status

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958964/+subscriptions




[Yahoo-eng-team] [Bug 1957161] [NEW] Wrong ACTIVE status of subports attached to a trunk whose parent is DOWN

2022-01-12 Thread Luis Tomas Bolivar
Public bug reported:

Subports of a trunk should have the same status as the parent port.
However, with ovn, if the parent port is in DOWN status, the subports
are transitioned to ACTIVE as soon as they are attached to the trunk

Steps to reproduce
- Create 2 ports, Port1 and Port2 (in DOWN status)
- Create trunk: openstack network trunk create --parent-port Port1 trunk
- Add Port2 as a subport of the trunk: openstack network trunk set --subport 
port=Port2,segmentation-type=vlan,segmentation-id=101 trunk
- Check that the status of Port2 is ACTIVE, while it should be DOWN
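
The status in the last step can be read directly from the API; a minimal
check:

  # Expected: DOWN (matching the parent); with this bug it returns ACTIVE:
  openstack port show Port2 -c status -f value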

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1957161

Title:
  Wrong ACTIVE status of subports attached to a trunk whose parent is
  DOWN

Status in neutron:
  New

Bug description:
  Subports of a trunk should have the same status as the parent port.
  However, with ovn, if the parent port is in DOWN status, the subports
  are transitioned to ACTIVE as soon as they are attached to the trunk

  Steps to reproduce
  - Create 2 ports, Port1 and Port2 (in DOWN status)
  - Create trunk: openstack network trunk create --parent-port Port1 trunk
  - Add Port2 as a subport of the trunk: openstack network trunk set --subport 
port=Port2,segmentation-type=vlan,segmentation-id=101 trunk
  - Check that the status of Port2 is ACTIVE, while it should be DOWN

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1957161/+subscriptions




[Yahoo-eng-team] [Bug 1956745] [NEW] [ovn-octavia-provider] Load Balancer remained with ACTIVE state even with PENDING_UPDATE listener

2022-01-07 Thread Luis Tomas Bolivar
|
+-+-+

(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer pool show 
302709f7-d122-49d7-9d05-8e57ccf2ad11
+--+-+
| Field| Value   |
+--+-+
| admin_state_up   | True|
| created_at   | 2021-11-02T05:21:49 |
| description  | |
| healthmonitor_id | |
| id   | 302709f7-d122-49d7-9d05-8e57ccf2ad11|
| lb_algorithm | SOURCE_IP_PORT  |
| listeners| 39f36082-3a07-44c0-9b00-41ba12db49fa|
| loadbalancers| 8d03213c-ab26-4391-9d26-4878fa6b5d02|
| members  | 214e892a-4e25-46ce-8bba-9ae005304931|
| name | openshift-marketplace/community-operators:TCP:50051 |
| operating_status | OFFLINE |
| project_id   | fd0d6a21436d4ff987682c3a0419569d|
| protocol | TCP |
| provisioning_status  | ERROR   |
| session_persistence  | None|
| updated_at   | 2021-11-02T13:45:52 |
| tls_container_ref| None|
| ca_tls_container_ref | None|
| crl_container_ref| None|
| tls_enabled  | False   |
+--+-+

(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer member list 
302709f7-d122-49d7-9d05-8e57ccf2ad11
+--+---+--+-+---+---+--++
| id   | name   
   | project_id   | provisioning_status | 
address   | protocol_port | operating_status | weight |
+--+---+--+-+---+---+--++
| 214e892a-4e25-46ce-8bba-9ae005304931 | 
openshift-marketplace/community-operators-zbmxw:50051 | 
fd0d6a21436d4ff987682c3a0419569d | ACTIVE  | 10.128.46.128 |
 50051 | NO_MONITOR   |  1 |
+--+---+--+-+---+---+--++

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1956745

Title:
  [ovn-octavia-provider]  Load Balancer remained with ACTIVE state even
  with PENDING_UPDATE listener

Status in neutron:
  In Progress

Bug description:
  Description of problem:

  The Load Balancer remained in ACTIVE and ONLINE state even when the
  listener had provisioning_status PENDING_UPDATE and the pool was in
  ERROR, which made the load-balancer be considered functional by
  Kuryr, and a load-balancer member removal was attempted without
  success as the pool was immutable:

  2021-11-03 17:04:53.893 1 ERROR kuryr_kubernetes.handlers.logging 
openstack.exceptions.ConflictException: ConflictException: 409: Client Error 
for url: https://10.x.x.x:13876/v2.0/lbaas/po
  
ols/302709f7-d122-49d7-9d05-8e57ccf2ad11/members/214e892a-4e25-46ce-8bba-9ae005304931,
 Pool 302709f7-d122-49d7-9d05-8e57ccf2ad11 is immutable and cannot be updated.  

  2021-11-03 17:04:53.893 1 ERROR kuryr_kubernetes.handlers.logging 

  (shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer show 
8d03213c-ab26-4391-9d26-4878fa6b5d02 
  +-+---+
  | Field   | Value |
  +-+---+
  | admin_state_up  | True  |
  | created_at  | 2021-11-02T05:21:15

[Yahoo-eng-team] [Bug 1696051] [NEW] Missing binding details for bulk port creation requests

2017-06-06 Thread Luis Tomas Bolivar
Public bug reported:

When doing bulk port creation requests the returned object does not have
the binding information -- unlike the 'standard' port creation.

For a single (standard) port creation with:
neutron_client.create_port(rq).get('port') 

where rq has:

{'port': {'device_owner': 'compute:kuryr', 'binding:host_id': u'kuryr-
devstack', 'name': 'available-port', 'admin_state_up': True,
'network_id': '9b360a57-fb9f-4c6e-a636-b63d0558c551', 'project_id':
'cdf106e1045f47868df764863e58578a', 'fixed_ips': [{'subnet_id':
'2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa'}], 'security_groups': ['7c384ae3
-b43e-4d5a-b14a-0f0eae8967e0'], 'device_id': ''}}

this is the returned object:
{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': 
u'2017-06-05T10:59:45Z', u'device_owner': u'compute:kuryr', u'revision_number': 
9, u'port_security_enabled': True, u'binding:profile': {}, u'fixed_ips': 
[{u'subnet_id': u'2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa', u'ip_address': 
u'10.0.0.3'}, {u'subnet_id': u'a76b53d2-5654-4256-bafc-73f9756e151a', 
u'ip_address': u'fd16:3870:3761:0:f816:3eff:fee3:111d'}], u'id': 
u'4f30dc4d-37a0-4f82-a146-b432ead06860', u'security_groups': 
[u'7c384ae3-b43e-4d5a-b14a-0f0eae8967e0'], u'binding:vif_details': 
{u'port_filter': True, u'ovs_hybrid_plug': False}, u'binding:vif_type': u'ovs', 
u'mac_address': u'fa:16:3e:e3:11:1d', u'project_id': 
u'cdf106e1045f47868df764863e58578a', u'status': u'DOWN', u'binding:host_id': 
u'kuryr-devstack', u'description': u'', u'tags': [], u'device_id': u'', 
u'name': u'available-port', u'admin_state_up': True, u'network_id': 
u'9b360a57-fb9f-4c6e-a636-b63d0558c551', u'tenant_id': 
u'cdf106e1045f47868df764863e58578a'
 , u'created_at': u'2017-06-05T10:59:45Z', u'binding:vnic_type': u'normal'}

However, when doing the same call but for bulk requests (i.e., 
neutron_client.create_port(rq).get('ports')), with rq:
{'ports': [{'device_owner': 'compute:kuryr', 'binding:host_id': 
u'kuryr-devstack', 'name': 'available-port', 'admin_state_up': True, 
'network_id': '9b360a57-fb9f-4c6e-a636-b63d0558c551', 'project_id': 
'cdf106e1045f47868df764863e58578a', 'fixed_ips': [{'subnet_id': 
'2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa'}], 'security_groups': 
['7c384ae3-b43e-4d5a-b14a-0f0eae8967e0'], 'device_id': ''}]}

The returned object is:
[{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': 
u'2017-06-05T10:59:44Z', u'device_owner': u'compute:kuryr', u'revision_number': 
9, u'binding:profile': {}, u'port_security_enabled': True, u'fixed_ips': 
[{u'subnet_id': u'2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa', u'ip_address': 
u'10.0.0.6'}, {u'subnet_id': u'a76b53d2-5654-4256-bafc-73f9756e151a', 
u'ip_address': u'fd16:3870:3761:0:f816:3eff:fec1:a5e4'}], u'id': 
u'47342c62-8235-4a82-ac0e-681a44d5b7f2', u'security_groups': 
[u'7c384ae3-b43e-4d5a-b14a-0f0eae8967e0'], u'binding:vif_details': {}, 
u'binding:vif_type': u'unbound', u'mac_address': u'fa:16:3e:c1:a5:e4', 
u'project_id': u'cdf106e1045f47868df764863e58578a', u'status': u'DOWN', 
u'binding:host_id': u'kuryr-devstack', u'description': u'', u'tags': [], 
u'device_id': u'', u'name': u'available-port', u'admin_state_up': True, 
u'network_id': u'9b360a57-fb9f-4c6e-a636-b63d0558c551', u'tenant_id': 
u'cdf106e1045f47868df764863e58578a', u'created_at': u'2017-06-05T10:59:43Z', 
 u'binding:vnic_type': u'normal'}]

Even though this is a bulk creation of just a single port, with the same request 
information as before (the same port attributes, but wrapped in a list of ports 
instead of a single port dict), the binding details are missing:
u'binding:vif_details': {}, u'binding:vif_type': u'unbound'
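
The difference can be reproduced with a minimal script along the lines of the
following sketch. It assumes python-neutronclient; the auth_url and credentials
are placeholders (not values from this report), while the resource IDs are the
ones shown above:

from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

# Placeholder credentials -- adjust to the local deployment.
sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_id='default', project_domain_id='default'))
neutron = client.Client(session=sess)

port_attrs = {
    'device_owner': 'compute:kuryr',
    'binding:host_id': 'kuryr-devstack',
    'name': 'available-port',
    'admin_state_up': True,
    'network_id': '9b360a57-fb9f-4c6e-a636-b63d0558c551',
    'fixed_ips': [{'subnet_id': '2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa'}],
    'security_groups': ['7c384ae3-b43e-4d5a-b14a-0f0eae8967e0'],
    'device_id': '',
}

# Single port creation: the binding details are filled in.
single = neutron.create_port({'port': port_attrs})['port']
print(single['binding:vif_type'], single['binding:vif_details'])
# -> ovs {'port_filter': True, 'ovs_hybrid_plug': False}

# Bulk creation of the very same (single) port: the binding details come back empty.
bulk = neutron.create_port({'ports': [port_attrs]})['ports']
print(bulk[0]['binding:vif_type'], bulk[0]['binding:vif_details'])
# -> unbound {}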

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696051

Title:
  Missing binding details for bulk port creation requests

Status in neutron:
  New

Bug description:
  When doing bulk port creation requests the returned object does not
  have the binding information -- unlike the 'standard' port creation.

  For a single (standard) port creation with:
  neutron_client.create_port(rq).get('port') 

  where rq has:

  {'port': {'device_owner': 'compute:kuryr', 'binding:host_id': u'kuryr-
  devstack', 'name': 'available-port', 'admin_state_up': True,
  'network_id': '9b360a57-fb9f-4c6e-a636-b63d0558c551', 'project_id':
  'cdf106e1045f47868df764863e58578a', 'fixed_ips': [{'subnet_id':
  '2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa'}], 'security_groups':
  ['7c384ae3-b43e-4d5a-b14a-0f0eae8967e0'], 'device_id': ''}}

  this is the returned object:
  {u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': 
u'2017-06-05T10:59:45Z', u'device_owner': u'compute:kuryr', u'revision_number': 
9, u'port_security_enabled': True, u'binding:profile': {}, u'fixed_ips': 
[{u'subnet_id': u'2d58a9ba-e1d2-4ed1-90d0-a6ea22d0f3aa', u'ip_address': 
u'10.0.0.3'}, {u'subnet_id': 

[Yahoo-eng-team] [Bug 1657441] [NEW] Remove the set device_owner when attaching subports

2017-01-18 Thread Luis Tomas Bolivar
Public bug reported:

This is not required by business logic as pointed out in the commit
message of https://review.openstack.org/#/c/368289/, and in some
cases can lead to race problems.

There may be other projects using neutron trunk ports, such as kuryr,
that trigger the port/subport creation themselves and therefore want to
use the device_owner field to indicate that the kuryr service is the one
managing those subports.

To give an example of the possible race: since trunk_add_subport
internally calls update_port to set the device owner, a caller such as
kuryr that attaches a subport and then sets the device_owner to kuryr can
end up with the wrong device_owner. Even though the calls are triggered
in this order:
1.- trunk_add_subport (internally calls update_port)
2.- update_port

the caller's update_port may be executed between trunk_add_subport and its
internal call to update_port, resulting in device_owner being set to
trunk:subport instead of kuryr.
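
A minimal sketch of the racing call sequence as seen from a caller like kuryr.
The client setup, trunk/port IDs and the 'kuryr:container' device_owner value
are illustrative placeholders, and trunk_add_subports is assumed to be the
python-neutronclient call backing the subport attach API:

from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

# Placeholder credentials/IDs -- illustrative only.
sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_id='default', project_domain_id='default'))
neutron = client.Client(session=sess)

trunk_id = '<trunk-uuid>'
subport_id = '<subport-uuid>'

# 1. Attach the subport to the trunk. Internally, neutron issues its own
#    update_port setting device_owner='trunk:subport' on the subport.
neutron.trunk_add_subports(
    trunk_id,
    {'sub_ports': [{'port_id': subport_id,
                    'segmentation_type': 'vlan',
                    'segmentation_id': 101}]})

# 2. The caller marks the subport as managed by kuryr.
neutron.update_port(subport_id,
                    {'port': {'device_owner': 'kuryr:container'}})

# Race: if the internal update_port from step 1 is processed after step 2,
# the subport ends up with device_owner='trunk:subport' instead of the
# value set by the caller.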

Possible solutions are:
- Revert the commit https://review.openstack.org/#/c/368289/. We have already
had some discussion about this in the patch
https://review.openstack.org/#/c/419028/
- Make setting the device_owner optional based on the value of TRUNK_SUBPORT_OWNER
- Define what the scope of device_owner should be, to clarify how this attribute
should be used within/outside neutron

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657441

Title:
  Remove the set device_owner when attaching subports

Status in neutron:
  New

Bug description:
  This is not required by business logic as pointed out in the commit
  message of https://review.openstack.org/#/c/368289/, and in some
  cases can lead to race problems.

  There may be other projects using neutron trunk ports, such as kuryr,
  that trigger the port/subport creation themselves and therefore want to
  use the device_owner field to indicate that the kuryr service is the one
  managing those subports.

  To give an example of the possible race: since trunk_add_subport
  internally calls update_port to set the device owner, a caller such as
  kuryr that attaches a subport and then sets the device_owner to kuryr can
  end up with the wrong device_owner. Even though the calls are triggered
  in this order:
  1.- trunk_add_subport (internally calls update_port)
  2.- update_port

  the caller's update_port may be executed between trunk_add_subport and its
  internal call to update_port, resulting in device_owner being set to
  trunk:subport instead of kuryr.

  Possible solutions are:
  - Revert the commit https://review.openstack.org/#/c/368289/. We have already
  had some discussion about this in the patch
  https://review.openstack.org/#/c/419028/
  - Make setting the device_owner optional based on the value of TRUNK_SUBPORT_OWNER
  - Define what the scope of device_owner should be, to clarify how this attribute
  should be used within/outside neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639186] [NEW] qos max bandwidth rules not working for neutron trunk ports

2016-11-04 Thread Luis Tomas Bolivar
Public bug reported:

When using QoS together with Neutron trunk ports, the max bandwidth
limits are not applied, neither with ovs-hybrid nor with ovs-firewall.

The reason is that a new ovs bridge is created to handle the trunk (parent + 
subport) ports.
For instance:
Bridge "tbr-c5402c58-3"
Port "tpt-e739265b-2b"
Interface "tpt-e739265b-2b"
type: patch
options: {peer="tpi-e739265b-2b"}
Port "qvoe739265b-2b"
Interface "qvoe739265b-2b"
Port "spt-17c950c4-f5"
tag: 101
Interface "spt-17c950c4-f5"
type: patch
options: {peer="spi-17c950c4-f5"}
Port "tbr-c5402c58-3"
Interface "tbr-c5402c58-3"
type: internal

Then, _set_egress_bw_limit_for_port
(https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L553)
is applied to tpi-e739265b-2b or spi-17c950c4-f5 (depending on whether the
QoS rule is applied to the parent or to the subport, respectively).
However, these interfaces are of patch type, i.e., they are fully virtual
and the kernel does not know about them, so the QoS rules have no effect.

To reproduce it:
- Enable QoS in the devstack local.conf:
enable_plugin neutron https://github.com/openstack/neutron
enable_service q-qos
- Enable trunk in neutron.conf:
service_plugins = ... qos,trunk

- Create a QoS rule
- Apply the QoS rule to either the parent or the subport port (see the sketch below)
- Test the bandwidth limit (e.g., with iperf)
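
The sketch below shows one possible way to reproduce the missing limit with
python-neutronclient, assuming a trunk whose parent port is plugged into a VM;
the credentials, the parent port UUID and the chosen rates are placeholders:

from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

# Placeholder credentials/IDs -- adjust to the local deployment.
sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_id='default', project_domain_id='default'))
neutron = client.Client(session=sess)

parent_port_id = '<parent-port-uuid>'

# Create a QoS policy with a 10 Mbps bandwidth limit rule.
policy = neutron.create_qos_policy(
    {'policy': {'name': 'bw-limit-10mbps'}})['policy']
neutron.create_bandwidth_limit_rule(
    policy['id'],
    {'bandwidth_limit_rule': {'max_kbps': 10000, 'max_burst_kbps': 1000}})

# Apply the policy to the trunk's parent port.
neutron.update_port(parent_port_id,
                    {'port': {'qos_policy_id': policy['id']}})

# On the compute node the limit ends up on the tpi-* patch interface of the
# trunk bridge, which the kernel does not see, so an iperf run through the
# VM still shows unthrottled bandwidth.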

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639186

Title:
  qos max bandwidth rules not working for neutron trunk ports

Status in neutron:
  New

Bug description:
  When using QoS together with Neutron trunk ports, the max bandwidth
  limits are not applied, neither with ovs-hybrid nor with ovs-firewall.

  The reason is that a new ovs bridge is created to handle the trunk (parent + 
subport) ports.
  For instance:
  Bridge "tbr-c5402c58-3"
  Port "tpt-e739265b-2b"
  Interface "tpt-e739265b-2b"
  type: patch
  options: {peer="tpi-e739265b-2b"}
  Port "qvoe739265b-2b"
  Interface "qvoe739265b-2b"
  Port "spt-17c950c4-f5"
  tag: 101
  Interface "spt-17c950c4-f5"
  type: patch
  options: {peer="spi-17c950c4-f5"}
  Port "tbr-c5402c58-3"
  Interface "tbr-c5402c58-3"
  type: internal

  Then, _set_egress_bw_limit_for_port
  (https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L553)
  is applied to tpi-e739265b-2b or spi-17c950c4-f5 (depending on whether the
  QoS rule is applied to the parent or to the subport, respectively).
  However, these interfaces are of patch type, i.e., they are fully virtual
  and the kernel does not know about them, so the QoS rules have no effect.

  To reproduce it:
  - Enable QoS in the devstack local.conf:
  enable_plugin neutron https://github.com/openstack/neutron
  enable_service q-qos
  - Enable trunk in neutron.conf:
  service_plugins = ... qos,trunk

  - Create a QoS rule
  - Apply the QoS rule to either the parent or the subport port
  - Test the bandwidth limit (e.g., with iperf)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591240] [NEW] progress_watermark is not updated

2016-06-10 Thread Luis Tomas
Public bug reported:

During the live migration process, progress_watermark/progress_time
are not being updated as the live migration makes progress, in the
"_live_migration_monitor" function (virt/libvirt/driver.py).

More specifically, in these lines of code:
if ((progress_watermark is None) or
        (progress_watermark > info.data_remaining)):
    progress_watermark = info.data_remaining
    progress_time = now


It may happen that the first time the block is entered (progress_watermark is None),
info.data_remaining is still 0, so progress_watermark is set to 0.
This prevents entering the "if" block in future iterations (as
progress_watermark=0 is never greater than info.data_remaining), so neither
progress_watermark nor progress_time is updated from that point on.

This may lead to (unneeded) migration aborts due to progress_time not
being updated, making (now - progress_time) > progress_timeout.

It can be fixed just by modifying the if clause as follows:
if ((progress_watermark is None) or
        (progress_watermark == 0) or
        (progress_watermark > info.data_remaining)):
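
The effect can be illustrated with a small standalone simulation of the
watermark logic (this is not nova code, and the data_remaining samples are
invented for the example):

# Standalone simulation of the watermark update logic; illustrative only.
def count_updates(data_remaining_samples, with_fix=False):
    progress_watermark = None
    updates = 0
    for remaining in data_remaining_samples:
        if ((progress_watermark is None) or
                (with_fix and progress_watermark == 0) or
                (progress_watermark > remaining)):
            progress_watermark = remaining
            updates += 1  # stands in for "progress_time = now"
    return updates

# First sample is 0 (no real data reported yet), then the migration progresses.
samples = [0, 900, 700, 400, 100]
print(count_updates(samples))                 # 1: watermark stuck at 0, time never refreshed
print(count_updates(samples, with_fix=True))  # 5: watermark/time keep being updated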

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591240

Title:
  progress_watermark is not updated

Status in OpenStack Compute (nova):
  New

Bug description:
  During the live migration process, progress_watermark/progress_time
  are not being updated as the live migration makes progress, in the
  "_live_migration_monitor" function (virt/libvirt/driver.py).

  More specifically, in these lines of code:
  if ((progress_watermark is None) or
          (progress_watermark > info.data_remaining)):
      progress_watermark = info.data_remaining
      progress_time = now

  
  It may happen that the first time the block is entered (progress_watermark is None),
  info.data_remaining is still 0, so progress_watermark is set to 0.
  This prevents entering the "if" block in future iterations (as
  progress_watermark=0 is never greater than info.data_remaining), so neither
  progress_watermark nor progress_time is updated from that point on.

  This may lead to (unneeded) migration aborts due to progress_time not
  being updated, making (now - progress_time) > progress_timeout.

  It can be fixed just by modifying the if clause as follows:
  if ((progress_watermark is None) or
          (progress_watermark == 0) or
          (progress_watermark > info.data_remaining)):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp