Public bug reported:
The flag redirect-type=bridge can only be used when there is no mix of
geneve and vlan networks in the same router, as handled here [1].
When there is such a mix, the flag reside-on-redirect-chassis is used
instead, but it does not work for all cases:
- Either you centralize
[1] /openstack/neutron/+/877675
[2] https://github.com/ovn-org/ovn/commit/ae9a5488824c49e25215b02e7e81a62eb4d0bd53
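The decision described above can be sketched roughly as follows. This is a hypothetical helper with assumed names, not the actual Neutron code:

```python
# Hypothetical sketch (not the actual Neutron code) of the decision
# described above: redirect-type=bridge is only valid when the router
# has no mix of tunnelled (geneve) and vlan networks; otherwise the
# gateway falls back to reside-on-redirect-chassis.

def pick_redirect_strategy(network_types):
    """Return the OVN gateway-port options for the given network types."""
    types = set(network_types)
    if "vlan" in types and types - {"vlan"}:
        # Mixed vlan + tunnelled networks: cannot use redirect-type=bridge,
        # so centralize the traffic on the redirect chassis instead.
        return {"reside-on-redirect-chassis": "true"}
    if types == {"vlan"}:
        # Pure vlan networks: distribute via the provider bridge.
        return {"redirect-type": "bridge"}
    return {}
```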
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas (luis5tb)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas (luis5tb)
Fix: https://review.opendev.org/c/openstack/neutron/+/886626
** Changed in: neutron
Status: Triaged => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025264
Title:
again.
However, if the whole loadbalancer gets deleted with the cascade option,
the traffic to the member FIP is left centralized (without the NAT table
being updated with the MAC again)
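The missing cleanup can be modelled in a few lines. This is an in-memory sketch whose entry and field names mimic OVN's NAT table but are purely illustrative, not the driver's actual API:

```python
# Minimal in-memory model of the missing cleanup described above: when
# the LB is cascade-deleted, dnat_and_snat entries for member FIPs
# should get their external_mac restored so traffic stops being
# centralized. Names are illustrative only.

def restore_fip_distribution(nat_entries, member_ips, saved_macs):
    """Re-add external_mac to dnat_and_snat entries of former members."""
    for entry in nat_entries:
        ip = entry["logical_ip"]
        if ip in member_ips and not entry.get("external_mac"):
            entry["external_mac"] = saved_macs.get(ip)
    return nat_entries
```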
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas (luis5tb)
Status: In Progress
the external_mac from NAT table entry, and the deletion of the NAT table
entry.
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)
--
There is no need to support this, as the SGs enforced are those of the
members, since the source IP does not change.
** Changed in: neutron
Status: New => Invalid
as part of [1]
[1] https://review.opendev.org/c/openstack/neutron/+/875644
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas (luis5tb)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas (luis5tb)
Public bug reported:
When VMs associated with a provider network, with port security disabled,
try to reach IPs on the provider network not known by OpenStack, there
is a flooding issue because the FDB table does not learn MACs. It seems
there is an option in OVN [1] to address this issue, but it is not
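The flooding behaviour can be illustrated with a toy switch model (this is not OVN code, just a demonstration of why the learning option referenced as [1] matters):

```python
# Toy switch model illustrating the flooding problem (not OVN code):
# without FDB learning, every frame to an unknown MAC is flooded to all
# other ports; with learning, replies populate the table and later
# frames are unicast.

class FdbSwitch:
    def __init__(self, ports, learning=True):
        self.ports = set(ports)
        self.learning = learning
        self.fdb = {}        # learned mac -> port
        self.flooded = 0     # number of flooded frames

    def forward(self, src_mac, in_port, dst_mac):
        """Return the set of output ports for one frame."""
        if self.learning:
            self.fdb[src_mac] = in_port
        out = self.fdb.get(dst_mac)
        if out is None:
            self.flooded += 1
            return self.ports - {in_port}  # flood to all other ports
        return {out}
```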
network we should avoid adding the LB to the
LS associated with the provider network
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
the networker
node instead of directly from the node
[1]
https://opendev.org/openstack/networking-ovn/commit/1440207c0d568068a37a306a7f03a81ad58e468f
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
** Changed in: neutron
Assignee
is load-balanced also using the members that are in ERROR status.
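The expected behaviour can be sketched as a simple filter. Names here are illustrative, not the actual ovn-octavia-provider API:

```python
# Sketch of the expected behaviour (illustrative names, not the actual
# ovn-octavia-provider API): skip members in ERROR status when building
# the OVN load balancer's vip -> backends mapping.

def build_vip_backends(vip, members):
    healthy = [m for m in members
               if m.get("provisioning_status") != "ERROR"]
    backends = ",".join("%s:%s" % (m["address"], m["port"])
                        for m in healthy)
    return {vip: backends}
```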
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)
ld keep its previous value
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)
the LS, leading to the removal of the LB from the OVN SB DB, and
consequently breaking the connectivity, as the flows for the LB are not
installed.
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
** Changed in: neutron
Assignee
Public bug reported:
In order to have ovn-lb properly configured, the load balancer needs to
be included in all the logical switches connected to the logical router
that the VIP and the members are attached to.
With that, consider the next scenario:
- LR1
- LS1 and LS2 connected to LR1
- OVN-LB1
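The rule above amounts to a trivial lookup; here is a sketch on an assumed dict-based topology model (purely illustrative):

```python
# Sketch of the rule above, on an assumed dict-based topology model:
# the OVN LB must be present in every logical switch connected to the
# logical router hosting the VIP and the members.

def switches_for_lb(topology, router):
    """Return all logical switches where the LB must be configured."""
    return set(topology.get(router, ()))

# With LS1 and LS2 connected to LR1, OVN-LB1 must be added to both:
# switches_for_lb({"LR1": {"LS1", "LS2"}}, "LR1") -> {"LS1", "LS2"}
```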
-provider/src/commit/acbf6e7f3e223c088582390475c84464bc27227d/ovn_octavia_provider/event.py#L39
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: In Progress
This is not a limitation. The failover action is already properly handled
to state that it is not supported. But this is not due to a limitation in
the ovn-octavia driver; rather, this functionality is not needed at all
(I would say this is an improvement). In the Amphora case you have a VM that
needs to
but there is no connectivity to
the member (as it does not belong to the obtained subnet).
An extra check to ensure the VIP CIDR includes the member IP should
be added.
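The suggested extra check can be sketched with the standard library (the helper name is an assumption, not the driver's API):

```python
# The extra check suggested above, sketched with the stdlib: verify the
# member IP actually falls inside the VIP subnet CIDR before assuming
# connectivity. Helper name is illustrative.
import ipaddress

def member_in_vip_cidr(vip_cidr, member_ip):
    return ipaddress.ip_address(member_ip) in ipaddress.ip_network(vip_cidr)
```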
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: In Progress
: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: In Progress
https://bugs.launchpad.net/bugs/1965732
Title:
loadbalancer
: patch
up : false
virtual_parent : []
Expected results:
In the example, IP 20.0.0.98 should not be there, as it belongs to a
tenant network that should not be advertised (GARP) in the provider network.
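The expected filtering can be sketched with the standard library (illustrative helper, not the actual OVN logic): only addresses inside the provider network's CIDRs should be announced via GARP.

```python
# Sketch of the expected filtering (illustrative, not the actual OVN
# logic): only addresses inside the provider network's CIDRs should be
# announced via GARP; tenant IPs like 20.0.0.98 must be excluded.
import ipaddress

def garp_addresses(candidate_ips, provider_cidrs):
    nets = [ipaddress.ip_network(c) for c in provider_cidrs]
    return [ip for ip in candidate_ips
            if any(ipaddress.ip_address(ip) in n for n in nets)]
```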
** Affects: neutron
Importance: Undecided
Statu
Yes, this is what I did in this (partial) backport:
https://review.opendev.org/c/openstack/networking-ovn/+/831349/
** Changed in: neutron
Status: Confirmed => Fix Released
: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)
This bug is the same as https://bugs.launchpad.net/neutron/+bug/1959903
(or a subset of it).
The fix at https://review.opendev.org/c/openstack/ovn-octavia-provider/+/827670
also solves this problem.
** Changed in: neutron
Status: In Progress => Fix Released
Public bug reported:
Creating a Fully Populated Load Balancer with OVN as provider
successfully creates the LB but the pool stays in PENDING_CREATE state
forever.
How reproducible:
Send the fully populated request to the API in JSON format following
Importance: Undecided
Assignee: Luis Tomas Bolivar (ltomasbo)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)
** Affects: neutron
Importance: Undecided
Assignee: Luis Tomas Bolivar
Public bug reported:
When doing bulk port creation requests, the returned object does not have
the binding information, unlike the 'standard' port creation.
For a single (standard) port creation with:
neutron_client.create_port(rq).get('port')
where rq has:
{'port': {'device_owner':
Public bug reported:
This is not required by business logic as pointed out in the commit
message of https://review.openstack.org/#/c/368289/, and in some
cases can lead to race problems.
There may be other projects using Neutron trunk ports, such as kuryr,
that can trigger the port/subport
Public bug reported:
When using QoS together with Neutron trunk ports, the max bandwidth
limits are not applied, neither for ovs-hybrid nor for ovs-firewall.
The reason is that a new OVS bridge is created to handle the trunk
(parent + subport) ports.
For instance:
Bridge "tbr-c5402c58-3"
Public bug reported:
During the live migration process the progress_watermark/progress_time
are not being (re)updated with the new progress made by the live
migration at the "_live_migration_monitor" function
(virt/libvirt/driver.py).
More specifically, in these lines of code:
if
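The intended monitor behaviour can be sketched as follows. Variable names are assumptions, simplified from the logic in nova's virt/libvirt/driver.py, not the exact code:

```python
# Simplified sketch of the intended monitor behaviour (variable names
# assumed, not Nova's exact code): the watermark should be refreshed
# whenever the remaining data shrinks, so the progress timer only
# expires when no progress is being made.
import time

def update_watermark(progress_watermark, progress_time, data_remaining,
                     now=None):
    now = time.monotonic() if now is None else now
    if progress_watermark is None or data_remaining < progress_watermark:
        # New low-water mark: record it and reset the progress timer.
        return data_remaining, now
    return progress_watermark, progress_time
```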