[Yahoo-eng-team] [Bug 1930414] Re: Traffic leaked from dhcp port before vlan tag is applied

2022-01-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/820897
Committed: 
https://opendev.org/openstack/neutron/commit/7aae31c9f9ed938760ca0be3c461826b598c7004
Submitter: "Zuul (22348)"
Branch: master

commit 7aae31c9f9ed938760ca0be3c461826b598c7004
Author: Bence Romsics 
Date:   Tue Oct 5 17:02:41 2021 +0200

Make the dead vlan actually dead

All ports plugged into the dead vlan (DEAD_VLAN_TAG 4095 or 0xfff)
should not be able to send or receive traffic. We install a flow
to br-int to drop all traffic of the dead vlan [1]. However, before
this patch the flow we installed looked like:

priority=65535,vlan_tci=0x0fff/0x1fff actions=drop

This is wrong: it usually does not match anything.

According to ovs-fields (7) section Open vSwitch Extension VLAN Field,
VLAN TCI Field [2] (see especially the usage example
vlan_tci=0x1123/0x1fff) we need to explicitly set the bit 0x1000
to match the presence of an 802.1Q header.

With that bit set, the flow becomes:
priority=65535,vlan_tci=0x1fff/0x1fff actions=drop

which is equivalent to:
priority=65535,dl_vlan=4095 actions=drop

which should match and drop dead vlan traffic.
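
To make the bit arithmetic concrete, here is a small Python check of
why the old match never fires while the fixed one does (a sketch for
illustration; per ovs-fields (7), OVS represents a frame carrying an
802.1Q header with the 0x1000 bit set in vlan_tci):

    # vlan_tci of a frame tagged with the dead vlan, as OVS sees it:
    # the 0x1000 "802.1Q header present" bit plus the 12-bit VLAN ID.
    dead_vlan_tci = 0x1000 | 4095          # == 0x1fff

    def matches(tci, value, mask):
        """OpenFlow-style masked match on vlan_tci."""
        return (tci & mask) == (value & mask)

    assert not matches(dead_vlan_tci, 0x0fff, 0x1fff)  # old flow: no match
    assert matches(dead_vlan_tci, 0x1fff, 0x1fff)      # fixed flow: drops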

However, there's a second problem: OVS access ports were designed to
work together with the NORMAL action. The NORMAL action considers the
vlan of an access port, but the openflow pipeline does not. An openflow
rule does not see the vlan set for an access port, because that vlan
tag is only pushed to the frame if and when the frame leaves the switch
on a trunk port [3][4].

So we have to explicitly push the DEAD_VLAN_TAG if we want the dead
vlan's drop flow to match anything.

That means we add a flow pushing the dead vlan tag from the
dhcp-agent/l3-agent, and ovs-agent deletes that flow right after it
sets the port's vlan tag to a non-dead vlan. This is ugly, but we have
to add the flow as early as possible to minimize the window during
which frames can leak onto the dead vlan. Even with this change there
is a short time window in which traffic could theoretically leak onto
the dead vlan.

[1] 
https://opendev.org/openstack/neutron/src/commit/ecdc11a56448428f77f5a64fd028f1e4c9644ea3/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py#L60-L62
[2] http://www.openvswitch.org/support/dist-docs/ovs-fields.7.html
[3] 
https://mail.openvswitch.org/pipermail/ovs-discuss/2021-December/051647.html
[4] https://docs.openvswitch.org/en/latest/faq/vlan/
see 'Q: My OpenFlow controller doesn’t see the VLANs that I expect.'

Change-Id: Ib6b70114efb140cf1393b57ebc350fea4b0a2443
Closes-Bug: #1930414


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930414

Title:
  Traffic leaked from dhcp port before vlan tag is applied

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This is a bug with potential security implications. I don't see a
  clear way to exploit it at the moment, but to err on the safe side,
  I'm opening this as private to the security team.

  Short summary: Using openvswitch-agent, traffic sent on some (at least
  dhcp) ports before ovs-agent applies the port's vlan tag can be seen
  and intercepted on ports from other networks on the same integration
  bridge.

  We observed this bug:
  * using vlan and vxlan networks
  * using the noop and openvswitch firewall drivers
  * on openstack versions mitaka, pike and master (commit 5a6f61af4a)

  The time window between the port's creation and ovs-agent applying its
  vlan tag is usually very short. We observed this bug in the wild on a
heavily loaded host. However, to make the reproduction reliable on
  lightly loaded systems I inserted a sleep() into ovs-agent's source
  (just before the port's vlan tag is set):

  $ git --no-pager format-patch --stdout 5a6f61af4a
  From 8389b3e8e5c60c81ff2bb262e3ae2e8aab73d3f5 Mon Sep 17 00:00:00 2001
  From: Bence Romsics 
  Date: Mon, 31 May 2021 13:12:34 +0200
  Subject: [PATCH] WIP

  Change-Id: Ibef4278a2f6a85f52a8ffa43caef6de38cbb40cb
  ---
   .../plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py   | 1 +
   1 file changed, 1 insertion(+)

  diff --git a/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py b/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
  index 2c209bd387..355584b325 100644
  --- a/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
  +++ b/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
  @@ -1190,6 +1190,7 @@ class OVSNeutronAgent(l2population_rpc.L2populationRpcCallBackTunnelMixin,
   

[Yahoo-eng-team] [Bug 1958458] [NEW] Multiple GPU card bind to multiple vms

2022-01-19 Thread Satish Patel
Public bug reported:

I am running Wallaby and I have a compute node with two GPU cards. My
requirement is to create vm1 bound to GPU-1 and vm2 bound to GPU-2,
but I am getting an error.

[root@GPUN06 /]# lspci -nn | grep -i nv
5e:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100S PCIe 32GB] [10de:1df6] (rev a1)
d8:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100S PCIe 32GB] [10de:1df6] (rev a1)

[root@GPUN06 /]# cat /etc/modprobe.d/gpu-vfio.conf
options vfio-pci ids=10de:1df6

[root@GPUN06 /]# cat /etc/modules-load.d/vfio-pci.conf
vfio-pci


# Nova API config

[PCI]
alias: { "vendor_id":"10de", "product_id":"1df6", "device_type":"type-PCI", "name":"tesla-v100" }


# Flavor 
openstack flavor create --vcpus 4 --ram 8192 --disk 40 --property "pci_passthrough:alias"="tesla-v100:1" --property gpu-node=true g1.small


I can successfully spin up the first GPU VM, bound to a single GPU card,
but when I create the second VM I get the following error from libvirt:

error : virDomainDefDuplicateHostdevInfoValidate:1082 : XML error:
Hostdev already exists in the domain configuration

It looks like libvirt or nova does not understand that a second GPU
card is available.


# If I set "pci_passthrough:alias"="tesla-v100:2" in the flavor, I can
bind both GPU cards to a single VM.

libvirt version: 7.6.0
Openstack version: Wallaby
Distro: CentOS 8 stream

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1958458

Title:
  Multiple GPU card bind to multiple vms

Status in OpenStack Compute (nova):
  New

Bug description:
  I am running Wallaby and I have a compute node with two GPU cards.
  My requirement is to create vm1 bound to GPU-1 and vm2 bound to
  GPU-2, but I am getting an error.

  [root@GPUN06 /]# lspci -nn | grep -i nv
  5e:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100S PCIe 32GB] [10de:1df6] (rev a1)
  d8:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100S PCIe 32GB] [10de:1df6] (rev a1)

  [root@GPUN06 /]# cat /etc/modprobe.d/gpu-vfio.conf
  options vfio-pci ids=10de:1df6

  [root@GPUN06 /]# cat /etc/modules-load.d/vfio-pci.conf
  vfio-pci

  
  # Nova API config

  [PCI]
  alias: { "vendor_id":"10de", "product_id":"1df6", "device_type":"type-PCI", "name":"tesla-v100" }

  
  # Flavor 
  openstack flavor create --vcpus 4 --ram 8192 --disk 40 --property "pci_passthrough:alias"="tesla-v100:1" --property gpu-node=true g1.small

  
  I can successfully spin up the first GPU VM, bound to a single GPU
  card, but when I create the second VM I get the following error from
  libvirt:

  error : virDomainDefDuplicateHostdevInfoValidate:1082 : XML error:
  Hostdev already exists in the domain configuration

  It looks like libvirt or nova does not understand that a second GPU
  card is available.

  
  # If I set "pci_passthrough:alias"="tesla-v100:2" in the flavor, I
  can bind both GPU cards to a single VM.

  libvirt version: 7.6.0
  Openstack version: Wallaby
  Distro: CentOS 8 stream

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1958458/+subscriptions




[Yahoo-eng-team] [Bug 1940812] Re: UnboundLocalError: local variable 'instance_uuid' referenced before assignment

2022-01-19 Thread melanie witt
** Also affects: nova/xena
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1940812

Title:
  UnboundLocalError: local variable 'instance_uuid' referenced before
  assignment

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) xena series:
  New

Bug description:
  There is an unbound variable in [1], causing:

  
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi [None req-b45811ea-c6f9-4422-8cf1-8970273587bc tempest-DeleteServersTestJSON-1241246976 tempest-DeleteServersTestJSON-1241246976-project] Unexpected exception in API method: UnboundLocalError: local variable 'instance_uuid' referenced before assignment
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi Traceback (most recent call last):
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/objects/instance.py", line 653, in destroy
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi db_inst = db.instance_destroy(self._context, self.uuid,
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/db/utils.py", line 35, in wrapper
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi return f(*args, **kwargs)
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/usr/local/lib/python3.8/dist-packages/oslo_db/api.py", line 154, in wrapper
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi ectxt.value = e.inner_exc
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi self.force_reraise()
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi raise self.value
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/usr/local/lib/python3.8/dist-packages/oslo_db/api.py", line 142, in wrapper
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi return f(*args, **kwargs)
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/db/main/api.py", line 190, in wrapper
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi return f(context, *args, **kwargs)
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/db/main/api.py", line 1294, in instance_destroy
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi raise exception.ConstraintNotMet()
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi nova.exception.ConstraintNotMet: Constraint not met.
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi During handling of the above exception, another exception occurred:
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi Traceback (most recent call last):
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484 devstack@n-api.service[111846]: ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/compute/api.py", line 2248, in _delete
  Aug 20 14:57:42.704720 ubuntu-focal-inap-mtl01-0026012484
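
  For reference, the failure mode is the classic Python pattern
  sketched below (a minimal sketch, not nova's actual code):

      class NotFound(Exception):
          pass

      def lookup(ref):
          # stand-in for the DB call that raised ConstraintNotMet above
          raise NotFound(ref)

      def delete_instance(ref):
          try:
              instance = lookup(ref)        # raises before the next line
              instance_uuid = instance.uuid
          except NotFound:
              # UnboundLocalError: local variable 'instance_uuid'
              # referenced before assignment
              print("already deleted: %s" % instance_uuid)

      delete_instance("some-uuid")  # reproduces the UnboundLocalError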

[Yahoo-eng-team] [Bug 1958363] [NEW] Notifications to nova disabled causing tests failures

2022-01-19 Thread Slawek Kaplonski
Public bug reported:

For some reason (I have no idea why), during the scenario tests neutron
stops sending notifications about port status changes to nova. In the
neutron server logs there is info that notifications are disabled,
like:

Jan 18 12:05:45.415318 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron.db.provisioning_blocks [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Provisioning complete for port 52a33477-c818-456c-b47f-a323e04cce42 triggered by entity L2. {{(pid=7) provisioning_complete /opt/stack/neutron/neutron/db/provisioning_blocks.py:139}}
Jan 18 12:05:45.415822 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron_lib.callbacks.manager [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Publish callbacks ['neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned-1745609'] for port (52a33477-c818-456c-b47f-a323e04cce42), provisioning_complete {{(pid=7) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
Jan 18 12:05:45.483625 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron.plugins.ml2.plugin [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Current status of the port 52a33477-c818-456c-b47f-a323e04cce42 is: DOWN; New status is: ACTIVE {{(pid=7) _update_individual_port_db_status /opt/stack/neutron/neutron/plugins/ml2/plugin.py:2213}}
Jan 18 12:05:45.484006 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron_lib.callbacks.manager [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Publish callbacks ['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-1688939', 'neutron.services.qos.qos_plugin.QoSPlugin._check_port_for_placement_allocation_change-102324'] for port (52a33477-c818-456c-b47f-a323e04cce42), before_update {{(pid=7) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
Jan 18 12:05:45.514465 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron.notifiers.nova [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Nova notifier disabled {{(pid=7) _can_notify /opt/stack/neutron/neutron/notifiers/nova.py:180}}
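
The message comes from a guard in neutron/notifiers/nova.py of roughly
this shape (a sketch for orientation only, not the actual code):

    def _can_notify(self, port):
        # while the enabled flag is off, the notifier short-circuits,
        # so no port status change event reaches nova
        if not self.enabled:              # assumed attribute name
            LOG.debug("Nova notifier disabled")
            return False
        return True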

Example of failure:
https://0cc0fb9d5c802f1c7b9c-6aa1154e79cc29d864ed6f661bf68125.ssl.cf5.rackcdn.com/824982/2/check/neutron-ovs-tempest-multinode-full/1e135d1/testr_results.html

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958363

Title:
  Notifications to nova disabled causing tests failures

Status in neutron:
  Confirmed

Bug description:
  For some reason (I have no idea why), during the scenario tests
  neutron stops sending notifications about port status changes to
  nova. In the neutron server logs there is info that notifications are
  disabled, like:

  Jan 18 12:05:45.415318 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron.db.provisioning_blocks [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Provisioning complete for port 52a33477-c818-456c-b47f-a323e04cce42 triggered by entity L2. {{(pid=7) provisioning_complete /opt/stack/neutron/neutron/db/provisioning_blocks.py:139}}
  Jan 18 12:05:45.415822 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron_lib.callbacks.manager [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Publish callbacks ['neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned-1745609'] for port (52a33477-c818-456c-b47f-a323e04cce42), provisioning_complete {{(pid=7) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
  Jan 18 12:05:45.483625 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron.plugins.ml2.plugin [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Current status of the port 52a33477-c818-456c-b47f-a323e04cce42 is: DOWN; New status is: ACTIVE {{(pid=7) _update_individual_port_db_status /opt/stack/neutron/neutron/plugins/ml2/plugin.py:2213}}
  Jan 18 12:05:45.484006 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron_lib.callbacks.manager [None req-b33edd99-de3e-4c86-9fb8-405862418a59 None None] Publish callbacks ['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-1688939', 'neutron.services.qos.qos_plugin.QoSPlugin._check_port_for_placement_allocation_change-102324'] for port (52a33477-c818-456c-b47f-a323e04cce42), before_update {{(pid=7) _notify_loop /usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
  Jan 18 12:05:45.514465 ubuntu-focal-inmotion-iad3-0028053821 neutron-server[7]: DEBUG neutron.notifiers.nova [None

[Yahoo-eng-team] [Bug 1958364] [NEW] [ovn]Set NB/SB connection inactivity_probe does not work for cluster

2022-01-19 Thread ZhouHeng
Public bug reported:

If OVN is a single node, we set the config like:
[ovn]
ovn_nb_connection = tcp:100.2.223.2:6641
ovn_sb_connection = tcp:100.2.223.2:6642
The NB/SB connection inactivity_probe is then set correctly.

When OVN is in a cluster deployment, with config like:

[ovn]
ovn_nb_connection = tcp:100.2.223.2:6641,tcp:100.2.223.12:6641,tcp:100.2.223.30:6641
ovn_sb_connection = tcp:100.2.223.2:6642,tcp:100.2.223.12:6642,tcp:100.2.223.30:6642

setting the NB/SB connection inactivity_probe does not work correctly.
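
Presumably the handling needs to treat the option as a list of targets,
one Connection record per endpoint. A minimal sketch of the expected
behaviour (set_inactivity_probe() is a hypothetical stand-in for the
ovsdb update):

    def set_inactivity_probe(target, probe_ms):
        """Hypothetical helper: update the Connection row for one target."""
        print("would set inactivity_probe=%d on %s" % (probe_ms, target))

    conn = "tcp:100.2.223.2:6641,tcp:100.2.223.12:6641,tcp:100.2.223.30:6641"
    for target in conn.split(','):
        # every endpoint in the cluster needs its own probe setting
        set_inactivity_probe(target, 60000)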

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958364

Title:
  [ovn]Set NB/SB connection inactivity_probe does not work for cluster

Status in neutron:
  In Progress

Bug description:
  If OVN is a single node, we set the config like:
  [ovn]
  ovn_nb_connection = tcp:100.2.223.2:6641
  ovn_sb_connection = tcp:100.2.223.2:6642
  The NB/SB connection inactivity_probe is then set correctly.

  When OVN is in a cluster deployment, with config like:

  [ovn]
  ovn_nb_connection = tcp:100.2.223.2:6641,tcp:100.2.223.12:6641,tcp:100.2.223.30:6641
  ovn_sb_connection = tcp:100.2.223.2:6642,tcp:100.2.223.12:6642,tcp:100.2.223.30:6642

  setting the NB/SB connection inactivity_probe does not work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958364/+subscriptions




[Yahoo-eng-team] [Bug 1958355] [NEW] [OVN][SRIOV] Not support create port which is type of external_port backed geneve network

2022-01-19 Thread Liu Xie
Public bug reported:

external_port is only supported when the network is of vlan type, not
geneve. So we should return an error when creating an SR-IOV NIC port
backed by a tunnel network.
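
A minimal sketch of the validation this asks for (names illustrative,
not neutron's actual code):

    TUNNEL_TYPES = ('geneve', 'vxlan', 'gre')

    def validate_external_port(network_type):
        # reject external_port creation on tunnelled networks up front
        if network_type in TUNNEL_TYPES:
            raise ValueError("external_port is only supported on vlan "
                             "networks, not %s" % network_type)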

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958355

Title:
  [OVN][SRIOV] Not support create port which is type of external_port
  backed geneve network

Status in neutron:
  New

Bug description:
  external_port is only supported when the network is of vlan type,
  not geneve. So we should return an error when creating an SR-IOV NIC
  port backed by a tunnel network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958355/+subscriptions




[Yahoo-eng-team] [Bug 1958353] [NEW] [ovn]Gateway port is down after gateway chassis changed

2022-01-19 Thread ZhouHeng
Public bug reported:

ml2 driver: ovn
gateway chassis: nodeA, nodeB, nodeC

step 1: create a router and set its gateway; the gateway port status is active
and it is bound to nodeA.
step 2: bring nodeA down; the gateway port status changes to down and it is
bound to nodeB.
step 3: bring nodeB down; the gateway port status changes to active and it is
bound to nodeC.

Gateway port status should always be active!

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958353

Title:
  [ovn]Gateway port is down after gateway chassis changed

Status in neutron:
  In Progress

Bug description:
  ml2 driver: ovn
  gateway chassis: nodeA, nodeB, nodeC

  step 1: create a router and set its gateway; the gateway port status is
  active and it is bound to nodeA.
  step 2: bring nodeA down; the gateway port status changes to down and it is
  bound to nodeB.
  step 3: bring nodeB down; the gateway port status changes to active and it
  is bound to nodeC.

  Gateway port status should always be active!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958353/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp