[Yahoo-eng-team] [Bug 1988026] Re: Neutron should not create security group with project==None

2022-09-01 Thread Jeremy Stanley
** Also affects: ossa
   Importance: Undecided
   Status: New

** Information type changed from Public to Public Security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988026

Title:
  Neutron should not create security group with project==None

Status in neutron:
  New
Status in OpenStack Security Advisory:
  New

Bug description:
  When a non-admin user tries to list security groups for project_id
  "None", Neutron creates a default security group for that project and
  returns an empty list to the caller.

  To reproduce:

  openstack --os-cloud devstack security group list --project None
  openstack --os-cloud devstack-admin security group list

  The API call that is made is essentially

  GET /networking/v2.0/security-groups?project_id=None

  The expected result would be an authorization failure, since normal
  users should not be allowed to list security groups for other
  projects.
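
  For context, the default group appears because the security-group listing
  path ensures a default group exists for whatever project_id the filter
  names, before any ownership check is applied. A minimal runnable sketch of
  that suspected pattern (hypothetical names, not neutron's actual code):

  ```
  # Hypothetical sketch of the suspected pattern; not neutron's actual code.
  _default_groups = {}  # project_id -> default security group

  def ensure_default_security_group(project_id):
      # Side effect before any authorization check: a default group is
      # created for an arbitrary, possibly nonexistent, project.
      _default_groups.setdefault(
          project_id, {'name': 'default', 'project_id': project_id})

  def list_security_groups(filters):
      project_id = filters.get('project_id')  # the literal string "None" here
      if project_id:
          ensure_default_security_group(project_id)
      return []  # the caller sees none of the other project's groups

  list_security_groups({'project_id': 'None'})
  print(_default_groups)  # a default group now exists for project "None"
  ```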

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988026/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988482] [NEW] stable/yoga gate blocker: failures in py39 unit tests

2022-09-01 Thread Jay Faulkner
Public bug reported:

All tests in nova.tests.unit.cmd.test_manage.DbCommandsTestCase fail, as
well as
nova.tests.unit.cmd.test_manage.UtilitiesTestCase.test_format_dict.

One specific example:
```
ft2.1: nova.tests.unit.cmd.test_manage.UtilitiesTestCase.test_format_dict
testtools.testresult.real._StringException:
pythonlogging:'': {{{
2022-09-01 17:37:16,772 WARNING [oslo_policy.policy] JSON formatted policy_file 
support is deprecated since Victoria release. You need to use YAML format which 
will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool 
to convert existing JSON-formatted policy file to YAML-formatted in backward 
compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
2022-09-01 17:37:16,773 WARNING [oslo_policy.policy] JSON formatted policy_file 
support is deprecated since Victoria release. You need to use YAML format which 
will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool 
to convert existing JSON-formatted policy file to YAML-formatted in backward 
compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
2022-09-01 17:37:16,774 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_admin_api', 'project_member_api', 'project_reader_api', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell'] specified in policy files are the same as 
the defaults provided by the service. You can remove these rules from policy 
files which will make maintenance easier. You can detect these redundant rules 
by ``oslopolicy-list-redundant`` tool also.
}}}

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/cmd/test_manage.py", line 55, in test_format_dict
    self.assertEqual(
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/py39/lib/python3.9/site-packages/testtools/testcase.py", line 393, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/py39/lib/python3.9/site-packages/testtools/testcase.py", line 480, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = '''\
+----------+----------------------+
| Property | Value                |
+----------+----------------------+
| bing     | bat                  |
| foo      | bar                  |
| test     | {'a nested': 'dict'} |
| wow      | a multiline          |
|          | string               |
+----------+----------------------+'''
actual= '''\
+----------+----------------------+
| Property |        Value         |
+----------+----------------------+
| bing     | bat                  |
| foo      | bar                  |
| test     | {'a nested': 'dict'} |
| wow      | a multiline          |
|          | string               |
+----------+----------------------+'''

```

This is failing on at least two different changes that are completely unrelated 
and are unlikely to be causing these failures:
- https://review.opendev.org/c/openstack/nova/+/855025
- https://review.opendev.org/c/openstack/nova/+/854257


I was unable to create an opensearch query to find all these failures, but
https://tinyurl.com/mr3pjzyn catches related failures in cross-nova-py38.
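
The only difference between the reference and actual output is the header
alignment, which looks consistent with a behavior change in the PrettyTable
library that produces this table. A quick local check (a sketch, assuming the
prettytable package used by the failing environment is importable):

```
# Sketch: print the installed prettytable version and see whether headers
# are centered while data cells stay left-aligned, matching the diff above.
from importlib.metadata import version

import prettytable

table = prettytable.PrettyTable(["Property", "Value"])
table.align = "l"  # left-align cell contents; header alignment may differ
table.add_row(["foo", "bar"])
table.add_row(["test", "{'a nested': 'dict'}"])

print("prettytable", version("prettytable"))
print(table)
```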


I was able to reproduce the failure in test_format_dict locally by downloading
854257 and running tox -e py39. Output from that reproduction is:

```
nova.tests.unit.cmd.test_manage.UtilitiesTestCase.test_format_dict
----------------------------------------------------------------------

Captured traceback:
~~~
Traceback (most recent call last):

  File "/home/jay/dev/nova/nova/tests/unit/cmd/test_manage.py", line 55, in 
test_format_dict
self.assertEqual(

  File 
"/home/jay/dev/nova/.tox/py39/lib/python3.9/site-packages/testtools/testcase.py",
 line 393, in assertEqual
self.assertThat(observed, matcher, message)

  File 
"/home/jay/dev/nova/.tox/py39/lib/python3.9/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error

testtools.matchers._impl.MismatchError: !=:
reference = '''\
+----------+----------------------+
| Property | Value                |
+----------+----------------------+
| bing     | bat                  |
| foo      | bar                  |
| test     | {'a nested': 'dict'} |
| wow      | a multiline          |
|          | string               |
+----------+----------------------+'''
actual= '''\
+----------+----------------------+
| Property |        Value         |
+----------+----------------------+
| bing     | bat                  |
| foo      | bar                  |
| test     | {'a nested': 'dict'} |
| wow      | a multiline          |
|          | string               |
+----------+----------------------+'''
```

[Yahoo-eng-team] [Bug 1988421] [NEW] Duplicate indexes in table ports of neutron database

2022-09-01 Thread Christian Rohmann
Public bug reported:

Currently a unique constraint and an additional index are configured on the
same columns.
This is why sqlalchemy logs a warning during database migrations, displaying
the following information:

```
/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (1831,
'Duplicate index `uniq_instances0uuid`. This is deprecated and will be
disallowed in a future release')
result = self._query(query)
```
(This example is actually taken from nova output, but it looks just the same
for Keystone. The same issue also exists within the Nova schemas, see bug
https://bugs.launchpad.net/nova/+bug/1641185.)

From my understanding of the MySQL ([1], [2]) and PostgreSQL ([3], [4])
documentation, a unique constraint automatically creates an index for the
constrained column(s). So there should be no need to create an additional
index on the same columns:

```
Table: ports 
(https://opendev.org/openstack/neutron/src/commit/732c1dcbc2fe95bc3d8b6a61b124d59595958b4f/neutron/db/models_v2.py#L128)

Columns: network_id, mac_address
Indexes:
Unique Constraint: uniq_ports0network_id0mac_address
Index: ix_ports_network_id_mac_address
```
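
As an illustrative sketch of the redundancy (plain SQLAlchemy, not the actual
neutron model definition): the UniqueConstraint already yields a unique index
in both MySQL and PostgreSQL, so the explicit Index on the same columns could
simply be dropped.

```
# Illustrative model, not neutron's: the UniqueConstraint below already
# creates a unique index on (network_id, mac_address) in MySQL/PostgreSQL.
import sqlalchemy as sa

metadata = sa.MetaData()

ports = sa.Table(
    "ports", metadata,
    sa.Column("id", sa.String(36), primary_key=True),
    sa.Column("network_id", sa.String(36)),
    sa.Column("mac_address", sa.String(32)),
    sa.UniqueConstraint("network_id", "mac_address",
                        name="uniq_ports0network_id0mac_address"),
    # Redundant: duplicates the index the unique constraint already provides.
    # sa.Index("ix_ports_network_id_mac_address", "network_id", "mac_address"),
)

print(sa.schema.CreateTable(ports))
```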

[1] 
https://dev.mysql.com/doc/refman/8.0/en/create-index.html#create-index-unique
[2] https://dev.mysql.com/doc/refman/8.0/en/mysql-indexes.html
[3] 
https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-UNIQUE-CONSTRAINTS
[4] https://www.postgresql.org/docs/current/indexes-types.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988421

Title:
   Duplicate indexes in table ports of neutron database

Status in neutron:
  New

Bug description:
  Currently a unique constraint and an additional index are configured on
  the same columns.
  This is why sqlalchemy logs a warning during database migrations,
  displaying the following information:

  ```
  /usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (1831,
  'Duplicate index `uniq_instances0uuid`. This is deprecated and will be
  disallowed in a future release')
  result = self._query(query)
  ```
  (This example is actually taken from nova output, but it looks just the
  same for Keystone. The same issue also exists within the Nova schemas,
  see bug https://bugs.launchpad.net/nova/+bug/1641185.)

  From my understanding of the MySQL ([1], [2]) and PostgreSQL ([3], [4])
  documentation, a unique constraint automatically creates an index for
  the constrained column(s). So there should be no need to create an
  additional index on the same columns:

  ```
  Table: ports 
(https://opendev.org/openstack/neutron/src/commit/732c1dcbc2fe95bc3d8b6a61b124d59595958b4f/neutron/db/models_v2.py#L128)

  Columns: network_id, mac_address
  Indexes:
  Unique Constraint: uniq_ports0network_id0mac_address
  Index: ix_ports_network_id_mac_address
  ```

  [1] 
https://dev.mysql.com/doc/refman/8.0/en/create-index.html#create-index-unique
  [2] https://dev.mysql.com/doc/refman/8.0/en/mysql-indexes.html
  [3] 
https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-UNIQUE-CONSTRAINTS
  [4] https://www.postgresql.org/docs/current/indexes-types.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988421/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1879878] Re: VM become Error after confirming resize with Error info CPUUnpinningInvalid on source node

2022-09-01 Thread kevinzhao
** Changed in: nova/train
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1879878

Title:
  VM become Error after confirming resize with Error info
  CPUUnpinningInvalid on source node

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  Fix Released
Status in OpenStack Compute (nova) ussuri series:
  Fix Released

Bug description:
  Description
  ===========

  In my environment, it can take some time to clean up the VM on the source
  node while confirming a resize.
  During the confirm-resize process, the periodic task
  update_available_resource may update resource usage at the same time.
  This can cause an ERROR like:
  CPUUnpinningInvalid: CPU set to unpin [1, 2, 18, 17] must be a subset of
  pinned CPU set []

  Steps to reproduce
  ==================
  * Set "update_resources_interval" in /etc/nova/nova.conf on the compute
  nodes to a small value, let's say 30 seconds. This step will increase the
  probability of hitting the error.

  * Create a "dedicated" VM; the flavor can be:
  +----------------------------+--------------------------------------+
  | Property                   | Value                                |
  +----------------------------+--------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                    |
  | disk                       | 80                                   |
  | extra_specs                | {"hw:cpu_policy": "dedicated"}       |
  | id                         | 2be0f830-c215-4018-a96a-bee3e60b5eb1 |
  | name                       | 4vcpu.4mem.80ssd.0eph.numa           |
  | os-flavor-access:is_public | True                                 |
  | ram                        | 4096                                 |
  | rxtx_factor                | 1.0                                  |
  | swap                       |                                      |
  | vcpus                      | 4                                    |
  +----------------------------+--------------------------------------+

  * Resize the VM with a new flavor to another node.

  * Confirm the resize.
  Make sure it takes some time to undefine the VM on the source node; 30
  seconds will lead to inevitable results.

  * Then you will see the ERROR notice on the dashboard, and the VM goes
  to ERROR state.

  
  Expected result
  ===============
  VM resizes successfully and its state is active

  
  Actual result
  =============

  * VM goes to ERROR state

  * On the dashboard you can see this notice:
  Please try again later [Error: CPU set to unpin [1, 2, 18, 17] must be a
  subset of pinned CPU set []].
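
  To make the race concrete, here is a toy sketch (illustrative only, not
  nova code) of why the unpin fails: the periodic task has already rebuilt
  the resource tracker's pinned set without the old instance, so the
  confirm path tries to unpin CPUs that are no longer tracked:

  ```
  # Toy illustration of the failure mode; mirrors the subset check that
  # raises CPUUnpinningInvalid, not nova's actual implementation.
  class CPUUnpinningInvalid(Exception):
      pass

  def unpin_cpus(pinned, to_unpin):
      """Remove to_unpin from pinned, as a resource tracker would."""
      if not set(to_unpin).issubset(pinned):
          raise CPUUnpinningInvalid(
              "CPU set to unpin %s must be a subset of pinned CPU set %s"
              % (sorted(to_unpin), sorted(pinned)))
      return set(pinned) - set(to_unpin)

  # update_available_resource has already dropped the old instance's pins,
  # so the pinned set is empty when confirm_resize tries to unpin:
  unpin_cpus(set(), {1, 2, 17, 18})  # raises CPUUnpinningInvalid
  ```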


  Environment
  ===========
  1. Exact version of OpenStack you are running.

     Newton with patch https://review.opendev.org/#/c/641806/21 applied.
     I am sure it will also happen on newer versions that include
     https://review.opendev.org/#/c/641806/21, such as Train and Ussuri.

  2. Which hypervisor did you use?
 Libvirt + KVM

  3. Which storage type did you use?
 local disk

  4. Which networking type did you use?
 Neutron with OpenVSwitch

  Logs & Configs
  ==

  ERROR log on source node
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager 
[req-364606bb-9fa6-41db-a20e-6df9ff779934 b0887a73f3c1441686bf78944ee284d0 
95262f1f45f14170b91cd8054bb36512 - - -] [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c] Setting instance vm_state to ERROR
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c] Traceback (most recent call last):
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6661, in 
_error_out_instance_on_exception
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c] yield
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3444, in 
_confirm_resize
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c] prefix='old_')
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c]   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c] return f(*args, **kwargs)
  2020-05-15 10:11:12.324 425843 ERROR nova.compute.manager [instance: 
993138d6-4b80-4b19-81c1-a16dbc6e196c]   File 

[Yahoo-eng-team] [Bug 1988296] Re: neutron-lib pep8 CI failing with pylint==2.15.0

2022-09-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/855356
Committed: https://opendev.org/openstack/neutron-lib/commit/7026199065fc67977871c6f2521ce6334c20147a
Submitter: "Zuul (22348)"
Branch: master

commit 7026199065fc67977871c6f2521ce6334c20147a
Author: Rodolfo Alonso Hernandez 
Date:   Wed Aug 31 12:57:04 2022 +0200

Fix pep8 job issues with pylint==2.15.0

Closes-Bug: #1988296
Change-Id: I3a4a27ec4672f8ea8848d7c04651730dae6f40ff


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988296

Title:
  neutron-lib pep8 CI failing with pylint==2.15.0

Status in neutron:
  Fix Released

Bug description:
  Error: https://paste.opendev.org/show/bvMNn5rrf87eBwCVKV4W/

  pylint 2.15.0 released on August 26th, 2022:
  https://pypi.org/project/pylint/#history

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988296/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1986551] Re: Current kinetic ISO images not installable on s390x

2022-09-01 Thread Frank Heimes
I can confirm that this issue is fixed with the latest 'pending' ISO from today 
(Sept 1st):
https://cdimage.ubuntu.com/ubuntu-server/daily-live/pending/kinetic-live-server-s390x.iso
Many thx!

I tried that on two systems and was able to reach subiquity and complete
the initial basic network configuration.

I was able to complete the entire installation on one of the systems, but
faced another (independent) problem on the 2nd one - I will open a separate
bug for that (LP#1988407).

But this bug can be closed now as Fix Released.

** Changed in: ubuntu-z-systems
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1986551

Title:
  Current kinetic ISO images not installable on s390x

Status in cloud-init:
  Fix Released
Status in subiquity:
  Invalid
Status in Ubuntu on IBM z Systems:
  Fix Released

Bug description:
  While wanting to install a kinetic/22.10 system on s390x for testing new
  and updated packages,
  I found that the current daily ISO image for s390x is not installable -
  neither on LPAR nor on z/VM, neither interactively using subiquity nor
  non-interactively using autoinstall.

  I had the image from August 2nd and the installation ended at the console
  with these messages:
  ...
  [  OK  ] Started Time & Date Service.
  connecting...   - \ |
  waiting for cloud-init...   -

  It is possible to connect to the installer over the network, which
  might allow the use of a more capable terminal and can offer more languages
  than can be rendered in the Linux console.


  Unfortunately this system seems to have no global IP addresses at this
  time.

           Starting Time & Date Service...
  [  OK  ] Started Time & Date Service.
  [  OK  ] Finished Wait until snapd is fully seeded.
           Starting Apply the settings specified in cloud-config...
  [  OK  ] Started Subiquity, the installer for Ubuntu Server hvc0.
  [  OK  ] Started Subiquity, the installer for Ubuntu Server ttysclp0.
  [  OK  ] Reached target Login Prompts.
           Stopping OpenBSD Secure Shell server...
  [  OK  ] Stopped OpenBSD Secure Shell server.
           Starting OpenBSD Secure Shell server...
  [  OK  ] Started OpenBSD Secure Shell server.
  [  OK  ] Finished Apply the settings specified in cloud-config.
  [  OK  ] Reached target Multi-User System.
  [  OK  ] Reached target Graphical Interface.
           Starting Execute cloud user/final scripts...
           Starting Record Runlevel Change in UTMP...
  [  OK  ] Finished Record Runlevel Change in UTMP.
  [  OK  ] Finished Execute cloud user/final scripts.
  [  OK  ] Reached target Cloud-init target.
  ...

  Then I updated to the latest ISO from today (Aug 15th) and got the same:
  ...
  [  OK  ] Finished Holds Snappy daemon refresh.
  [  OK  ] Finished Service for snap application lxd.activate.
  [  OK  ] Started snap.lxd.hook.conf...-4b29-8a88-87b80c6b7318.scope.
  [  OK  ] Started snap.subiquity.hoo...-4a63-9355-e4654a5890c1.scope.
  [  OK  ] Started Service for snap application subiquity.subiquity-server.
  [  OK  ] Started Service for snap application subiquity.subiquity-service.
           Starting Time & Date Service...
  [  OK  ] Started Time & Date Service.
  connecting...   - \ |
  waiting for cloud-init...   - \

  It is possible to connect to the installer over the network, which
  might allow the use of a more capable terminal and can offer more languages
  than can be rendered in the Linux console.

  Unfortunately this system seems to have no global IP addresses at this
  time.
  ...

  Unfortunately I am not able to get any logs at that (very early) stage
  of the installation.

  On top I did a 22.04.1 installation on the same systems, using the
  same data (IP etc) which worked fine.

  (I kept one of the systems in that stage for now ...)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1986551/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988382] [NEW] L3 agent(agent_mode=dvr_snat) restart, fip namespace removed rfp-port, resulting in fip not connecting

2022-09-01 Thread liujinxin
Public bug reported:

stable/victoria

The OpenStack network node (agent_mode=dvr_snat) and the compute node are
the same node. The VM on this node is bound to a fip, but the snat_port of
this VM's router is located on another network node, and the VM can reach
north-south traffic via the fip. But if you restart the l3-agent,
external_gateway_removed is called during the restart, causing the fip on
that node to be unreachable.

https://github.com/openstack/neutron/blob/stable/victoria/neutron/agent/l3/dvr_edge_router.py#39

def external_gateway_added(self, ex_gw_port, interface_name):
    ...
    elif self.snat_namespace.exists():
        # This is the case where the snat was moved manually or
        # rescheduled to a different agent when the agent was dead.
        LOG.debug("SNAT was moved or rescheduled to a different host "
                  "and does not match with the current host. This is "
                  "a stale namespace %s and will be cleared from the "
                  "current dvr_snat host.", self.snat_namespace.name)
        self.external_gateway_removed(ex_gw_port, interface_name)

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  stable/victoria
  
- 
- openstack network node(agent_mode=dvr_snat) and compute node are the same 
node,the VM on this node is bound to fip, but the snat_port of the router of 
this VM is located in another network node,VM can access north-south traffic 
via fip.But if you restart the l3-agent,The external_gateway_removed is called 
during the reboot, causing the fip on that node to be unreachable
+ openstack network node(agent_mode=dvr_snat) and compute node are the
+ same node,the VM on this node is bound to fip, but the snat_port of the
+ router of this VM is located in another network node,VM can access
+ north-south traffic via fip.But if you restart the l3-agent,The
+ external_gateway_removed is called during l3-agent restart, causing the
+ fip on that node to be unreachable
  
  
https://github.com/openstack/neutron/blob/stable/victoria/neutron/agent/l3/dvr_edge_router.py
  #39
  
- def external_gateway_added(self, ex_gw_port, interface_name):
- elif self.snat_namespace.exists():
- # This is the case where the snat was moved manually or
- # rescheduled to a different agent when the agent was dead.
- LOG.debug("SNAT was moved or rescheduled to a different host "
-   "and does not match with the current host. This is "
-   "a stale namespace %s and will be cleared from the "
-   "current dvr_snat host.", self.snat_namespace.name)
- self.external_gateway_removed(ex_gw_port, interface_name)
+ def external_gateway_added(self, ex_gw_port, interface_name):
+ elif self.snat_namespace.exists():
+ # This is the case where the snat was moved manually or
+ # rescheduled to a different agent when the agent was dead.
+ LOG.debug("SNAT was moved or rescheduled to a different host "
+   "and does not match with the current host. This is "
+   "a stale namespace %s and will be cleared from the "
+   "current dvr_snat host.", self.snat_namespace.name)
+ self.external_gateway_removed(ex_gw_port, interface_name)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988382

Title:
  L3 agent(agent_mode=dvr_snat) restart, fip namespace removed rfp-port,
  resulting in fip not connecting

Status in neutron:
  New

Bug description:
  stable/victoria

  The OpenStack network node (agent_mode=dvr_snat) and the compute node
  are the same node. The VM on this node is bound to a fip, but the
  snat_port of this VM's router is located on another network node, and
  the VM can reach north-south traffic via the fip. But if you restart the
  l3-agent, external_gateway_removed is called during the restart, causing
  the fip on that node to be unreachable.

  https://github.com/openstack/neutron/blob/stable/victoria/neutron/agent/l3/dvr_edge_router.py#39

  def external_gateway_added(self, ex_gw_port, interface_name):
      ...
      elif self.snat_namespace.exists():
          # This is the case where the snat was moved manually or
          # rescheduled to a different agent when the agent was dead.
          LOG.debug("SNAT was moved or rescheduled to a different host "
                    "and does not match with the current host. This is "
                    "a stale namespace %s and will be cleared from the "
                    "current dvr_snat host.", self.snat_namespace.name)
          self.external_gateway_removed(ex_gw_port, interface_name)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988382/+subscriptions


--