[Yahoo-eng-team] [Bug 1919369] Re: Instances panel shows some readable flavors as 'Not available '

2021-08-16 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1919369

Title:
  Instances panel shows some readable flavors as 'Not available '

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  As we move towards having 'reader' roles in nova, we can get into some
  interesting situations where a user can see a resource but not use it.

  Today I have a case where I've added a private flavor to projects A
  and B, created a VM in project B, and then removed the flavor from
  project B.

  I belong to both projects, so in some contexts I can still see the
  flavor.  In the Horizon instance view for project A, though, the
  flavor is shown as 'not available'.

  That's reasonable behavior, but it happens to be unnecessary. The code
  change to make the flavor appear for that VM is trivial, and doesn't
  affect the ability to create VMs with the removed flavor (which would
  be bad).

  Supporting this case also provides a potential solution to the issue
  raised in bug 1259262, phasing out a flavor without causing VMs to no
  longer know what size they are; flavors can be moved out of scope for a
  project (thus preventing their re-use for new instances) but still
  remain viewable by the project.
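  The kind of change involved can be sketched as below (illustrative
  names only, not the actual Horizon code): when resolving an instance's
  flavor for display, fall back to a read-only by-ID lookup that can
  still see flavors removed from the project's access list, before
  giving up with 'Not available'.

```python
# Illustrative sketch, not actual Horizon code: resolve a flavor name
# for display, falling back from the project-scoped flavor list to a
# direct by-ID lookup before rendering "Not available".
NOT_AVAILABLE = "Not available"

def flavor_name_for_instance(instance, project_flavors, get_flavor_by_id):
    """project_flavors: dict of flavor_id -> flavor visible to the project.

    get_flavor_by_id: read-only lookup that may see out-of-scope flavors;
    raises KeyError when the flavor truly does not exist. Showing the name
    this way does not re-enable creating new instances with the flavor.
    """
    flavor_id = instance["flavor_id"]
    flavor = project_flavors.get(flavor_id)
    if flavor is None:
        try:
            flavor = get_flavor_by_id(flavor_id)
        except KeyError:
            return NOT_AVAILABLE
    return flavor["name"]
```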

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1919369/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928211] Re: Remove quota "ConfDriver", deprecated in Liberty

2021-08-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/790999
Committed: https://opendev.org/openstack/neutron/commit/ad31c58d60142cffcdea86d0257dc10277b53ff0
Submitter: "Zuul (22348)"
Branch: master

commit ad31c58d60142cffcdea86d0257dc10277b53ff0
Author: Rodolfo Alonso Hernandez 
Date:   Wed May 12 13:28:36 2021 +

Remove ``ConfDriver`` code

The quota driver ``ConfDriver`` was deprecated in Liberty release.

``NullQuotaDriver`` is created for testing, although it could be used
in production if no quota enforcement is needed. However, because
the Quota engine is not pluggable (it is an extension that is always
loaded), it could be worthwhile to make it pluggable like any other
plugin.

This patch also creates a Quota engine driver API class that should be
used in any Quota engine driver. Currently it is used in the three
in-tree drivers implemented: ``NullQuotaDriver``, ``DbQuotaDriver``
and ``DbQuotaNoLockDriver``.

Change-Id: Ib4af80e18fac52b9f68f26c84a215415e63c2822
Closes-Bug: #1928211
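The driver-API-plus-null-driver structure described above can be sketched
roughly as follows (class and method names are illustrative, not the
actual neutron interfaces): a common base class that every quota driver
implements, and a no-op driver that reports every limit as unlimited.

```python
# Hypothetical sketch of a quota engine driver API with a "null" driver;
# names are illustrative, not the actual neutron classes.
import abc

class QuotaDriverAPI(abc.ABC):
    @abc.abstractmethod
    def get_tenant_quotas(self, context, resources, project_id):
        """Return the quota limits for one project."""

    @abc.abstractmethod
    def limit_check(self, context, project_id, resources, values):
        """Raise an over-quota error if any requested value exceeds its limit."""

class NullQuotaDriver(QuotaDriverAPI):
    """Never enforces anything: every limit is reported as unlimited."""

    def get_tenant_quotas(self, context, resources, project_id):
        # -1 conventionally means "unlimited" for quota values.
        return {name: -1 for name in resources}

    def limit_check(self, context, project_id, resources, values):
        return None  # no enforcement
```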


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928211

Title:
  Remove quota "ConfDriver", deprecated in Liberty

Status in neutron:
  Fix Released

Bug description:
  Remove quota "ConfDriver" code because it was deprecated in Liberty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1928211/+subscriptions




[Yahoo-eng-team] [Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-16 Thread Chris MacNaughton
This bug was fixed in the package neutron - 2:15.3.4-0ubuntu1~cloud1
---

 neutron (2:15.3.4-0ubuntu1~cloud1) bionic-train; urgency=medium
 .
   * d/p/revert-l3-ha-retry-when-setting-ha-router-gw-status.patch: Revert
 upstream patch that introduced regression that prevented full restore
 of HA routers on restart of L3 agent (LP: #1927868).


** Changed in: cloud-archive/train
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in Ubuntu Cloud Archive xena series:
  Fix Released
Status in neutron:
  New
Status in oslo.privsep:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Released
Status in neutron source package in Hirsute:
  Fix Released
Status in neutron source package in Impish:
  Fix Released

Bug description:
  We run a juju-managed OpenStack Ussuri on Bionic. After updating
  neutron packages from 16.3.0 to 16.3.1, all virtual routers stopped
  working. It seems that most (not all) namespaces are created but have
  only the lo interface and sometimes the ha-XYZ interface in DOWN state.
  The underlying tap interfaces are also down.

  neutron-l3-agent has many logs similar to the following:
  2021-05-08 15:01:45.286 39411 ERROR neutron.agent.l3.ha_router [-] Gateway interface for router 02945b59-639b-41be-8237-3b7933b4e32d was not set up; router will not work properly

  and journal logs report at around the same time:
  May 08 15:01:40 lar1615.srv-louros.grnet.gr neutron-keepalived-state-change[18596]: 2021-05-08 15:01:40.765 18596 INFO neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 62.62.62.62 on qg-5a6efe8c-6b in namespace qrouter-02945b59-639b-41be-8237-3b7933b4e32d: Exit code: 2; Stdin: ; Stdout: Interface "qg-5a6efe8c-6b" is down
  May 08 15:01:40 lar1615.srv-louros.grnet.gr neutron-keepalived-state-change[18596]: 2021-05-08 15:01:40.767 18596 INFO neutron.agent.linux.ip_lib [-] Interface qg-5a6efe8c-6b or address 62.62.62.62 in namespace qrouter-02945b59-639b-41be-8237-3b7933b4e32d was deleted concurrently

  The neutron packages installed are:

  ii  neutron-common             2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - common
  ii  neutron-dhcp-agent         2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - DHCP agent
  ii  neutron-l3-agent           2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - l3 agent
  ii  neutron-metadata-agent     2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - metadata agent
  ii  neutron-metering-agent     2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - metering agent
  ii  neutron-openvswitch-agent  2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - Open vSwitch plugin agent
  ii  python3-neutron            2:16.3.1-0ubuntu1~cloud0  all  Neutron is a virtual network service for Openstack - Python library
  ii  python3-neutron-lib        2.3.0-0ubuntu1~cloud0     all  Neutron shared routines and utilities - Python 3.x
  ii  python3-neutronclient      1:7.1.1-0ubuntu1~cloud0   all  client API library for Neutron - Python 3.x

  Downgrading to 16.3.0 resolves the issues.

  =

  Ubuntu SRU details:

  [Impact]
  See above.

  [Test Case]
  Deploy openstack with l3ha and create several HA routers, the number required 
varies per environment. It is probably best to deploy a known bad version of 
the package, ensure it is failing, upgrade to the version in proposed, and 
re-test several times to confirm it is fixed.

  After restarting neutron-l3-agent, all HA routers should be restored.

  [Regression Potential]
  This change is fixing a regression by reverting a patch that was introduced 
in a stable point release of neutron.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-16 Thread Chris MacNaughton
This bug was fixed in the package neutron - 2:17.2.0-0ubuntu1~cloud1
---

 neutron (2:17.2.0-0ubuntu1~cloud1) focal-victoria; urgency=medium
 .
   * d/p/revert-l3-ha-retry-when-setting-ha-router-gw-status.patch: Revert
 upstream patch that introduced regression that prevented full restore
 of HA routers on restart of L3 agent (LP: #1927868).
 .
 neutron (2:17.2.0-0ubuntu1~cloud0) focal-victoria; urgency=medium
 .
   * New upstream release for the Ubuntu Cloud Archive.
 .
 neutron (2:17.2.0-0ubuntu1) groovy; urgency=medium
 .
   * New stable point release for OpenStack Victoria (LP: #1935029).


** Changed in: cloud-archive/victoria
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in Ubuntu Cloud Archive xena series:
  Fix Released
Status in neutron:
  New
Status in oslo.privsep:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Released
Status in neutron source package in Hirsute:
  Fix Released
Status in neutron source package in Impish:
  Fix Released


[Yahoo-eng-team] [Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-16 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:18.1.0-0ubuntu2

---
neutron (2:18.1.0-0ubuntu2) hirsute; urgency=medium

  * d/p/revert-l3-ha-retry-when-setting-ha-router-gw-status.patch: Revert
upstream patch that introduced regression that prevented full restore
of HA routers on restart of L3 agent (LP: #1927868).

neutron (2:18.1.0-0ubuntu1) hirsute; urgency=medium

  * New stable point release for OpenStack Wallaby (LP: #1935027).
  * Remove patches that have landed upstream:
- d/p/remove-leading-zeroes-from-an-ip-address.patch.
- d/p/initialize-privsep-library-for-neutron-ovs-cleanup.patch.
- d/p/initialize-privsep-library-in-neutron-commands.patch.

 -- Corey Bryant   Wed, 28 Jul 2021 16:52:11 -0400

** Changed in: neutron (Ubuntu Hirsute)
   Status: Fix Committed => Fix Released

** Changed in: neutron (Ubuntu Focal)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Committed
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in Ubuntu Cloud Archive victoria series:
  Fix Committed
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in Ubuntu Cloud Archive xena series:
  Fix Released
Status in neutron:
  New
Status in oslo.privsep:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Released
Status in neutron source package in Hirsute:
  Fix Released
Status in neutron source package in Impish:
  Fix Released


[Yahoo-eng-team] [Bug 1939704] Re: db-sync script may fail when migrating from ovs to ovn

2021-08-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/804405
Committed: https://opendev.org/openstack/neutron/commit/9e0c075bf169ed6f3d768cf23be6f47aeae0f98e
Submitter: "Zuul (22348)"
Branch: master

commit 9e0c075bf169ed6f3d768cf23be6f47aeae0f98e
Author: Jakub Libosvar 
Date:   Thu Aug 12 16:48:59 2021 +0200

ovn: Don't fail db-sync if port binding changes

During migration from OVS to OVN it can happen that gateway ports are
rescheduled to a different gateway chassis while Neutron is running.
With this patch the db sync no longer fails in that case. The migration
procedure runs the db sync twice in a row, so it is safe to take no
action when this happens and let the next sync pass handle it.

Change-Id: I28a4a5fef20d5049f4887d43006947b434de3d78
Closes-Bug: #1939704
Signed-off-by: Jakub Libosvar 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939704

Title:
  db-sync script may fail when migrating from ovs to ovn

Status in neutron:
  Fix Released

Bug description:
  During the steps where port bindings are updated [1] there might be a
  neutron-server l3 gateway scheduling going on and the port binding for
  gateway router ports can change its gateway chassis. It makes the db
  sync script fail with following traceback

  2021-08-12 06:21:40.738 48 INFO networking_ovn.cmd.neutron_ovn_db_sync_util [req-988ef422-ae85-41d7-a4a0-1e7b6c68a29e - - - - -] Migrating Neutron database from OVS to OVN
  2021-08-12 06:21:41.300 48 CRITICAL neutron_ovn_db_sync_util [req-988ef422-ae85-41d7-a4a0-1e7b6c68a29e - - - - -] Unhandled error: neutron_lib.exceptions.ObjectNotFound: Object PortBinding(port_id=a95eaa4b-fe5f-4770-9a03-2e76f20c2870, host=controller-2.redhat.local) not found.
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/bin/neutron-ovn-db-sync-util", line 10, in 
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util sys.exit(main())
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.6/site-packages/networking_ovn/cmd/neutron_ovn_db_sync_util.py", line 233, in main
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util db_migration.migrate_neutron_database_to_ovn(core_plugin)
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.6/site-packages/networking_ovn/ml2/db_migration.py", line 66, in migrate_neutron_database_to_ovn
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util pb.update()
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.6/site-packages/neutron/objects/base.py", line 337, in decorator
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util return func(self, *args, **kwargs)
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.6/site-packages/neutron/objects/base.py", line 906, in update
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util self._get_composite_keys()))
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.6/site-packages/neutron/objects/db/api.py", line 86, in update_object
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util db_obj = _safe_get_object(obj_cls, context, **kwargs)
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.6/site-packages/neutron/objects/db/api.py", line 80, in _safe_get_object
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util id="%s(%s)" % (obj_cls.db_model.__name__, key))
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util neutron_lib.exceptions.ObjectNotFound: Object PortBinding(port_id=a95eaa4b-fe5f-4770-9a03-2e76f20c2870, host=controller-2.redhat.local) not found.
  2021-08-12 06:21:41.300 48 ERROR neutron_ovn_db_sync_util 

  [1]
  
https://opendev.org/openstack/neutron/src/commit/caac890c8e121e1bebe33f4e93ab79d4c294db35/neutron/plugins/ml2/drivers/ovn/db_migration.py#L73-L83
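  The log-and-continue fix described above can be sketched as follows
  (a simplified stand-in, not the actual db_migration.py code): skip
  port bindings that disappear between listing and updating, instead of
  aborting the whole sync.

```python
# Simplified sketch of the "don't fail on concurrent rebinding" behaviour;
# not the actual neutron/networking_ovn code.

class ObjectNotFound(Exception):
    """Stand-in for neutron_lib.exceptions.ObjectNotFound."""

def sync_port_bindings(bindings):
    """Update every binding, skipping ones rescheduled/deleted concurrently."""
    updated, skipped = [], []
    for pb in bindings:
        try:
            pb.update()
            updated.append(pb)
        except ObjectNotFound:
            # The gateway port was rescheduled while the sync ran; the
            # migration procedure runs db-sync twice in a row, so the
            # next pass will handle it. Log-and-continue instead of dying.
            skipped.append(pb)
    return updated, skipped
```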

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1939704/+subscriptions




[Yahoo-eng-team] [Bug 1940090] [NEW] options of the castellan library are missing from glance-api.conf

2021-08-16 Thread Takashi Kajinami
Public bug reported:

Glance loads the castellan library for encryption, but the options for
that library (such as those under [key_manager], [barbican], etc.) are
missing from the example glance-api.conf.

I've regenerated the conf file using `tox -e genconfig`, which
internally calls oslo-config-generator, but even in the generated
config file the options are still missing.

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1940090

Title:
  options of the castellan library are missing from glance-api.conf

Status in Glance:
  In Progress

Bug description:
  Glance loads the castellan library for encryption, but the options for
  that library (such as those under [key_manager], [barbican], etc.) are
  missing from the example glance-api.conf.

  I've regenerated the conf file using `tox -e genconfig`, which
  internally calls oslo-config-generator, but even in the generated
  config file the options are still missing.
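  For context, the mechanism involved can be sketched roughly as below
  (illustrative names, not the actual oslo-config-generator code):
  sample-config generation walks a list of option "namespaces", each
  contributing (section, options) pairs. If the castellan namespace is
  absent from the generator's namespace list, its sections never reach
  the generated glance-api.conf.

```python
# Hypothetical sketch of namespace-based sample-config generation;
# the registry, namespaces, and option lists are stand-ins.

def list_glance_opts():
    # Stand-in for Glance's own option lists.
    return [("DEFAULT", ["debug", "bind_host"])]

def list_castellan_opts():
    # Stand-in for castellan's list_opts() entry point.
    return [("key_manager", ["backend"]), ("barbican", ["auth_endpoint"])]

REGISTRY = {
    "glance.api": list_glance_opts,
    "castellan.config": list_castellan_opts,
}

def generate_sample(namespaces):
    """Merge the options of the requested namespaces into config sections."""
    sections = {}
    for ns in namespaces:
        for section, opts in REGISTRY[ns]():
            sections.setdefault(section, []).extend(opts)
    return sections
```

  With only the Glance namespace requested, the [key_manager] and
  [barbican] sections are simply never produced, which matches the
  symptom reported above.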

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1940090/+subscriptions




[Yahoo-eng-team] [Bug 1927868] Re: vRouter not working after update to 16.3.1

2021-08-16 Thread Chris MacNaughton
This bug was fixed in the package neutron - 2:18.1.0-0ubuntu2~cloud0
---

 neutron (2:18.1.0-0ubuntu2~cloud0) focal-wallaby; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:18.1.0-0ubuntu2) hirsute; urgency=medium
 .
   * d/p/revert-l3-ha-retry-when-setting-ha-router-gw-status.patch: Revert
 upstream patch that introduced regression that prevented full restore
 of HA routers on restart of L3 agent (LP: #1927868).


** Changed in: cloud-archive/wallaby
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927868

Title:
  vRouter not working after update to 16.3.1

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Committed
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in Ubuntu Cloud Archive victoria series:
  Fix Committed
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in Ubuntu Cloud Archive xena series:
  Fix Released
Status in neutron:
  New
Status in oslo.privsep:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Committed
Status in neutron source package in Hirsute:
  Fix Committed
Status in neutron source package in Impish:
  Fix Released


[Yahoo-eng-team] [Bug 1940084] [NEW] neutron-agent causes 10m delay on start-up

2021-08-16 Thread Sergii Golovatiuk
Public bug reported:

When the environment starts (a TripleO deployment), we wait until
pacemaker starts everything, but while that is happening there are
neutron-agent services which have been started by systemd and are
waiting with a 10 minute timeout for a RabbitMQ connection. Looking at
the resilience code [1] for neutron agent - rabbitmq communication, it
doesn't take into account the start-up case, where the connection to
rabbit was never established, causing the 10m delay. To solve the
problem we should distinguish the cases for resilience:

1. Initial connection establishment. The connection to rabbit was never
established and the agent is trying to establish it (initial startup of
the whole openstack cluster after a power outage or planned reboot, or a
single compute node reboot)
2. The connection to rabbit was established but was lost. In this case
[1] does its job perfectly, reducing the load on rabbitmq
3. The connection was established but there is no reply from rabbitmq
(rabbit is overloaded). In this case [1] does its job as well

To resolve case 1 we should introduce a variable
is_connection_ever_established. If it is not set, we should try to
connect every 20-30 seconds and set is_connection_ever_established =
true once the connection is established. When
is_connection_ever_established = true but there is no reply or the
connection is lost, we should use the [1] algorithm. This change will
speed up initial cluster startup and compute node reboots.

[1]
https://opendev.org/openstack/neutron-lib/src/branch/master/neutron_lib/rpc.py#L159-L180
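The proposed behaviour for case 1 can be sketched as below (a
hypothetical outline, not neutron code): poll the initial RabbitMQ
connection on a short fixed interval until it first succeeds, and only
then hand over to the existing back-off algorithm for reconnects.

```python
# Hypothetical sketch of the proposed initial-connection retry; the
# function names and interval are illustrative.

def connect_with_initial_retry(try_connect, sleep, interval=30):
    """Poll until the first RabbitMQ connection succeeds.

    try_connect() returns a connection object or raises ConnectionError.
    After the first success, callers would switch to the existing
    back-off/timeout handling in neutron_lib.rpc for lost connections.
    """
    while True:
        try:
            return try_connect()
        except ConnectionError:
            # Never connected yet: retry on a short fixed interval
            # instead of sitting in a long (10-minute scale) timeout.
            sleep(interval)
```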

** Affects: neutron
 Importance: Wishlist
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940084

Title:
  neutron-agent causes 10m delay on start-up

Status in neutron:
  Confirmed

Bug description:
  When the environment starts (a TripleO deployment), we wait until
  pacemaker starts everything, but while that is happening there are
  neutron-agent services which have been started by systemd and are
  waiting with a 10 minute timeout for a RabbitMQ connection. Looking
  at the resilience code [1] for neutron agent - rabbitmq
  communication, it doesn't take into account the start-up case, where
  the connection to rabbit was never established, causing the 10m
  delay. To solve the problem we should distinguish the cases for
  resilience:

  1. Initial connection establishment. The connection to rabbit was
  never established and the agent is trying to establish it (initial
  startup of the whole openstack cluster after a power outage or
  planned reboot, or a single compute node reboot)
  2. The connection to rabbit was established but was lost. In this
  case [1] does its job perfectly, reducing the load on rabbitmq
  3. The connection was established but there is no reply from rabbitmq
  (rabbit is overloaded). In this case [1] does its job as well

  To resolve case 1 we should introduce a variable
  is_connection_ever_established. If it is not set, we should try to
  connect every 20-30 seconds and set is_connection_ever_established =
  true once the connection is established. When
  is_connection_ever_established = true but there is no reply or the
  connection is lost, we should use the [1] algorithm. This change will
  speed up initial cluster startup and compute node reboots.

  [1]
https://opendev.org/openstack/neutron-lib/src/branch/master/neutron_lib/rpc.py#L159-L180

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940084/+subscriptions




[Yahoo-eng-team] [Bug 1940086] [NEW] [api-ref] doc does not list resource_request as a field of the response of port bulk create

2021-08-16 Thread Balazs Gibizer
Public bug reported:

The API ref of bulk port create[1] is incomplete. It does not list the
resource_request key but the API actually returns that (for admins). See
example run [2].

To reproduce:
1) create a network

openstack network create net0 \
--provider-network-type vlan \
--provider-physical-network physnet0 \
--provider-segment 100

openstack subnet create subnet0 \
--network net0 \
--subnet-range 10.0.4.0/24

2) create a QoS policy with a min bw rule

openstack network qos policy create qp0
openstack network qos rule create qp0 \
--type minimum-bandwidth \
--min-kbps 1000 \
--egress

3) bulk create ports with that QoS policy. See [2]


[1] 
https://docs.openstack.org/api-ref/network/v2/?expanded=bulk-create-ports-detail#bulk-create-ports
[2] https://paste.opendev.org/show/808111/
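
A minimal sketch of the check (the NET0/QP0 IDs are placeholders; only the
payload shape and the undocumented response key come from the report):

```python
import json

def build_bulk_port_payload(network_id, qos_policy_id, names):
    """Request body for a bulk POST /v2.0/ports call."""
    return {"ports": [{"network_id": network_id,
                       "qos_policy_id": qos_policy_id,
                       "name": name}
                      for name in names]}

payload = build_bulk_port_payload("NET0_ID", "QP0_ID", ["p0", "p1"])
body = json.dumps(payload)
# POST `body` to <neutron>/v2.0/ports with an admin X-Auth-Token; each port
# in the response then contains a "resource_request" key that the api-ref
# does not document.
```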

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref doc

** Tags added: api-ref doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940086

Title:
  [api-ref] doc does not list resource_request as a field of the
  response of port bulk create

Status in neutron:
  New

Bug description:
  The API ref of bulk port create[1] is incomplete. It does not list the
  resource_request key but the API actually returns that (for admins).
  See example run [2].

  To reproduce:
  1) create a network

  openstack network create net0 \
--provider-network-type vlan \
--provider-physical-network physnet0 \
--provider-segment 100

  openstack subnet create subnet0 \
--network net0 \
--subnet-range 10.0.4.0/24

  2) create a QoS policy with a min bw rule

  openstack network qos policy create qp0
  openstack network qos rule create qp0 \
--type minimum-bandwidth \
--min-kbps 1000 \
--egress

  3) bulk create ports with that QoS policy. See [2]


  [1] 
https://docs.openstack.org/api-ref/network/v2/?expanded=bulk-create-ports-detail#bulk-create-ports
  [2] https://paste.opendev.org/show/808111/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940086/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940073] [NEW] "Unable to create the network. No available network found in maximum allowed attempts." during rally stress test

2021-08-16 Thread Krzysztof Klimonda
Public bug reported:

When running rally scenario NeutronNetworks.create_and_delete_networks
with concurrency of 60 the following error is observed:

--8<--8<--8<--
2021-08-16 11:28:41.526 710 ERROR oslo_db.api 
[req-61e1d9da-1bad-4410-94ce-d2945c13a2d5 05971ba84eac4b8eb176bd935909f9d0 
03904310315c47c7b33178da2bfc99a2 - default default] DB exceeded retry limit.: 
oslo_db.exception.RetryRequest: Unable to create the network. No available 
network found in maximum allowed attempts.
2021-08-16 11:28:41.526 710 ERROR oslo_db.api Traceback (most recent call last):
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_db/api.py", line 142, in 
wrapper
2021-08-16 11:28:41.526 710 ERROR oslo_db.api return f(*args, **kwargs)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron_lib/db/api.py", line 
183, in wrapped
2021-08-16 11:28:41.526 710 ERROR oslo_db.api LOG.debug("Retry wrapper got 
retriable exception: %s", e)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
2021-08-16 11:28:41.526 710 ERROR oslo_db.api self.force_reraise()
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 
196, in force_reraise
2021-08-16 11:28:41.526 710 ERROR oslo_db.api six.reraise(self.type_, 
self.value, self.tb)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/six.py", line 703, in reraise
2021-08-16 11:28:41.526 710 ERROR oslo_db.api raise value
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron_lib/db/api.py", line 
179, in wrapped
2021-08-16 11:28:41.526 710 ERROR oslo_db.api return f(*dup_args, 
**dup_kwargs)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/plugin.py",
 line 1053, in create_network
2021-08-16 11:28:41.526 710 ERROR oslo_db.api result, mech_context = 
self._create_network_db(context, network)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/plugin.py",
 line 1012, in _create_network_db
2021-08-16 11:28:41.526 710 ERROR oslo_db.api tenant_id)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py",
 line 226, in create_network_segments
2021-08-16 11:28:41.526 710 ERROR oslo_db.api context, filters=filters)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py",
 line 312, in _allocate_tenant_net_segment
2021-08-16 11:28:41.526 710 ERROR oslo_db.api segment = 
self._allocate_segment(context, network_type, filters)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py",
 line 308, in _allocate_segment
2021-08-16 11:28:41.526 710 ERROR oslo_db.api return 
driver.obj.allocate_tenant_segment(context, filters)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/type_tunnel.py",
 line 391, in allocate_tenant_segment
2021-08-16 11:28:41.526 710 ERROR oslo_db.api alloc = 
self.allocate_partially_specified_segment(context, **filters)
2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/helpers.py",
 line 153, in allocate_partially_specified_segment
2021-08-16 11:28:41.526 710 ERROR oslo_db.api 
exceptions.NoNetworkFoundInMaximumAllowedAttempts())
2021-08-16 11:28:41.526 710 ERROR oslo_db.api oslo_db.exception.RetryRequest: 
Unable to create the network. No available network found in maximum allowed 
attempts.
2021-08-16 11:28:41.526 710 ERROR oslo_db.api
--8<--8<--8<--

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940073

Title:
  "Unable to create the network. No available network found in maximum
  allowed attempts." during rally stress test

Status in neutron:
  New

Bug description:
  When running rally scenario NeutronNetworks.create_and_delete_networks
  with concurrency of 60 the following error is observed:

  --8<--8<--8<--
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api 
[req-61e1d9da-1bad-4410-94ce-d2945c13a2d5 05971ba84eac4b8eb176bd935909f9d0 
03904310315c47c7b33178da2bfc99a2 - default default] DB exceeded retry limit.: 
oslo_db.exception.RetryRequest: Unable to create the network. No available 
network found in maximum allowed attempts.
  

[Yahoo-eng-team] [Bug 1940074] [NEW] Neutron port bulk creation procedure ignores binding:vnic_type parameter

2021-08-16 Thread Andrey Bubyr
Public bug reported:

Bulk port creation does not honor the binding:vnic_type field. It implicitly
uses binding:vnic_type: normal.

Example of bulk creation API call:
curl -v --location --request POST 'https:///v2.0/ports' --header 
'Content-Type: application/json' --header 'X-Auth-Token: ' --data-raw 
'{
  "ports" : [ {
"name" : "port1",
"admin_state_up" : true,
"network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
"binding:vnic_type" : "direct"
  }, {
"name" : "port2",
"admin_state_up" : true,
"network_id" : "27dd162f-e8ac-4b21-84f4-e4dff6836fa0",
"binding:vnic_type" : "macvtap"
  }]
}'

At the same time vnic_type is honored in 'single port' mode of this API, e.g.
with a payload like
  "port" : {
"name" : "port_single",
"admin_state_up" : true,
"network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
"binding:vnic_type" : "direct"
  }
}'

Seems that binding:vnic_type from the port parameters is not passed through
inside the create_port_bulk() function. I've found a workaround: the following
line should be added after
https://review.opendev.org/plugins/gitiles/openstack/neutron/+/refs/heads/master/neutron/plugins/ml2/plugin.py#1594:

port_dict[portbindings.VNIC_TYPE] = pdata.get(
    portbindings.VNIC_TYPE)
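
A self-contained sketch of the intended behavior (the constants and the
build function are simplified stand-ins for the ml2 plugin code, not the
actual neutron API):

```python
VNIC_TYPE = "binding:vnic_type"   # stands in for portbindings.VNIC_TYPE
VNIC_NORMAL = "normal"            # stands in for portbindings.VNIC_NORMAL


def build_port_dict(pdata):
    """Simplified stand-in for the per-port dict built in create_port_bulk()."""
    return {
        "name": pdata.get("name"),
        "network_id": pdata.get("network_id"),
        # Proposed fix: carry the requested vnic_type through instead of
        # silently defaulting every bulk-created port to 'normal'.
        VNIC_TYPE: pdata.get(VNIC_TYPE, VNIC_NORMAL),
    }
```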

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- Bulk port creation does not honor binding:vnic_type field. It implicitly uses 
+ Bulk port creation does not honor binding:vnic_type field. It implicitly uses
  binding:vnic_type: normal
  
  Example of bulk creation API call:
  curl -v --location --request POST 'https:///v2.0/ports' --header 
'Content-Type: application/json' --header 'X-Auth-Token: ' --data-raw 
'{
-   "ports" : [ {
- "name" : "port1",
- "admin_state_up" : true,
- "network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
- "binding:vnic_type" : "direct"
-   }, {
- "name" : "port2",
- "admin_state_up" : true,
- "network_id" : "27dd162f-e8ac-4b21-84f4-e4dff6836fa0",
- "binding:vnic_type" : "macvtap"
-   }]
+   "ports" : [ {
+ "name" : "port1",
+ "admin_state_up" : true,
+ "network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
+ "binding:vnic_type" : "direct"
+   }, {
+ "name" : "port2",
+ "admin_state_up" : true,
+ "network_id" : "27dd162f-e8ac-4b21-84f4-e4dff6836fa0",
+ "binding:vnic_type" : "macvtap"
+   }]
  }'
  
- At the same time vnic_type is honored in 'single port' of this API, f.e. with 
payload like
-   "port" : {
- "name" : "port_single",
- "admin_state_up" : true,
- "network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
- "binding:vnic_type" : "direct"
-   }
+ At the same time vnic_type is honored in 'single port' mode of this API, f.e. 
with payload like
+   "port" : {
+ "name" : "port_single",
+ "admin_state_up" : true,
+ "network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
+ "binding:vnic_type" : "direct"
+   }
  }'
  
  Seems that binding:vnic_type from port parameters is not passed thru inside 
create_port_bulk() function. I've found a workaround. The following line should 
be added after
  
https://review.opendev.org/plugins/gitiles/openstack/neutron/+/refs/heads/master/neutron/plugins/ml2/plugin.py#1594:
  
- port_dict[portbindings.VNIC_TYPE] = pdata.get(
- portbindings.VNIC_TYPE)
+ port_dict[portbindings.VNIC_TYPE] = pdata.get(
+ portbindings.VNIC_TYPE)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940074

Title:
  Neutron port bulk creation procedure ignores binding:vnic_type
  parameter

Status in neutron:
  New

Bug description:
  Bulk port creation does not honor binding:vnic_type field. It implicitly uses
  binding:vnic_type: normal

  Example of bulk creation API call:
  curl -v --location --request POST 'https:///v2.0/ports' --header 
'Content-Type: application/json' --header 'X-Auth-Token: ' --data-raw 
'{
    "ports" : [ {
  "name" : "port1",
  "admin_state_up" : true,
  "network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
  "binding:vnic_type" : "direct"
    }, {
  "name" : "port2",
  "admin_state_up" : true,
  "network_id" : "27dd162f-e8ac-4b21-84f4-e4dff6836fa0",
  "binding:vnic_type" : "macvtap"
    }]
  }'

  At the same time vnic_type is honored in 'single port' mode of this API, f.e. 
with payload like
    "port" : {
  "name" : "port_single",
  "admin_state_up" : true,
  "network_id" : "c2a3464a-dbea-40c9-b421-9313e33992be",
  "binding:vnic_type" : "direct"
    }
  }'

  Seems that binding:vnic_type from the port parameters is not passed through
inside the create_port_bulk() function. I've found a workaround: the following
line should be added after
  

[Yahoo-eng-team] [Bug 1937904] Re: imp module is deprecated

2021-08-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/python-novaclient/+/804597
Committed: 
https://opendev.org/openstack/python-novaclient/commit/bff8d4137057c9bc37436b8df29d86a3c2584938
Submitter: "Zuul (22348)"
Branch:master

commit bff8d4137057c9bc37436b8df29d86a3c2584938
Author: Takashi Kajinami 
Date:   Mon Aug 16 09:54:06 2021 +0900

Use importlib instead of imp

... because the imp module is deprecated since Python 3.4 .

Closes-Bug: #1937904
Change-Id: Ia3f83df336fd243c25f7471d56a44370c11bb5e1


** Changed in: python-novaclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1937904

Title:
  imp module is deprecated

Status in neutron:
  In Progress
Status in os-win:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in tripleo:
  New

Bug description:
  The imp module is deprecated since Python 3.4 and should be replaced by the 
importlib module.
  Now usage of the imp module shows the following deprecation warning.
  ~~~
  DeprecationWarning: the imp module is deprecated in favour of importlib; see 
the module's documentation for alternative uses
  ~~~
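
  As a hedged illustration of the migration (not the exact patch from the
  commit above), the common imp.load_source() pattern maps to importlib like
  this:

```python
import importlib.util

def load_module_from_path(name, path):
    """importlib-based replacement for the deprecated imp.load_source()."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # actually runs the module's code
    return module
```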

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1937904/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940071] [NEW] Neutron VPNaaS - growing memory consumption

2021-08-16 Thread Adam Tomas
Public bug reported:

I have a problem with neutron-vpnaas in Kolla-Ansible (Victoria, from
source) on Ubuntu 20.04.2 LTS, kernel 5.4.0-65-generic #73-Ubuntu SMP
Mon Jan 18 17:25:17 UTC 2021 x86_64. Neutron client version 7.1.1,
neutron server version 17.1.3.dev22.


After enabling the vpnaas plugin everything was OK at first. I was able to
create a VPN connection and communication worked correctly. But after a week
of running vpnaas (with only one VPN connection created/working!) I noticed
that neutron-vpnaas takes more and more memory. I have 5 processes on each
controller (should there always be five, or does the number change
dynamically?):

424351545384  0.6  8.6 5802516 5712412 ? SJun15 110:50 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
424351545389  0.6  8.5 5735832 5645856 ? SJun15 112:16 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
424351545378  0.5  8.5 5734192 5643620 ? SJun15 108:09 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
424351545372  0.5  8.5 5731128 5641436 ? SJun15 109:07 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
424351545369  0.6  8.4 5637084 5547392 ? SJun15 114:21 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf

Now neutron_server takes over 27 GB of RAM on each controller:

neutron_server running  10.2  27.2G

and has forced the controllers to use the swapfile.

After stopping all neutron containers and starting them again, neutron
takes a lot less memory:

neutron_server running   0.5   583M

but memory usage keeps growing (about 5-6 MB every minute).


There is only one bidirectional VPN connection, between project networks in
two regions.
There is no (or almost no) traffic on this VPN link.
The second region is on one host (kolla all-in-one) and the situation looks
the same: with the VPN service enabled there is excessive memory use (also
growing about 5-6 MB/minute and forcing the system to use the swapfile).

There is no significant CPU usage (of course, partly because there is no
traffic inside the VPN).
It is the same even if no VPN connection is configured.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940071

Title:
  Neutron VPNaaS - growing memory consumption

Status in neutron:
  New

Bug description:
  I have a problem with neutron-vpnaas in Kolla-Ansible (Victoria, from
  source) on Ubuntu 20.04.2 LTS, kernel 5.4.0-65-generic #73-Ubuntu SMP
  Mon Jan 18 17:25:17 UTC 2021 x86_64. Neutron client version 7.1.1,
  neutron server version 17.1.3.dev22.


  After enabling the vpnaas plugin everything was OK at first. I was able
  to create a VPN connection and communication worked correctly. But after
  a week of running vpnaas (with only one VPN connection created/working!)
  I noticed that neutron-vpnaas takes more and more memory. I have 5
  processes on each controller (should there always be five, or does the
  number change dynamically?):

  424351545384  0.6  8.6 5802516 5712412 ? SJun15 110:50 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
  424351545389  0.6  8.5 5735832 5645856 ? SJun15 112:16 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
  424351545378  0.5  8.5 5734192 5643620 ? SJun15 108:09 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
  424351545372  0.5  8.5 5731128 5641436 ? SJun15 109:07 
/var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server 
--config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-file 
/etc/neutron/neutron_vpnaas.conf
  42435

[Yahoo-eng-team] [Bug 1938261] Re: [ovn]Router scheduler failing for config "default_availability_zones"

2021-08-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/802665
Committed: 
https://opendev.org/openstack/neutron/commit/7988ab5df0eb0799811d50eb90f0b292d008f700
Submitter: "Zuul (22348)"
Branch:master

commit 7988ab5df0eb0799811d50eb90f0b292d008f700
Author: zhouhenglc 
Date:   Wed Jul 28 15:25:39 2021 +0800

"default_availability_zones" need to be considered when validate az

If not set availability_zone_hits when create router, should use
configuration parameter default_availability_zones.
At present, only the creation parameters are validate, and the default
availability zones not validate.
Creating a network is the same as creating a route.

Closes-bug: #1938261

Change-Id: I1c7f50b69a31d725b762e3061f09a0bd5b077a58


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938261

Title:
  [ovn]Router scheduler failing for config  "default_availability_zones"

Status in neutron:
  Fix Released

Bug description:
  I have 3 gateway chassis; the only availability zone is 'nova', which
  has 1 chassis.
  default_availability_zones=zone1 is configured in neutron.conf.

  I create a router without setting availability_zone_hints; the router is
  created successfully with availability_zones=zone1, and via the
  ovn-nbctl command I can see that the router's gateway_chassis includes
  all chassis (4 nodes).

  I think this case should fail, indicating that the availability zone
  does not exist.
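
  For reference, the configuration involved is this neutron.conf option
  (the zone name here is illustrative, matching the report):

```ini
[DEFAULT]
# Zones used when a router/network is created without
# availability_zone_hints; the fix validates these the same way
# as explicitly requested hints.
default_availability_zones = zone1
```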

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938261/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940035] [NEW] Manage flavors in horizon remove Update flavors feature from documentation

2021-08-16 Thread Kabanov Oleg
Public bug reported:

We dropped the Flavor Editing feature in the Rocky release (Bug #1751354),
after it was deprecated in Pike (Bug #1709056). But the "Edit Flavor"
description is still present in the latest Administration Guide
https://docs.openstack.org/horizon/latest/admin/manage-flavors.html.
This leads to confusion, so we have to remove the "Update flavors" section
from the documentation.

---
Release: 20.0.1.dev8 on 2018-01-08 23:06:27
SHA: 1800750804502adf9ff31366daa987aeb9acba31
Source: 
https://opendev.org/openstack/horizon/src/doc/source/admin/manage-flavors.rst
URL: https://docs.openstack.org/horizon/latest/admin/manage-flavors.html

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1940035

Title:
  Manage flavors in horizon remove Update flavors feature from
  documentation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We dropped the Flavor Editing feature in the Rocky release (Bug #1751354),
  after it was deprecated in Pike (Bug #1709056). But the "Edit Flavor"
  description is still present in the latest Administration Guide
  https://docs.openstack.org/horizon/latest/admin/manage-flavors.html.
  This leads to confusion, so we have to remove the "Update flavors"
  section from the documentation.

  ---
  Release: 20.0.1.dev8 on 2018-01-08 23:06:27
  SHA: 1800750804502adf9ff31366daa987aeb9acba31
  Source: 
https://opendev.org/openstack/horizon/src/doc/source/admin/manage-flavors.rst
  URL: https://docs.openstack.org/horizon/latest/admin/manage-flavors.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1940035/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp