[Yahoo-eng-team] [Bug 1942179] Re: neutron api worker leaks memory when processing requests to not existing controllers
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/807335
Committed: https://opendev.org/openstack/neutron/commit/e610a5eb9e71aa2549fb11e2139370d227787da2
Submitter: "Zuul (22348)"
Branch:    master

commit e610a5eb9e71aa2549fb11e2139370d227787da2
Author: Slawek Kaplonski
Date:   Fri Sep 3 16:04:02 2021 +0200

    Don't use singleton in routes.middleware.RoutesMiddleware

    The default singleton=True in routes.middleware.RoutesMiddleware leads
    to the use of a thread-local RequestConfig singleton object, which does
    not work well with the eventlet monkey patching of the threading
    library that Neutron performs. As a result, a neutron-api worker leaks
    memory every time a user makes an API request to a non-existing API
    endpoint.

    To avoid that memory leak, use singleton=False in that
    RoutesMiddleware object, at least until the problem with the
    thread-local singleton and eventlet monkey patching is solved.

    Closes-Bug: #1942179
    Change-Id: Id3a529248d3984506f0166bdc32e334127a01b7b

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942179

Title:
  neutron api worker leaks memory when processing requests to not
  existing controllers

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Confirmed

Bug description:
  An authorized cloud user may make API requests to non-existing neutron
  endpoints, e.g.:

    curl -g -i -X GET http://10.120.0.30:9696/v2.0/blabla -H "Accept: application/json" -H "User-Agent: openstacksdk/0.59.0 keystoneauth1/4.3.1 python-requests/2.26.0 CPython/3.6.8" -H "X-Auth-Token: $token"

  Each such request increases the memory consumption of the neutron-api
  worker process. What I did was:

  * Start the neutron server with just one API worker (easier to measure
    memory consumption, but the leak is the same with more workers).
    Memory consumption was:

      sudo pmap 212436 | tail -n 1
      total 183736K

  * Run a loop of such requests:

      $ i=1; while [ $i -lt 2000 ]; do echo "Request $i"; curl -g -i -X GET http://10.120.0.30:9696/v2.0/blabla -H "Accept: application/json" -H "User-Agent: openstacksdk/0.59.0 keystoneauth1/4.3.1 python-requests/2.26.0 CPython/3.6.8" -H "X-Auth-Token: $token" 2>&1 >/dev/null; i=$(( i+1 )); sleep 0.01; done

  * Check the memory consumption of the same API worker again:

      sudo pmap 212436 | tail -n 1
      total 457896K

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942179/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
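The leak mechanism can be illustrated with a stdlib-only toy model (this is neither Neutron nor routes code; the class and names are invented for the example): a "thread-local" store keyed by a thread identifier is never pruned, and when every API request runs in a fresh greenthread, each request creates one more entry that is never reclaimed.

```python
# Toy model of a per-thread singleton registry (not routes.RequestConfig).
class PerThreadRegistry:
    """Simulates thread-local storage keyed by a thread identifier."""

    def __init__(self):
        self._per_thread = {}  # never pruned -> grows without bound

    def get_config(self, thread_id):
        # setdefault mimics "create my thread's private copy on first use"
        return self._per_thread.setdefault(thread_id, {"routes": object()})


registry = PerThreadRegistry()
for request_id in range(2000):
    # Each request to a non-existing endpoint runs in a fresh greenthread,
    # so from the registry's point of view the "thread id" is always new.
    registry.get_config(request_id)

print(len(registry._per_thread))  # -> 2000 retained entries, i.e. the leak
```

With real OS threads the per-thread state dies with the thread; the problem in the bug is that under eventlet monkey patching the entries pile up per request, which `singleton=False` sidesteps by not keeping per-thread state at all.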
[Yahoo-eng-team] [Bug 1942794] [NEW] "DHCP_Options" is not updated when the metadata port IPs are
Public bug reported:

In OVN, the DHCP server will inject into the VM the routes defined in
the "DHCP_Options" register. There is one "DHCP_Options" register per
subnet with DHCP enabled. "DHCP_Options.options.classless_static_route"
is a set of static routes defined in the root namespace of the VM.

If OVN metadata is enabled, a static route will be created to send
traffic to the metadata IP "169.254.169.254/32", using the metadata
port IP.

Currently, if the user manually changes this IP address (the metadata
port should have only one IP address per subnet), the change is not
reflected in the corresponding "DHCP_Options" register.

Related Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1994591

** Affects: neutron
   Importance: Medium
     Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
       Status: New

** Changed in: neutron
     Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

** Description changed:

+ Related Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1994591

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942794

Title:
  "DHCP_Options" is not updated when the metadata port IPs are

Status in neutron:
  New

Bug description:
  In OVN, the DHCP server will inject into the VM the routes defined in
  the "DHCP_Options" register. There is one "DHCP_Options" register per
  subnet with DHCP enabled. "DHCP_Options.options.classless_static_route"
  is a set of static routes defined in the root namespace of the VM.

  If OVN metadata is enabled, a static route will be created to send
  traffic to the metadata IP "169.254.169.254/32", using the metadata
  port IP.

  Currently, if the user manually changes this IP address (the metadata
  port should have only one IP address per subnet), the change is not
  reflected in the corresponding "DHCP_Options" register.

  Related Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1994591

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942794/+subscriptions
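For illustration, the classless_static_route value stored in DHCP_Options.options is a string of "destination,nexthop" pairs. The helper below is an invented sketch (not Neutron code) showing why the option string must be regenerated whenever the metadata port IP changes; the default-route part is an added assumption for completeness.

```python
def build_classless_static_route(metadata_port_ip, router_ip=None):
    """Sketch of an OVN-style "{dest,nexthop, dest,nexthop}" option string."""
    routes = ["169.254.169.254/32,%s" % metadata_port_ip]
    if router_ip:
        routes.append("0.0.0.0/0,%s" % router_ip)
    return "{%s}" % ", ".join(routes)


# If the metadata port IP changes from 10.0.0.2 to 10.0.0.5, the stored
# option string must change too -- which is what this bug says does not
# happen today:
print(build_classless_static_route("10.0.0.2", "10.0.0.1"))
# -> {169.254.169.254/32,10.0.0.2, 0.0.0.0/0,10.0.0.1}
print(build_classless_static_route("10.0.0.5", "10.0.0.1"))
# -> {169.254.169.254/32,10.0.0.5, 0.0.0.0/0,10.0.0.1}
```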
[Yahoo-eng-team] [Bug 1942469] Re: Network delete notifications no longer contain segment info
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/807243
Committed: https://opendev.org/openstack/neutron/commit/27edf6b6d311d7d334090c89c4cee54d63162a55
Submitter: "Zuul (22348)"
Branch:    master

commit 27edf6b6d311d7d334090c89c4cee54d63162a55
Author: Oleg Bondarev
Date:   Fri Sep 3 10:29:14 2021 +0300

    Ensure net dict has provider info on precommit delete

    Commit 80eddc40390e63c9c1102b827997054708f2618b optimized net delete
    by including net info in the notification payload; however, the ML2
    plugin needs provider info as well.

    Closes-Bug: #1942469
    Change-Id: I9f753be0ce5ae7870afb9b3cb74f89be8482356e

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942469

Title:
  Network delete notifications no longer contain segment info

Status in neutron:
  Fix Released

Bug description:
  Change I07c70db027f2ae03ffb5a95072e019e8a5fdc411 made it so
  PRECOMMIT_DELETE and AFTER_DELETE both receive the network dict fetched
  from the DB (decorated with any resource_extend hooks). However, this
  network representation does not include segment information such as
  segmentation_id or network_type.

  The networking-generic-switch ML2 plugin assumes that such information
  is present in the delete postcommit hook and needs it to do its job:
  https://opendev.org/openstack/networking-generic-switch/src/branch/master/networking_generic_switch/generic_switch_mech.py#L164-L166

  As a result, networking-generic-switch cannot currently be deployed.

  Example error:

  ```
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers [req-99b0b44f-171d-41a3-b99d-1cccb27b3006 bcb7ef06be674b9199b36e8f18b546f3 570aad8999f7499db99eae22fe9b29bb - default default] Mechanism driver 'genericswitch' failed in delete_network_postcommit: KeyError: 'provider:network_type'
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 479, in _call_on_drivers
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers   File "/var/lib/kolla/venv/lib/python2.7/site-packages/networking_generic_switch/generic_switch_mech.py", line 315, in delete_network_postcommit
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers     provider_type = network['provider:network_type']
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers KeyError: 'provider:network_type'
  2021-09-02 12:27:57.438 30 ERROR neutron.plugins.ml2.managers
  2021-09-02 12:27:57.440 30 ERROR neutron.plugins.ml2.plugin [req-99b0b44f-171d-41a3-b99d-1cccb27b3006 bcb7ef06be674b9199b36e8f18b546f3 570aad8999f7499db99eae22fe9b29bb - default default] mechanism_manager.delete_network_postcommit failed: MechanismDriverError
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942469/+subscriptions
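The failure mode above can be reduced to a two-line illustration (the dict below is invented sample data, not a real Neutron payload): the notification carries a network dict without the provider extension keys, so indexing them raises exactly the KeyError in the log.

```python
# Invented sample payload: the delete notification lacks 'provider:*' keys.
network = {"id": "4e8e5957-1234", "name": "net1"}

try:
    network["provider:network_type"]  # what the driver effectively did
except KeyError as exc:
    print("KeyError:", exc)

# Defensive access avoids the crash but still leaves the driver without
# the segment info it needs -- hence the fix restores the keys in the
# payload instead of papering over the lookup:
print(network.get("provider:network_type"))  # -> None
```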
[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2
Reviewed:  https://review.opendev.org/c/openstack/python-neutronclient/+/807458
Committed: https://opendev.org/openstack/python-neutronclient/commit/cff9c266c05ebfc13f4917e1646e5dedbe371cc2
Submitter: "Zuul (22348)"
Branch:    master

commit cff9c266c05ebfc13f4917e1646e5dedbe371cc2
Author: Takashi Kajinami
Date:   Sun Sep 5 00:56:38 2021 +0900

    Replace deprecated assertDictContainsSubset

    The method has been deprecated since Python 3.2[1] and shows the
    following DeprecationWarning:

      /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated
        warnings.warn('assertDictContainsSubset is deprecated',

    [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

    Closes-Bug: #1938103
    Change-Id: I1d0ee6c77476707a7e4fe4fbf2b979bf34550d05

** Changed in: python-neutronclient
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Released

Bug description:
  unittest.TestCase.assertDictContainsSubset has been deprecated since
  Python 3.2[1] and shows the following warning:

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated
    warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1938103/+subscriptions
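A common replacement (used conceptually by fixes like the one above; the test class here is invented for illustration) relies on the fact that dict items views support subset comparison, so `assertLessEqual` expresses "expected is a subset of actual":

```python
import unittest


class SubsetExample(unittest.TestCase):
    def test_subset(self):
        expected = {"name": "net1"}
        actual = {"name": "net1", "admin_state_up": True}
        # Old, deprecated since Python 3.2:
        #     self.assertDictContainsSubset(expected, actual)
        # Replacement: items views compare as subsets.
        self.assertLessEqual(expected.items(), actual.items())


suite = unittest.defaultTestLoader.loadTestsFromTestCase(SubsetExample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```

Unlike the deprecated method, a failing subset check reports the full comparison rather than only the missing keys, which is usually an acceptable trade-off.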
[Yahoo-eng-team] [Bug 1942448] Re: Neutron - “Update bandwidth limit rule” API on SUCCESS responds with 200 instead of Expected 202
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/807390
Committed: https://opendev.org/openstack/neutron-lib/commit/670f83b0de1359dd301dd65ac2c2571f9d3ee193
Submitter: "Zuul (22348)"
Branch:    master

commit 670f83b0de1359dd301dd65ac2c2571f9d3ee193
Author: Brian Haley
Date:   Fri Sep 3 14:28:50 2021 -0400

    Fix some api-ref typos

    There were some places in the API ref where it showed a 202 returned
    on Update, but we always return a 200 (HTTPOk). Fixed a few other
    cases where Create (should be 201) and Delete (should be 204) were
    wrong as well.

    Change-Id: I4f6eb742f4420d0844e9c254ce989fc62973b0cf
    Closes-bug: #1942448

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942448

Title:
  Neutron - “Update bandwidth limit rule” API on SUCCESS responds with
  200 instead of Expected 202

Status in neutron:
  Fix Released

Bug description:
  ### Scenario ###
  Activate the “Update bandwidth limit rule” API with proper values to
  make it pass.

  ### Actual Result ###
  The received status code is 200 OK.

  ### Expected Result ###
  According to the documentation it should be 202 Accepted:
  https://docs.openstack.org/api-ref/network/v2/index.html?expanded=update-bandwidth-limit-rule-detail#update-bandwidth-limit-rule

  ### Tempest prompt output ###
  BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202BUG_200_instead_of_202

  2021-09-01 15:21:48.668 529226 INFO tempest.lib.common.rest_client [req-27de559b-db51-433b-b01b-fd27cba24664 ] Request (QoSTest:test_qos_basic_and_update): 200 PUT http://10.0.0.125:9696/v2.0/qos/policies/31e15f98-8b03-4f8b-9a1a-27847ea48971/bandwidth_limit_rules/f17625ab-24c7-46c1-bcd2-0e326c9ac360 0.269s
  2021-09-01 15:21:48.668 529226 DEBUG tempest.lib.common.rest_client [req-27de559b-db51-433b-b01b-fd27cba24664 ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
          Body: {"bandwidth_limit_rule": {"max_kbps": 2000, "max_burst_kbps": 2000}}
      Response - Headers: {'content-type': 'application/json', 'content-length': '137', 'x-openstack-request-id': 'req-27de559b-db51-433b-b01b-fd27cba24664', 'date': 'Wed, 01 Sep 2021 19:21:48 GMT', 'connection': 'close', 'status': '200', 'content-location': 'http://10.0.0.125:9696/v2.0/qos/policies/31e15f98-8b03-4f8b-9a1a-27847ea48971/bandwidth_limit_rules/f17625ab-24c7-46c1-bcd2-0e326c9ac360'}
          Body: b'{"bandwidth_limit_rule": {"max_kbps": 2000, "max_burst_kbps": 2000, "direction": "egress", "id": "f17625ab-24c7-46c1-bcd2-0e326c9ac360"}}' _log_request_full /home/stack/tempest/tempest/lib/common/rest_client.py:456

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942448/+subscriptions
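The resolution, per the commit above, was to correct the api-ref rather than the server. A small sketch of the corrected expectations (the dict and helper are invented for illustration, not Tempest or Neutron code):

```python
# Status codes Neutron actually returns for QoS rule operations,
# as the corrected api-ref now documents them.
EXPECTED_STATUS = {
    "create": 201,
    "update": 200,  # the api-ref previously (wrongly) documented 202
    "delete": 204,
}


def status_matches(operation, actual):
    """Return True if the observed status is the documented one."""
    return EXPECTED_STATUS[operation] == actual


# The PUT in the report returned 200, which is the correct behaviour:
print(status_matches("update", 200))  # -> True
print(status_matches("update", 202))  # -> False
```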
[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/807459
Committed: https://opendev.org/openstack/neutron/commit/34acbd6ff8573e5e2ce149150bae566f987f0ded
Submitter: "Zuul (22348)"
Branch:    master

commit 34acbd6ff8573e5e2ce149150bae566f987f0ded
Author: Takashi Kajinami
Date:   Sun Sep 5 01:00:29 2021 +0900

    Replace deprecated assertDictContainsSubset

    The method has been deprecated since Python 3.2[1] and shows the
    following DeprecationWarning:

      /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated
        warnings.warn('assertDictContainsSubset is deprecated',

    [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

    Closes-Bug: #1938103
    Change-Id: Iab60f52ffbfb3668e9509ce86e105917c616b8a9

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  Fix Released
Status in python-neutronclient:
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset has been deprecated since
  Python 3.2[1] and shows the following warning:

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated
    warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1938103/+subscriptions
[Yahoo-eng-team] [Bug 1942766] [NEW] After the detach volume timeout, the disk is lost after soft reboot
Public bug reported:

Description
===========
When a disk detach times out and the virtual machine is then soft
rebooted, the disk whose detach timed out disappears from the guest,
while it is still shown in the nova database and still bound to the VM
there.

Steps to reproduce
==================
1. Create a Windows VM and attach a disk to it. Generate heavy I/O to
   the disk.
2. The disk detach times out.
3. Soft reboot the VM.
4. The disk is lost.

Cause Analysis
==============
The detach first removes the device from the persistent XML, so when
the live detach then times out, the persistent XML entry is already
gone. If the virtual machine is soft rebooted at this point, it loses
the disk because the persistent XML no longer contains it.

    def _detach_with_retry(
        ...
        if persistent_dev:
            try:
                self._detach_from_persistent(
                    guest, instance_uuid, persistent_dev,
                    get_device_conf_func, device_name)

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1942766

Title:
  After the detach volume timeout, the disk is lost after soft reboot

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  When a disk detach times out and the virtual machine is then soft
  rebooted, the disk whose detach timed out disappears from the guest,
  while it is still shown in the nova database and still bound to the VM
  there.

  Steps to reproduce
  ==================
  1. Create a Windows VM and attach a disk to it. Generate heavy I/O to
     the disk.
  2. The disk detach times out.
  3. Soft reboot the VM.
  4. The disk is lost.

  Cause Analysis
  ==============
  The detach first removes the device from the persistent XML, so when
  the live detach then times out, the persistent XML entry is already
  gone. If the virtual machine is soft rebooted at this point, it loses
  the disk because the persistent XML no longer contains it.

      def _detach_with_retry(
          ...
          if persistent_dev:
              try:
                  self._detach_from_persistent(
                      guest, instance_uuid, persistent_dev,
                      get_device_conf_func, device_name)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1942766/+subscriptions
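The ordering problem described above can be shown with a toy model (this is not libvirt or nova code; the class and device names are invented): the persistent config is detached first, the live detach then times out, and a soft reboot rebuilds the running domain from the persistent XML, so the disk vanishes.

```python
class FakeDomain:
    """Toy stand-in for a libvirt domain with live and persistent state."""

    def __init__(self):
        self.persistent = {"vda", "vdb"}  # config (persistent) XML
        self.live = {"vda", "vdb"}        # running domain

    def detach_persistent(self, dev):
        self.persistent.discard(dev)

    def detach_live(self, dev, busy):
        if busy:
            raise TimeoutError("guest did not release %s" % dev)
        self.live.discard(dev)

    def soft_reboot(self):
        # A soft reboot re-reads the persistent XML.
        self.live = set(self.persistent)


dom = FakeDomain()
dom.detach_persistent("vdb")             # step 1: persistent XML updated first
try:
    dom.detach_live("vdb", busy=True)    # step 2: heavy I/O -> detach times out
except TimeoutError:
    pass                                 # detach "failed", disk is still live
dom.soft_reboot()                        # step 3: rebuild from persistent XML
print("vdb" in dom.live)                 # -> False: the disk is gone
```

Reversing the order (live detach first, persistent only after it succeeds) or restoring the persistent entry on timeout would avoid the inconsistent state.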
[Yahoo-eng-team] [Bug 1942740] Re: nova-next job POST_FAILURE due to nova-manage heal_allocations testing fails
Being fixed in https://review.opendev.org/q/topic:bug/1942740

** Project changed: nova => placement-osc-plugin

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1942740

Title:
  nova-next job POST_FAILURE due to nova-manage heal_allocations testing
  fails

Status in placement-osc-plugin:
  Triaged

Bug description:
  Since Friday (3rd of Sept) the nova-next job fails[1] on master with:

  2021-09-05 10:20:23.082260 | controller | + /opt/stack/nova/gate/post_test_hook.sh:main:189 : openstack port unset --binding-profile allocation port-normal-qos
  2021-09-05 10:20:25.387981 | controller | ++ /opt/stack/nova/gate/post_test_hook.sh:main:193 : openstack resource provider allocation show a76b03d8-98e6-4063-a5c2-8d6d9128e233 -c resources -f value
  2021-09-05 10:20:27.018412 | controller | 'project_id'
  2021-09-05 10:20:27.152002 | controller | + /opt/stack/nova/gate/post_test_hook.sh:main:193 : allocations=
  2021-09-05 10:20:27.441731 | controller | ERROR
  2021-09-05 10:20:27.442055 | controller | {
  2021-09-05 10:20:27.442156 | controller |   "delta": "0:01:26.706541",
  2021-09-05 10:20:27.442248 | controller |   "end": "2021-09-05 10:20:27.152606",
  2021-09-05 10:20:27.442337 | controller |   "msg": "non-zero return code",
  2021-09-05 10:20:27.442424 | controller |   "rc": 1,
  2021-09-05 10:20:27.442510 | controller |   "start": "2021-09-05 10:19:00.446065"
  2021-09-05 10:20:27.442599 | controller | }

  [1] https://zuul.opendev.org/t/openstack/builds?job_name=nova-next&project=openstack%2Fnova&branch=master
  [2] https://zuul.opendev.org/t/openstack/build/f5b881e3601f4160a5f82a9b4cdc10ad/log/job-output.txt#62198-62200

To manage notifications about this bug go to:
https://bugs.launchpad.net/placement-osc-plugin/+bug/1942740/+subscriptions
[Yahoo-eng-team] [Bug 1942740] [NEW] nova-next job POST_FAILURE due to nova-manage heal_allocations testing fails
Public bug reported:

Since Friday (3rd of Sept) the nova-next job fails[1] on master with:

2021-09-05 10:20:23.082260 | controller | + /opt/stack/nova/gate/post_test_hook.sh:main:189 : openstack port unset --binding-profile allocation port-normal-qos
2021-09-05 10:20:25.387981 | controller | ++ /opt/stack/nova/gate/post_test_hook.sh:main:193 : openstack resource provider allocation show a76b03d8-98e6-4063-a5c2-8d6d9128e233 -c resources -f value
2021-09-05 10:20:27.018412 | controller | 'project_id'
2021-09-05 10:20:27.152002 | controller | + /opt/stack/nova/gate/post_test_hook.sh:main:193 : allocations=
2021-09-05 10:20:27.441731 | controller | ERROR
2021-09-05 10:20:27.442055 | controller | {
2021-09-05 10:20:27.442156 | controller |   "delta": "0:01:26.706541",
2021-09-05 10:20:27.442248 | controller |   "end": "2021-09-05 10:20:27.152606",
2021-09-05 10:20:27.442337 | controller |   "msg": "non-zero return code",
2021-09-05 10:20:27.442424 | controller |   "rc": 1,
2021-09-05 10:20:27.442510 | controller |   "start": "2021-09-05 10:19:00.446065"
2021-09-05 10:20:27.442599 | controller | }

[1] https://zuul.opendev.org/t/openstack/builds?job_name=nova-next&project=openstack%2Fnova&branch=master
[2] https://zuul.opendev.org/t/openstack/build/f5b881e3601f4160a5f82a9b4cdc10ad/log/job-output.txt#62198-62200

** Affects: nova
   Importance: Critical
       Status: New

** Tags: gate-failure

** Changed in: nova
   Importance: Undecided => Critical

** Tags added: gate-failure

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1942740

Title:
  nova-next job POST_FAILURE due to nova-manage heal_allocations testing
  fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Since Friday (3rd of Sept) the nova-next job fails[1] on master with:

  2021-09-05 10:20:23.082260 | controller | + /opt/stack/nova/gate/post_test_hook.sh:main:189 : openstack port unset --binding-profile allocation port-normal-qos
  2021-09-05 10:20:25.387981 | controller | ++ /opt/stack/nova/gate/post_test_hook.sh:main:193 : openstack resource provider allocation show a76b03d8-98e6-4063-a5c2-8d6d9128e233 -c resources -f value
  2021-09-05 10:20:27.018412 | controller | 'project_id'
  2021-09-05 10:20:27.152002 | controller | + /opt/stack/nova/gate/post_test_hook.sh:main:193 : allocations=
  2021-09-05 10:20:27.441731 | controller | ERROR
  2021-09-05 10:20:27.442055 | controller | {
  2021-09-05 10:20:27.442156 | controller |   "delta": "0:01:26.706541",
  2021-09-05 10:20:27.442248 | controller |   "end": "2021-09-05 10:20:27.152606",
  2021-09-05 10:20:27.442337 | controller |   "msg": "non-zero return code",
  2021-09-05 10:20:27.442424 | controller |   "rc": 1,
  2021-09-05 10:20:27.442510 | controller |   "start": "2021-09-05 10:19:00.446065"
  2021-09-05 10:20:27.442599 | controller | }

  [1] https://zuul.opendev.org/t/openstack/builds?job_name=nova-next&project=openstack%2Fnova&branch=master
  [2] https://zuul.opendev.org/t/openstack/build/f5b881e3601f4160a5f82a9b4cdc10ad/log/job-output.txt#62198-62200

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1942740/+subscriptions