[Yahoo-eng-team] [Bug 1782284] [NEW] GET "/" returns "version" instead of expected "versions"
Public bug reported:

Whether one version or multiple versions are available, the dict key is
expected to be "versions" per the version discovery guidelines [1].

$ curl -i -H "X-Auth-Token: $OS_TOKEN" http://192.168.1.8/compute/v2.1/
HTTP/1.1 200 OK
Date: Wed, 18 Jul 2018 04:05:48 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips mod_wsgi/3.4 Python/2.7.5
Content-Length: 388
Content-Type: application/json
OpenStack-API-Version: compute 2.1
X-OpenStack-Nova-API-Version: 2.1
Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version
x-openstack-request-id: req-7469810f-ecd4-40fa-84ba-65a5d73756ec
x-compute-request-id: req-7469810f-ecd4-40fa-84ba-65a5d73756ec
Connection: close

{"version": {"status": "CURRENT", "updated": "2013-07-23T11:33:21Z",
"links": [{"href": "http://192.168.1.8/compute/v2.1/", "rel": "self"},
{"href": "http://docs.openstack.org/", "type": "text/html", "rel":
"describedby"}], "min_version": "2.1", "version": "2.63", "media-types":
[{"base": "application/json", "type":
"application/vnd.openstack.compute+json;version=2.1"}], "id": "v2.1"}}

Tested with devstack (commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0).

[1] https://github.com/openstack/api-wg/blob/master/guidelines/microversion_specification.rst#version-discovery

** Affects: nova
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782284

Title:
  GET "/" returns "version" instead of expected "versions"

Status in OpenStack Compute (nova):
  New

Bug description:
  Whether one version or multiple versions are available, the dict key
  is expected to be "versions" per the version discovery guidelines [1].
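While the bug stands, a client can defensively accept both response shapes. A minimal sketch, assuming a hypothetical helper (`discovered_versions` is not part of any SDK), exercised against a trimmed copy of the buggy payload:

```python
def discovered_versions(doc):
    """Normalize a version-discovery document to a list of version dicts.

    The guideline says the top-level key should be "versions" (a list),
    but nova's GET /v2.1/ currently returns a single "version" dict.
    """
    if 'versions' in doc:
        return doc['versions']
    if 'version' in doc:
        return [doc['version']]
    raise ValueError('not a version discovery document')

# Trimmed copy of the buggy nova response above.
buggy = {'version': {'id': 'v2.1', 'status': 'CURRENT',
                     'min_version': '2.1', 'version': '2.63'}}
print([v['id'] for v in discovered_versions(buggy)])   # ['v2.1']
```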
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782284/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1781916] Re: nova-compute version cap leads to ValueError
Reading the docs on the upgrade_levels config options, only
[upgrade_levels]/compute supports 'auto', so it blows up because you are
setting 'auto' for everything.

** Changed in: nova
       Status: New => Invalid

https://bugs.launchpad.net/bugs/1781916

Title:
  nova-compute version cap leads to ValueError

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Setting the following config on a compute host leads to a stack trace.

  [upgrade_levels]
  compute = auto
  cells = auto
  intercell = auto
  cert = auto
  scheduler = auto
  conductor = auto
  console = auto
  consoleauth = auto
  network = auto
  baseapi = auto

  Stack trace:

  2018-07-16 14:02:39.011 474 ERROR nova   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 119, in _check_version_cap
  2018-07-16 14:02:39.011 474 ERROR nova     if not utils.version_is_compatible(self.version_cap, version):
  2018-07-16 14:02:39.011 474 ERROR nova   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_utils.py", line 40, in version_is_compatible
  2018-07-16 14:02:39.011 474 ERROR nova     if int(version_parts[0]) != int(imp_version_parts[0]): # Major
  2018-07-16 14:02:39.011 474 ERROR nova ValueError: invalid literal for int() with base 10: 'auto'

  The same setting on a controller node works fine. Is it not supposed
  to use these version caps on a compute node, or is this simply a bug
  in the code?

  # nova-compute --version
  17.0.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1781916/+subscriptions
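The failure in the bug above is easy to see in isolation: the RPC version cap is parsed as "major.minor" integers, so the literal string 'auto' (which only the [upgrade_levels]/compute option resolves to a real version) cannot be parsed. A minimal sketch of the comparison, not the actual oslo.messaging code:

```python
def version_is_compatible(cap, version):
    """Return True if `version` fits under the `cap` ("major.minor" strings)."""
    cap_major, cap_minor = map(int, cap.split('.'))
    ver_major, ver_minor = map(int, version.split('.'))
    # Majors must match exactly; the cap's minor must cover the requested minor.
    return cap_major == ver_major and cap_minor >= ver_minor

print(version_is_compatible('4.11', '4.5'))   # True
print(version_is_compatible('5.0', '4.11'))   # False: major mismatch
# version_is_compatible('auto', '4.11') raises:
#   ValueError: invalid literal for int() with base 10: 'auto'
```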
[Yahoo-eng-team] [Bug 1757407] Re: Notification sending sometimes hits the keystone API to get glance endpoints
Reviewed: https://review.openstack.org/564528
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=93b897348bde072969f67e43875ce08e5d420b8a
Submitter: Zuul
Branch: master

commit 93b897348bde072969f67e43875ce08e5d420b8a
Author: Balazs Gibizer
Date: Thu Apr 26 16:55:15 2018 +0200

    Call generate_image_url only for legacy notification

    The legacy instance.exists notification includes the full URL of the
    glance image of the given instance, but the versioned notification
    only includes the image UUID. Generating the full URL can be a
    costly operation, as it needs to talk to keystone, so this patch
    makes sure that generate_image_url is only called when the generated
    information will be used.

    Change-Id: I78c2a34b3d03438457cc968cd0a38b8131e4f6e6
    Closes-Bug: #1757407

** Changed in: nova
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1757407

Title:
  Notification sending sometimes hits the keystone API to get glance
  endpoints

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  During the investigation of another bug [1] we noticed that sending a
  notification could trigger a keystone API call if the glance/api_servers
  config is not present in nova.conf. The notification code paths [2][3]
  call info_from_instance [4], which leads to the glance client
  get_api_servers function [5] that falls back to keystone to get the
  endpoints if the above config is not present.

  The versioned notifications do not use the glance endpoint information.
  However, even if notifications/notification_format is set to versioned,
  nova still hits keystone via the instance.exists notification code path
  [3], as that path is shared between versioned and unversioned
  notifications. This leads to an unnecessary REST API call whose result
  is never used, so the resulting performance loss is totally unnecessary.
  [1] https://bugs.launchpad.net/nova/+bug/1753550
  [2] https://github.com/openstack/nova/blob/db0747591ce8df1b0ca62aac0648b7154fed1f86/nova/compute/utils.py#L305
  [3] https://github.com/openstack/nova/blob/6eccfb7c01b7e984cb18c7b75bd20a589dfdfe3d/nova/notifications/base.py#L212
  [4] https://github.com/openstack/nova/blob/6eccfb7c01b7e984cb18c7b75bd20a589dfdfe3d/nova/notifications/base.py#L381
  [5] https://github.com/openstack/nova/blob/24379f1822e3ae1d4f7c8398e60af6e52b386c32/nova/image/glance.py#L126

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1757407/+subscriptions
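The shape of the fix above is ordinary lazy evaluation: defer the expensive lookup until a consumer actually needs it. A sketch with hypothetical names (`FakeImageApi`, `build_payload`), not the actual nova code:

```python
class FakeImageApi:
    """Stand-in for the glance client; counts expensive endpoint lookups."""
    def __init__(self):
        self.keystone_calls = 0

    def generate_image_url(self, image_ref):
        self.keystone_calls += 1          # pretend this hits keystone
        return 'http://glance.example/images/%s' % image_ref

def build_payload(image_api, image_ref, versioned):
    # Versioned payloads carry only the UUID; only the legacy payload
    # needs the full URL, so only that branch pays for the lookup.
    if versioned:
        return {'image_uuid': image_ref}
    return {'image_url': image_api.generate_image_url(image_ref)}

api = FakeImageApi()
payload = build_payload(api, 'abc-123', versioned=True)
print(payload, api.keystone_calls)   # {'image_uuid': 'abc-123'} 0
```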
[Yahoo-eng-team] [Bug 1777640] Re: Neutron doesn't work with Eventlet >= 0.22
Reviewed: https://review.openstack.org/576638
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d1efeeb433f090fabb02a07eabfa66576ebea9ea
Submitter: Zuul
Branch: master

commit d1efeeb433f090fabb02a07eabfa66576ebea9ea
Author: Brian Haley
Date: Tue Jun 19 16:26:46 2018 -0400

    Fix UnixDomainHttpProtocol class to support all eventlet versions

    It was recently decided to uncap eventlet:
    http://lists.openstack.org/pipermail/openstack-dev/2018-April/129096.html
    So eventlet is now capped at 0.20 in upper-constraints rather than in
    global requirements, because currently not every OpenStack project is
    able to work with a newer eventlet version, mostly because of the
    caps in the projects' requirements.txt. According to
    global-requirements, the last allowed version of eventlet is 0.22.1:
    https://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt

    In an effort to support both eventlet<0.22 and eventlet>=0.22, change
    the code to try and determine the correct number of arguments to use
    in the call to initialize the parent class.

    Change-Id: Ibe3dc8af6cf9f8bb4f8eababb7f4276e4db3f1f9
    Closes-bug: #1777640

** Changed in: neutron
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1777640

Title:
  Neutron doesn't work with Eventlet >= 0.22

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/#/c/561953/ tried to make Neutron
  compatible with the two flavors of eventlet.wsgi.HttpProtocol.__init__:
  the old one taking three arguments and the new one taking two. But it
  is not a full solution, because UnixDomainHttpProtocol.__init__ is
  still only able to work with three arguments.
  The Pike version of Neutron with the backported
  2c31f7f35129cb2160592633e52083b412d6c2cd fix fails with

    File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 458, in fire_timers
      timer()
    File "/usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
      cb(*args, **kw)
    File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 218, in main
      result = function(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 781, in process_request
      proto.__init__(conn_state, self)
  TypeError: __init__() takes exactly 4 arguments (3 given)

  because eventlet tries to use UnixDomainHttpProtocol.__init__ in the
  new way. Most likely __init__ in UnixDomainHttpProtocol should accept
  a variable number of arguments, pick and modify the address if needed,
  and call the appropriate form of the base class __init__.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1777640/+subscriptions
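The compatibility shim the commit describes (inspect the base-class signature, then call whichever form this eventlet version expects) can be sketched as follows. `LegacyBase`, `ModernBase`, and `make_protocol` are stand-ins for the two eventlet `HttpProtocol` signatures, not eventlet code:

```python
import inspect

class LegacyBase:
    """eventlet < 0.22 style: (request, client_address, server)."""
    def __init__(self, request, client_address, server):
        self.server = server

class ModernBase:
    """eventlet >= 0.22 style: (conn_state, server)."""
    def __init__(self, conn_state, server):
        self.server = server

def make_protocol(base):
    class UnixDomainProtocol(base):
        def __init__(self, *args):
            # Count the base __init__ parameters (minus self) and call
            # whichever form this base class expects.
            n_params = len(inspect.signature(base.__init__).parameters) - 1
            if n_params == 2:
                conn_state, server = args
                super().__init__(conn_state, server)
            else:
                request, client_address, server = args
                super().__init__(request, client_address, server)
    return UnixDomainProtocol

old = make_protocol(LegacyBase)('req', ('', 0), 'server')
new = make_protocol(ModernBase)('conn_state', 'server')
```

Both instantiations succeed because the subclass adapts itself to the arity of whichever parent it was given.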
[Yahoo-eng-team] [Bug 1779826] Re: Unshelve instance can't find the image error message is not updated to the instance fault table
Reviewed: https://review.openstack.org/579747
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b26df4fa6cd78a9b60de79750d4e9a58980d13ca
Submitter: Zuul
Branch: master

commit b26df4fa6cd78a9b60de79750d4e9a58980d13ca
Author: zhangbailin
Date: Tue Jul 3 11:08:04 2018 +0800

    Add unshelve instance error info to fault table

    When the image of a SHELVED_OFFLOADED instance is deleted, the
    unshelve operation reports an error, but the fault info is not added
    to the instance fault table.

    Closes-bug: #1779826
    Change-Id: I365fcc148b27959acad1d3c4f8bb45c1ed790318

** Changed in: nova
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1779826

Title:
  Unshelve instance can't find the image error message is not updated to
  the instance fault table

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the image of an instance in the SHELVED_OFFLOADED state is
  deleted and the unshelve operation is then executed, the unshelve
  fails, but there is no corresponding error message in the fault table
  of the target instance.

  You can reproduce the bug with the following steps:
  1. Create an instance in your environment, named "instance01", then
     do "Shelve Instance";
  2. If the first step succeeds, it generates an image named
     "instance01-shelved"; select and delete it;
  3. Return to the instance list, find the target instance, and do
     "Unshelve Instance". The status of this instance will be set to
     Error, but there is no error message in the instance fault table.

  As a user I can see that the instance hit an error, but I can't tell
  why it went wrong. This is unfriendly.
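The shape of the fix is to record the failure before re-raising, so the error surfaces in the instance fault table. A generic sketch with hypothetical names (`FaultTable`, `unshelve`), not the actual nova code:

```python
class FaultTable:
    """Stand-in for the instance fault table."""
    def __init__(self):
        self.rows = []

    def add(self, instance_id, exc):
        self.rows.append({'instance': instance_id, 'message': str(exc)})

def unshelve(instance_id, image_exists, faults):
    try:
        if not image_exists:
            raise RuntimeError('Image not found for shelved instance')
    except Exception as exc:
        # Record the fault *before* propagating, so the user can see a
        # reason for the ERROR state instead of a bare status change.
        faults.add(instance_id, exc)
        raise

faults = FaultTable()
try:
    unshelve('uuid-1', image_exists=False, faults=faults)
except RuntimeError:
    pass
print(faults.rows[0]['message'])   # Image not found for shelved instance
```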
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1779826/+subscriptions
[Yahoo-eng-team] [Bug 1772345] Re: DEFAULT_SERVICE_REGIONS overrides services_region cookie
Reviewed: https://review.openstack.org/571086
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=88fb01884010758d302f651b2525d96132326db1
Submitter: Zuul
Branch: master

commit 88fb01884010758d302f651b2525d96132326db1
Author: Adrian Turjak
Date: Wed May 30 15:49:04 2018 +1200

    Rework DEFAULT_SERVICE_REGIONS

    DEFAULT_SERVICE_REGIONS is cumbersome when you have more than one
    keystone endpoint and all you want to do is set a global default
    service region. This adds '*' as an optional fallback key to mean
    global default. If an endpoint matches, it will take precedence over
    the '*' value.

    This also fixes the precedence order for DEFAULT_SERVICE_REGIONS so
    that a user-controlled cookie is used instead when that cookie is
    valid for the given catalog. This changes the way the setting works,
    but retains the result the setting was originally intended for.

    Change-Id: Ieefbd642d853fcfcf22a17d9edcc7daae72790a4
    blueprint: global-default-service-region
    Closes-Bug: #1772345
    Related-Bugs: #1359774 #1703390

** Changed in: horizon
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1772345

Title:
  DEFAULT_SERVICE_REGIONS overrides services_region cookie

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  DEFAULT_SERVICE_REGIONS makes it sound like it sets default regions
  when a services_region cookie isn't present, when in fact it takes
  precedence over the cookie. This is terrible UX: a user who can change
  to another valid region is forced on login back to the region defined
  in DEFAULT_SERVICE_REGIONS, and then has to switch manually to the
  region they want every time. The cookie, a user-controlled value,
  should always take precedence over a 'default' value. If we want to
  allow a forceful override, it should be called something else, such as
  FORCED_SERVICE_REGIONS.
  The settings documentation doesn't even make this behavior clear,
  which further makes me think this is a bug or an unconsidered part of
  the implementation:
  https://docs.openstack.org/horizon/latest/configuration/settings.html#default-service-regions

  We should change the precedence to:
  1. services_region cookie
  2. DEFAULT_SERVICE_REGIONS endpoint
  3. any valid region from the catalog

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1772345/+subscriptions
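The precedence the bug proposes is a three-step fallback. A sketch with a hypothetical function name (`pick_region`), not horizon's actual code:

```python
def pick_region(cookie_region, default_region, catalog_regions):
    """Resolve the active region: a cookie that is still valid for the
    catalog wins, then the configured default, then any valid region."""
    if cookie_region in catalog_regions:
        return cookie_region
    if default_region in catalog_regions:
        return default_region
    return catalog_regions[0]

regions = ['RegionOne', 'RegionTwo']
print(pick_region('RegionTwo', 'RegionOne', regions))  # RegionTwo
print(pick_region('Stale', 'RegionOne', regions))      # RegionOne
```

The key design point is that the user-controlled cookie is only overridden when it is no longer valid for the catalog, never merely because a default is configured.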
[Yahoo-eng-team] [Bug 1781880] Re: nova service-list for new compute service causes TypeError with servicegroup_driver mc
Looks like the correct analysis of the bug. We don't test the mc
servicegroup driver or the enable_new_services config option very well,
or together (obviously).

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
       Status: Confirmed => Triaged

** Tags added: memcache servicegroup

** Also affects: nova/queens
   Importance: Undecided
       Status: New

** Also affects: nova/pike
   Importance: Undecided
       Status: New

** Changed in: nova/pike
       Status: New => Confirmed

** Changed in: nova/queens
       Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

https://bugs.launchpad.net/bugs/1781880

Title:
  nova service-list for new compute service causes TypeError with
  servicegroup_driver mc

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Description
  ===========
  nova service-list for a new compute service causes a TypeError.
  Related commit:
  https://git.openstack.org/cgit/openstack/nova/commit/?id=0df91a7f799060cd2e9b8a0adac1efacb974bcb3

  Steps to reproduce
  ==================
  1. Set servicegroup_driver=mc and enable_new_services=False in
     /etc/nova/nova.conf
  2. Add a new compute host and start the nova-compute service. The
     service becomes disabled, and the updated_at field is NULL in the
     nova.services table in the DB.
  3. Execute nova service-list.

  Expected result
  ===============
  List of the nova services.

  Actual result
  =============
  ERROR (ClientException): Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.
  (HTTP 500) (Request-ID: req-cbd9f4ca-7634-4378-8dcc-5a03d9d4193f)

  Environment
  ===========
  OpenStack Pike

  Logs
  ====
  Trace in the nova-api-os-compute logs:

  ERROR nova.api.openstack.extensions [req-6034517c-bc29-4ebe-931e-8726fd934bee e05fb82b34cd4265a839f2482debb973 b5d263d6d7c24b84b335f4f00ae6d7c9] Unexpected exception in API method: TypeError: can't compare datetime.datetime to NoneType
  TRACE nova.api.openstack.extensions Traceback (most recent call last):
  TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 336, in wrapped
  TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/services.py", line 261, in index
  TRACE nova.api.openstack.extensions     _services = self._get_services_list(req, ['forced_down'])
  TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/services.py", line 115, in _get_services_list
  TRACE nova.api.openstack.extensions     for svc in _services]
  TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/services.py", line 83, in _get_service_detail
  TRACE nova.api.openstack.extensions     updated_time = self.servicegroup_api.get_updated_time(svc)
  TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/servicegroup/api.py", line 84, in get_updated_time
  TRACE nova.api.openstack.extensions     return self._driver.updated_time(member)
  TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/mc.py", line 81, in updated_time
  TRACE nova.api.openstack.extensions     if updated_time_in_db <= updated_time_in_mc:
  TRACE nova.api.openstack.extensions TypeError: can't compare datetime.datetime to NoneType
  TRACE nova.api.openstack.extensions

  2018-07-16 11:40:16,437.437 2096 INFO nova.api.openstack.wsgi [req-6034517c-bc29-4ebe-931e-8726fd934bee e05fb82b34cd4265a839f2482debb973 b5d263d6d7c24b84b335f4f00ae6d7c9] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1781880/+subscriptions
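The crash in the bug above is a naive `<=` between a NULL `updated_at` from the DB and a datetime from memcached. A None-safe comparison can be sketched like this (`newer_timestamp` is a hypothetical helper, not the actual mc driver code):

```python
import datetime

def newer_timestamp(db_time, mc_time):
    """Return the more recent of two timestamps, tolerating a missing one.

    A freshly added, disabled service has updated_at = NULL in the DB,
    so a bare `db_time <= mc_time` raises TypeError when db_time is None.
    """
    if db_time is None:
        return mc_time
    if mc_time is None:
        return db_time
    return max(db_time, mc_time)

now = datetime.datetime(2018, 7, 16, 11, 40)
print(newer_timestamp(None, now) == now)   # True: no TypeError
```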
[Yahoo-eng-team] [Bug 1782197] [NEW] Mapping Engine Tester is untested
Public bug reported:

Looking at a coverage report for the Keystone CLI shows that the
entirety of

  class MappingEngineTester(BaseApp):

is untested. Since this is production and supported code, this is a
risk.

** Affects: keystone
     Importance: Undecided
         Status: New

https://bugs.launchpad.net/bugs/1782197

Title:
  Mapping Engine Tester is untested

Status in OpenStack Identity (keystone):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1782197/+subscriptions
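One low-friction way to start covering a CLI subcommand like this is to drive it through its argument parser in a unit test, with no process spawning. A generic sketch in the BaseApp style; the class name, flag, and return value here are illustrative, not keystone's actual interface:

```python
import argparse

class MappingTesterApp:
    """Illustrative CLI subcommand in the keystone BaseApp style."""
    name = 'mapping_engine'

    @classmethod
    def add_argument_parser(cls, subparsers):
        parser = subparsers.add_parser(cls.name)
        parser.add_argument('--rules', required=True)
        parser.set_defaults(cmd_class=cls)
        return parser

    @staticmethod
    def main(args):
        return 'testing rules from %s' % args.rules

# Exercise the command exactly as a test would: parse argv, run main().
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
MappingTesterApp.add_argument_parser(subparsers)
args = parser.parse_args(['mapping_engine', '--rules', 'rules.json'])
print(args.cmd_class.main(args))   # testing rules from rules.json
```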
[Yahoo-eng-team] [Bug 1782191] [NEW] Newly added z/VM driver is not in feature support matrix
Public bug reported:

We added the zVM driver in Rocky with limited capabilities:
https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky

So it needs to be documented in the feature support matrix docs:
https://docs.openstack.org/nova/latest/user/support-matrix.html

I believe in Rocky it supports:
- spawn/destroy
- stop/start
- pause/unpause
- reboot
- snapshot
- get console output

** Affects: nova
     Importance: Medium
         Status: Triaged

** Tags: docs zvm

https://bugs.launchpad.net/bugs/1782191

Title:
  Newly added z/VM driver is not in feature support matrix

Status in OpenStack Compute (nova):
  Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782191/+subscriptions
[Yahoo-eng-team] [Bug 1761140] Re: [SRU] dpkg error processing package nova-compute
This bug was fixed in the package nova - 2:15.1.3-0ubuntu1~cloud1

---------------
nova (2:15.1.3-0ubuntu1~cloud1) xenial-ocata; urgency=medium
.
  * d/control: Drop circular dependencies. nova-compute depends on
    nova-compute-* packages. nova-compute-* packages shouldn't depend on
    nova-compute. nova-compute-* should however depend on nova-common.
    (LP: #1761140).

** Changed in: cloud-archive/ocata
       Status: Fix Committed => Fix Released

https://bugs.launchpad.net/bugs/1761140

Title:
  [SRU] dpkg error processing package nova-compute

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Triaged
Status in nova source package in Bionic:
  Triaged
Status in nova source package in Cosmic:
  Fix Released

Bug description:
  [Impact]

  Hello! I've encountered the bug while installing Nova on compute
  nodes:

  ...
  Setting up qemu-system-x86 (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up qemu-kvm (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up qemu-utils (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up python-keystone (2:13.0.0-0ubuntu1~cloud0) ...
  Processing triggers for initramfs-tools (0.122ubuntu8.11) ...
  update-initramfs: Generating /boot/initrd.img-4.4.0-116-generic
  Setting up nova-compute-libvirt (2:17.0.1-0ubuntu1~cloud0) ...
  adduser: The user `nova' does not exist.
  dpkg: error processing package nova-compute-libvirt (--configure):
   subprocess installed post-installation script returned error exit status 1
  dpkg: dependency problems prevent configuration of nova-compute-kvm:
   nova-compute-kvm depends on nova-compute-libvirt (= 2:17.0.1-0ubuntu1~cloud0); however:
    Package nova-compute-libvirt is not configured yet.
  dpkg: error processing package nova-compute-kvm (--configure):
   dependency problems - leaving unconfigured
  Setting up python-os-brick (2.3.0-0ubuntu1~cloud0) ...
  No apport report written because the error message indicates its a followup error from a previous failure.
  Setting up python-nova (2:17.0.1-0ubuntu1~cloud0) ...
  Setting up nova-common (2:17.0.1-0ubuntu1~cloud0) ...
  Setting up nova-compute (2:17.0.1-0ubuntu1~cloud0) ...
  Processing triggers for libc-bin (2.23-0ubuntu10) ...
  Processing triggers for systemd (229-4ubuntu21.2) ...
  Processing triggers for ureadahead (0.100.0-19) ...
  Processing triggers for dbus (1.10.6-1ubuntu3.3) ...
  Errors were encountered while processing:
   nova-compute-libvirt
   nova-compute-kvm
  ...

  Installation failed when executing the post-installation script. After
  some investigation I found out that if I create the 'nova' user BEFORE
  running the package installation, it succeeds.

  [Test Case]

  Steps to reproduce
  ------------------
  1. Prepare the node for installing nova-compute packages
  2. Run 'apt-get install nova-compute'

  Expected result
  ---------------
  nova-compute installed successfully without errors

  Actual result
  -------------
  Installation failed with a dpkg error

  Workaround
  ----------
  1. Create a system user: add to /etc/passwd
     nova:x:64060:64060::/var/lib/nova:/bin/false
  2. Create a system group: add to /etc/group
     nova:x:64060:
  3. Run 'apt-get install nova-compute'

  My Environment
  --------------
  Ubuntu 16.04.4 LTS, 4.4.0-116-generic
  OpenStack Queens Release
  Nova 17.0.1-0ubuntu1

  [Regression Potential]

  Should be very low. This is a very minor dependency change to prevent
  a circular dependency loop.
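The packaging fix breaks a cycle in the dependency graph, since dpkg cannot find a safe configure order for a cycle whose members all have configure-time scripts. Detecting such a cycle is a standard depth-first search; a sketch over a toy graph based on the relationships named in the changelog, not real package metadata:

```python
def find_cycle(deps, start):
    """DFS over a dependency dict; return a dependency cycle if one exists."""
    path, seen = [], set()

    def visit(pkg):
        if pkg in path:
            return path[path.index(pkg):] + [pkg]   # cycle found
        if pkg in seen:
            return None
        seen.add(pkg)
        path.append(pkg)
        for dep in deps.get(pkg, []):
            cycle = visit(dep)
            if cycle:
                return cycle
        path.pop()
        return None

    return visit(start)

# Before the fix, nova-compute-libvirt depended back on nova-compute;
# after it, the nova-compute-* packages depend on nova-common instead.
before = {'nova-compute': ['nova-compute-kvm'],
          'nova-compute-kvm': ['nova-compute-libvirt'],
          'nova-compute-libvirt': ['nova-compute']}
after = {'nova-compute': ['nova-compute-kvm'],
         'nova-compute-kvm': ['nova-compute-libvirt'],
         'nova-compute-libvirt': ['nova-common']}
print(find_cycle(before, 'nova-compute'))
print(find_cycle(after, 'nova-compute'))   # None
```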
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1761140/+subscriptions
[Yahoo-eng-team] [Bug 1782141] [NEW] QoS L3 agent extension functional tests fail on CentOS: "rate" or "avrate" MUST be specified.
Public bug reported:

Running dsvm-functional tests on CentOS 7.5 and current master
(e3e91eb44c20500999c6435203f22d805de7e3ac), these tests fail with the
same error (all in
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR):

  test_connection_from_diff_address_scope
  test_connection_from_same_address_scope
  test_direct_route_for_address_scope
  test_dvr_gateway_move_does_not_remove_redirect_rules
  test_dvr_ha_router_failover_with_gw_and_floatingip
  test_dvr_ha_router_failover_with_gw
  test_dvr_non_ha_router_update
  test_dvr_router_gateway_redirect_cleanup_on_agent_restart
  test_dvr_router_gateway_update_to_none
  test_dvr_router_lifecycle_ha_with_snat_with_fips
  test_dvr_router_lifecycle_without_ha_with_snat_with_fips
  test_dvr_router_rem_fips_on_restarted_agent
  test_dvr_router_rule_and_route_table_cleared_when_fip_removed
  test_dvr_router_snat_namespace_with_interface_remove
  test_dvr_router_static_routes_in_fip_and_snat_namespace
  test_dvr_router_static_routes_in_snat_namespace_and_router_namespace
  test_dvr_router_update_on_restarted_agent_sets_rtr_fip_connect
  test_dvr_router_with_ha_for_fip_disassociation
  test_dvr_snat_namespace_has_ip_nonlocal_bind_disabled
  test_dvr_unused_snat_ns_deleted_when_agent_restarts_after_move
  test_floating_ip_migration_from_unbound_to_bound
  test_prevent_snat_rule_exist_on_restarted_agent
  test_snat_bound_floating_ip

2018-07-17 14:16:49.316 16568 ERROR neutron.agent.linux.utils [req-4b90c8c6-b3a3-4482-9e5f-6db77a806139 - - - - -] Exit code: 1; Stdin: ; Stdout: ; Stderr: "rate" or "avrate" MUST be specified.
Illegal "police"
: FilterIDForIPNotFound: Filter ID for IP 19.4.4.2 could not be found.
$ rpm -qf /usr/sbin/tc
iproute-4.11.0-14.el7.x86_64

** Affects: neutron
     Importance: Undecided
         Status: New

** Tags: functional-tests qos

** Attachment added: "Sample failure log"
   https://bugs.launchpad.net/bugs/1782141/+attachment/5164527/+files/neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_snat_namespace_has_ip_nonlocal_bind_disabled.txt

https://bugs.launchpad.net/bugs/1782141

Title:
  QoS L3 agent extension functional tests fail on CentOS: "rate" or
  "avrate" MUST be specified.

Status in neutron:
  New

Bug description:
  Running dsvm-functional tests on CentOS 7.5 and current master
  (e3e91eb44c20500999c6435203f22d805de7e3ac), these tests fail with the
  same error:
  neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_connection_from_diff_address_scope
[Yahoo-eng-team] [Bug 1781915] Re: QoS (DSCP Mark IDs) – No correlation between the implemented functionality and design
Reviewed: https://review.openstack.org/582974
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b935f9d9a7c7bc872b49d66ce6ee586283ce69c0
Submitter: Zuul
Branch: master

commit b935f9d9a7c7bc872b49d66ce6ee586283ce69c0
Author: Nate Johnston
Date: Mon Jul 16 11:26:20 2018 -0400

    Add list of all working DSCP marks

    There is no place in the documentation that explicitly lists the
    valid DSCP marks, except for an incomplete hint in the DSCP spec in
    neutron-specs. This provides an explicit list.

    Change-Id: Ic350c88e59c33d98b54086707c9add05cf137dc2
    Closes-Bug: #1781915

** Changed in: neutron
   Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1781915

Title:
  QoS (DSCP Mark IDs) – No correlation between the implemented functionality and design

Status in neutron:
  Fix Released

Bug description:
  ### General Description ###

  There is a “Proposed Change” regarding “QoS DSCP marking support”, as documented in:
  http://specs.openstack.org/openstack/neutron-specs/specs/newton/ml2-qos-with-dscp.html

  "We propose an update to the QoS API and OVS driver to support DSCP marks. Valid DSCP mark values can be between 0 and 56, except 2-6, 42, 44, and 50-54."

  ### Test scenario ###

  1) ssh to your Undercloud host
  2) Switch to the “stack” user with “su – stack“
  3) Source the overcloudrc file with “source overcloudrc”
  4) Upload the attached “Check_DSCP_Options.py” Python script (inside the attached *.zip) to your home directory (stack home) and run it with the “python Check_DSCP_Options.py” command while monitoring “server.log” on your controller node.

  ### Expected Result ###

  Valid DSCP mark values can be between 0 and 56, except 2-6, 42, 44, and 50-54.
  ### Actual Result ###

  1) Failed DSCP mark IDs: [1, 2, 3, 4, 5, 6, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 42, 43, 44, 45, 47, 49, 50, 51, 52, 53, 54, 55]

  2) As you can see, some DSCP mark IDs are supported and some are not, but the behavior differs from what the spec describes.

  3) The controller “server.log” is attached (inside the attached *.zip); for every ID listed above it contains a message like this one:

  2018-07-02 15:04:58.535 28 INFO neutron.api.v2.resource [req-74bb7060-b9b2-4542-bd8a-d805a8ff079c b4332e1592ab480f96fc87e5af797895 f3f03848a45746c7bcbe95b625d7e1d8 - default default] update failed (client error): Invalid input for dscp_mark. Reason: 55 is not in valid_values.

  [ashtempl@ashtempl ~]$ grep -i 'is not in' server.log | cut -d ':' -f5
  1 is not in valid_values.
  2 is not in valid_values.
  3 is not in valid_values.
  4 is not in valid_values.
  5 is not in valid_values.
  6 is not in valid_values.
  7 is not in valid_values.
  9 is not in valid_values.
  11 is not in valid_values.
  13 is not in valid_values.
  15 is not in valid_values.
  17 is not in valid_values.
  19 is not in valid_values.
  21 is not in valid_values.
  23 is not in valid_values.
  25 is not in valid_values.
  27 is not in valid_values.
  29 is not in valid_values.
  31 is not in valid_values.
  33 is not in valid_values.
  35 is not in valid_values.
  37 is not in valid_values.
  39 is not in valid_values.
  41 is not in valid_values.
  42 is not in valid_values.
  43 is not in valid_values.
  44 is not in valid_values.
  45 is not in valid_values.
  47 is not in valid_values.
  49 is not in valid_values.
  50 is not in valid_values.
  51 is not in valid_values.
  52 is not in valid_values.
  53 is not in valid_values.
  54 is not in valid_values.
  55 is not in valid_values.
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1781915/+subscriptions
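The set of marks the API actually accepts can be reconstructed from the failure list above by complementing the rejected IDs over the 0–56 range. A short sketch (the rejected set is copied verbatim from the report; the resulting list is the even-numbered standard DSCP code points, which is what the documentation fix above sets out to list explicitly):

```python
# Rejected DSCP mark IDs, taken directly from the "Actual Result"
# section of this bug report.
REJECTED = {1, 2, 3, 4, 5, 6, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25,
            27, 29, 31, 33, 35, 37, 39, 41, 42, 43, 44, 45, 47, 49,
            50, 51, 52, 53, 54, 55}

# Everything in 0..56 that the API did NOT reject: the marks that
# actually work, i.e. the standard CS/AF/EF code points.
VALID_DSCP_MARKS = [m for m in range(0, 57) if m not in REJECTED]
```

Running this yields `[0, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 46, 48, 56]`, which is noticeably narrower than the "0 to 56, except 2-6, 42, 44, and 50-54" wording in the old spec text.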
[Yahoo-eng-team] [Bug 1782119] [NEW] Downtime of volume backed live migration between two compute nodes (different version: liberty-mitaka) is too high
Public bug reported:

Hi everyone,

I'm working on upgrading OpenStack from Liberty to Mitaka. I've upgraded my controller to Mitaka, so the Mitaka controller now manages both Liberty computes and Mitaka computes. After that, I live-migrate VMs from a Liberty compute to a Mitaka compute.

When live-migrating between two computes running different versions, the downtime is much higher (30 ICMP packets lost at a 200 ms interval) than between two computes running the same version (5 ICMP packets lost at a 200 ms interval). In summary:

  live-migration between liberty-liberty computes: 5 ICMP packets lost at a 200 ms interval
  live-migration between liberty-mitaka computes: 30 ICMP packets lost at a 200 ms interval

I don't know why this happens.

My ENV:
  1 controller (Mitaka)
  2 computes (Liberty)
  1 compute (Mitaka)
  OVS ML2 plugin with DVR
  Ceph backend storage

Thanks

** Affects: nova
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1782119

Title:
  Downtime of volume backed live migration between two compute nodes (different version: liberty-mitaka) is too high

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782119/+subscriptions
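The reported packet counts translate into a rough downtime estimate: with a 200 ms probe interval, n lost probes correspond to roughly n × 0.2 s of outage. A small sketch using the numbers from the report above:

```python
def estimated_downtime_s(lost_packets, interval_s=0.2):
    """Rough lower bound on the network outage implied by lost ICMP
    probes sent at a fixed interval."""
    return lost_packets * interval_s

# Same-version (liberty -> liberty) migration: 5 lost probes ~= 1 s.
same_version = estimated_downtime_s(5)
# Cross-version (liberty -> mitaka) migration: 30 lost probes ~= 6 s.
cross_version = estimated_downtime_s(30)
```

So the cross-version migration shows roughly six times the outage of the same-version case, which is why the reporter flags it as "too high".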
[Yahoo-eng-team] [Bug 1713104] Re: Network.service not always finishing before cloud-init.service starts on centOS
Thank you for the report. Since the report is about CentOS rather than Ubuntu, I'm changing the bug task from cloud-init in Ubuntu to cloud-init upstream.

** Package changed: cloud-init (Ubuntu) => cloud-init

--
https://bugs.launchpad.net/bugs/1713104

Title:
  Network.service not always finishing before cloud-init.service starts on centOS

Status in cloud-init:
  New

Bug description:
  While https://github.com/cloud-init/cloud-init/commit/dcbe479575fac9f293c5c4089f4bcb46ab887206#diff-27b6e651d75e0328135ddc43d3a83703 helped standardize systemd across distros, we have run into intermittent network failures for CentOS after using it.

  ~5% of provisions fail with "Job network.service/start deleted to break ordering cycle starting with cloud-init.service/start"

  A re-run of cloud-init will fix this problem as a workaround.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1713104/+subscriptions
[Yahoo-eng-team] [Bug 1713104] [NEW] Network.service not always finishing before cloud-init.service starts on centOS
You have been subscribed to a public bug:

While https://github.com/cloud-init/cloud-init/commit/dcbe479575fac9f293c5c4089f4bcb46ab887206#diff-27b6e651d75e0328135ddc43d3a83703 helped standardize systemd across distros, we have run into intermittent network failures for CentOS after using it.

~5% of provisions fail with "Job network.service/start deleted to break ordering cycle starting with cloud-init.service/start"

A re-run of cloud-init will fix this problem as a workaround.

** Affects: cloud-init
   Importance: Undecided
   Status: New

--
Network.service not always finishing before cloud-init.service starts on centOS
https://bugs.launchpad.net/bugs/1713104
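Because the failure is intermittent (~5% of provisions), affected hosts have to be found after the fact from their boot logs. A hypothetical helper (the function and its use are illustrations, not part of cloud-init) that scans journal output for the exact message systemd emits when it breaks an ordering cycle, so the cloud-init re-run workaround could be applied automatically:

```python
# Hypothetical detector for the failure mode described in this report:
# systemd logs "... deleted to break ordering cycle ..." when it drops
# a job to resolve a dependency loop.
CYCLE_MARKER = "deleted to break ordering cycle"

def find_ordering_cycle(journal_lines):
    """Return the journal lines where systemd broke a dependency cycle."""
    return [line for line in journal_lines if CYCLE_MARKER in line]
```

Feeding it the output of something like `journalctl -b -u cloud-init.service` (exact invocation depends on the distro) would flag the ~5% of boots where `network.service/start` was deleted and cloud-init therefore ran without networking.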
[Yahoo-eng-team] [Bug 1743425] Re: Changes to VLAN mapping results in "is not mapped" error
** Changed in: neutron
   Status: In Progress => Fix Released

--
https://bugs.launchpad.net/bugs/1743425

Title:
  Changes to VLAN mapping results in "is not mapped" error

Status in neutron:
  Fix Released

Bug description:
  With neutron-server, if you enable type_drivers = vlan, set a mapping in [ml2_type_vlan], and then install the neutron database, the service will start and map successfully. However, if you change the mapping afterwards and restart the service, you will receive this error:

  2018-01-15 11:50:36.875 9764 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/type_vlan.py", line 133, in _sync_vlan_allocations
  2018-01-15 11:50:36.875 9764 ERROR neutron     ctx.session.delete(alloc)
  2018-01-15 11:50:36.875 9764 ERROR neutron   File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1744, in delete
  2018-01-15 11:50:36.875 9764 ERROR neutron     raise exc.UnmappedInstanceError(instance)
  2018-01-15 11:50:36.875 9764 ERROR neutron UnmappedInstanceError: Class 'neutron.objects.plugins.ml2.vlanallocation.VlanAllocation' is not mapped
  2018-01-15 11:50:36.875 9764 ERROR neutron

  plugin.ini:

  [ml2]
  type_drivers = vlan
  tenant_network_types = vlan
  mechanism_drivers = openvswitch

  [ml2_type_vlan]
  network_vlan_ranges = physnet0:2:4000

  Sync the DB:

  su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  Start neutron-server. Success!

  Now change plugin.ini to the following:

  [ml2_type_vlan]
  network_vlan_ranges = physnet1:2:4000

  Restart neutron-server. Failure!
  Versions:

  openstack-neutron-11.0.2-3.el7.noarch
  openstack-neutron-ml2-11.0.2-3.el7.noarch
  openstack-neutron-common-11.0.2-3.el7.noarch
  openstack-neutron-openvswitch-11.0.2-3.el7.noarch
  openstack-neutron-linuxbridge-11.0.2-3.el7.noarch
  python-neutron-11.0.2-3.el7.noarch
  python-neutron-lib-1.9.1-1.el7.noarch
  python2-neutronclient-6.5.0-1.el7.noarch

  pip:

  neutron==11.0.2
  neutron-lib==1.9.1
  python-neutronclient==6.5.0

  CentOS 7 @ 3.10.0-693.11.6.el7.x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1743425/+subscriptions
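For context, the `network_vlan_ranges` value being changed in this report has the form `physnet:min_vlan:max_vlan`. A minimal sketch (an illustration of the config syntax, not the neutron parser) of how such an entry decomposes:

```python
def parse_vlan_range(entry):
    """Split a network_vlan_ranges entry like 'physnet0:2:4000' into
    (physnet, (min_vlan, max_vlan)); a bare 'physnet' means the whole
    physical network is usable with no VLAN range restriction."""
    parts = entry.split(":")
    if len(parts) == 1:
        return parts[0], None
    physnet, vlan_min, vlan_max = parts
    return physnet, (int(vlan_min), int(vlan_max))
```

The change that triggers the bug above only swaps the physnet label: `physnet0:2:4000` and `physnet1:2:4000` describe the same VLAN range 2–4000 on different physical networks, yet the sync of existing allocations fails.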
[Yahoo-eng-team] [Bug 1777547] Re: [Doc] [FWaaS] Configuration of FWaaS v1 is confused
Reviewed: https://review.openstack.org/576337
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=fe4bec79916c2c92932a4cb6be86515f719372ce
Submitter: Zuul
Branch: master

commit fe4bec79916c2c92932a4cb6be86515f719372ce
Author: miaoyuliang
Date: Tue Jun 19 11:34:09 2018 -0400

    Fix fwaas v1 configuration doc

    Modify the fwaas v1 config about the driver.

    Change-Id: Id6821174a15838713435a499a258f6d37a9cad2a
    Closes-Bug: #1777547

** Changed in: neutron
   Status: In Progress => Fix Released

--
https://bugs.launchpad.net/bugs/1777547

Title:
  [Doc] [FWaaS] Configuration of FWaaS v1 is confused

Status in neutron:
  Fix Released

Bug description:
  On the page https://docs.openstack.org/neutron/latest/admin/fwaas-v1-scenario.html the configuration is confusing.

  The configuration in step 1 is:

  [fwaas]
  driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
  enabled = True

  But in step 2, the configuration is:

  [fwaas]
  agent_version = v1
  driver = iptables
  enabled = True
  conntrack_driver = conntrack

  The "driver" value is different. Although it works, it confuses users.

  This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: input and output.
  If you have a troubleshooting or support issue, use the following resources:

  - Ask OpenStack: http://ask.openstack.org
  - The mailing list: http://lists.openstack.org
  - IRC: 'openstack' channel on Freenode

  ---
  Release: 13.0.0.0b3.dev44 on 2018-06-19 01:09
  SHA: abbd534fdfa8ba64fa71648503810f6b543ddd6d
  Source: https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/admin/fwaas-v1-scenario.rst
  URL: https://docs.openstack.org/neutron/latest/admin/fwaas-v1-scenario.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1777547/+subscriptions
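The reason both spellings work, per the report, is that the short name and the full class path name the same driver. A sketch of that idea (the alias table and resolver are assumptions for illustration; the two spellings themselves are copied from the doc's config examples):

```python
# Hypothetical resolver, NOT the fwaas agent code: map a short driver
# alias to the full class path, and pass full paths through unchanged.
DRIVER_ALIASES = {
    "iptables": "neutron_fwaas.services.firewall.drivers.linux."
                "iptables_fwaas.IptablesFwaasDriver",
}

def resolve_driver(name):
    """Return the full driver path for a known alias, else the name
    as given."""
    return DRIVER_ALIASES.get(name, name)
```

Under this model `driver = iptables` and the full `driver = neutron_fwaas...IptablesFwaasDriver` resolve to the same class, which is why the doc's two steps both work despite looking contradictory.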
[Yahoo-eng-team] [Bug 1782075] [NEW] Image Create using the Task API, leaves Image in saving state when Task fails
Public bug reported:

Image create using the Task API leaves the image in the "saving" state when the task fails.

Steps to reproduce:

- Provide an invalid URL, or an unsupported (non-HTTP) URL, to the task API.
- The task API fails with the error message "Task failed due to Internal Error".
- Neither an image ID is returned nor is the image cleaned up from Glance when the task API fails.

Output from glance image-list:

+--------------------------------------+---------------+-------------+------------------+------+--------+
| ID                                   | Name          | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------+-------------+------------------+------+--------+
| 79b600bc-9cff-4754-9bbd-5d0de5f76852 | With_HTTPS    | qcow2       | bare             |      | saving |
| 8f16ce54-f20f-434a-af7d-bb11be4c5b3d | Test_JUNK_URL | qcow2       | bare             |      | saving |
+--------------------------------------+---------------+-------------+------------------+------+--------+

Expected:

- Either the image should be cleaned up when the task API fails, or the image ID must be returned to the user with the right error message/status.
- An appropriate error message must be returned, stating "HTTPS is NOT supported / Image URL is not reachable", instead of the generic error message "Task failed due to Internal Error".

Currently we see some errors in the Glance API logs ONLY when an unsupported URL is provided, but no error logs when the URL is not reachable.
2018-07-17 12:00:58.002 36721 INFO glance.domain [-] Task [83f30963-5334-41b0-996b-10d429562178] status changing from processing to processing
2018-07-17 12:00:58.031 36721 ERROR glance.async.flows.base_import [-] Bad task configuration: Task was not configured properly
2018-07-17 12:00:58.155 36721 INFO glance.common.scripts.image_import.main [-] Task 83f30963-5334-41b0-996b-10d429562178: Got image data uri https://10.43.33.103/nfvo_local_repo/ to be imported
2018-07-17 12:00:58.167 36721 WARNING glance.common.scripts.image_import.main [-] Task 83f30963-5334-41b0-996b-10d429562178 failed with exception
2018-07-17 12:00:58.167 36721 INFO glance.common.scripts.image_import.main [-] Task 83f30963-5334-41b0-996b-10d429562178: Could not import image file https://10.43.33.103/nfvo_local_repo/
2018-07-17 12:00:58.217 36721 WARNING glance.async.taskflow_executor [-] Task 'import-ImportToStore-83f30963-5334-41b0-996b-10d429562178' (2dd9f3f9-c22c-41b5-a826-602af0c79baa) transitioned into state 'FAILURE' from state 'RUNNING'
3 predecessors (most recent first):
Atom 'import-CreateImage-83f30963-5334-41b0-996b-10d429562178' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': '79b600bc-9cff-4754-9bbd-5d0de5f76852'}
|__Atom 'import_retry' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': [(None, {})]}
|__Flow 'import'

** Affects: glance
   Importance: Undecided
   Status: New

** Tags: in saving stuck

--
https://bugs.launchpad.net/bugs/1782075

Title:
  Image Create using the Task API, leaves Image in saving state when Task fails

Status in Glance:
  New
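The cleanup the report asks for can be sketched in a few lines. This is a hypothetical illustration, not Glance's actual task executor (the function and the fake repository/task interfaces are assumptions): on import failure, the placeholder image record is moved out of "saving" and the real failure reason is recorded on the task instead of a generic internal error.

```python
# Hypothetical sketch of task-failure cleanup, NOT glance code: instead
# of leaving the image stuck in "saving", mark it failed and surface
# the underlying error on the task.
def run_import_task(task, image_repo, do_import):
    image = image_repo.get(task.image_id)
    try:
        do_import(image)
        image.status = "active"
    except Exception as exc:
        # Release the stuck image record and keep the real reason.
        image.status = "killed"
        task.message = "Import failed: %s" % exc
    image_repo.save(image)
```

With a `do_import` that raises (e.g. because the URL scheme is unsupported or the URL is unreachable), the image ends up in a terminal state and the task message carries a specific error rather than "Task failed due to Internal Error".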