[Yahoo-eng-team] [Bug 1536226] Re: Not all .po files compiled
Reviewed:  https://review.openstack.org/319276
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=0adde01a06d81ab39f536794e178d0434e9783a2
Submitter: Jenkins
Branch:    master

commit 0adde01a06d81ab39f536794e178d0434e9783a2
Author: Sven Anderson
Date:   Fri May 20 16:20:48 2016 +0200

    Let setup.py compile_catalog process all language files

    Two years ago the translation files were split into several files,
    separating the log messages of different log levels from each
    other, like X.pot, X-log-warning.pot, X-log-info.pot, and so on.
    However, the setup.py command `compile_catalog`, which comes from
    the babel package and compiles the corresponding .po files into
    .mo files, only supported one file per Python package. This means
    that during packaging `compile_catalog` never compiled the
    X-log-*.po files, so the corresponding translations were always
    missing.

    Since babel 2.3 the domain can be set to a space-separated list of
    domains. This change adds the additional log-level files to the
    domain list. The obsolete check that .po and .pot files are valid
    is removed from tox.ini.

    Change-Id: I149c2254cb04297e598cfd3ca73b24efd0c8ef18
    Closes-Bug: #1536226

** Changed in: cinder
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536226

Title:
  Not all .po files compiled

Status in Cinder: Fix Released
Status in Glance: In Progress
Status in heat: Fix Released
Status in OpenStack Identity (keystone): Fix Released
Status in neutron: New
Status in OpenStack Compute (nova): Fix Released
Status in openstack i18n: New

Bug description:
  python setup.py compile_catalog only compiles one .po file per
  language to a .mo file. By default the domain is the project name,
  that is nova.po. This means all other nova-log-*.po files are never
  compiled.
  The only way to make setup.py compile the other files is to call it
  several times with different domains set, for instance `python
  setup.py compile_catalog --domain nova-log-info` and so on. Since
  this is not usual, it can be assumed that the usual packages don't
  contain all the .mo files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1536226/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
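The fix described in the commit can be sketched in a project's setup.cfg: with babel >= 2.3 the [compile_catalog] `domain` option accepts a space-separated list, so one `python setup.py compile_catalog` pass compiles every domain's .po files. A minimal sketch (the directory and the nova-* domain names are examples, not a verbatim copy of any project's config):

```ini
# setup.cfg -- sketch of a babel >= 2.3 configuration that compiles all
# translation domains in one `python setup.py compile_catalog` run.
[compile_catalog]
directory = nova/locale
# Space-separated list of domains, supported since babel 2.3; older
# babel releases only accepted a single domain per invocation.
domain = nova nova-log-critical nova-log-error nova-log-warning nova-log-info
```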
[Yahoo-eng-team] [Bug 1614538] Re: neutron: instance.info_cache isn't refreshed after deleting associated floating IP
Reviewed:  https://review.openstack.org/357494
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=cdb9b6820dc17971bca24adfc0b56f030f0ae827
Submitter: Jenkins
Branch:    master

commit cdb9b6820dc17971bca24adfc0b56f030f0ae827
Author: Michael Wurtz
Date:   Thu Aug 18 14:53:33 2016 -0500

    Refresh info_cache after deleting floating IP

    When deleting a floating IP associated with Neutron's info_cache
    we don't refresh the info_cache after it is deleted. This patch
    makes it so the info_cache is refreshed when an associated
    floating IP is deleted. If there is no info_cache associated with
    the floating IP, then the info_cache is not refreshed.

    Change-Id: I8a8ae8cdbe2d9d77e7f1ae94ebdf6e4ad46eaf00
    Closes-Bug: #1614538

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614538

Title:
  neutron: instance.info_cache isn't refreshed after deleting
  associated floating IP

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) liberty series: In Progress
Status in OpenStack Compute (nova) mitaka series: Confirmed

Bug description:
  Shown in a tempest test here:
  http://logs.openstack.org/95/356095/2/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/8d3cbb2/console.html#_2016-08-18_03_18_38_290951

  You can see from this patch that we refresh the instance's network
  info_cache (server.addresses) when deleting a floating IP associated
  with that instance, but only when using nova-network; we don't do it
  for neutron. This is related to bug 1586931 and the investigation
  that happened in https://review.openstack.org/#/c/351960/.
  Basically the problem is that this method isn't decorated with the
  refresh_cache decorator:
  https://github.com/openstack/nova/blob/d14fc79f65e04cc39a3988783344aecd84621291/nova/network/neutronv2/api.py#L1826

  But notice that this one is:
  https://github.com/openstack/nova/blob/d14fc79f65e04cc39a3988783344aecd84621291/nova/network/neutronv2/api.py#L1845

  That's the method that's called from the REST API when
  disassociating, but not deleting, a floating IP from a server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614538/+subscriptions
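The decorator pattern the report points at can be sketched as follows. All names here are illustrative stand-ins, not nova's actual implementation; the point is only that the wrapped call re-fetches network info afterwards, so a just-deleted floating IP disappears from the cache:

```python
import functools

def refresh_cache(func):
    """Re-fetch network info and store it on the instance after the
    wrapped call completes (illustrative sketch of the pattern the bug
    refers to; not nova's actual code)."""
    @functools.wraps(func)
    def wrapper(self, context, instance, *args, **kwargs):
        result = func(self, context, instance, *args, **kwargs)
        # Rebuild the cache from the backend so stale entries, such as
        # a just-deleted floating IP, are dropped.
        instance.info_cache = self._get_instance_nw_info(context, instance)
        return result
    return wrapper

class FakeNetworkAPI:
    """Toy stand-in showing where the decorator would be applied."""
    def _get_instance_nw_info(self, context, instance):
        return {"floating_ips": []}  # pretend this re-queries neutron

    @refresh_cache
    def release_floating_ip(self, context, instance, address):
        return address  # pretend this deletes the floating IP in neutron
```

The undecorated method in the bug behaves like `release_floating_ip` without `@refresh_cache`: the deletion succeeds, but `instance.info_cache` keeps the stale address.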
[Yahoo-eng-team] [Bug 1590587] Re: assigning a domain-specific role in domain A for a user to a project in domain B should be prohibited
Reviewed:  https://review.openstack.org/365177
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=73bdbe1f87ac3571bb5a348158ad1e4ece73fbcc
Submitter: Jenkins
Branch:    master

commit 73bdbe1f87ac3571bb5a348158ad1e4ece73fbcc
Author: Sean Perry
Date:   Fri Sep 2 16:48:54 2016 -0700

    Project domain must match role domain for assignment

    When assigning a domain-specific role to a user it is OK if the
    user is from a different domain, but the project's domain must
    match the role's domain.

    Closes-Bug: 1590587
    Change-Id: I1d63415de0130794939998c3e142ebdce9ddf39d

** Changed in: keystone
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590587

Title:
  assigning a domain-specific role in domain A for a user to a project
  in domain B should be prohibited

Status in OpenStack Identity (keystone): Fix Released

Bug description:
  Domain-specific roles are visible in their owning domains only.
  Therefore, assigning a domain-specific role in one domain to users
  for a project in another domain should be prohibited.

  To reproduce:
  1. create a domain-specific role "foo_domain_role" in the "foo" domain.
  2. create a project "bar_project" in the "bar" domain.
  3. create a user "bar_user" in the "bar" domain.
  4. now assign the role "foo_domain_role" to user "bar_user" for
     "bar_project"; this should yield 403 instead of 201.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590587/+subscriptions
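The rule the fix enforces reduces to one comparison; a sketch of it, with illustrative dict-based role/project records rather than keystone's actual models:

```python
def validate_grant(role, project):
    """Allow a grant unless the role is domain-specific and its domain
    differs from the project's domain. Per the commit message, the
    user's domain is deliberately NOT checked. Sketch only; the field
    names are illustrative."""
    role_domain = role.get("domain_id")
    if role_domain is None:
        return  # global role: assignable anywhere
    if role_domain != project["domain_id"]:
        # The API layer would translate this into an HTTP 403.
        raise PermissionError(
            "domain-specific role %s (domain %s) cannot be assigned on "
            "a project in domain %s"
            % (role["id"], role_domain, project["domain_id"]))
```

This matches the reproduction steps: `foo_domain_role` (domain "foo") on `bar_project` (domain "bar") is rejected, while a global role would be accepted.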
[Yahoo-eng-team] [Bug 1621310] [NEW] Material theme missing filter icon
Public bug reported:

The material theme is missing an icon for filter, so if a horizon
plugin uses fa-filter, the material theme just shows an empty box.

** Affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Milestone: None => ocata-2

** Changed in: horizon
   Milestone: ocata-2 => newton-rc1

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1621310

Title:
  Material theme missing filter icon

Status in OpenStack Dashboard (Horizon): New

Bug description:
  The material theme is missing an icon for filter, so if a horizon
  plugin uses fa-filter, the material theme just shows an empty box.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1621310/+subscriptions
[Yahoo-eng-team] [Bug 1538620] Re: Attach with host and instance_uuid not backwards compatible
** Changed in: cinder
   Status: In Progress => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538620

Title:
  Attach with host and instance_uuid not backwards compatible

Status in Cinder: Won't Fix
Status in OpenStack Compute (nova): Fix Released

Bug description:
  Patch https://review.openstack.org/#/c/266006/ added the ability for
  Cinder to accept both host and instance_uuid when doing an attach.
  This is not backwards compatible with earlier API versions, so when
  Nova calls attach with both arguments against versions prior to this
  change, it causes a failure.

  This information is needed for the multiattach work, but we should
  revert that change and try to find a cleaner way to do this that
  will not break backwards compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1538620/+subscriptions
[Yahoo-eng-team] [Bug 1429684] Re: Nova and Brick can log each other out of iscsi sessions
I don't believe this is an issue anymore, other than the caveat
discussed elsewhere that nova and cinder need to use the same lock_dir
if run on the same host. Please feel free to reopen if this is not the
case.

** Changed in: cinder
   Status: Triaged => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429684

Title:
  Nova and Brick can log each other out of iscsi sessions

Status in Cinder: Invalid
Status in OpenStack Compute (nova): Incomplete

Bug description:
  Brick and nova are not synchronized with the same connect_volume
  lock. This can cause nova or cinder to log out of an iSCSI portal
  while the other one is attempting to use it, if nova and cinder are
  on the same node. This may seem like a rare situation, but it
  commonly occurs in our CI system, as we perform many operations
  involving both Nova and Brick concurrently, likely when
  attaching/detaching to an instance while attaching to the node
  directly for image operations.

  In the case below, cinder logged out of the iSCSI session while nova
  was retrying rescans attempting to detect the new LUN.
  Cinder volume logs:

  2015-03-07 17:27:14.288 28940 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf iscsiadm -m node -T iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p 10.250.119.127:3260 --logout execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:199
  2015-03-07 17:27:14.875 28940 DEBUG oslo_concurrency.processutils [-] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf iscsiadm -m node -T iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p 10.250.119.127:3260 --logout" returned: 0 in 0.588s execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225
  2015-03-07 17:27:14.876 28940 DEBUG cinder.brick.initiator.connector [-] iscsiadm ('--logout',): stdout=Logging out of session [sid: 1, target: iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815, portal: 10.250.119.127,3260] Logout of [sid: 1, target: iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815, portal: 10.250.119.127,3260] successful.

  Nova compute logs:

  2015-03-07 17:27:12.617 DEBUG nova.virt.libvirt.volume [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] iscsiadm ('--rescan',): stdout=Rescanning session [sid: 1, target: iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815, portal: 10.250.119.127,3260] stderr= _run_iscsiadm /opt/stack/new/nova/nova/virt/libvirt/volume.py:364
  2015-03-07 17:27:12.617 WARNING nova.virt.libvirt.volume [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] ISCSI volume not yet found at: vdb. Will rescan & retry. Try number: 0
  2015-03-07 17:27:12.618 DEBUG oslo_concurrency.processutils [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p 10.250.119.127:3260 --rescan execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:199
  2015-03-07 17:27:13.503 DEBUG oslo_concurrency.processutils [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815 -p 10.250.119.127:3260 --rescan" returned: 0 in 0.885s execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225
  2015-03-07 17:27:13.504 DEBUG nova.virt.libvirt.volume [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] iscsiadm ('--rescan',): stdout=Rescanning session [sid: 1, target: iqn.1992-08.com.netapp:sn.60a8eb0cc4bb11e4b041123478563412:vs.40815, portal: 10.250.119.127,3260] stderr= _run_iscsiadm /opt/stack/new/nova/nova/virt/libvirt/volume.py:364
  2015-03-07 17:27:14.504 WARNING nova.virt.libvirt.volume [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] ISCSI volume not yet found at: vdb. Will rescan & retry. Try number: 1
  2015-03-07 17:27:14.505 DEBUG oslo_concurrency.processutils [req-55f33c70-ec85-4041-aaf6-205f74abf979 VolumesV1SnapshotTestJSON-1966398854 VolumesV1SnapshotTestJSON-1188982339] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T
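The missing synchronization described above boils down to an external (cross-process) lock that both services take around iSCSI login/rescan/logout. A minimal sketch with a plain file lock follows; the lock directory and names are illustrative, and in practice oslo.concurrency's external locks with a shared lock_path serve this role:

```python
import fcntl
import os

# Illustrative path: nova and cinder must agree on the same directory,
# which is exactly the lock_dir caveat mentioned in the comment above.
LOCK_DIR = "/tmp/openstack-locks"

def with_connect_volume_lock(func):
    """Serialize volume connect/disconnect across processes using an
    exclusive file lock, so one service cannot log out an iSCSI session
    the other is still rescanning (sketch only)."""
    def wrapper(*args, **kwargs):
        os.makedirs(LOCK_DIR, exist_ok=True)
        with open(os.path.join(LOCK_DIR, "connect_volume.lock"), "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # blocks until the peer releases
            try:
                return func(*args, **kwargs)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)
    return wrapper

@with_connect_volume_lock
def connect_volume(target_iqn):
    # Placeholder for the real iscsiadm login + rescan sequence.
    return "connected %s" % target_iqn
```

With both nova and brick/cinder wrapping their iscsiadm work in the same external lock, the logout in the cinder log above could not interleave with nova's rescan retries.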
[Yahoo-eng-team] [Bug 1621276] Re: keystone running under uwsgi generates warning "Could not load fernet"
Recloning the repo and reinstalling the venv fixed the issue.

** Changed in: keystone
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621276

Title:
  keystone running under uwsgi generates warning "Could not load
  fernet"

Status in OpenStack Identity (keystone): Invalid

Bug description:
  When running keystone with uwsgi, the following warning and
  traceback are generated: http://paste.openstack.org/show/568396/

  This doesn't happen in devstack, where keystone is run under apache.

  The commit that introduces the issue is
  http://git.openstack.org/cgit/openstack/keystone/commit/?id=0edf1fe46c066a09f4a251e2505f5d6a18525bf3

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621276/+subscriptions
[Yahoo-eng-team] [Bug 1621281] [NEW] 500 when deleting a flavor associated to an existing router
Public bug reported:

2016-09-07 17:17:36.330 DEBUG neutron.wsgi [-] (11553) accepted ('192.168.56.101', 44115) from (pid=11553) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:868
2016-09-07 17:17:36.331 INFO neutron.wsgi [req-9325d159-689c-4eec-a2f2-09a3cd2cdd5d tempest-RoutersFlavorTestCase-2139645665 a20e05e7617e4a0c99fb4b4c28ca018f] 192.168.56.101 - - [07/Sep/2016 17:17:36] "DELETE /v2.0/flavors/343b36ce-d64f-4cb4-ae2c-23afae44eabf/service_profiles/1bf3da54-b456-4cd5-93b1-a27e4fb3508e HTTP/1.1" 204 168 0.215826
2016-09-07 17:17:36.555 ERROR neutron.api.v2.resource [req-2b12a1f1-8b6a-4bea-9878-efb9f352f4b4 tempest-RoutersFlavorTestCase-2139645665 a20e05e7617e4a0c99fb4b4c28ca018f] delete failed: No details.
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource Traceback (most recent call last):
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 526, in delete
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     return self._delete(request, id, **kwargs)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self.force_reraise()
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 82, in wrapped
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     traceback.format_exc())
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self.force_reraise()
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", line 77, in wrapped
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 548, in _delete
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/flavors_db.py", line 153, in delete_flavor
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     context.session.delete(fl_db)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 490, in __exit__
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self.rollback()
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     compat.reraise(exc_type, exc_value, exc_tb)
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 487, in __exit__
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self.commit()
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 392, in commit
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self._prepare_impl()
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 372, in _prepare_impl
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self.session.flush()
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2019, in flush
2016-09-07 17:17:36.555 TRACE neutron.api.v2.resource     self._flush(objects)
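The traceback ends in `delete_flavor` flushing the session, which suggests the delete runs into a dangling reference (the flavor is still bound to a router) and the database error surfaces as a 500. One hedged way to avoid that is to detect the reference first and answer with a conflict; this sketch models the data with plain dicts and is not neutron's actual fix:

```python
class FlavorInUse(Exception):
    """Would be mapped to HTTP 409 Conflict by the API layer
    (hypothetical exception name; neutron's real fix may differ)."""

def delete_flavor(flavors, router_bindings, flavor_id):
    """Delete a flavor unless something still references it.

    `flavors` is a dict of id -> flavor record; `router_bindings` is a
    list of dicts with a 'flavor_id' key (illustrative data model
    standing in for the DB rows whose FK constraint fired above)."""
    if any(b["flavor_id"] == flavor_id for b in router_bindings):
        # Refuse early with a client error instead of letting the DB
        # referential-integrity failure bubble up as a 500.
        raise FlavorInUse(flavor_id)
    del flavors[flavor_id]
```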
[Yahoo-eng-team] [Bug 1621276] [NEW] keystone running under uwsgi generates warning "Could not load fernet"
Public bug reported:

When running keystone with uwsgi, the following warning and traceback
are generated: http://paste.openstack.org/show/568396/

This doesn't happen in devstack, where keystone is run under apache.

The commit that introduces the issue is
http://git.openstack.org/cgit/openstack/keystone/commit/?id=0edf1fe46c066a09f4a251e2505f5d6a18525bf3

** Affects: keystone
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621276

Title:
  keystone running under uwsgi generates warning "Could not load
  fernet"

Status in OpenStack Identity (keystone): New

Bug description:
  When running keystone with uwsgi, the following warning and
  traceback are generated: http://paste.openstack.org/show/568396/

  This doesn't happen in devstack, where keystone is run under apache.

  The commit that introduces the issue is
  http://git.openstack.org/cgit/openstack/keystone/commit/?id=0edf1fe46c066a09f4a251e2505f5d6a18525bf3

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621276/+subscriptions
[Yahoo-eng-team] [Bug 1619624] Re: Pagination is not present for Identity-->Projects
Looks like this still works for Keystone v2.0. I just tried it against
a v2.0 endpoint with the latest Horizon master, and the items-per-page
setting worked. Horizon just can't support it for v3 yet, since
Keystone v3 doesn't support it. There was a blueprint for Keystone v3
to support it, but it got shot down:
https://blueprints.launchpad.net/keystone/+spec/pagination

** Changed in: horizon
   Status: Confirmed => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1619624

Title:
  Pagination is not present for Identity-->Projects.

Status in OpenStack Dashboard (Horizon): Invalid

Bug description:
  In Mitaka, there is no pagination for Identity-->Projects, and the
  "Number of items per page" setting does not work on this page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1619624/+subscriptions
[Yahoo-eng-team] [Bug 1619722] Re: in placement api we must be able to update inventory to violate allocations
Reviewed:  https://review.openstack.org/365068
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=09627f2a0bd0c4ad05671d4646557066e606f2ad
Submitter: Jenkins
Branch:    master

commit 09627f2a0bd0c4ad05671d4646557066e606f2ad
Author: Chris Dent
Date:   Fri Sep 2 15:58:38 2016 +

    [placement] Allow inventory to violate allocations

    When a compute node is reconfigured to have different inventory,
    it needs to be able to tell the placement service of these
    changes, even if they violate existing allocations. Thus, when
    updating inventory we no longer check the new capacity against
    existing allocations; we just let it happen (but log a warning).
    This is safe because allocations in excess of capacity will still
    prevent resources from being used.

    Change-Id: I48ae1d7cd6bc309243493ddac99ce990c0146534
    Closes-Bug: #1619722
    Co-Authored-By: Dan Smith

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619722

Title:
  in placement api we must be able to update inventory to violate
  allocations

Status in OpenStack Compute (nova): Fix Released

Bug description:
  If a compute node is reconfigured in a way that makes its inventory
  change, those changes must be reflected in the placement service,
  even if they violate the existing allocations; otherwise the node is
  left in a difficult state. This is safe because with this new
  inventory the node won't be scheduled to: it doesn't have available
  capacity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619722/+subscriptions
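The behavior change can be condensed to a few lines: instead of rejecting an inventory update whose capacity falls below current usage, accept it and warn. This is a sketch of the described behavior with illustrative names, not the placement service's actual code:

```python
import logging

LOG = logging.getLogger(__name__)

def set_inventory(inventory, allocations, resource_class, new_total):
    """Update inventory unconditionally; if the new capacity is below
    what is already allocated, log a warning rather than fail. Safe
    because an over-allocated resource class simply cannot be scheduled
    to until usage drops below the new capacity."""
    used = allocations.get(resource_class, 0)
    if new_total < used:
        LOG.warning("new capacity %d for %s is below existing "
                    "allocations (%d); accepting anyway",
                    new_total, resource_class, used)
    inventory[resource_class] = new_total
```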
[Yahoo-eng-team] [Bug 1621257] [NEW] VNC console keeps reporting "setkeycodes 00" exception
Public bug reported:

For a VM booted on a KVM hypervisor, the noVNC console keeps popping
up the following messages whenever any key is pressed. Please refer to
the attached screenshot for details.

  "[32.786640] atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
   [60.026326] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0)."

I am using the latest master code for all projects. Previously it was
working correctly, and I started to see this issue this week.

** Affects: nova
   Importance: Undecided
   Status: New

** Attachment added: "Console output"
   https://bugs.launchpad.net/bugs/1621257/+attachment/4736471/+files/Screen%20Shot%202016-09-06%20at%204.46.17%20PM.png

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621257

Title:
  VNC console keeps reporting "setkeycodes 00" exception

Status in OpenStack Compute (nova): New

Bug description:
  For a VM booted on a KVM hypervisor, the noVNC console keeps popping
  up the following messages whenever any key is pressed. Please refer
  to the attached screenshot for details.

    "[32.786640] atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
     [60.026326] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0)."

  I am using the latest master code for all projects. Previously it
  was working correctly, and I started to see this issue this week.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1621257/+subscriptions
[Yahoo-eng-team] [Bug 1621236] [NEW] Attempt to not set location on non active or queued image
Public bug reported:

https://review.openstack.org/324012

Dear bug triager. This bug was created since a commit was marked with
DOCIMPACT. Your project "openstack/glance" is set up so that we
directly report the documentation bugs against it. If this needs
changing, the docimpact-group option needs to be added for the
project. You can ask the OpenStack infra team (#openstack-infra on
freenode) for help if you need to.

commit 7c7dd626896d732d75c6b802a33b9462aee885fd
Author: Mike Fedosin
Date:   Wed Jun 1 19:38:48 2016 +0300

    Attempt to not set location on non active or queued image

    Currently, an authorized user can set a custom location string on
    an image that is in any non-deleted status.

    If a custom location is set on an image in ``saving`` status, it
    may result in a race condition between the data stream that is
    trying to save the image data to the backend and the custom
    location the user is attempting to set on the image record. This
    results in a bad experience for the user who is streaming the
    image data to Glance, as it is a better experience to set the
    location after the image data has been saved and the image is in
    ``active`` status.

    If a custom location is set on an image in ``deactivated`` status,
    the purpose of setting the image to ``deactivated`` is void. This
    results in a worse experience for the security team that is trying
    to evaluate the validity of the image data of the deactivated
    image. Avoiding setting a custom location in this case will ensure
    data consistency until the image is pulled out of the deactivation
    process.

    This commit introduces the following change in behavior:

    * If an image is in ``saving`` or ``deactivated`` status, the
      status of that image will be checked while trying to set the
      custom location, and an HTTP 409 Conflict status will be
      returned to the user.

    * If an image is in ``active`` or ``queued`` status, setting
      custom locations on that image will be allowed, so there is no
      change in behavior for this case.

    * If an image is in ``deleted`` or ``pending_delete`` status, an
      HTTP 409 Conflict status will be returned if that image is
      visible to the user (in the case of admins). Otherwise, the
      location cannot be set anyway. Setting a location on a
      ``deleted`` image is fruitless as the image cannot be used.
      Please note that ``pending_delete`` is another form of the
      ``deleted`` status, and behavior in either case should be
      expected to be the same.

    * If an image is in ``killed`` status, an HTTP 409 Conflict status
      will be returned. Nevertheless, it is again fruitless to attempt
      to set a location on such images as they are unusable.

    * This operation still involves the following race conditions:

      * In the case where the image is in ``saving`` status and has
        just moved to ``active`` status, ideally setting a custom
        location should be allowed; however, due to the lack of
        atomicity in setting image status, glance will ignore setting
        the location and a 409 will be returned.

      * In the case where the image is in ``deactivated`` status and
        has just moved to ``active`` status, ideally setting a custom
        location should be allowed; however, due to the lack of
        atomicity in setting image status, glance will ignore setting
        the location and a 409 will be returned.

      * In the case where the image is in ``active`` status and has
        just moved to ``deactivated`` status, due to the lack of
        atomicity in setting image status, glance may set another
        location on that image.

      * In the case where the image is in ``queued`` status and has
        just moved to ``saving`` status, due to the lack of atomicity
        in setting image status, glance may set another location on
        that image.

      * In the case where the image is in ``queued`` or ``active``
        status and a location is attempted to be set on it, if the
        image first goes into ``deleted``, ``pending_delete`` or
        ``killed`` status, the user will get an HTTP 409 Conflict
        status back. This occurs, again, due to the lack of atomicity
        in setting image status.

    Please note:

    * We plan to add further granularity for setting locations on
      images atomically in subsequent commits.

    * For now, though, this commit does resolve the issue of setting
      locations on an image incorrectly (in unexpected circumstances)
      to a significant degree. So, this is good progress in ensuring
      rightful use of the image locations feature.

    APIImpact
    DocImpact
    Co-Authored-By: Mike Fedosin
    Co-Authored-By: Nikhil Komawar
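Setting the race conditions aside, the status gate the commit describes condenses to a small table-driven check. This is a sketch of the documented behavior, not glance's code:

```python
# Statuses in which setting a custom location is allowed; every other
# status (saving, deactivated, deleted, pending_delete, killed) gets a
# 409 Conflict, per the commit message above.
LOCATION_SETTABLE_STATUSES = {"active", "queued"}

def location_update_response(image_status):
    """Return the HTTP status code a custom-location update would get
    for an image in the given status (sketch of the documented rules)."""
    return 200 if image_status in LOCATION_SETTABLE_STATUSES else 409
```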
[Yahoo-eng-team] [Bug 1586931] Re: TestServerBasicOps: Test fails when deleting server and floating ip almost at the same time
** Changed in: nova/mitaka
       Status: Confirmed => In Progress
** Changed in: nova/mitaka
   Importance: Undecided => Medium
** Changed in: nova/mitaka
     Assignee: (unassigned) => Ken'ichi Ohmichi (oomichi)
** Also affects: nova/liberty
   Importance: Undecided
       Status: New
** Tags removed: liberty-backport-potential mitaka-backport-potential
** Changed in: nova/liberty
       Status: New => In Progress
** Changed in: nova/liberty
   Importance: Undecided => Medium
** Changed in: nova/liberty
     Assignee: (unassigned) => Ken'ichi Ohmichi (oomichi)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586931

Title:
  TestServerBasicOps: Test fails when deleting server and floating ip
  almost at the same time

Status in neutron: Fix Released
Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) liberty series: In Progress
Status in OpenStack Compute (nova) mitaka series: In Progress
Status in openstack-ansible: Fix Released
Status in tempest: Won't Fix

Bug description:
  In tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops,
  after the last step,

      self.servers_client.delete_server(self.instance['id'])

  the test does not wait for the server to be deleted and then deletes
  the floating IP immediately in the cleanup, which causes a failure.
  Here is the partial log:

  2016-05-29 21:51:29.499 29791 INFO tempest.lib.common.rest_client [req-c3588ac4-21ca-47c3-bdb1-62088efd7a8b ] Request (TestServerBasicOps:test_server_basic_ops): 204 DELETE https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/servers/6d44763b-ea79-4b5b-b57e-714191802c7c 0.465s
  2016-05-29 21:51:29.499 29791 DEBUG tempest.lib.common.rest_client [req-c3588ac4-21ca-47c3-bdb1-62088efd7a8b ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''} Body: None Response - Headers: {'status': '204', 'content-length': '0', 'content-location': 'https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/servers/6d44763b-ea79-4b5b-b57e-714191802c7c', 'date': 'Mon, 30 May 2016 02:51:29 GMT', 'x-compute-request-id': 'req-c3588ac4-21ca-47c3-bdb1-62088efd7a8b', 'content-type': 'application/json', 'connection': 'close'} Body: _log_request_full tempest/lib/common/rest_client.py:422
  2016-05-29 21:51:30.410 29791 INFO tempest.lib.common.rest_client [req-db2323f5-3d58-4fd7-ae51-44f5525c6689 ] Request (TestServerBasicOps:_run_cleanups): 500 DELETE https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/os-floating-ips/948912f6-ce03-4856-922b-59c4f16d3740 0.910s
  2016-05-29 21:51:30.410 29791 DEBUG tempest.lib.common.rest_client [req-db2323f5-3d58-4fd7-ae51-44f5525c6689 ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''} Body: None Response - Headers: {'status': '500', 'content-length': '224', 'content-location': 'https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/os-floating-ips/948912f6-ce03-4856-922b-59c4f16d3740', 'date': 'Mon, 30 May 2016 02:51:30 GMT', 'x-compute-request-id': 'req-db2323f5-3d58-4fd7-ae51-44f5525c6689', 'content-type': 'application/json; charset=UTF-8', 'connection': 'close'} Body: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586931/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
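The race above is the classic delete-then-cleanup ordering problem: the cleanup must poll the server until the API reports it gone before releasing the floating IP (tempest's waiters module provides this for real tests). A minimal stand-alone sketch of the polling idea, with NotFound standing in for the client exception and the clock/sleep hooks added only for testability:

```python
import time


class NotFound(Exception):
    """Stand-in for the client exception raised once the resource is gone."""


def wait_for_deletion(show_resource, timeout=60.0, interval=1.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll show_resource() until it raises NotFound, i.e. the server is
    really deleted, so dependent cleanups (like deleting the floating IP)
    only run afterwards."""
    deadline = clock() + timeout
    while True:
        try:
            show_resource()
        except NotFound:
            return
        if clock() >= deadline:
            raise TimeoutError("resource still present after %.0fs" % timeout)
        sleep(interval)
```

In a test this would be called between the delete and the floating-IP cleanup, e.g. `wait_for_deletion(lambda: servers_client.show_server(server_id))`.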
[Yahoo-eng-team] [Bug 1621180] Re: specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri
** Merge proposal linked:
   https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/305137
** No longer affects: curtin

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621180

Title:
  specifying apt_mirror of '' renders empty entries in
  /etc/apt/sources.list for uri

Status in cloud-init: Confirmed
Status in cloud-init package in Ubuntu: Confirmed
Status in juju-core package in Ubuntu: New

Bug description:
  $ cat /tmp/foo.ud
  #cloud-config
  apt_mirror: ''

  $ lxc launch ubuntu-daily:yakkety sm-y0 "--config=user.user-data=$(cat /tmp/foo.ud)"
  $ sleep 10
  $ lxc exec sm-y0 grep yakkety /etc/apt/sources.list | head -n 3
  deb yakkety main restricted
  deb-src yakkety main restricted
  deb yakkety-updates main restricted

  Basically, if you provide an empty apt_mirror in the old format, it is
  taken as providing an apt mirror. This non-true value should be treated
  the same as not providing it.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: cloud-init 0.7.7-22-g763f403-0ubuntu1
  ProcVersionSignature: Ubuntu 4.4.0-9136.55-generic 4.4.16
  Uname: Linux 4.4.0-9136-generic x86_64
  ApportVersion: 2.20.3-0ubuntu7
  Architecture: amd64
  Date: Wed Sep 7 17:12:11 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621180/+subscriptions
[Yahoo-eng-team] [Bug 1621200] [NEW] MySQLOpportunisticIdentityDriverTestCase.test_change_password fails in UTC+N timezone
Public bug reported:

Steps to reproduce:
1. dpkg-reconfigure tzdata and select Europe/Moscow (UTC+3) there.
2. Restart mysql.
3. Configure opportunistic tests with the following command in mysql:
   GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'@'%' identified by 'openstack_citest' WITH GRANT OPTION;
4. Run keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password

Expected result: the test passes.

Actual result:
Traceback (most recent call last):
  File "keystone/tests/unit/identity/backends/test_base.py", line 255, in test_change_password
    self.driver.authenticate(user['id'], new_password)
  File "keystone/identity/backends/sql.py", line 65, in authenticate
    raise AssertionError(_('Invalid user / password'))
AssertionError: Invalid user / password

** Affects: keystone
   Importance: Undecided
       Status: New

** Description changed:

  Steps to reproduce:
  1. dpkg-reconfigure tzdata and select there Europe/Moscow (UTC+3).
- 2. Configure opportunistic tests with the following command in mysql:
+ 2. Restart mysql
+ 3. Configure opportunistic tests with the following command in mysql:
  GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'@'%' identified by 'openstack_citest' WITH GRANT OPTION;
- 3. Run keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password
+ 4. Run keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password
  Expected result: test pass
  Actual result:
- Traceback (most recent call last):
- File "keystone/tests/unit/identity/backends/test_base.py", line 255, in test_change_password
- self.driver.authenticate(user['id'], new_password)
- File "keystone/identity/backends/sql.py", line 65, in authenticate
- raise AssertionError(_('Invalid user / password'))
- AssertionError: Invalid user / password
+ Traceback (most recent call last):
+   File "keystone/tests/unit/identity/backends/test_base.py", line 255, in test_change_password
+     self.driver.authenticate(user['id'], new_password)
+   File "keystone/identity/backends/sql.py", line 65, in authenticate
+     raise AssertionError(_('Invalid user / password'))
+ AssertionError: Invalid user / password

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621200

Title:
  MySQLOpportunisticIdentityDriverTestCase.test_change_password fails
  in UTC+N timezone

Status in OpenStack Identity (keystone): New

Bug description:
  Steps to reproduce:
  1. dpkg-reconfigure tzdata and select Europe/Moscow (UTC+3) there.
  2. Restart mysql.
  3. Configure opportunistic tests with the following command in mysql:
     GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'@'%' identified by 'openstack_citest' WITH GRANT OPTION;
  4. Run keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password

  Expected result: the test passes.

  Actual result:
  Traceback (most recent call last):
    File "keystone/tests/unit/identity/backends/test_base.py", line 255, in test_change_password
      self.driver.authenticate(user['id'], new_password)
    File "keystone/identity/backends/sql.py", line 65, in authenticate
      raise AssertionError(_('Invalid user / password'))
  AssertionError: Invalid user / password

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621200/+subscriptions
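The report does not identify the root cause, but a common cause for this class of UTC+N-only failure is mixing naive local-time and naive UTC timestamps. A purely speculative illustration of that mechanism (the dates and the three-hour offset are made up; this is not keystone's actual code path):

```python
from datetime import datetime, timedelta

# Hypothetical scenario: one component stamps rows with local wall-clock
# time (e.g. a MySQL CURRENT_TIMESTAMP default), another compares them
# against naive UTC from datetime.utcnow(), on a UTC+3 (Europe/Moscow) host.
utc_now = datetime(2016, 9, 7, 12, 0, 0)      # server's UTC clock
local_now = utc_now + timedelta(hours=3)      # Europe/Moscow wall clock

# The freshly changed password is stamped with local time...
created_at = local_now

# ...so compared against UTC it appears to start three hours in the
# future, and an "is this password active yet?" check rejects it.
password_active = created_at <= utc_now
```

Under this assumption the test passes in UTC (the two clocks coincide) and fails in any UTC+N timezone, matching the report.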
[Yahoo-eng-team] [Bug 1621180] Re: specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri
** Package changed: juju (Ubuntu) => juju-core (Ubuntu)
** Also affects: curtin
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621180

Title:
  specifying apt_mirror of '' renders empty entries in
  /etc/apt/sources.list for uri

Status in cloud-init: Confirmed
Status in curtin: New
Status in cloud-init package in Ubuntu: Confirmed
Status in juju-core package in Ubuntu: New

Bug description:
  $ cat /tmp/foo.ud
  #cloud-config
  apt_mirror: ''

  $ lxc launch ubuntu-daily:yakkety sm-y0 "--config=user.user-data=$(cat /tmp/foo.ud)"
  $ sleep 10
  $ lxc exec sm-y0 grep yakkety /etc/apt/sources.list | head -n 3
  deb yakkety main restricted
  deb-src yakkety main restricted
  deb yakkety-updates main restricted

  Basically, if you provide an empty apt_mirror in the old format, it is
  taken as providing an apt mirror. This non-true value should be treated
  the same as not providing it.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: cloud-init 0.7.7-22-g763f403-0ubuntu1
  ProcVersionSignature: Ubuntu 4.4.0-9136.55-generic 4.4.16
  Uname: Linux 4.4.0-9136-generic x86_64
  ApportVersion: 2.20.3-0ubuntu7
  Architecture: amd64
  Date: Wed Sep 7 17:12:11 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621180/+subscriptions
[Yahoo-eng-team] [Bug 1430005] Re: Improve security rule notification message
Reviewed:  https://review.openstack.org/313086
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=9a535a1f848c7813169fdd7d042a29678c554092
Submitter: Jenkins
Branch:    master

commit 9a535a1f848c7813169fdd7d042a29678c554092
Author: Yosef Hoffman
Date:   Thu May 5 11:12:23 2016 -0400

    Improve security rule notification message

    If from_port and to_port are both "-1" then return "any port"
    instead of "-1:-1". For example, instead of

        ALLOW -1:-1/icmp from 0.0.0.0/0

    return

        ALLOW any port/icmp from 0.0.0.0/0

    Change-Id: I0ded50a40089406fd69c8fa869839e46eecef352
    Closes-Bug: #1430005

** Changed in: horizon
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1430005

Title:
  Improve security rule notification message

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description:
  If we create a rule that allows all ports through, the following
  notification message is displayed (see attached image too):

      Successfully added rule: -1:-1/icmp from 0.0.0.0/0

  where -1:-1 doesn't deliver useful information. Suggest changing
  -1:-1 to "any port" instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1430005/+subscriptions
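The commit's behaviour boils down to a small formatting branch. A sketch of that logic (the function name and signature are illustrative, not Horizon's actual code):

```python
def format_rule(from_port, to_port, ip_protocol, cidr):
    """Render a security group rule for display; a -1:-1 port range
    means the rule covers any port, so say that instead."""
    if from_port == -1 and to_port == -1:
        ports = "any port"
    elif from_port == to_port:
        ports = str(from_port)
    else:
        ports = "%s:%s" % (from_port, to_port)
    return "ALLOW %s/%s from %s" % (ports, ip_protocol, cidr)
```

With this branch, the example from the commit message renders as `ALLOW any port/icmp from 0.0.0.0/0` rather than the unhelpful `ALLOW -1:-1/icmp from 0.0.0.0/0`.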
[Yahoo-eng-team] [Bug 1428176] Re: DiskFilter uses Ephemeral size from Flavor, not actual ephemeral size
Based on the discussion on the proposed patch [1], we don't want to
propagate the one-off setting of swap and ephemeral disk [2] from the
API further throughout the codebase. If anything, those options should
be removed from the API.

[1] https://review.openstack.org/#/c/352522/
[2] http://docs.openstack.org/user-guide/cli-nova-launch-instance-from-volume.html#attach-swap-or-ephemeral-disk-to-an-instance

** Changed in: nova
       Status: In Progress => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428176

Title:
  DiskFilter uses Ephemeral size from Flavor, not actual ephemeral size

Status in OpenStack Compute (nova): Won't Fix

Bug description:
  DiskFilter checks whether an instance can fit on a host and uses the
  'ephemeral_gb' value from the flavor. However, it is possible to
  reduce the ephemeral (and swap) size by providing a custom block
  device mapping on instance create [1]. In this case we get a
  false-negative answer from DiskFilter.

  [1] http://docs.openstack.org/user-guide/content/attach-disk-to-instance.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428176/+subscriptions
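The mismatch the report describes can be sketched with a tiny hypothetical helper (not nova's actual scheduler code): the filter should count the per-request block device sizes when they are supplied instead of always trusting the flavor default.

```python
def effective_ephemeral_gb(flavor_ephemeral_gb, bdm_ephemeral_sizes=None):
    """Ephemeral disk the request will actually consume: the per-request
    block device mapping sizes when given, the flavor default otherwise.
    (Hypothetical helper for illustration only.)"""
    if bdm_ephemeral_sizes is not None:
        return sum(bdm_ephemeral_sizes)
    return flavor_ephemeral_gb


# The flavor advertises 80 GB of ephemeral disk, but the boot request
# shrank it to a single 10 GB disk via block device mapping. Counting
# the flavor value makes a host with 40 GB free look too small, which
# is the false negative the bug describes:
free_gb = 40
fits_with_bdm = effective_ephemeral_gb(80, [10]) <= free_gb
fits_with_flavor = effective_ephemeral_gb(80) <= free_gb
```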
[Yahoo-eng-team] [Bug 1621180] [NEW] specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri
Public bug reported:

$ cat /tmp/foo.ud
#cloud-config
apt_mirror: ''

$ lxc launch ubuntu-daily:yakkety sm-y0 "--config=user.user-data=$(cat /tmp/foo.ud)"
$ sleep 10
$ lxc exec sm-y0 grep yakkety /etc/apt/sources.list | head -n 3
deb yakkety main restricted
deb-src yakkety main restricted
deb yakkety-updates main restricted

Basically, if you provide an empty apt_mirror in the old format, it is
taken as providing an apt mirror. This non-true value should be treated
the same as not providing it.

ProblemType: Bug
DistroRelease: Ubuntu 16.10
Package: cloud-init 0.7.7-22-g763f403-0ubuntu1
ProcVersionSignature: Ubuntu 4.4.0-9136.55-generic 4.4.16
Uname: Linux 4.4.0-9136-generic x86_64
ApportVersion: 2.20.3-0ubuntu7
Architecture: amd64
Date: Wed Sep 7 17:12:11 2016
PackageArchitecture: all
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
SourcePackage: cloud-init
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: cloud-init
   Importance: Medium
       Status: Confirmed

** Affects: cloud-init (Ubuntu)
   Importance: Medium
       Status: Confirmed

** Affects: juju (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: amd64 apport-bug uec-images yakkety

** Also affects: cloud-init
   Importance: Undecided
       Status: New
** Changed in: cloud-init
       Status: New => Confirmed
** Changed in: cloud-init (Ubuntu)
       Status: New => Confirmed
** Changed in: cloud-init
   Importance: Undecided => Medium
** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium
** Also affects: juju (Ubuntu)
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621180

Title:
  specifying apt_mirror of '' renders empty entries in
  /etc/apt/sources.list for uri

Status in cloud-init: Confirmed
Status in cloud-init package in Ubuntu: Confirmed
Status in juju package in Ubuntu: New

Bug description:
  $ cat /tmp/foo.ud
  #cloud-config
  apt_mirror: ''

  $ lxc launch ubuntu-daily:yakkety sm-y0 "--config=user.user-data=$(cat /tmp/foo.ud)"
  $ sleep 10
  $ lxc exec sm-y0 grep yakkety /etc/apt/sources.list | head -n 3
  deb yakkety main restricted
  deb-src yakkety main restricted
  deb yakkety-updates main restricted

  Basically, if you provide an empty apt_mirror in the old format, it is
  taken as providing an apt mirror. This non-true value should be
  treated the same as not providing it.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: cloud-init 0.7.7-22-g763f403-0ubuntu1
  ProcVersionSignature: Ubuntu 4.4.0-9136.55-generic 4.4.16
  Uname: Linux 4.4.0-9136-generic x86_64
  ApportVersion: 2.20.3-0ubuntu7
  Architecture: amd64
  Date: Wed Sep 7 17:12:11 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621180/+subscriptions
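The semantics the reporter asks for ('' behaves like an absent key) amount to a one-line truthiness check. A sketch, assuming a hypothetical `get_mirror` helper and default URL rather than cloud-init's actual config plumbing:

```python
DEFAULT_MIRROR = "http://archive.ubuntu.com/ubuntu/"  # illustrative default


def get_mirror(cfg, default=DEFAULT_MIRROR):
    """Treat an empty (or otherwise falsy) apt_mirror exactly like an
    absent one, instead of rendering an empty URI into sources.list."""
    return cfg.get("apt_mirror") or default
```

With a plain `cfg.get("apt_mirror")` lookup the empty string is returned as-is and ends up as the blank URI in `deb  yakkety main restricted`; the `or default` collapses every non-true value to the fallback.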
[Yahoo-eng-team] [Bug 1605083] Re: not remove items from select list after batch deleting
Reviewed:  https://review.openstack.org/352436
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=db032d6ad5e75d66d08f4b1a76b4b01ec2e0fc10
Submitter: Jenkins
Branch:    master

commit db032d6ad5e75d66d08f4b1a76b4b01ec2e0fc10
Author: majik
Date:   Mon Aug 8 21:55:24 2016 +0800

    fix table delete bug & collect table events

    The bug occurs when using batch delete more than once on one page:
    deleted items were not removed from the selections after the batch
    delete action.

    Change-Id: I13b4da7deb026c28e26910e0bd763960de4bffa8
    Closes-Bug: #1605083

** Changed in: horizon
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1605083

Title:
  not remove items from select list after batch deleting

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description:
  The bug occurs when using batch delete more than once on one page:
  deleted items are not removed from the select list after the batch
  delete action.

  hz-resource-table -> hz-dynamic-table -> hz-table

  To fix this, hz-resource-table should call hz-table.clearSelected().
  However, Angular is not well designed for multiply nested directives;
  exposing the function from the inner hz-table to the outer
  hz-resource-table may dirty the code. I will look for a solution in
  another way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1605083/+subscriptions
[Yahoo-eng-team] [Bug 1620684] Re: nova list --status soft_deleted is not showing soft deleted Instances
So, after a bit of investigation, I found that this is actually not a
regression, and we have had other bug reports about it. The one I can
refer to is https://bugs.launchpad.net/nova/+bug/1526715, which is
technically not a duplicate, but where the proposal is very close to
the one you wrote: https://review.openstack.org/#/c/258472/6

Anyway, given that it would mean the API accepting a new body argument,
and that merging it would change the API behaviour by returning more
instances, I think we would need to open a spec for this and ask for a
new API microversion (see
http://docs.openstack.org/developer/nova/process.html#overview).

I'm marking this bug report as Opinion to clarify my thoughts.

** Changed in: nova
       Status: In Progress => Opinion
** Changed in: nova
   Importance: Undecided => Low
** Tags removed: newton-rc-potential

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620684

Title:
  nova list --status soft_deleted is not showing soft deleted Instances

Status in OpenStack Compute (nova): Opinion

Bug description:
  Steps to reproduce:
  1. Set reclaim_instance_interval to a value in nova.conf.
  2. Boot an instance.
  3. Delete the instance (the instance will be soft-deleted).
  4. nova list --status soft_deleted

  Expected result: the soft-deleted instances should be displayed,
  based on the reclaim_instance_interval.

  Actual result: no instances are displayed.

  This bug is reported in the admin context.

  Environment: current master devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620684/+subscriptions
[Yahoo-eng-team] [Bug 1620254] Re: OVO to_dict() is returning timestamps with microseconds
Reviewed:  https://review.openstack.org/365684
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=3756bc510526531e3442da6d825cc90f44faa355
Submitter: Jenkins
Branch:    master

commit 3756bc510526531e3442da6d825cc90f44faa355
Author: Kevin Benton
Date:   Mon Sep 5 08:26:01 2016 -0600

    Convert OVO fields to primitives for to_dict()

    to_dict() is used for conversions of OVO objects into regular
    dictionaries to be used as plugin return values to the API layer,
    etc. It provides the equivalent of the make_*_dict helpers we use
    now (without the extension processing). The values in these
    dictionaries should be ready for representation in the API.

    The issue was that the OVO to_dict() implementation was placing
    complex types right into the dictionary, which meant the API would
    serialize them just by calling str() on them (as part of JSON
    encoding). This ignored the 'to_primitive' method defined on the
    OVO type that defines how a field should be converted. Therefore,
    when it came to timestamps, to_dict() was placing native datetime
    objects into the dictionary, which would convert with microsecond
    resolution, violating the expected format of the OVO DateTime type.

    This patch fixes the issue by calling 'to_primitive' on each
    non-synthetic field in the to_dict() method to ensure we match the
    expected format of the type before we send it out the API.

    Closes-Bug: #1620254
    Change-Id: Ic0be54b1d4b23119e1458d4532e2f70bff0ff9f6

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620254

Title:
  OVO to_dict() is returning timestamps with microseconds

Status in neutron: Fix Released

Bug description:
  to_dict() doesn't seem to be calling the stringify or to_primitive
  method on the objects, because microseconds are being leaked out when
  trying to expose the created_at/updated_at fields via the API.

  Traceback (most recent call last):
    File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_qos.py", line 338, in test_get_policy_that_is_shared
      self.assertEqual(obtained_policy, policy)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 411, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 498, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {u'created_at': u'2016-09-03 01:08:00', u'description': u'shared policy', u'id': u'b24af3be-d19d-4543-8b12-25ac6d752abd', u'name': u'test-policy-shared', u'rules': [], u'shared': True, u'tenant_id': u'a88290171ee648dfa97b8e164b9ede31', u'updated_at': u'2016-09-03 01:08:00'}
  actual    = {u'created_at': u'2016-09-03 01:08:00.106856', u'description': u'shared policy', u'id': u'b24af3be-d19d-4543-8b12-25ac6d752abd', u'name': u'test-policy-shared', u'rules': [], u'shared': True, u'tenant_id': u'a88290171ee648dfa97b8e164b9ede31', u'updated_at': u'2016-09-03 01:08:00.106856'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620254/+subscriptions
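The gist of the fix can be pictured with a toy version of the field/to_primitive contract. These classes are simplified stand-ins, not the real oslo.versionedobjects API (whose `to_primitive` takes additional arguments); only the shape of the idea is the same:

```python
from datetime import datetime
from types import SimpleNamespace


class DateTimeField:
    """Stand-in for the OVO DateTime field type: to_primitive() defines
    the wire format (second resolution, so no microseconds leak out)."""

    @staticmethod
    def to_primitive(value):
        return value.strftime("%Y-%m-%d %H:%M:%S")


class PassthroughField:
    """Stand-in for simple field types whose values are already primitive."""

    @staticmethod
    def to_primitive(value):
        return value


def to_dict(obj, fields):
    """The fix: run every (non-synthetic) field through its type's
    to_primitive() instead of dumping the raw attribute value."""
    return {name: field.to_primitive(getattr(obj, name))
            for name, field in fields.items()}


policy = SimpleNamespace(name="test-policy-shared",
                         created_at=datetime(2016, 9, 3, 1, 8, 0, 106856))
rendered = to_dict(policy, {"name": PassthroughField,
                            "created_at": DateTimeField})
```

Without the `to_primitive` call, `str(policy.created_at)` would yield `2016-09-03 01:08:00.106856`, which is exactly the microsecond leak in the traceback above.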
[Yahoo-eng-team] [Bug 1621161] [NEW] api-ref: need version history on versions page
Public bug reported:

Need release name / API version / status info on the 'versions'
api-ref page.

** Affects: glance
   Importance: High
     Assignee: Brian Rosmaita (brian-rosmaita)
       Status: In Progress

** Tags: api-ref

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1621161

Title:
  api-ref: need version history on versions page

Status in Glance: In Progress

Bug description:
  Need release name / API version / status info on the 'versions'
  api-ref page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1621161/+subscriptions
[Yahoo-eng-team] [Bug 1621138] [NEW] block_device_mappings column is not large enough in table build_requests
Public bug reported:

When deploying with several volumes, the block_device_mappings column
in table build_requests can run out of space, causing the error below.

2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     context)
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     self.errorhandler(self, exc, value)
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     raise errorclass, errorvalue
2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters DataError: (1406, "Data too long for column 'block_device_mappings' at row 1")

** Affects: nova
   Importance: Undecided
     Assignee: Kenneth Burger (burgerk)
       Status: New

** Changed in: nova
     Assignee: (unassigned) => Kenneth Burger (burgerk)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621138

Title:
  block_device_mappings column is not large enough in table
  build_requests

Status in OpenStack Compute (nova): New

Bug description:
  When deploying with several volumes, the block_device_mappings column
  in table build_requests can run out of space, causing the error
  below.

  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     context)
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     self.errorhandler(self, exc, value)
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters     raise errorclass, errorvalue
  2016-09-07 01:22:24.936 23522 ERROR oslo_db.sqlalchemy.exc_filters DataError: (1406, "Data too long for column 'block_device_mappings' at row 1")

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1621138/+subscriptions
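MySQL error 1406 here comes from a size limit: a TEXT column stores at most 65,535 bytes, and a JSON-serialized block device mapping list for many volumes can easily exceed that (widening the column, e.g. to MEDIUMTEXT with its 16 MB limit, is the natural fix). A rough back-of-the-envelope check; the BDM dict below is entirely made up for illustration:

```python
import json

MYSQL_TEXT_MAX = 65535  # bytes a MySQL TEXT column can store


def fits_text_column(bdms):
    """Would this BDM list survive serialization into a TEXT column?"""
    return len(json.dumps(bdms).encode("utf-8")) <= MYSQL_TEXT_MAX


# A single serialized mapping runs to a few kilobytes once the
# connection_info blob is included (sizes here are illustrative):
bdm = {"source_type": "volume", "destination_type": "volume",
       "volume_id": "f" * 36, "boot_index": 0,
       "device_name": "/dev/vda", "connection_info": "x" * 2500}
```

One such mapping fits comfortably, but a few dozen of them overflow the column, matching the "several volumes" trigger in the report.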
[Yahoo-eng-team] [Bug 1621137] [NEW] wrong warning (The Keystone URL (either in Horizon settings or in service catalog) points to a v2.0 Keystone endpoint)
Public bug reported:

A wrong warning is printed by the current code. I added some logging,
and it seems we need to pass ["/v2.0"] instead of "/v2.0":

[Wed Sep 07 15:09:40.951167 2016] [:error] [pid 29875] path is /v3, subs is /v2.0, t is True
[Wed Sep 07 15:09:40.951254 2016] [:error] [pid 29875] The Keystone URL (either in Horizon settings or in service catalog) points to a v2.0 Keystone endpoint, but v3 is specified as the API version to use by Horizon. Using v3 endpoint for authentication.https://9.60.29.98:35357/v3

def has_in_url_path(url, subs):
    """Test if any of `subs` strings is present in the `url` path."""
    scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
    t = any([sub in path for sub in subs])
    LOG.warn('path is %s, subs is %s, t is %s' % (path, subs, t))
    return t

if get_keystone_version() >= 3 and has_in_url_path(auth_url, "/v2.0"):
    LOG.warning("The Keystone URL (either in Horizon settings or in "
                "service catalog) points to a v2.0 Keystone endpoint, "
                "but v3 is specified as the API version to use by "
                "Horizon. Using v3 endpoint for authentication.%s",
                auth_url)
    auth_url = url_path_replace(auth_url, "/v2.0", "/v3", 1)

** Affects: horizon
   Importance: Undecided
     Assignee: jichenjc (jichenjc)
       Status: New

** Changed in: horizon
     Assignee: (unassigned) => jichenjc (jichenjc)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1621137

Title:
  wrong warning (The Keystone URL (either in Horizon settings or in
  service catalog) points to a v2.0 Keystone endpoint)

Status in OpenStack Dashboard (Horizon): New

Bug description:
  A wrong warning is printed by the current code. I added some logging,
  and it seems we need to pass ["/v2.0"] instead of "/v2.0":

  [Wed Sep 07 15:09:40.951167 2016] [:error] [pid 29875] path is /v3, subs is /v2.0, t is True
  [Wed Sep 07 15:09:40.951254 2016] [:error] [pid 29875] The Keystone URL (either in Horizon settings or in service catalog) points to a v2.0 Keystone endpoint, but v3 is specified as the API version to use by Horizon. Using v3 endpoint for authentication.https://9.60.29.98:35357/v3

  def has_in_url_path(url, subs):
      """Test if any of `subs` strings is present in the `url` path."""
      scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
      t = any([sub in path for sub in subs])
      LOG.warn('path is %s, subs is %s, t is %s' % (path, subs, t))
      return t

  if get_keystone_version() >= 3 and has_in_url_path(auth_url, "/v2.0"):
      LOG.warning("The Keystone URL (either in Horizon settings or in "
                  "service catalog) points to a v2.0 Keystone endpoint, "
                  "but v3 is specified as the API version to use by "
                  "Horizon. Using v3 endpoint for authentication.%s",
                  auth_url)
      auth_url = url_path_replace(auth_url, "/v2.0", "/v3", 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1621137/+subscriptions
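The bug is a string-vs-list confusion: iterating over the bare string "/v2.0" yields its individual characters, and the single character '/' is a substring of the "/v3" path, so the check spuriously succeeds. A self-contained demonstration of the same logic (ported to Python 3's urllib.parse; Horizon's code uses the six/urlparse compatibility layer):

```python
from urllib.parse import urlsplit


def has_in_url_path(url, subs):
    """Test if any of `subs` strings is present in the `url` path
    (same logic as the Horizon helper shown above)."""
    path = urlsplit(url).path
    return any(sub in path for sub in subs)


# Passing a bare string makes `subs` iterate character by character,
# so '/' alone matches the '/v3' path and the check wrongly fires:
buggy = has_in_url_path("https://9.60.29.98:35357/v3", "/v2.0")

# Wrapping the substring in a list checks it as a whole, as intended:
fixed = has_in_url_path("https://9.60.29.98:35357/v3", ["/v2.0"])
```

This is why the log shows `path is /v3, subs is /v2.0, t is True`, and why the reporter concludes that the caller must pass `["/v2.0"]`.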
[Yahoo-eng-team] [Bug 1112670] Re: Can't create image: InvalidArgument error
Closing as "Won't Fix". 1. No action since 2014 2. Looks like a configuration problem. ** Changed in: glance Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1112670 Title: Can't create image: InvalidArgument error Status in Glance: Won't Fix Bug description: Getting this error when trying to create an image. In glance's api.log: 2013-02-01 16:23:16 DEBUG glance.registry.client [73a0c211-f9df-4ff1-a958-8092ffea93a5 None None] Registry request PUT /images/19b67ee3-6a22-4fee-a12a-144c228700ed HTTP 200 request id req-e29d2511-5d13-42db-9f40-162532c0e35c do_request /usr/lib/python2.7/dist-packages/glance/registry/client.py:95 2013-02-01 16:23:16 INFO glance.api.v1.images [73a0c211-f9df-4ff1-a958-8092ffea93a5 None None] Triggering asynchronous copy from external source 2013-02-01 16:23:17 18293 DEBUG glance.api.v1.images [-] Setting image 19b67ee3-6a22-4fee-a12a-144c228700ed to status 'saving' _upload /usr/lib/python2.7/dist-packages/glance/api/v1/images.py:414 2013-02-01 16:23:17 18293 DEBUG glance.registry [-] Updating image metadata for image 19b67ee3-6a22-4fee-a12a-144c228700ed... 
update_image_metadata /usr/lib/python2.7/dist-packages/glance/registry/__init__.py:148
2013-02-01 16:23:17 18293 DEBUG glance.common.client [-] Constructed URL: http://0.0.0.0:9191/images/19b67ee3-6a22-4fee-a12a-144c228700ed _construct_url /usr/lib/python2.7/dist-packages/glance/common/client.py:396
2013-02-01 16:23:17 18293 DEBUG glance.registry.client [-] Registry request PUT /images/19b67ee3-6a22-4fee-a12a-144c228700ed HTTP 200 request id req-3ae75a8a-9f8a-428e-b566-5c170faabbee do_request /usr/lib/python2.7/dist-packages/glance/registry/client.py:95
2013-02-01 16:23:17 18293 DEBUG glance.api.v1.images [-] Uploading image data for image 19b67ee3-6a22-4fee-a12a-144c228700ed to rbd store _upload /usr/lib/python2.7/dist-packages/glance/api/v1/images.py:419
2013-02-01 16:23:17 18293 DEBUG glance.store.rbd [-] creating image 19b67ee3-6a22-4fee-a12a-144c228700ed with order 23 add /usr/lib/python2.7/dist-packages/glance/store/rbd.py:240
2013-02-01 16:23:17 18293 ERROR glance.api.v1.images [-] Failed to upload image
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images Traceback (most recent call last):
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images   File "/usr/lib/python2.7/dist-packages/glance/api/v1/images.py", line 425, in _upload
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images     image_meta['size'])
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images   File "/usr/lib/python2.7/dist-packages/glance/store/rbd.py", line 243, in add
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images     image_size, order)
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images   File "/usr/lib/python2.7/dist-packages/glance/store/rbd.py", line 206, in _create_image
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images     features=rbd.RBD_FEATURE_LAYERING)
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images   File "/usr/lib/python2.7/dist-packages/rbd.py", line 194, in create
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images     raise make_ex(ret, 'error creating image')
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images InvalidArgument: error creating image
2013-02-01 16:23:17 18293 TRACE glance.api.v1.images

It doesn't matter if I create an image from the GUI (dashboard) or the command-line (glance create-image), the backtrace is the same. Versions: - ubuntu: precise - glance: 2013.1~g2-0ubuntu1~cloud0 from http://ppa.launchpad.net/ubuntu-cloud-archive/grizzly-staging/ubuntu/ - ceph: 0.48.2-0ubuntu2~cloud0, from http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/folsom/main To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1112670/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1621086] [NEW] Port delete on router interface remove
Public bug reported: 1. I create a port, then a router, and then use add_router_interface. 2. Then I use remove_router_interface. 3. The port is deleted - and this is unexpected (for me, at least). I was using Heat on devstack master to test this. Template for stack with port:

  resources:
    media_port:
      type: OS::Neutron::Port
      properties:
        name: media_port
        network: private

Template for stack with router and router interface:

  heat_template_version: newton
  resources:
    media_router:
      type: OS::Neutron::Router
    media_router_interface:
      type: OS::Neutron::RouterInterface
      properties:
        router: { get_resource: media_router }
        port: media_port

When I delete the second stack, the port from the first stack is also deleted in neutron. https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L873-L876 is the method that is called, and the body here will be: { 'port_id': 'SOMEID' } ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1621086 Title: Port delete on router interface remove Status in neutron: New Bug description: 1. I create a port, then a router, and then use add_router_interface. 2. Then I use remove_router_interface. 3. The port is deleted - and this is unexpected (for me, at least). I was using Heat on devstack master to test this. Template for stack with port:

  resources:
    media_port:
      type: OS::Neutron::Port
      properties:
        name: media_port
        network: private

Template for stack with router and router interface:

  heat_template_version: newton
  resources:
    media_router:
      type: OS::Neutron::Router
    media_router_interface:
      type: OS::Neutron::RouterInterface
      properties:
        router: { get_resource: media_router }
        port: media_port

When I delete the second stack, the port from the first stack is also deleted in neutron. 
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L873-L876 is the method that is called, and the body here will be: { 'port_id': 'SOMEID' } To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1621086/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1480400] Re: Documentation error for properties
Fixed as part of the api-ref WADL->RST migration ** Changed in: openstack-api-site Assignee: Manav (manav-kiit) => (unassigned) ** Changed in: glance Assignee: Manav (manav-kiit) => (unassigned) ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1480400 Title: Documentation error for properties Status in Glance: Fix Released Status in openstack-api-site: Invalid Bug description: Document: http://developer.openstack.org/api-ref-image-v2.html Under the image create drop down there is the following line: properties (Optional) plain | xsd:dict | Properties, if any, that are associated with the image. This suggests that the properties are a dict and would look like this: -d '{"name": "thename", "properties": {"myprop": "mydata"}}' This is not the case: if you want to define custom properties in a curl command, you do it by defining them like any other property, e.g. -d '{"name": "thename", "myprop": "mydata"}' The documentation is not clear about this distinction. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1480400/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1621082] [NEW] Complain loudly and clearly on the console when cloud-init fails to run
Public bug reported: When cloud-init itself fails to run, as in the case discussed in bug 1621075, an experienced user can detect the problem in the split-second the console is still alive, but most users are lost. Could we make it more obvious what the problem we encountered was? I'm thinking of a very common problem in MAAS, which is that the initial network configuration is wrong, or drivers are not available for the NICs in the host. ** Affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1621082 Title: Complain loudly and clearly on the console when cloud-init fails to run Status in cloud-init: New Bug description: When cloud-init itself fails to run, as in the case discussed in bug 1621075, an experienced user can detect the problem in the split-second the console is still alive, but most users are lost. Could we make it more obvious what the problem we encountered was? I'm thinking of a very common problem in MAAS, which is that the initial network configuration is wrong, or drivers are not available for the NICs in the host. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1621082/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1602797] Re: Missing 2.3 microversion API samples functional tests
Reviewed: https://review.openstack.org/347544 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=47b19ffb54082b9fe1b45e33b05a49a9e1ebc431 Submitter: Jenkins Branch:master commit 47b19ffb54082b9fe1b45e33b05a49a9e1ebc431 Author: Sarafraj SinghDate: Tue Jul 26 15:12:19 2016 -0500 Adding functional tests for 2.3 microversion Change-Id: Id5cf7ef5c3c7049e36da42f59fd67a61cd3df2d9 Closes-Bug: #1602797 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1602797 Title: Missing 2.3 microversion API samples functional tests Status in OpenStack Compute (nova): Fix Released Bug description: Related to bug 1600186 - we don't have any API samples functional testing in nova for the 2.3 microversion. It looks like we did back when it was added in: https://review.openstack.org/#/c/155853/ But those no longer exist, probably when we folded the v3 directories. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1602797/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1621076] [NEW] Can't detach interface from VM (if VM has two interface with same mac addresses)
Public bug reported: How to reproduce: 1. Run any VM. 2. Create two networks. 3. Create two ports with the same MAC address, one on each network. 4. Attach those ports to the VM. 5. Try to detach any interface. Expected result: The interface should be detached from the VM. Actual result: We don't get any errors (via API) on the previous steps, but the interface is still attached to the VM. Environment: * fuel_release: 9.0 * fuel_openstack_version: mitaka-9.0 * libvirt0: 1.2.9.3-9~u14.04+mos10 * hypervisor: Libvirt + KVM Example:

(OpenStack-venv)agent@laptop ~/projects $ nova interface-list c5ae5a9a-54a2-47b8-800e-619b8bc286f5
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses   | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+
| ACTIVE     | 0d9377b4-4f8f-467a-bae7-54d0d70e5262 | 3f5adcaa-d3e5-4caf-be8f-474751de5589 | 192.168.0.1    | fa:16:3e:60:46:1f |
| ACTIVE     | 13cac036-7b6d-4188-879f-650c8d9e1f63 | 28939866-7379-4279-800c-b64c2776e1e0 | 192.168.111.82 | fa:16:3e:24:0d:a4 |
| ACTIVE     | 310b9883-806d-4038-a095-1625abecbcb1 | 311c5a7e-5cb0-47e2-8aa0-20d74c4ee8c2 | 192.168.0.1    | fa:16:3e:60:46:1f |
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+
(OpenStack-venv)agent@laptop ~/projects $ nova interface-detach c5ae5a9a-54a2-47b8-800e-619b8bc286f5 0d9377b4-4f8f-467a-bae7-54d0d70e5262
(OpenStack-venv)agent@laptop ~/projects $ nova interface-list c5ae5a9a-54a2-47b8-800e-619b8bc286f5
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses   | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+
| ACTIVE     | 0d9377b4-4f8f-467a-bae7-54d0d70e5262 | 3f5adcaa-d3e5-4caf-be8f-474751de5589 | 192.168.0.1    | fa:16:3e:60:46:1f |
| ACTIVE     | 13cac036-7b6d-4188-879f-650c8d9e1f63 | 28939866-7379-4279-800c-b64c2776e1e0 | 192.168.111.82 | fa:16:3e:24:0d:a4 |
| ACTIVE     | 310b9883-806d-4038-a095-1625abecbcb1 | 311c5a7e-5cb0-47e2-8aa0-20d74c4ee8c2 | 192.168.0.1    | fa:16:3e:60:46:1f |
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+

logs from compute: libvirt: <11>Sep 7 12:18:00 node-9 libvirtd: 11320: error : virDomainNetFindIdx:11005 : operation failed: multiple devices matching mac address fa:16:3e:60:46:1f found nova compute: <183>Sep 6 
17:58:04 node-7 nova-compute: 2016-09-06 17:58:04.348 7652 DEBUG nova.objects.instance [req-db381757-3fbf-4bb8-a4df-160e1422a005 2b96e098d62147d8b9a15f635e0dd51a 4b867602d6974059afb2489a71dfaabb - - -] Lazy-loading 'flavor' on Instance uuid d95ab3d6-6b8f-42f0-8bf8-a553e99ee41a obj_load_attr /usr/lib/python2.7/dist-packages/nova/objects/instance.py:895 <183>Sep 6 17:58:04 node-7 nova-compute: 2016-09-06 17:58:04.430 7652 DEBUG nova.virt.libvirt.vif [req-db381757-3fbf-4bb8-a4df-160e1422a005 2b96e098d62147d8b9a15f635e0dd51a 4b867602d6974059afb2489a71dfaabb - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='True',created_at=2016-09-06T17:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='4894d01a-e464-4d17-b309-3c39a9875147',display_name='4894d01a-e464-4d17-b309-3c39a9875147',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(211),host='node-7.domain.tld',hostname='4894d01a-e464-4d17-b309-3c39a9875147',id=277,image_ref='9213a377-e2c2-4cc1-b1e8-483369db03fa',info_cache=InstanceInfoCache,instance_type_id=211,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=2016-09-06T17:41:08Z,laun
[Yahoo-eng-team] [Bug 1621073] [NEW] Neutron NeutronKeystoneContext object doesn't retrieve user_domain attribute
Public bug reported: Neutron object does not retrieve keystone domain attribute from request headers. Neutron policies use context to check rules so we are not able to use domains. Context is formed from headers in __call__ of NeutronKeystoneContext object, which initializes Context object. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1621073 Title: Neutron NeutronKeystoneContext object doesn't retrieve user_domain attribute Status in neutron: New Bug description: Neutron object does not retrieve keystone domain attribute from request headers. Neutron policies use context to check rules so we are not able to use domains. Context is formed from headers in __call__ of NeutronKeystoneContext object, which initializes Context object. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1621073/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
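To make the gap concrete, here is a hypothetical sketch of the header-to-context mapping the bug is asking for. The header names follow keystonemiddleware's auth_token conventions and are an assumption on my part; this is not actual neutron code:

```python
# Hypothetical sketch: pulling the domain attributes that the bug says
# NeutronKeystoneContext ignores. Header names are assumed from
# keystonemiddleware's auth_token conventions, not copied from neutron.
def context_kwargs_from_headers(headers):
    return {
        'user_id': headers.get('X-User-Id'),
        'tenant_id': headers.get('X-Project-Id'),
        # the missing attributes this bug is about:
        'user_domain_id': headers.get('X-User-Domain-Id'),
        'project_domain_id': headers.get('X-Project-Domain-Id'),
    }

hdrs = {'X-User-Id': 'u1', 'X-Project-Id': 'p1',
        'X-User-Domain-Id': 'default', 'X-Project-Domain-Id': 'default'}
print(context_kwargs_from_headers(hdrs)['user_domain_id'])  # default
```

With the domain keys present in the context, policy rules such as ones matching on user_domain_id would have something to evaluate against.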
[Yahoo-eng-team] [Bug 1621059] [NEW] placement api needs GET /resource_providers/{uuid}/allocations
Public bug reported: Somehow in the spec process this was overlooked but it is required for the resource tracker to effectively update allocation. ** Affects: nova Importance: Undecided Assignee: Chris Dent (cdent) Status: New ** Tags: api placement scheduler -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1621059 Title: placement api needs GET /resource_providers/{uuid}/allocations Status in OpenStack Compute (nova): New Bug description: Somehow in the spec process this was overlooked but it is required for the resource tracker to effectively update allocation. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1621059/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1620434] Re: nova-compute fails to boot when wrong setting value in pci_whitelist
I'm not really sure we should gracefully handle configuration issues where operators made a typo in the PCI whitelist. I mean, most of our conf opts need to be right and not wrong, because if they aren't, Nova could be trampled, right? So, here, you propose to only strip() the strings for your use case, but I'd rather leave nova-compute down than leave something wrong in the CONF file. ** Changed in: nova Status: In Progress => Opinion ** Changed in: nova Importance: Undecided => Low ** Tags added: pci -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1620434 Title: nova-compute fails to boot when wrong setting value in pci_whitelist Status in OpenStack Compute (nova): Opinion Bug description:

Description
===========
When I misconfigure nova.conf with a space at the head of product_id in pci_whitelist, the nova-compute service fails to boot. It shows:

Trace:
  get_pci_dev_info(self, 'product_id', MAX_PRODUCT_ID, '%04x')
  File "/opt/stack/nova/nova/pci/devspec.py", line 37, in get_pci_dev_info
    v = get_value(a)
  File "/opt/stack/nova/nova/pci/devspec.py", line 30, in get_value
    return ast.literal_eval("0x" + v)
  File "/usr/lib/python2.7/ast.py", line 49, in literal_eval
    node_or_string = parse(node_or_string, mode='eval')
  File "/usr/lib/python2.7/ast.py", line 37, in parse
    return compile(source, filename, mode, PyCF_ONLY_AST)
  File "", line 1
    0x 1347
       ^
SyntaxError: invalid token

Note that the same operation for vendor_id is OK.

Steps to reproduce
==================
1. pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":" 15a3"}]
                                                                ^
2. Restart the nova-compute service.

Expected result
===============
nova-compute boots successfully.

Actual result
=============
nova-compute fails to boot.
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1620434/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
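The failure is reproducible with nothing but the standard library. This sketch mirrors the get_value helper shown in the traceback (simplified from nova/pci/devspec.py; the exception class is caught broadly since different Python versions may raise SyntaxError or ValueError from literal_eval):

```python
import ast

def get_value(v):
    # mirrors the parsing shown in the traceback above:
    # a hex id string is prefixed with "0x" and literal_eval'd
    return ast.literal_eval("0x" + v)

print(get_value("15a3"))  # 5539 (== 0x15a3): a clean id parses fine

try:
    get_value(" 15a3")    # "0x 15a3" is not a valid literal
except (SyntaxError, ValueError):
    print("parse failed -- this is what kills nova-compute on startup")

# the fix proposed in the bug report: strip whitespace before parsing
print(get_value(" 15a3".strip()))  # 5539
```

Whether to strip() or to fail hard is exactly the disagreement in the review comment above; the sketch only demonstrates the mechanics of the failure.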
[Yahoo-eng-team] [Bug 1399779] Re: Update glance REST api docs
This was fixed as part of the api-ref WADL -> RST migration ** Changed in: glance Assignee: Charles Bitter (cbitter78) => (unassigned) ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1399779 Title: Update glance REST api docs Status in Glance: Fix Released Bug description: The glance version 2 API docs are not up to date: http://developer.openstack.org/api-ref-image-v2.html The image version 2 schema shows the following json object: { "additionalProperties": { "type": "string" }, "name": "image", "links": [{ "href": "{self}", "rel": "self" }, { "href": "{file}", "rel": "enclosure" }, { "href": "{schema}", "rel": "describedby" }], "properties": { "status": { "enum": ["queued", "saving", "active", "killed", "deleted", "pending_delete"], "type": "string", "description": "Status of the image (READ-ONLY)" }, "tags": { "items": { "type": "string", "maxLength": 255 }, "type": "array", "description": "List of strings related to the image" }, "kernel_id": { "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string", "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image." }, "container_format": { "enum": ["ami", "ari", "aki", "bare", "ovf", "ova"], "type": "string", "description": "Format of the container" }, "min_ram": { "type": "integer", "description": "Amount of ram (in MB) required to boot image." }, "ramdisk_id": { "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string", "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image." 
}, "locations": { "items": { "required": ["url", "metadata"], "type": "object", "properties": { "url": { "type": "string", "maxLength": 255 }, "metadata": { "type": "object" } } }, "type": "array", "description": "A set of URLs to access the image file kept in external store" }, "visibility": { "enum": ["public", "private"], "type": "string", "description": "Scope of image accessibility" }, "updated_at": { "type": "string", "description": "Date and time of the last image modification (READ-ONLY)" }, "owner": { "type": "string", "description": "Owner of the image", "maxLength": 255 }, "file": { "type": "string", "description": "(READ-ONLY)" }, "min_disk": { "type": "integer", "description": "Amount of disk space (in GB) required to boot image." }, "virtual_size": { "type": "integer", "description": "Virtual size of image in bytes (READ-ONLY)" }, "id": { "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string", "description": "An identifier for the image" }, "size": { "type": "integer", "description": "Size of image file in bytes (READ-ONLY)" }, "instance_uuid": { "type": "string", "description": "ID of instance used to create this image." }, "os_distro": { "type": "string", "description": "Common name of operating system distribution as specified in http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html; }, "name": { "type": "string", "description": "Descriptive name for the image", "maxLength": 255 }, "checksum": { "type": "string", "description": "md5 hash of image contents. (READ-ONLY)", "maxLength": 32
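A small self-contained check shows how a client could validate fields locally against constraints quoted from the schema above. Only the "pattern", "enum", and "maxLength" values that actually appear in the schema are used; the helper itself is illustrative, not glance code:

```python
import re

# UUID pattern copied from the schema's "id" property above
UUID_RE = re.compile(
    r"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-"
    r"([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$")

def validate_image(image):
    """Check a few of the constraints quoted from the v2 image schema."""
    errors = []
    if not UUID_RE.match(image.get("id", "")):
        errors.append("id is not a UUID")
    if image.get("visibility") not in ("public", "private"):
        errors.append("visibility must be public or private")
    if len(image.get("name", "")) > 255:
        errors.append("name exceeds maxLength 255")
    return errors

ok = {"id": "19b67ee3-6a22-4fee-a12a-144c228700ed",
      "visibility": "private", "name": "cirros"}
print(validate_image(ok))             # []
print(validate_image({"id": "oops"})) # two errors
```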
[Yahoo-eng-team] [Bug 1244666] Re: incorrect URL for common image properties docs in image schema, dev docs
Fixed as part of the api-ref WADL->RST migration, though this is an ongoing problem that needs to be checked occasionally. ** Changed in: glance Assignee: Brian Rosmaita (brian-rosmaita) => (unassigned) ** Changed in: glance Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1244666 Title: incorrect URL for common image properties docs in image schema, dev docs Status in Glance: Fix Released Bug description: Three things: (1) For the common image properties added in Grizzly, the image schema contains several references to http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html as the location for info on what the standard values should be for those properties. This doc has moved ... somewhere (I can't find it, will update when I do). (2) The URL also needs to be updated in the developer docs. (3) Possibly before 1 & 2 are fixed, need to have a discussion with the documentation group about getting a stable URL for this so we don't have to update every time they reorganize the documentation! To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1244666/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1621043] [NEW] VMware-NSX: unable to get a DHCP address when using the simple-dvs plugin
Public bug reported: The vmware-nsx repo has a number of different plugins. The DVS plugin enables one to use Neutron with a number of limitations (no security groups and no layer 3 support). If one wants to achieve this with the DVS plugin then there are many different examples that they can use - for example https://github.com/openstack/networking-vsphere/blob/master/networking_vsphere/drivers/dvs_driver.py (that requires the ML2 plugin). In the vmware-nsx dvs case there are no agents running. An instance is unable to get a DHCP address as we are unable to plug the DHCP agent into the OVS. This cannot be done as the DeviceManager is hard coded and we need to make use of the network information to get the VLAN tag. Enabling the plug and unplug operations for the DHCP agent to be overridden will let the plugin expose this and give people a chance to get neutron up and running in an existing vSphere env. ** Affects: neutron Importance: Undecided Assignee: Gary Kotton (garyk) Status: In Progress ** Tags: rfe ** Changed in: neutron Milestone: None => newton-rc1 ** Tags added: rfe -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1621043 Title: VMware-NSX: unable to get a DHCP address when using the simple-dvs plugin Status in neutron: In Progress Bug description: The vmware-nsx repo has a number of different plugins. The DVS plugin enables one to use Neutron with a number of limitations (no security groups and no layer 3 support). If one wants to achieve this with the DVS plugin then there are many different examples that they can use - for example https://github.com/openstack/networking-vsphere/blob/master/networking_vsphere/drivers/dvs_driver.py (that requires the ML2 plugin). In the vmware-nsx dvs case there are no agents running. An instance is unable to get a DHCP address as we are unable to plug the DHCP agent into the OVS. 
This cannot be done as the DeviceManager is hard coded and we need to make use of the network information to get the VLAN tag. Enabling the plug and unplug operations for the DHCP agent to be overridden will let the plugin expose this and give people a chance to get neutron up and running in an existing vSphere env. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1621043/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
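A hypothetical sketch of the overridable interface this RFE asks for: the class and method names are illustrative, not neutron code (only 'provider:segmentation_id' is the standard provider-network attribute carrying the VLAN tag the report mentions):

```python
# Hypothetical sketch of the RFE: let the DHCP DeviceManager delegate
# plug/unplug to an overridable driver so an agentless DVS plugin can
# wire the DHCP port itself, using the network's VLAN tag.
class DhcpInterfaceDriver:
    def plug(self, network, port, device_name):
        raise NotImplementedError

    def unplug(self, device_name):
        raise NotImplementedError

class DvsDhcpInterfaceDriver(DhcpInterfaceDriver):
    """Illustrative override: uses network info the hard-coded OVS path cannot."""
    def __init__(self):
        self.plugged = {}

    def plug(self, network, port, device_name):
        # the key point of the RFE: the driver needs the network's VLAN tag
        vlan = network.get('provider:segmentation_id')
        self.plugged[device_name] = vlan

    def unplug(self, device_name):
        self.plugged.pop(device_name, None)

drv = DvsDhcpInterfaceDriver()
drv.plug({'provider:segmentation_id': 100}, {'id': 'port-1'}, 'tap-1')
print(drv.plugged)  # {'tap-1': 100}
```

With such a hook, the vmware-nsx DVS plugin could supply its own driver instead of patching the hard-coded DeviceManager behavior.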
[Yahoo-eng-team] [Bug 1619466] Re: Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line
** Project changed: devstack => neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1619466 Title: Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line Status in neutron: In Progress Bug description: When q-lbaasv2 is enabled in your devstack local.conf, this implies that LBaaS v2 is going to be used, and neutron-lbaas's corresponding devstack plugin.sh script creates a new /etc/neutron/neutron_lbaas.conf file with some configuration parameters. However, under several circumstances, some of the options in this file are needed by other neutron daemons, such as the q-svc daemon. So, if q-lbaasv2 is enabled in devstack local.conf, then the command-line for the q-svc agent should also include '--config-file /etc/neutron/neutron_lbaas.conf' so that these configuration parameters are pulled in for that daemon. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1619466/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1619466] [NEW] Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line
You have been subscribed to a public bug: When q-lbaasv2 is enabled in your devstack local.conf, this implies that LBaaS v2 is going to be used, and neutron-lbaas's corresponding devstack plugin.sh script creates a new /etc/neutron/neutron_lbaas.conf file with some configuration parameters. However, under several circumstances, some of the options in this file are needed by other neutron daemons, such as the q-svc daemon. So, if q-lbaasv2 is enabled in devstack local.conf, then the command-line for the q-svc agent should also include '--config-file /etc/neutron/neutron_lbaas.conf' so that these configuration parameters are pulled in for that daemon. ** Affects: neutron Importance: Medium Assignee: Nir Magnezi (nmagnezi) Status: In Progress -- Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line https://bugs.launchpad.net/bugs/1619466 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1621015] [NEW] Network field is missing while creating an Instance when running on port in Horizon
Public bug reported: In Stable/Mitaka, in Horizon, the Network field is missing while creating an Instance when launching from a port. You can find the screenshots in the comments. ** Affects: horizon Importance: Undecided Status: New ** Tags: horizon-core ** Attachment added: "port.png" https://bugs.launchpad.net/bugs/1621015/+attachment/4736085/+files/port.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1621015 Title: Network field is missing while creating an Instance when running on port in Horizon Status in OpenStack Dashboard (Horizon): New Bug description: In Stable/Mitaka, in Horizon, the Network field is missing while creating an Instance when launching from a port. You can find the screenshots in the comments. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1621015/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1620736] Re: Incorrect SQL in placement API causes spurious InvalidAllocationCapacityExceeded error
Reviewed: https://review.openstack.org/366245 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e84396b5232cb52228bc75dee92429913d72468e Submitter: Jenkins Branch: master commit e84396b5232cb52228bc75dee92429913d72468e Author: Sean Dague Date: Tue Sep 6 12:33:46 2016 -0400 correctly join the usage to inventory for capacity accounting This updates the SQL code to match the comment about how the tables should be joined. It also adds a detailed warning when we go over capacity, that will be useful to developers and operators when we fail to land in an environment. The bug was deep in the SQL, and can only be tested by testing the SQL, which we may want to do later. In lieu of a unit test, an additional safety check is put in the code that if we ever end up with multiple records with the same key, we explode with a KeyError. Closes-Bug: #1620736 Change-Id: Ic3f83a881e7d98fc6ed007b62a179b3153190543 ** Changed in: nova Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1620736 Title: Incorrect SQL in placement API causes spurious InvalidAllocationCapacityExceeded error Status in OpenStack Compute (nova): Fix Released Bug description: Upstream master The SQL that joins the allocations to the inventory for _check_capacity_exceeded was incorrect. It was doing a left outer join with the allocations which meant we got resource accounting in an NxN matrix with all inventory. 
2016-09-06 12:12:10.978 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('0')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.979 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('11')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.979 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('704')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.980 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 1, 15947, 512, 1.5, Decimal('0')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.980 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 1, 15947, 512, 1.5, Decimal('11')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.981 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 1, 15947, 512, 1.5, Decimal('704')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.981 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 2, 218, 0, 1.0, Decimal('0')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.982 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 2, 218, 0, 1.0, Decimal('11')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.983 DEBUG nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: (1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 2, 218, 0, 1.0, Decimal('704')) from (pid=32242) _check_capacity_exceeded /opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.983 WARNING nova.objects.resource_provider [req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Attempting to allocate 1 for VCPU. Currently using 704, amount available 64.0

The Decimal allocation for memory ('704') is reported here against CPU and Disk resources in addition to Memory. Depending on the
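Reading the numbers in the WARNING line, the capacity check appears to compare used + requested against (total - reserved) * allocation_ratio: for the VCPU row above, (4 - 0) * 16.0 = 64.0. A minimal sketch of that comparison (the helper name and exact arithmetic are assumptions of ours, not Nova's actual code), which shows why a memory usage value wrongly attributed to VCPU trips the check:

```python
from decimal import Decimal

def capacity_exceeded(total, reserved, allocation_ratio, used, requested):
    # Assumed form of the check: capacity available to consumers is
    # (total - reserved) * allocation_ratio; exceeding it triggers the
    # WARNING seen in the log above.
    capacity = Decimal(str((total - reserved) * allocation_ratio))
    return Decimal(used) + requested > capacity

# The VCPU row: total=4, reserved=0, ratio=16.0 -> capacity 64.0, so the
# misattributed memory usage of 704 already exceeds it.
print(capacity_exceeded(4, 0, 16.0, '704', 1))  # prints: True
```

With the correct per-class usage of 11, the same request would pass: 11 + 1 is well under 64.0.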
[Yahoo-eng-team] [Bug 1620989] [NEW] When booting a VM with creation of a new volume, the host AZ info is not passed to Cinder
Public bug reported:

Description
===========

When attaching of volumes across Nova/Cinder AZs is forbidden (cross_az_attach = False in nova.conf) and you try to boot an instance without specifying an AZ (i.e. you are ok with any of the AZs the instance will be scheduled to), and the block device mapping states that a new volume must be created (e.g. in order to boot from it), then the info about the AZ won't be passed to Cinder and it will create the new volume in the default AZ.

Steps to reproduce
==================

1. Configure multiple AZs in Nova and Cinder.
2. Disable attaching of volumes across AZs in nova.conf:

   [cinder]
   cross_az_attach = False

3. Restart the nova-compute service.
4. Boot a new VM *without* specifying an AZ explicitly (so that Nova can pick a host in *any* of the AZs) and state in the block device mapping that a new volume must be created, e.g.:

   nova boot --block-device source=image,id=decd5d33-fdd5-4736-b10a-fd2ceebbd224,dest=volume,size=1,shutdown=remove,bootindex=0 --nic net-id=68038c06-f160-4405-9acc-b3480e3e8830 --flavor m1.nano demo

Expected result
===============

Instance is booted successfully.

Actual result
=============

Instance fails to boot and goes to ERROR state (Block Device Mapping is Invalid). The nova-compute log says:

2016-09-07 09:54:26.396 13021 ERROR nova.compute.manager [instance: 1c7de927-9755-4081-99cf-3d1132a9d45a] InvalidVolume: Invalid volume: Instance 10 and volume e3970ecc-796a-46f8-952f-1bc804aab4a4 are not in the same availability_zone. Instance is in az1. Volume is in az2

^ this is because a *null* value was passed to Cinder on creation of the new volume, and cinder-scheduler picked the cinder-volume in the *default* AZ configured in cinder.conf, instead of using the AZ of the host the Nova instance was scheduled to.
Environment
===========

DevStack, libvirt, Cinder LVM
Nova version: master (f1b70d9457ae6c1fba3e7ac7c5f8b08d9042f2ba)
Two AZs configured in Nova and Cinder

** Affects: nova
   Importance: Undecided
   Assignee: Roman Podoliaka (rpodolyaka)
   Status: New

** Tags: cinder volumes

** Changed in: nova
   Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Summary changed:
- When a booting a VM with creation of a new volume, the host AZ info is not passed to Cinder
+ When booting a VM with creation of a new volume, the host AZ info is not passed to Cinder

** Tags added: cinder volumes

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620989

Title:
  When booting a VM with creation of a new volume, the host AZ info is not passed to Cinder

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620989/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to :
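The fix the reporter implies is for Nova to forward the scheduled host's AZ when asking Cinder to create the boot volume. A minimal sketch of that decision (the helper name is ours; Nova's actual plumbing differs):

```python
def volume_create_kwargs(instance_az, cross_az_attach):
    # Hypothetical helper: with cross_az_attach disabled, request the new
    # volume in the instance's AZ instead of passing null and letting
    # cinder-scheduler fall back to cinder.conf's default AZ.
    kwargs = {}
    if not cross_az_attach and instance_az is not None:
        kwargs['availability_zone'] = instance_az
    return kwargs

print(volume_create_kwargs('az1', cross_az_attach=False))
# prints: {'availability_zone': 'az1'}
```

With cross_az_attach left at its default of True, no AZ is forced and today's behavior is preserved.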
[Yahoo-eng-team] [Bug 1620967] [NEW] Neutron API behind SSL terminating haproxy returns http version URLs instead of https
Public bug reported:

This is a re-post of an issue that was reported for an older OpenStack version. Unfortunately, I am confronted with the same problem in OpenStack Mitaka. Keystone has proper support for this case, when you use SSL termination via HAProxy. Have a look here:
https://bugzilla.redhat.com/show_bug.cgi?id=1259351

Description of problem:
When using haproxy with SSL termination in front of neutron, neutron will return version URLs with an http:// prefix instead of https://. This causes API clients to fail.

How reproducible:

Steps to Reproduce:
1. Configure HAproxy in front of Neutron with SSL termination (so the client talks to neutron over SSL, HAproxy talks to Neutron over plain HTTP)
2. curl https://openstack-api.example.com:9696

Actual results:
{"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://openstack-api.example.com:9696/v2.0", "rel": "self"}]}]}

Expected results:
{"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "https://openstack-api.example.com:9696/v2.0", "rel": "self"}]}]}

Additional info:
I patched this issue in /usr/lib/python2.7/site-packages/neutron/api/views/versions.py:

    def get_view_builder(req):
        base_url = req.application_url
        if req.environ.get('HTTP_X_FORWARDED_PROTO') is not None:
            base_url = base_url.replace('http://', 'https://')
        return ViewBuilder(base_url)

Then neutron returns the proper https URL. The X-Forwarded-Proto header is inserted by haproxy.

Note: this issue is present in other openstack APIs as well but can be worked around by setting public_endpoint explicitly. This option is not available in neutron however.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620967

Title:
  Neutron API behind SSL terminating haproxy returns http version URLs instead of https

Status in neutron:
  New
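The reporter's patch boils down to a string rewrite keyed on the forwarded-protocol WSGI variable. A standalone sketch of that logic (the function name is ours, not neutron's):

```python
def rewrite_base_url(base_url, environ):
    # Sketch of the reporter's workaround: if the SSL-terminating proxy
    # set X-Forwarded-Proto (seen in WSGI as HTTP_X_FORWARDED_PROTO),
    # advertise https URLs instead of the plain-HTTP backend scheme.
    if environ.get('HTTP_X_FORWARDED_PROTO') == 'https':
        base_url = base_url.replace('http://', 'https://', 1)
    return base_url

print(rewrite_base_url('http://openstack-api.example.com:9696/',
                       {'HTTP_X_FORWARDED_PROTO': 'https'}))
# prints: https://openstack-api.example.com:9696/
```

On the haproxy side, a frontend line such as `http-request set-header X-Forwarded-Proto https` is the usual way to inject the header (exact syntax depends on your haproxy version).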
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1620967/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1580642] Re: Add API to retrieve default quotas
API-ref docs merged: https://review.openstack.org/#/c/358344/
CLI docs merged: https://review.openstack.org/#/c/358345/

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580642

Title:
  Add API to retrieve default quotas

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/306200

  Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.

  commit f5a2ee300d109ac9d403ee1f39d6e056ac925133
  Author: Abhishek Raut
  Date: Thu Apr 7 13:59:15 2016 -0700

      Add API to retrieve default quotas

      Currently there is no support to retrieve default quotas set for all projects. This patch adds a new API function to get default quotas.

      GET /v2.0/quotas//default

      DocImpact: Document new API used to retrieve default quotas
      APIImpact: New read-only API to retrieve default quotas

      Change-Id: If40a44348e305da444acd6196d2e0c04202b8f7a
      Closes-Bug: #1204956

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580642/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1503686] Re: unable to update enable_snat using router-update command
** Also affects: neutron
   Importance: Undecided
   Status: New

** Description changed:

  Currently enable_snat is allowed only when setting a gateway.

  $ neutron router-gateway-set --disable-net
  $ neutron router-gateway-set --enable-net

  There should be provision to set this flag with the update command too, like:

  $ neutron router-update --enable-snat
  $ neutron router-update --disable-snat

+ On Neutron, with the below command:
+
+ curl -g -i -X PUT http://10.0.4.130:9696/v2.0/routers/deecfcf8-6a4d-494d-938e-515f5c9d5885.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: b964f5aed06147efa06d27392db4f4f4" -d '{"router": {"external_gateway_info": {"enable_snat": false}}}'
+
+ Actual Response:
+ HTTP/1.1 400 Bad Request
+ Content-Length: 234
+ Content-Type: application/json; charset=UTF-8
+ X-Openstack-Request-Id: req-ac54539c-74eb-4fc1-8eac-339c928c69a6
+ Date: Wed, 07 Sep 2016 08:31:22 GMT
+
+ {"NeutronError": {"message": "Invalid input for external_gateway_info. Reason: Validation of dictionary's keys failed. Expected keys: set(['network_id']) Provided keys: set([u'enable_snat'])."
+
+ Expected Response: that the external_gateway_info would have SNAT disabled, even without the gateway network ID
+
+ In other words, the expectation is that the user should be allowed to enable/disable SNAT independently if the External Gateway Network ID is set. If not, then it should be avoided.

** Changed in: neutron
   Assignee: (unassigned) => Reedip (reedip-banerjee)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503686

Title:
  unable to update enable_snat using router-update command

Status in neutron:
  New
Status in python-neutronclient:
  Expired

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503686/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
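The 400 response above says validation expects a network_id key in external_gateway_info. A sketch of the workaround request body, supplying network_id alongside enable_snat (the network id here is a placeholder of ours):

```python
import json

def gateway_update_body(network_id, enable_snat):
    # Workaround sketch: current validation requires 'network_id' in
    # external_gateway_info, so supply it even when only toggling SNAT.
    return {"router": {"external_gateway_info": {
        "network_id": network_id,
        "enable_snat": enable_snat,
    }}}

body = gateway_update_body("<external-net-id>", False)
print(json.dumps(body))
```

Passing this body to the same PUT /v2.0/routers/<router-id> call should satisfy the key validation; the bug itself asks for enable_snat to be updatable without re-supplying the gateway network.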
[Yahoo-eng-team] [Bug 1605966] Re: L3 HA: VIP doesn't changed if qr interface or qg interface was down
Marking this as Incomplete, seeing as no progress has been made on the bug report or on the patch.

** Changed in: neutron
   Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605966

Title:
  L3 HA: VIP doesn't change if qr interface or qg interface was down

Status in neutron:
  Invalid

Bug description:
  === Problem Description ===

  Currently, in L3 HA, we track the "ha" interface to determine whether a VIP address should fail over. Unfortunately, if a qr or qg interface goes down, the VIP address will not fail over, because we don't track these interfaces in a router.

  === How to reproduce ===

  Create an HA router and attach a subnet to it, so that there will be a keepalived process to monitor this router. Go into the L3 router we created above and execute "ip link set qr-xxx down". Contrary to what we expect, the VIP address doesn't fail over.

  === How to resolve it ===

  In the current keepalived configuration file:

  vrrp_instance VR_2 {
      state BACKUP
      interface ha-c00c7b49-d5
      virtual_router_id 2
      priority 50
      garp_master_delay 60
      nopreempt
      advert_int 2
      track_interface {
          ha-c00c7b49-d5
      }
      virtual_ipaddress {
          169.254.0.2/24 dev ha-c00c7b49-d5
      }
      virtual_ipaddress_excluded {
          2.2.2.1/24 dev qr-b312f788-9b
          fe80::f816:3eff:feac:fa12/64 dev qr-b312f788-9b scope link
      }
  }

  Tracked interfaces only include the "ha" interface, so the VIP will not change if a "qr" or "qg" interface goes down.
  To address this, we track the "qr" and "qg" interfaces as well:

  vrrp_instance VR_2 {
      state BACKUP
      interface ha-c00c7b49-d5
      virtual_router_id 2
      priority 50
      garp_master_delay 60
      nopreempt
      advert_int 2
      track_interface {
          qr-xxx
          qg-xxx
          ha-c00c7b49-d5
      }
      virtual_ipaddress {
          169.254.0.2/24 dev ha-c00c7b49-d5
      }
      virtual_ipaddress_excluded {
          2.2.2.1/24 dev qr-b312f788-9b
          fe80::f816:3eff:feac:fa12/64 dev qr-b312f788-9b scope link
      }
  }

  By doing this, if a qr or qg interface goes down, the HA router will fail over.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605966/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
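The delta between the two configs is just the track_interface stanza. A sketch of how an agent might render it once the qr/qg devices are included (function and argument names are ours, not neutron's):

```python
def render_track_interface(ha_dev, extra_devs=()):
    # Render a keepalived track_interface stanza; with extra_devs, the
    # router's qr-/qg- devices are monitored in addition to the ha- one,
    # so keepalived drops priority when any of them goes down.
    devs = list(extra_devs) + [ha_dev]
    lines = ["track_interface {"]
    lines += ["    %s" % d for d in devs]
    lines.append("}")
    return "\n".join(lines)

print(render_track_interface("ha-c00c7b49-d5", ["qr-xxx", "qg-xxx"]))
```

The "qr-xxx"/"qg-xxx" names follow the placeholders used in the bug report; a real agent would substitute the router's actual device names.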
[Yahoo-eng-team] [Bug 1605282] Re: Transaction rolled back while creating HA router
This should have been mitigated by
https://review.openstack.org/#/c/364278/10/neutron/scheduler/l3_agent_scheduler.py@207
so I'm closing this.

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
   Importance: Undecided => Medium

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605282

Title:
  Transaction rolled back while creating HA router

Status in neutron:
  Fix Released

Bug description:
  The stacktrace can be found here: http://paste.openstack.org/show/539052/

  This was discovered while running the create_and_delete_router rally test with a high (~10) concurrency number. I encountered this on stable/mitaka so it's interesting to see if this reproduces on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605282/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1620835] Re: Add timestamp fields for neutron ext resources
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620835

Title:
  Add timestamp fields for neutron ext resources

Status in neutron:
  New
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/312873

  Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.

  commit 17b88cd4539cd5fa096115b76fd4a21036395360
  Author: ZhaoBo
  Date: Thu May 5 17:16:23 2016 +0800

      Add timestamp fields for neutron ext resources

      Propose a new extension named "timestamp_ext" to add timestamp to neutron ext resources like router/floatingip/security_group/security_group_rule.

      APIImpact
      DocImpact: Neutron ext resources now contain 'timestamp' fields like 'created_at' and 'updated_at'

      Implements: blueprint add-neutron-extension-resource-timestamp
      Change-Id: I78b00516e31ce83376d37f57299b2229b6fb8fcf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620835/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
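Per the DocImpact note, affected resources gain 'created_at' and 'updated_at' fields. An illustrative response fragment for a router (the field names come from the commit message above; every value is a made-up placeholder, not output from a real neutron deployment):

```python
import json

# Illustrative only: id, name, and timestamp values are placeholders.
router = {
    "id": "00000000-0000-0000-0000-000000000000",
    "name": "router1",
    "created_at": "2016-05-05T17:16:23",
    "updated_at": "2016-09-07T08:31:22",
}
print(json.dumps(router, indent=2))
```

Documentation for the affected resources (router, floatingip, security_group, security_group_rule) would list these two read-only fields in their response schemas.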