[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils
** Also affects: keystonemiddleware
   Importance: Undecided
       Status: New

** Changed in: keystonemiddleware
       Status: New => In Progress

** Changed in: keystonemiddleware
     Assignee: (unassigned) => zhangyangyang (zhangyangyang)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in Ceilometer: Fix Released
Status in Cinder: Fix Released
Status in gantt: New
Status in Glance: Fix Released
Status in Ironic: Fix Released
Status in OpenStack Identity (keystone): Fix Released
Status in keystonemiddleware: In Progress
Status in Manila: Fix Released
Status in neutron: In Progress
Status in OpenStack Compute (nova): Fix Released
Status in oslo.messaging: Fix Released
Status in oslo.utils: New
Status in python-keystoneclient: Fix Released
Status in python-novaclient: Fix Released
Status in tuskar: Fix Released
Status in zaqar: Fix Released

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unit tests. However, we now use mock or fixtures to mock our objects,
  so set_time_override has become obsolete. We should first remove all
  usage of set_time_override from downstream projects before deleting it
  from oslo.

  List of attributes and functions to be removed from timeutils:
  * override_time
  * set_time_override()
  * clear_time_override()
  * advance_time_delta()
  * advance_time_seconds()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
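A minimal sketch of the replacement pattern: instead of calling
timeutils.set_time_override() (global state that must be cleared later),
freeze the clock with unittest.mock. The `timeutils` class below is a
stand-in for illustration, not the real oslo module.

```python
import datetime
from unittest import mock


class timeutils:
    """Stand-in for oslo's timeutils, for illustration only."""

    @staticmethod
    def utcnow():
        return datetime.datetime.utcnow()


def age_in_seconds(created_at):
    # Code under test reads the clock through timeutils.utcnow().
    return (timeutils.utcnow() - created_at).total_seconds()


# Old style (being removed): timeutils.set_time_override(...)
# New style: patch utcnow where the code under test looks it up.
frozen = datetime.datetime(2014, 1, 7, 12, 0, 0)
with mock.patch.object(timeutils, 'utcnow', return_value=frozen):
    elapsed = age_in_seconds(datetime.datetime(2014, 1, 7, 11, 59, 0))
# elapsed == 60.0, and there is no global override state to clear
```

The context manager undoes the patch automatically, which is exactly the
cleanup that set_time_override()/clear_time_override() forced callers to
do by hand.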
[Yahoo-eng-team] [Bug 1713901] Re: Icon of status on network topology is showing wrong
Reviewed:  https://review.openstack.org/499021
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=b26b17caec5482f58baec9f9d05192694ffb7954
Submitter: Jenkins
Branch:    master

commit b26b17caec5482f58baec9f9d05192694ffb7954
Author: Feilong Wang
Date:   Wed Aug 30 16:32:34 2017 +1200

    Fix icon of status on network topology

    Currently, the icon of status on network topology is wrong because
    of incorrect icon. There are two issues:
    1. The icon of router status is in red color though it's active.
    2. The icon of interface status is missing.

    Closes-Bug: #1713901
    Change-Id: Icf676c7267f64b3704cd16e6a1cccf9ae4d95e91

** Changed in: horizon
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1713901

Title:
  Icon of status on network topology is showing wrong

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description:
  Currently, the status icons on the flat and graph network topology
  views are wrong. There are 3 issues:
  1. The icon of router status is in red color though it's active on the
     flat network topology.
  2. The icon of interface status is missing on the flat network
     topology.
  3. The icon of interface status of router is missing on the graph
     network topology.

  See https://ibb.co/dkru3k and https://ibb.co/kN2yw5

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1713901/+subscriptions
[Yahoo-eng-team] [Bug 1714528] Re: l2pop config opts not documented
Reviewed:  https://review.openstack.org/502547
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=1b4559a5e498eac31d051ba5d1c5a86597588560
Submitter: Jenkins
Branch:    master

commit 1b4559a5e498eac31d051ba5d1c5a86597588560
Author: Boden R
Date:   Mon Sep 11 13:32:33 2017 -0600

    fix missing l2pop config option docs

    The l2pop ML2 mechanism driver's configuration options are no longer
    being generated during the docs build. This patch adds the l2pop
    config options back into the ml2 sample config generation and also
    fixes a link to them in the admin docs.

    This fix is a candidate for backport to pike.

    Change-Id: Ia26b4d1995690e7291b6476dc683271e12de09ab
    Closes-Bug: #1714528

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714528

Title:
  l2pop config opts not documented

Status in neutron: Fix Released

Bug description:
  In ocata we had a config reference for the l2pop config opts [1].
  However, these seem to be missing in pike [2] even though they are
  referenced in the config-ml2.rst doc.

  [1] https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-l2-population-mechanism-configuration-options
  [2] https://docs.openstack.org/neutron/pike/configuration/ml2-conf.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714528/+subscriptions
[Yahoo-eng-team] [Bug 1714068] Re: AttributeError in get_device_details when segment=None
Reviewed:  https://review.openstack.org/501852
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=4833852abb864230daecf97bc672ad9a4a9c4267
Submitter: Jenkins
Branch:    master

commit 4833852abb864230daecf97bc672ad9a4a9c4267
Author: Brian Haley
Date:   Thu Sep 7 15:42:33 2017 -0400

    Treat lack of segment info in port object as unbound

    A push notifications change added segment information to the
    get_device_details() RPC call, but sometimes the segment information
    is not present, resulting in an AttributeError. Just treat the lack
    of segment info as if the port was unbound, since the port is
    probably in the process of being removed.

    Change-Id: I631c6e1f02fa07eed330c99a96aa66d747784f37
    Closes-bug: #1714068

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714068

Title:
  AttributeError in get_device_details when segment=None

Status in neutron: Fix Released

Bug description:
  Found an AttributeError in this tempest log file:
  http://logs.openstack.org/67/347867/52/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/64d8d9a/logs/screen-q-agt.txt.gz

  Aug 29 18:57:12.585412 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc [None req-0de9bdcb-9659-4971-9e03-4e2849af0169 None None] Failed to get details for device 4fe2030b-4af8-47e7-8dd3-52b544918e16: AttributeError: 'NoneType' object has no attribute 'network_type'
  Aug 29 18:57:12.585578 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc Traceback (most recent call last):
  Aug 29 18:57:12.585712 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc   File "/opt/stack/new/neutron/neutron/agent/rpc.py", line 216, in get_devices_details_list_and_failed_devices
  Aug 29 18:57:12.585858 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc     self.get_device_details(context, device, agent_id, host))
  Aug 29 18:57:12.585987 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc   File "/opt/stack/new/neutron/neutron/agent/rpc.py", line 244, in get_device_details
  Aug 29 18:57:12.586162 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc     'network_type': segment.network_type,
  Aug 29 18:57:12.586289 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc AttributeError: 'NoneType' object has no attribute 'network_type'
  Aug 29 18:57:12.586414 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc

  From the same log just a second earlier, we can see that the port was
  updated, and the 'segment' field of the port binding_level was
  cleared:

  Aug 29 18:57:11.569684 ubuntu-xenial-rax-iad-10687932 neutron-openvswitch-agent[28658]: DEBUG neutron.agent.resource_cache [None req-1d914886-cbd7-4568-9290-bcbc98bd7ba6 None None] Resource Port 4fe2030b-4af8-47e7-8dd3-52b544918e16 updated (revision_number 4->5). Old fields: {'status': u'DOWN', 'binding_levels': [PortBindingLevel(driver='openvswitch',host='ubuntu-xenial-rax-iad-10687932',level=0,port_id=4fe2030b-4af8-47e7-8dd3-52b544918e16,segment=NetworkSegment(ef53b2b6-f51e-495b-9dda-b174fbed1122))]} New fields: {'status': u'ACTIVE', 'binding_levels': [PortBindingLevel(driver='openvswitch',host='ubuntu-xenial-rax-iad-10687932',level=0,port_id=4fe2030b-4af8-47e7-8dd3-52b544918e16,segment=None)]} {{(pid=28658) record_resource_update /opt/stack/new/neutron/neutron/agent/resource_cache.py:185}}

  The code in question doesn't check if 'segment' is valid before using
  it, and I'm not sure if such a simple change as that is appropriate to
  fix it.

This code was added in the push notifications changes:

commit c3db9d6b0b990da6664955e4ce1c72758dc600e1
Author: Kevin Benton
Date:   Sun Jan 22 17:01:47 2017 -0800

    Use push-notificates for OVSPluginAPI

    Replace the calls to the OVSPluginAPI info retrieval functions with
    reads directly from the push notification cache. Since we now depend
    on the cache for the source of truth, the
    'port_update'/'port_delete'/'network_update' handlers are configured
    to be called whenever the cache receives a corresponding resource
    update. The OVS agent will no longer subscribe to topic
    notifications for ports or networks from the legacy notification
    API.

    Partially-Implements: blueprint push-notifications
    Change-Id: Ib2234ec1f5d328649c6bb1c3fe07799d3e351f48

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714068/+subscriptions
[Yahoo-eng-team] [Bug 1712787] Re: Documentation link broken on Open vSwitch L2 Agent in neutron
Reviewed:  https://review.openstack.org/501718
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=cfb3dc5c2db7ae0bb07f651507db333c93b742ef
Submitter: Jenkins
Branch:    master

commit cfb3dc5c2db7ae0bb07f651507db333c93b742ef
Author: mohit.mohit2atcognizant.com
Date:   Thu Sep 7 05:29:53 2017 -0700

    Fixing hyperlink issue

    Fixes a documentation hyperlink issue:
    * Refer to the doc page
      https://docs.openstack.org/neutron/latest/contributor/internals/openvswitch_agent.html
      for the string "Networking in too much detail".
    * The hyperlink for the above string is broken. It points to
      http://openstack.redhat.com/Networking_in_too_much_detail#Networking_in_too_much_detail
    * Instead it should point to
      http://openstack.redhat.com/networking/networking-in-too-much-detail/
    * This fix corrects the hyperlink on the doc page.

    Change-Id: I53f10286a109629d72f8d7a3e8db46568fdb72bf
    Closes-Bug: #1712787

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1712787

Title:
  Documentation link broken on Open vSwitch L2 Agent in neutron

Status in neutron: Fix Released

Bug description:
  The external hyperlink on the doc page is broken for the linked string
  in the sentence "GRE Tunneling is documented in depth in the
  Networking in too much detail by RedHat."

  It points to
  http://openstack.redhat.com/Networking_in_too_much_detail
  Instead it should point to
  http://openstack.redhat.com/networking/networking-in-too-much-detail/

  ---
  Release: 11.0.0.0rc2.dev52 on 2017-08-24 09:16
  SHA: f0e4809ca82628faa43d6ac83892b4451e1512f6
  Source: https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/contributor/internals/openvswitch_agent.rst
  URL: https://docs.openstack.org/neutron/latest/contributor/internals/openvswitch_agent.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1712787/+subscriptions
[Yahoo-eng-team] [Bug 1715629] Re: External hyperlink broken on L2 Networking with SR-IOV enabled NICs in neutron
Reviewed:  https://review.openstack.org/501725
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=67b9402ea65677ab5e0a930a16bfbba25eac376b
Submitter: Jenkins
Branch:    master

commit 67b9402ea65677ab5e0a930a16bfbba25eac376b
Author: mohit.mohit2atcognizant.com
Date:   Thu Sep 7 06:10:18 2017 -0700

    Fixing external hyperlink.

    Fixes a documentation hyperlink issue:
    * Go to the doc page
      https://docs.openstack.org/neutron/latest/contributor/internals/sriov_nic_agent.html
    * Refer to the hyperlink for the text "SR-IOV Passthrough For
      Networking". It is currently broken.
    * It should point to
      https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
      instead of the current one
      https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking/
    * This fixes the hyperlink on the doc page.

    Closes-Bug: #1715629
    Change-Id: I863fde51f488b76cc2613dbbc59455801cfd4195

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715629

Title:
  External hyperlink broken on L2 Networking with SR-IOV enabled NICs in
  neutron

Status in neutron: Fix Released

Bug description:
  The hyperlink on the page
  https://docs.openstack.org/neutron/latest/contributor/internals/sriov_nic_agent.html
  for the text "SR-IOV Passthrough For Networking" is broken. It should
  point to
  https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
  instead of the current one
  https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking/

  ---
  Release: 11.0.0.0rc2.dev122 on 2017-09-07 02:23
  SHA: e711efc7db005ff9ee903bd24f7a266053a4f90f
  Source: https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/contributor/internals/sriov_nic_agent.rst
  URL: https://docs.openstack.org/neutron/latest/contributor/internals/sriov_nic_agent.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715629/+subscriptions
[Yahoo-eng-team] [Bug 1717358] Re: gate-neutron-dsvm-api-ubuntu-xenial fails because of skip_checks()
Reviewed:  https://review.openstack.org/504227
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b9d0c5418dcf397fe06976aae5d1749ec99a662a
Submitter: Jenkins
Branch:    master

commit b9d0c5418dcf397fe06976aae5d1749ec99a662a
Author: Hirofumi Ichihara
Date:   Thu Sep 14 14:45:15 2017 -0600

    Fix missing super's skip_checks()

    The skip_checks of NetworksTestDHCPv6 didn't call super's
    skip_checks().

    Change-Id: I1c0902e3c06886812029fae0e4435bb6674f57df
    Closes-Bug: 1717358

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1717358

Title:
  gate-neutron-dsvm-api-ubuntu-xenial fails because of skip_checks()

Status in neutron: Fix Released

Bug description:
  2017-09-14 19:44:27.502259 | ==
  2017-09-14 19:44:27.502290 | Failed 1 tests - output below:
  2017-09-14 19:44:27.502313 | ==
  2017-09-14 19:44:27.502326 |
  2017-09-14 19:44:27.502358 | setUpClass (neutron.tests.tempest.api.test_dhcp_ipv6.NetworksTestDHCPv6)
  2017-09-14 19:44:27.502391 |
  2017-09-14 19:44:27.502403 |
  2017-09-14 19:44:27.502421 | Captured traceback:
  2017-09-14 19:44:27.502439 | ~~~
  2017-09-14 19:44:27.502462 |     Traceback (most recent call last):
  2017-09-14 19:44:27.502490 |       File "tempest/test.py", line 152, in setUpClass
  2017-09-14 19:44:27.502512 |         "skip_checks" % cls.__name__)
  2017-09-14 19:44:27.502549 |     RuntimeError: skip_checks for NetworksTestDHCPv6 did not call the super's skip_checks

  Related tempest patch: https://review.openstack.org/#/c/493967/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1717358/+subscriptions
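The failure mode, in miniature: tempest verifies that every test class's
skip_checks() chains up to its parent. The class names below mirror the
report, but the bookkeeping is a simplified stand-in for tempest's, for
illustration only.

```python
class BaseTest:
    """Simplified stand-in for a tempest base test class."""

    @classmethod
    def skip_checks(cls):
        # Tempest records that the base implementation actually ran and
        # raises RuntimeError in setUpClass if it did not.
        cls._skip_checks_called = True


class NetworksTestDHCPv6(BaseTest):
    @classmethod
    def skip_checks(cls):
        # This super() call is exactly what the fix added.
        super(NetworksTestDHCPv6, cls).skip_checks()
        # ... class-specific skip conditions would follow here ...


NetworksTestDHCPv6.skip_checks()
called = NetworksTestDHCPv6._skip_checks_called
```

Without the super() call, the base class never sets its flag, which is
why setUpClass raised the RuntimeError seen in the gate output.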
[Yahoo-eng-team] [Bug 1717000] Re: InstanceNotFound prevents putting over-quota instance into ERROR state
Reviewed:  https://review.openstack.org/503839
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=7e02f02d1501925ddeb15266c05d4d95f852e21a
Submitter: Jenkins
Branch:    master

commit 7e02f02d1501925ddeb15266c05d4d95f852e21a
Author: Matt Riedemann
Date:   Wed Sep 13 17:30:59 2017 -0400

    Target context when setting instance to ERROR when over quota

    When conductor does the quota recheck, the instances are created in
    a cell but when we update the instance and set it to ERROR state, we
    were not targeting the context to the cell that the instance lives
    in, which leads to an InstanceNotFound error and then the instance
    is stuck in BUILD/scheduling state.

    This targets the context to the cell when updating the instance.

    Change-Id: I45faffaba4d329433a33cfb5e64c89ce4885df46
    Closes-Bug: #1717000

** Changed in: nova
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717000

Title:
  InstanceNotFound prevents putting over-quota instance into ERROR state

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) pike series: In Progress

Bug description:
  I found this when trying to recreate bug 1716706:
  https://bugs.launchpad.net/nova/+bug/1716706/comments/4

  Basically I can get conductor to fail the quota recheck and go to set
  the instance into ERROR state, but it fails to find the instance since
  we don't have the cell context targeted:

  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: WARNING nova.scheduler.utils [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] [instance: 888925b0-164a-4d4a-bb6c-c0426f904e95] Setting instance to ERROR state.: TooManyInstances: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR root [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/opt/stack/nova/nova/conductor/manager.py", line 1003, in schedule_and_build_instances\n    orig_num_req=len(build_requests))\n', '  File "/opt/stack/nova/nova/compute/utils.py", line 764, in check_num_instances_quota\n    allowed=total_alloweds)\n', 'TooManyInstances: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances\n']: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR oslo_messaging.rpc.server [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Exception during message handling: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR oslo_messaging.rpc.server InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.

  Because we don't target the cell when updating the instance:
  https://github.com/openstack/nova/blob/cfdec41eeec5fab220702efefdaafc45559aeb14/nova/conductor/manager.py#L1168

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1717000/+subscriptions
[Yahoo-eng-team] [Bug 1717386] [NEW] Heat docker plugin unable to start containers on latest Docker
Public bug reported:

  Recent versions of Docker no longer seem to support parameters when
  starting a container. Thus the Heat docker plugin will cause the
  following error to occur:

    "starting container with non-empty request body was deprecated
    since v1.10 and removed in v1.12"

  Where the problem is occurring:
  https://github.com/openstack/heat/blob/master/contrib/heat_docker/heat_docker/resources/docker_container.py#L449

  Example template to use:

    heat_template_version: 2013-05-23

    description: >
      Heat template to deploy Docker containers to an existing host

    resources:
      hello:
        type: DockerInc::Docker::Container
        properties:
          image: nginx
          docker_endpoint: tcp://10.0.0.5:2376

** Affects: heat
   Importance: Undecided
       Status: New

** Project changed: horizon => heat

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1717386

Title:
  Heat docker plugin unable to start containers on latest Docker

Status in OpenStack Heat: New

Bug description:
  Recent versions of Docker no longer seem to support parameters when
  starting a container. Thus the Heat docker plugin will cause the
  following error to occur:

    "starting container with non-empty request body was deprecated
    since v1.10 and removed in v1.12"

  Where the problem is occurring:
  https://github.com/openstack/heat/blob/master/contrib/heat_docker/heat_docker/resources/docker_container.py#L449

  Example template to use:

    heat_template_version: 2013-05-23

    description: >
      Heat template to deploy Docker containers to an existing host

    resources:
      hello:
        type: DockerInc::Docker::Container
        properties:
          image: nginx
          docker_endpoint: tcp://10.0.0.5:2376

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1717386/+subscriptions
[Yahoo-eng-team] [Bug 1452641] Re: Static Ceph mon IP addresses in connection_info can prevent VM startup
Talked about this at the Queens PTG; notes are in here:
https://etherpad.openstack.org/p/cinder-ptg-queens

** Also affects: nova
   Importance: Undecided
       Status: New

** Changed in: nova
       Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** No longer affects: cinder

** Tags removed: drivers
** Tags added: volumes

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452641

Title:
  Static Ceph mon IP addresses in connection_info can prevent VM startup

Status in OpenStack Compute (nova): Confirmed

Bug description:
  The Cinder rbd driver extracts the IP addresses of the Ceph mon
  servers from the Ceph mon map when the instance/volume connection is
  established. This info is then stored in nova's block-device-mapping
  table and is never re-validated down the line. Changing the Ceph mon
  servers' IP addresses will prevent the instance from booting, as the
  stale connection info will enter the instance's XML.

  One idea to fix this would be to use the information from ceph.conf,
  which should be an alias or a load balancer, directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1452641/+subscriptions
[Yahoo-eng-team] [Bug 1547142] Re: A shelved_offload VM's volumes are still attached to a host
Reviewed:  https://review.openstack.org/257275
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e89e1bdc60211622440c964f8be8563da89341ac
Submitter: Jenkins
Branch:    master

commit e89e1bdc60211622440c964f8be8563da89341ac
Author: Andrea Rosa
Date:   Thu Sep 14 13:47:06 2017 -0400

    Call terminate_connection when shelve_offloading

    When nova performs a shelve offload for an instance, it needs to
    terminate all the volume connections for that instance as with the
    shelve offload it is not guaranteed that the instance will be placed
    on the same host once it gets unshelved.
    This change adds the call to the terminate_volume_connections on the
    _shelve_offload_instance method in the compute manager.

    Closes-Bug: #1547142
    Change-Id: I8849ae0f54605e003d5b294ca3d66dcef89d7d27

** Changed in: nova
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547142

Title:
  A shelved_offload VM's volumes are still attached to a host

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) ocata series: Confirmed
Status in OpenStack Compute (nova) pike series: In Progress

Bug description:
  When shelve-offloading a VM, the VM loses its connection to a host.
  However, the connections from the host to its volumes are not
  terminated, so the volumes are still attached to that host.
  Afterwards, when the VM is unshelved, nova calls initialize_connection
  to the new host for its volumes, and they are now connected to 2
  hosts.

  The correct behaviour is to call terminate_connection on the VM's
  volumes when it is being shelved_offloaded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1547142/+subscriptions
[Yahoo-eng-team] [Bug 1717365] Re: binding:profile is None breaks migration
** Also affects: nova/newton
   Importance: Undecided
       Status: New

** Also affects: nova/ocata
   Importance: Undecided
       Status: New

** Also affects: nova/pike
   Importance: Undecided
       Status: New

** Tags added: live-migration neutron

** Changed in: nova
       Status: New => Triaged

** Changed in: nova/newton
       Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/ocata
       Status: New => Confirmed

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/pike
       Status: New => Confirmed

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717365

Title:
  binding:profile is None breaks migration

Status in OpenStack Compute (nova): Triaged
Status in OpenStack Compute (nova) newton series: Confirmed
Status in OpenStack Compute (nova) ocata series: Confirmed
Status in OpenStack Compute (nova) pike series: Confirmed

Bug description:
  Nova Newton (commit: d8b30c3772 as pulled in with OSA 14.2.7)

  During a live-migration, setup_networks_at_host tries to look up some
  information from the network port, at
  https://review.openstack.org/#/c/275073/45/nova/network/neutronv2/api.py@289

  If the port has None assigned to "binding:profile", further code
  breaks with a TypeError on NoneType.

  mriedem suggested catching this with an extended .get():

  16:27 < mriedem> since the port_profile should default to {}, UNLESS the port has binding:profile=None...
  16:27 < mriedem> it should be: port_profile = p.get(BINDING_PROFILE, {}) or {}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1717365/+subscriptions
[Yahoo-eng-team] [Bug 1717365] [NEW] binding:profile is None breaks migration
Public bug reported:

  Nova Newton (commit: d8b30c3772 as pulled in with OSA 14.2.7)

  During a live-migration, setup_networks_at_host tries to look up some
  information from the network port, at
  https://review.openstack.org/#/c/275073/45/nova/network/neutronv2/api.py@289

  If the port has None assigned to "binding:profile", further code
  breaks with a TypeError on NoneType.

  mriedem suggested catching this with an extended .get():

  16:27 < mriedem> since the port_profile should default to {}, UNLESS the port has binding:profile=None...
  16:27 < mriedem> it should be: port_profile = p.get(BINDING_PROFILE, {}) or {}

** Affects: nova
   Importance: Medium
       Status: Triaged

** Affects: nova/newton
   Importance: Medium
       Status: Confirmed

** Affects: nova/ocata
   Importance: Medium
       Status: Confirmed

** Affects: nova/pike
   Importance: Medium
       Status: Confirmed

** Tags: live-migration neutron

** Attachment added: "binding.profile.catch.none.patch"
   https://bugs.launchpad.net/bugs/1717365/+attachment/4950383/+files/binding.profile.catch.none.patch

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717365

Title:
  binding:profile is None breaks migration

Status in OpenStack Compute (nova): Triaged
Status in OpenStack Compute (nova) newton series: Confirmed
Status in OpenStack Compute (nova) ocata series: Confirmed
Status in OpenStack Compute (nova) pike series: Confirmed

Bug description:
  Nova Newton (commit: d8b30c3772 as pulled in with OSA 14.2.7)

  During a live-migration, setup_networks_at_host tries to look up some
  information from the network port, at
  https://review.openstack.org/#/c/275073/45/nova/network/neutronv2/api.py@289

  If the port has None assigned to "binding:profile", further code
  breaks with a TypeError on NoneType.

  mriedem suggested catching this with an extended .get():

  16:27 < mriedem> since the port_profile should default to {}, UNLESS the port has binding:profile=None...
  16:27 < mriedem> it should be: port_profile = p.get(BINDING_PROFILE, {}) or {}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1717365/+subscriptions
[Yahoo-eng-team] [Bug 1717358] [NEW] gate-neutron-dsvm-api-ubuntu-xenial fails because of skip_checks()
Public bug reported:

  2017-09-14 19:44:27.502259 | ==
  2017-09-14 19:44:27.502290 | Failed 1 tests - output below:
  2017-09-14 19:44:27.502313 | ==
  2017-09-14 19:44:27.502326 |
  2017-09-14 19:44:27.502358 | setUpClass (neutron.tests.tempest.api.test_dhcp_ipv6.NetworksTestDHCPv6)
  2017-09-14 19:44:27.502391 |
  2017-09-14 19:44:27.502403 |
  2017-09-14 19:44:27.502421 | Captured traceback:
  2017-09-14 19:44:27.502439 | ~~~
  2017-09-14 19:44:27.502462 |     Traceback (most recent call last):
  2017-09-14 19:44:27.502490 |       File "tempest/test.py", line 152, in setUpClass
  2017-09-14 19:44:27.502512 |         "skip_checks" % cls.__name__)
  2017-09-14 19:44:27.502549 |     RuntimeError: skip_checks for NetworksTestDHCPv6 did not call the super's skip_checks

  Related tempest patch: https://review.openstack.org/#/c/493967/

** Affects: neutron
   Importance: Undecided
     Assignee: Hirofumi Ichihara (ichihara-hirofumi)
       Status: In Progress

** Tags: gate-failure

** Changed in: neutron
     Assignee: (unassigned) => Hirofumi Ichihara (ichihara-hirofumi)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1717358

Title:
  gate-neutron-dsvm-api-ubuntu-xenial fails because of skip_checks()

Status in neutron: In Progress

Bug description:
  2017-09-14 19:44:27.502259 | ==
  2017-09-14 19:44:27.502290 | Failed 1 tests - output below:
  2017-09-14 19:44:27.502313 | ==
  2017-09-14 19:44:27.502326 |
  2017-09-14 19:44:27.502358 | setUpClass (neutron.tests.tempest.api.test_dhcp_ipv6.NetworksTestDHCPv6)
  2017-09-14 19:44:27.502391 |
  2017-09-14 19:44:27.502403 |
  2017-09-14 19:44:27.502421 | Captured traceback:
  2017-09-14 19:44:27.502439 | ~~~
  2017-09-14 19:44:27.502462 |     Traceback (most recent call last):
  2017-09-14 19:44:27.502490 |       File "tempest/test.py", line 152, in setUpClass
  2017-09-14 19:44:27.502512 |         "skip_checks" % cls.__name__)
  2017-09-14 19:44:27.502549 |     RuntimeError: skip_checks for NetworksTestDHCPv6 did not call the super's skip_checks

  Related tempest patch: https://review.openstack.org/#/c/493967/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1717358/+subscriptions
[Yahoo-eng-team] [Bug 1485578] Re: It is not possible to select AZ for new Cinder volume during the VM creation
This bug was last updated over 2 years ago, and the launch instance has been entirely re-written. Can you reproduce the error with a current release of horizon? If the issue still exists, please feel free to reopen it. ** Tags added: cinder ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1485578 Title: It is not possible to select AZ for new Cinder volume during the VM creation Status in OpenStack Dashboard (Horizon): Invalid Bug description: Steps To Reproduce: 1. Deploy OpenStack cluster with several Nova availability zones, for example, 'nova1' and 'nova2' and with several Cinder availability zones, for example, 'storage1' and 'storage2' (availability zones for Nova and Cinder should be different). 2. Login to Horizon dashboard and navigate to Project > Instances 3. Click on 'Launch Instance' button 4. Set all required parameters, select Nova AZ 'nova1' for new VM and select Instance Boot Source = "Boot from image (creates new volume)" 5. Click on 'Launch' button Observed Result: Instance will fail with "Failure prepping block device" error (please see attached screenshot horizon_az_bug.png) Expected Result: As a user I expect that Horizon UI will provide me the ability to select the availability zone for new volume if I want to create new volume and boot VM from it. We can't use Nova AZ as availability zone for Cinder volume because these zones are different availability zones (we can have, for example, 1 Nova availability zones and many Cinder availability zone or one Cinder AZ and many Nova AZs - it depends on users needs). 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1485578/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1461118] Re: limit maximum length of gen_random_resource
Closing since it was already fixed elsewhere.

** Changed in: horizon
       Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461118

Title:
  limit maximum length of gen_random_resource

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  For example, a flavor name can't be longer than 25 characters, but
  this function produces much longer strings, which can lead to raised
  exceptions like 'abcdefgh' != 'abc'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461118/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
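The fix the report asks for can be sketched as a test helper that caps its output at the field limit; the function and parameter names here are hypothetical, not Horizon's actual helper:

```python
import random
import string


def gen_random_resource_name(resource="flavor", max_length=25):
    """Generate a random resource name capped at max_length characters.

    max_length illustrates the limit the bug describes (e.g. a flavor
    name field of 25 characters): names are truncated rather than being
    allowed to overflow the field and break equality assertions.
    """
    suffix = "".join(random.choice(string.ascii_lowercase) for _ in range(16))
    return ("%s-%s" % (resource, suffix))[:max_length]
```

With the cap in place, generated names always fit the field, so comparisons like `'abcdefgh' != 'abc'` caused by silent truncation on the server side cannot occur.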
[Yahoo-eng-team] [Bug 1462503] Re: Need a specific error message for disk too small boot failures
This bug was last updated over 2 years ago, and the last commenter was unable to reproduce it on a more recent build. If the issue still exists, please feel free to reopen it.

** Changed in: horizon
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462503

Title:
  Need a specific error message for disk too small boot failures

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When using a compressed image format it's possible to have late boot
  failures because the expanded image can't fit on disk. Horizon should
  show a specific error message for this failure.

  To create: use Admin->System->Image->Create Image
  Download a largish compressed image such as:
  http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img
  Format QCOW2
  Copy Data: true
  Public: True

  This disk is a compressed image which takes 241M, but expands to 2.2G:

  ubuntu@unbuntu-drf-3:/opt/stack/data/glance/cache$ qemu-img info a22f195e-e64a-4170-90ff-e76f49a3d6f8
  image: a22f195e-e64a-4170-90ff-e76f49a3d6f8
  file format: qcow2
  virtual size: 2.2G (2361393152 bytes)
  disk size: 241M
  cluster_size: 65536

  Launch an instance based on this image:
  Project->Compute->Instances->Launch Instance
  Choose m1.nano, m1.tiny, or some other flavor with not enough disk
  Boot From Image (creates new volume)

  It fails with an obscure error message. We should catch the exception
  related to this specific circumstance and show a relevant error
  message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462503/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
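The specific check the report asks Horizon to surface can be sketched like this, using the qemu-img numbers from the description; the function name and message wording are illustrative, not Horizon's actual code:

```python
def volume_too_small_error(image_virtual_size, flavor_disk_gb):
    """Return a specific error message when the expanded image cannot
    fit on the flavor's disk, or None when it fits.

    image_virtual_size is in bytes (qemu-img's 'virtual size'); the
    flavor's disk is given in GB, as nova flavors report it.
    """
    flavor_bytes = flavor_disk_gb * 1024 ** 3
    if image_virtual_size > flavor_bytes:
        return ("Image virtual size (%.1f GB) exceeds the flavor's %d GB "
                "disk; choose a flavor with a larger disk."
                % (image_virtual_size / 1024 ** 3, flavor_disk_gb))
    return None
```

Checking the image's virtual size (not its compressed on-disk size) up front is what turns the "obscure error message" at boot time into an actionable one at launch time.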
[Yahoo-eng-team] [Bug 1455213] Re: Permission error given when selecting Project Name and not Set as Active Project
Closing, per the comments. ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1455213 Title: Permission error given when selecting Project Name and not Set as Active Project Status in OpenStack Dashboard (Horizon): Invalid Bug description: Users receive a "You do not have permission to access the resource: /identity/XX/detail/ Login as different user or go back to home page" When they select the name of the project, instead of clicking the button "Set as active Project" To reproduce: User must belong to more than 30 projects so that the Project selector in the Upper Left hand corner is more than what can be displayed. This leads the user to select the option "More Projects'. Once in the projects listing page, the user selects the link that contains the name of the project. This will produce the above error. If the user select "Set as Active Project", they will not see an error and everything would work as expected. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1455213/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1456943] Re: Page "Hypervisor Servers" got instances which not on this hypervisor
This bug was last updated over 2 years ago, and there have been many changes to horizon since then, such that this code no longer exists, so this bug is getting marked as Invalid. If the issue still exists, please feel free to reopen it.

** Changed in: horizon
     Assignee: Kahou Lei (kahou82) => (unassigned)

** Changed in: horizon
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1456943

Title:
  Page "Hypervisor Servers" got instances which not on this hypervisor

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  1. My environment:
     dashboard: openstack-dashboard-2014.1
     Two compute nodes: compute1, compute10

  2. To reproduce:
     1) Launch instances on the two compute nodes:
        compute1 has two instances: instance-001c, instance-001d
        compute10 has three instances: instance-0003, instance-0004, instance-0005
     2) Click into the 'Hypervisor Servers' page of compute1
        * expect: show instance-001c, instance-001d
        * actual: show instance-001c, instance-001d, instance-0003, instance-0004, instance-0005

  3. This issue relates to the RESTful API "os-hypervisors/computes/servers".
     This API uses the pattern "%compute1%" to query instances. Should it
     add a filter to get the correct instances?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1456943/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
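The filter the reporter suggests can be sketched as an exact hostname comparison rather than a SQL-style substring pattern; the dict key used below is an assumption for the demo, not the actual API field:

```python
def servers_on_hypervisor(servers, hostname):
    """Filter instances by exact hypervisor hostname.

    An exact match means "compute1" no longer also picks up
    "compute10"'s instances the way a "%compute1%" pattern does.
    """
    return [s for s in servers if s["hypervisor_hostname"] == hostname]
```

The bug, in other words, is a classic prefix-collision: any pattern match on a hostname that is a prefix of another hostname over-selects, and only an exact (or anchored) comparison is safe.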
[Yahoo-eng-team] [Bug 1454305] Re: Error in displaying ceilometer measurements plot
This bug was last updated over 2 years ago, and as there have been many changes to horizon and its theming support since then, this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1454305 Title: Error in displaying ceilometer measurements plot Status in OpenStack Dashboard (Horizon): Invalid Bug description: Dear colleague, When installing ceilometer (controller node) and trying to visualize the measurements through the dashboard there is an error. A black sandbox is shown instead of the graph / plot. This is shown in a tab named 'Resource Usage' within horizon; clearly, it can only be seen when ceilometer is installed. I discovered there were two css files causing issues, namely datepicker3.css and rickshaw.css. Below is the output from /var/log/apache2/error_log [Tue May 12 14:07:26.756949 2015] [wsgi:error] [pid 4158] Not Found: /horizon/lib/bootstrap_datepicker/datepicker3.css [Tue May 12 14:07:26.759705 2015] [wsgi:error] [pid 4155] Not Found: /horizon/lib/rickshaw.css I realised other files, e.g., js files, were under the /static parent directory, and so the missing .css files. In brief, when looking at /srv/www/openstack- dashboard/openstack_dashboard/static/dashboard/scss> horizon.scss and substituting the first two lines: // Pure CSS Vendor @import "/horizon/lib/bootstrap_datepicker/datepicker3.css"; @import "/horizon/lib/rickshaw.css"; by the following: @import "/static/horizon/lib/bootstrap_datepicker/datepicker3.css"; @import "/static/horizon/lib/rickshaw.css"; the problem is fixed and the plot is shown on the screen. Hope it helps, Sergio. P.D. 
***openstack version (Kilo for SUSE SLES 12) 2015.2.0-2015.2.dev282***

  Information for package python-horizon:
  ---
  Repository: OpenStack (Devel) (SLE_12)
  Name: python-horizon
  Version: 2015.2.0.dev205-2.1
  Arch: noarch
  Vendor: obs://build.opensuse.org/Cloud:OpenStack
  Installed: Yes
  Status: out-of-date (version 2015.2.0.dev195-1.1 installed)
  Installed Size: 1.4 MiB
  Summary: OpenStack Dashboard (Horizon) - Python Module
  Description: The Python module horizon is the core component of the OpenStack dashboard.

  *** Ceilometer version: 1.2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1454305/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1454823] Re: Error on compress: AttributeError: 'FileSystemStorage' object has no attribute 'prefix'
This bug was last updated over 2 years ago, and as the comments indicate, this appears to have been fixed, so this is getting marked as Invalid. If the issue still exists, please feel free to reopen it.

** Changed in: horizon
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1454823

Title:
  Error on compress: AttributeError: 'FileSystemStorage' object has no
  attribute 'prefix'

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I have found that in some stable/kilo environments it is not possible
  to compress js/css.

  drf@drf-VirtualBox:~/horizon$ .venv/bin/python manage.py compress --force
  RemovedInDjango18Warning: 'The `firstof` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  RemovedInDjango18Warning: 'The `cycle` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  Found 'compress' tags in:
          /home/drf/horizon/horizon/templates/horizon/_scripts.html
          /home/drf/horizon/horizon/templates/horizon/_conf.html
          /home/drf/horizon/openstack_dashboard/templates/_stylesheets.html
  Compressing...
  CommandError: An error occured during rendering /home/drf/horizon/openstack_dashboard/templates/_stylesheets.html: Error parsing block:
  [the entire output of the file openstack_dashboard/static/dashboard/scss/horizon.scss is dumped here, I've removed it]
  From :0
  Traceback:
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/scss/__init__.py", line 498, in manage_children
    self._manage_children_impl(rule, scope)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/scss/__init__.py", line 548, in _manage_children_impl
    self._do_import(rule, scope, block)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 118, in _do_import
    source_file = self._find_source_file(name, relative_to)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 86, in _find_source_file
    full_filename, storage = self.get_file_and_storage(name)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 53, in get_file_and_storage
    return self.get_file_from_finders(filename)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 46, in get_file_from_finders
    for file_and_storage in find_all_files(filename):
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/utils.py", line 16, in find_all_files
    if fnmatch.fnmatchcase(os.path.join(storage.prefix or '', path),
  AttributeError: 'FileSystemStorage' object has no attribute 'prefix'

  I was able to force this to happen in my development environment by
  putting this update in requirements.txt:

   Babel>=1.3
  -Django>=1.4.2,<1.8
  +#Django>=1.4.2,<1.8
  +Django==1.7.7
   Pint>=0.5 # BSD
   django_compressor>=1.4
   django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0
  -django-pyscss>=1.0.3,<2.0.0 # BSD License (2 clause)
  +#django-pyscss>=1.0.3,<2.0.0 # BSD License (2 clause)
  +django-pyscss==1.0.3
   eventlet>=0.16.1,!=0.17.0

  Note that these are in-bounds (but not the latest) requirements.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1454823/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1451358] Re: admin project see packstack created demo network under network-topology
This bug was last updated over 2 years ago, and as there have been many changes to both neutron and horizon since then, this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1451358 Title: admin project see packstack created demo network under network- topology Status in OpenStack Dashboard (Horizon): Invalid Bug description: Using Kilo I noticed that I can see demo private network via horizon while logging via project admin. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1451358/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1439443] Re: Fix web-server memory overrun when downloading objects from Swift
Closing, since the dev doc change was part of the horizon change.

** Changed in: horizon
       Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1439443

Title:
  Fix web-server memory overrun when downloading objects from Swift

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in openstack-manuals:
  Invalid

Bug description:
  https://review.openstack.org/161204

  commit 46405d456d9b056e648a4e60235b4c1b251f1236
  Author: Timur Sufiev
  Date: Wed Mar 4 16:32:34 2015 +0300

      Fix web-server memory overrun when downloading objects from Swift

      To prevent memory overrun when downloading large objects from Swift:

      * `resp_chunk_size` keyword should be passed to swiftclient
      * `obj.data` iterator returned from swiftclient is passed to
        HttpResponse (or StreamingHttpResponse for Django>=1.5) as usual,
        since both response classes work with iterators/files/byte
        strings (yet StreamingHttpResponse does it better).

      The commit introduces a new setting SWIFT_FILE_TRANSFER_CHUNK_SIZE
      that defines the size of the chunk in bytes for Swift object
      downloading.

      DocImpact
      Change-Id: I18e5809b86bfa24948dc642da2a55dffaa1a4ce1
      Closes-Bug: #1427819

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1439443/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
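The shape of the fix described in the commit message — ask swiftclient for a chunked body and stream it rather than buffering the whole object — can be sketched as below; `get_object` here stands in for swiftclient's function of the same name (its signature is assumed from the commit message, and any swiftclient-compatible callable works for the demo):

```python
def iter_object_chunks(get_object, container, name, chunk_size=512 * 1024):
    """Stream an object's body chunk by chunk instead of loading it
    into memory whole.

    Relies on get_object honouring the resp_chunk_size keyword: when it
    is set, the body comes back as an iterator of byte chunks, which
    can then be fed to a StreamingHttpResponse.
    """
    _headers, body = get_object(container, name, resp_chunk_size=chunk_size)
    for chunk in body:
        yield chunk
```

The key point is that peak memory stays bounded by `chunk_size` (plus response overhead) regardless of the object's total size, which is exactly what `SWIFT_FILE_TRANSFER_CHUNK_SIZE` tunes.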
[Yahoo-eng-team] [Bug 1444031] Re: Horizon doesn't allow environment source to be a url
Marking as wontfix since, per the comment above, the functionality is not available via the python api. ** Changed in: horizon Assignee: Liyingjun (liyingjun) => (unassigned) ** Changed in: horizon Importance: Undecided => Wishlist ** Changed in: horizon Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1444031 Title: Horizon doesn't allow environment source to be a url Status in OpenStack Dashboard (Horizon): Won't Fix Bug description: On Project->Orchestration->Stacks->Create Stack the dialog does not include URL as an option for Environment Source. It seems this is supported by the python client. Horizon should add this option. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1444031/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1716868] Re: config file is not read
It's a packaging issue ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1716868 Title: config file is not read Status in OpenStack Dashboard (Horizon): Invalid Bug description: we are running horizon on an Ubuntu 16 cluster with the official repos for the new pike release and figured out that the configuration file /etc/openstack-dashboard/local_settings.py is no longer read. it seems the symlink /usr/share/openstack- dashboard/openstack_dashboard/local/local_settings.py is no longer working. we removed that symlink and copied the file directly into /usr/share/openstack-dashboard/openstack_dashboard/local/ which seems to help for the moment. installed package: python-django-horizon - 3:12.0.0-0ubuntu1~cloud0 OS: Ubuntu 16.04.3 LTS apache virtual host config: http://paste.openstack.org/show/621008/ file permissions: -rw-r--r-- 1 root horizon 34573 Sep 13 10:01 /etc/openstack-dashboard/local_settings.py To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1716868/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1422046] Re: cinder backup-list is always listing all tenants's bug for admin
Closing the horizon portion of this bug since it is now outside of the support window.

** Changed in: horizon
       Status: New => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422046

Title:
  cinder backup-list is always listing all tenants's bug for admin

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in ospurge:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in python-cinderclient:
  Fix Released
Status in python-cinderclient package in Ubuntu:
  Fix Released

Bug description:
  cinder backup-list doesn't support the '--all-tenants' argument for
  admin right now. This leads to admin always getting all tenants'
  backups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422046/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1414138] Re: Integration tests - extend test flavors with update action
This is not a bug in horizon, but a gap in unit test coverage. Feel free to submit changes into gerrit at any time to improve test coverage :-) ** Changed in: horizon Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1414138 Title: Integration tests - extend test flavors with update action Status in OpenStack Dashboard (Horizon): Invalid Bug description: Current test case just create, check existency and delete flavor. There is missing update action - Edit flavor. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1414138/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1413426] Re: Forbidden: Policy doesn't allow compute:get_all_tenants to be performed. (HTTP 403)
This bug was last updated nearly a year ago, and the comments suggest that this has been addressed, so this is getting marked as Invalid. If the issue still exists, please feel free to reopen it.

** Changed in: horizon
       Status: Confirmed => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1413426

Title:
  Forbidden: Policy doesn't allow compute:get_all_tenants to be
  performed. (HTTP 403)

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon is making requests to admin-only APIs in the project
  dashboard:

  Error while checking action permissions.
  Traceback (most recent call last):
    File "/home/kspear/openstack/horizon/horizon/tables/base.py", line 1260, in _filter_action
      return action._allowed(request, datum) and row_matched
    File "/home/kspear/openstack/horizon/horizon/tables/actions.py", line 137, in _allowed
      return self.allowed(request, datum)
    File "/home/kspear/openstack/horizon/openstack_dashboard/dashboards/project/access_and_security/floating_ips/tables.py", line 52, in allowed
      usages = quotas.tenant_quota_usages(request)
    File "/home/kspear/openstack/horizon/horizon/utils/memoized.py", line 90, in wrapped
      value = cache[key] = func(*args, **kwargs)
    File "/home/kspear/openstack/horizon/openstack_dashboard/usage/quotas.py", line 353, in tenant_quota_usages
      _get_tenant_compute_usages(request, usages, disabled_quotas, tenant_id)
    File "/home/kspear/openstack/horizon/openstack_dashboard/usage/quotas.py", line 258, in _get_tenant_compute_usages
      request, search_opts={'tenant_id': tenant_id}, all_tenants=True)
    File "/home/kspear/openstack/horizon/openstack_dashboard/api/nova.py", line 580, in server_list
      for s in c.servers.list(True, search_opts)]
    File "/home/kspear/openstack/horizon/.venv/local/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 603, in list
      return self._list("/servers%s%s" % (detail, query_string), "servers")
    File "/home/kspear/openstack/horizon/.venv/local/lib/python2.7/site-packages/novaclient/base.py", line 67, in _list
      _resp, body = self.api.client.get(url)
    File "/home/kspear/openstack/horizon/.venv/local/lib/python2.7/site-packages/novaclient/client.py", line 487, in get
      return self._cs_request(url, 'GET', **kwargs)
    File "/home/kspear/openstack/horizon/.venv/local/lib/python2.7/site-packages/novaclient/client.py", line 465, in _cs_request
      resp, body = self._time_request(url, method, **kwargs)
    File "/home/kspear/openstack/horizon/.venv/local/lib/python2.7/site-packages/novaclient/client.py", line 439, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File "/home/kspear/openstack/horizon/.venv/local/lib/python2.7/site-packages/novaclient/client.py", line 433, in request
      raise exceptions.from_response(resp, body, url, method)
  Forbidden: Policy doesn't allow compute:get_all_tenants to be performed. (HTTP 403) (Request-ID: req-8c0549aa-4a3e-4c07-8911-a35196be0a13)

  Looks like this commit is the culprit:

  commit f5b77f9a145337c22cf29d8017f5df67a6bacb7c
  Author: eric
  Date: Sun Nov 30 07:03:20 2014 -0700

      Quotas for users with admin role do not work

      The quotas code does not isloate counts to resources within the
      current tenant/project. So if a user with the admin role makes
      calls for quota items, the admin role will have counts of a global
      list of resources. This changes that for the tenant quota call to
      fallback to the request.user.project_id if no project was
      otherwise specified for the tenant quota api call.

      Change-Id: Ib0e6ce7774c4c03686a044f233dbb9aa36dbe1b9
      Closes-bug: #1391242

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1413426/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr
Reviewed: https://review.openstack.org/503790 Committed: https://git.openstack.org/cgit/openstack/networking-bgpvpn/commit/?id=8f576a2ccb2aa16ff1585b93c4fbb58247d209bd Submitter: Jenkins Branch:master commit 8f576a2ccb2aa16ff1585b93c4fbb58247d209bd Author: Thomas Morin Date: Wed Sep 13 13:28:11 2017 -0600 Fix post gate hook to accommodate for new os-testr New os-testr uses stestr under the hood, which creates .stestr but not .testrepository directory in the current dir. Other than that, it doesn't seem like there is any difference in the format or names of files generated in the directory. (shamelessly stolen from I82d52bf0ad885bd36d2f0782a7c86ac61df532f2) Co-Authored-By: Ihar Hrachyshka This change also wraps up another fix (simultaneous voting test job breakage) related to a recent change in the neutron repository of the import statements for ml2 config. Change-Id: I6bfaca7de99c6ef6834375d4e5b3218a50a60491 Closes-Bug: 1716746 ** Changed in: bgpvpn Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1716746

Title:
  functional job broken by new os-testr

Status in networking-bgpvpn:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  functional job fails with:

  2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67: testr_exit_code=0
  2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68: set -e
  2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71: generate_testr_results
  2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12: sudo -H -u stack chmod o+rw .
  2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13: sudo -H -u stack chmod o+rw -R .testrepository
  2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access '.testrepository': No such file or directory

  This is because new os-testr switched to stestr, which has a different
  name for the directory (.stestr).

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1716746/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
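A gate script can accommodate both runners by picking whichever results directory actually exists, since (per the commit message) the file format inside is the same. A Python sketch of that idea, simplified from the shell fix above:

```python
import os


def testr_results_dir(base="."):
    """Return the test runner's results directory.

    New os-testr uses stestr under the hood, which creates .stestr;
    the old testrepository runner creates .testrepository. Returns
    None if neither exists (i.e. no test run happened here).
    """
    for name in (".stestr", ".testrepository"):
        path = os.path.join(base, name)
        if os.path.isdir(path):
            return path
    return None
```

Probing for both names keeps the post-test hook working across the os-testr upgrade instead of hard-coding `.testrepository` and failing with "No such file or directory".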
[Yahoo-eng-team] [Bug 1402703] Re: create port doesn't allow ip to be specified
Marking as invalid, since it appears to have been addressed in the change mentioned in the comments. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1402703 Title: create port doesn't allow ip to be specified Status in OpenStack Dashboard (Horizon): Invalid Bug description: On Admin->System->Networks->[network detail]->Create Port there is no option to provide the ip address for the port to be created on. This exists in the corresponding API (neutron port-create) and should be avail in Horizon as well. Also, I'd expect this to be available in any Project level create port function that might be created (under discussion in bug https://bugs.launchpad.net/horizon/+bug/1399252) To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1402703/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1394549] Re: Unnecessary use of 'aggregate_instance_extra_specs' prefix while updating metadata on Host Aggregates
This bug was last updated nearly 3 years ago, and as there have been many changes to both nova and horizon since then, this is getting marked as Invalid. There is also no reference to 'aggregate_instance_extra_specs' in the horizon code base. If the issue still exists, please feel free to reopen it, and provide additional details on how to reproduce it.

** Changed in: horizon
       Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394549

Title:
  Unnecessary use of 'aggregate_instance_extra_specs' prefix while
  updating metadata on Host Aggregates

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  While updating metadata for a host aggregate, the prefix
  'aggregate_instance_extra_specs' is added by default. For example:
  "aggregate_instance_extra_specs:cpu_info:topology:cores = 2"

  This is totally unnecessary. AggregateInstanceExtraSpecsFilter
  requires only flavors (instance types) to have the
  'aggregate_instance_extra_specs' prefix defined. I believe that
  AggregateInstanceExtraSpecsFilter in nova is the only place where
  this prefix is used.

  The main goal of this bug is to stop adding the
  'aggregate_instance_extra_specs' prefix while updating metadata on
  Host Aggregates in horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394549/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
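The scoping the reporter describes can be sketched as follows (a simplification of nova's AggregateInstanceExtraSpecsFilter: exact value matching only, no operators). The namespace belongs on the flavor's extra specs, while aggregate metadata keys carry no prefix, which is exactly why Horizon adding the prefix to aggregate metadata is unnecessary:

```python
def flavor_matches_aggregate(flavor_extra_specs, aggregate_metadata):
    """Does a flavor's scoped extra specs match an aggregate's metadata?

    Only specs namespaced with 'aggregate_instance_extra_specs:' are
    checked; the prefix is stripped before looking up the bare key in
    the aggregate's metadata (simplified: values must match exactly).
    """
    prefix = "aggregate_instance_extra_specs:"
    for key, value in flavor_extra_specs.items():
        if not key.startswith(prefix):
            continue  # unscoped specs are ignored in this sketch
        if aggregate_metadata.get(key[len(prefix):]) != value:
            return False
    return True
```

Note the asymmetry: the flavor side carries the prefix, the aggregate side does not, so prefixing aggregate metadata (as the bug describes Horizon doing) would make the lookup never match.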
[Yahoo-eng-team] [Bug 1375908] Re: Login timeouts when logging in with LDAP user with non ascii characters in CN LDAP field
This bug was created 3 years ago, and as there have been many changes to both keystone and horizon since then, including a fix for this problem, this bug is getting marked as Invalid. If the issue still exists, feel free to reopen it, and please provide additional details on how to reproduce it. ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1375908 Title: Login timeouts when logging in with LDAP user with non ascii characters in CN LDAP field Status in OpenStack Dashboard (Horizon): Invalid Bug description: Connected with bug report: https://bugs.launchpad.net/keystone/+bug/1375139 With debug enabled in local_settings, this is the last thing that gets logged before the timeout. If I disable Cinder (keystone endpoint delete), login succeeds. From the HTTP error/debug log: [Tue Sep 30 18:10:08 2014] [error] REQ: curl -i http://192.168.122.11:8776/v1/faabdcb060924e15ab8c193b3f82864e/limits -X GET -H "X-Auth-Project-Id: faabdcb060924e15ab8c193b3f82864e" -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token: 7d061e89df785976e2547b48b7ef05e1" To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1375908/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1405530] Re: Error: Unable to retrieve volume information
This bug was last updated over 3 years ago, and it has not been duplicated since kilo, so this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Assignee: Trung Trinh (trung-t-trinh) => (unassigned) ** Changed in: horizon Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1405530 Title: Error: Unable to retrieve volume information Status in OpenStack Dashboard (Horizon): Invalid Bug description: Version: stable/juno Bug description: In the "Volumes" view, suppose some volume has the status "in-use", i.e. it is attached to a VM. If the associated VM is deleted, this triggers detaching of the volume. If the detaching process fails for whatever reason, the volume remains attached to an already-deleted VM. On Horizon, the "Attached To" column then shows, for example, "Attached to None on /dev/vdb". If we now try "Edit attachments" (for example, in order to detach the dead VM), the Horizon dashboard always pops up the error "Error: Unable to retrieve volume information" and the detach is aborted. Proposal: such an error should be suppressed and the "Edit attachments" process should continue. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1405530/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1717312] Re: Keystone Installation Tutorial for Ubuntu in keystone
This is a duplicate of https://bugs.launchpad.net/keystone/+bug/1716899 and already has a fix in review. ** Changed in: keystone Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1717312 Title: Keystone Installation Tutorial for Ubuntu in keystone Status in OpenStack Identity (keystone): Invalid Bug description: - [x] This doc is inaccurate in this way: After "Verify Operation" page of "Finalize the installation" for keystone in Pike release the next page must be "Create a domain, projects, users, and roles" where as the next button is linked to "Getting started" page which should come after "Using the scripts" page. Because of this user can miss the "Create a domain, projects, users, and roles" page which results in missing project and user creation further leading to glance and other services configuration failure. --- Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01 SHA: 5a9aeefff06678d790d167b6dac752677f02edf9 Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-ubuntu.rst URL: https://docs.openstack.org/keystone/pike/install/index-ubuntu.html To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1717312/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1325397] Re: Neutron: No way to change security group rules on port attached to VIP
Marking invalid since this functionality is no longer in horizon ** Changed in: horizon Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1325397 Title: Neutron: No way to change security group rules on port attached to VIP Status in OpenStack Dashboard (Horizon): Invalid Bug description: If you create a VIP through the LBaaS functionality in neutron, there is no way to change the security group rules on the port attached to the VIP from the UI. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1325397/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1717312] [NEW] Keystone Installation Tutorial for Ubuntu in keystone
Public bug reported: - [x] This doc is inaccurate in this way: After "Verify Operation" page of "Finalize the installation" for keystone in Pike release the next page must be "Create a domain, projects, users, and roles" where as the next button is linked to "Getting started" page which should come after "Using the scripts" page. Because of this user can miss the "Create a domain, projects, users, and roles" page which results in missing project and user creation further leading to glance and other services configuration failure. --- Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01 SHA: 5a9aeefff06678d790d167b6dac752677f02edf9 Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-ubuntu.rst URL: https://docs.openstack.org/keystone/pike/install/index-ubuntu.html ** Affects: keystone Importance: Undecided Status: New ** Tags: doc glance keystone pike ** Attachment added: "keystone bug.jpg" https://bugs.launchpad.net/bugs/1717312/+attachment/4950293/+files/keystone%20bug.jpg ** Description changed: - - This bug tracker is for errors with the documentation, use the following - as a template and remove or add fields as you see fit. Convert [ ] into - [x] to check boxes: - [x] This doc is inaccurate in this way: After "Verify Operation" page of "Finalize the installation" for keystone in Pike release the next page must be "Create a domain, projects, users, and roles" where as the next button is linked to "Getting started" page which should come after "Using the scripts" page. Because of this user can miss the "Create a domain, projects, users, and roles" page which results in missing project and user creation further leading to glance and other services configuration failure. - - [ ] This is a doc addition request. - - [ ] I have a fix to the document that I can paste below including example: input and output. 
- - If you have a troubleshooting or support issue, use the following - resources: - - - Ask OpenStack: http://ask.openstack.org - - The mailing list: http://lists.openstack.org - - IRC: 'openstack' channel on Freenode --- Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01 SHA: 5a9aeefff06678d790d167b6dac752677f02edf9 Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-ubuntu.rst URL: https://docs.openstack.org/keystone/pike/install/index-ubuntu.html -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1717312 Title: Keystone Installation Tutorial for Ubuntu in keystone Status in OpenStack Identity (keystone): New Bug description: - [x] This doc is inaccurate in this way: After "Verify Operation" page of "Finalize the installation" for keystone in Pike release the next page must be "Create a domain, projects, users, and roles" where as the next button is linked to "Getting started" page which should come after "Using the scripts" page. Because of this user can miss the "Create a domain, projects, users, and roles" page which results in missing project and user creation further leading to glance and other services configuration failure. --- Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01 SHA: 5a9aeefff06678d790d167b6dac752677f02edf9 Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-ubuntu.rst URL: https://docs.openstack.org/keystone/pike/install/index-ubuntu.html To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1717312/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1003199] Re: Snapshots can fail silently
This bug was last updated over 5 years ago, and as there have been many changes to both nova and horizon since then, this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1003199 Title: Snapshots can fail silently Status in Glance: Triaged Status in OpenStack Dashboard (Horizon): Invalid Bug description: If a snapshot fails while it is being created, it will do so silently and will disappear from the snapshot list. Example: when creating a snapshot, the status will go to queued and then to saving. If saving fails (e.g. a Swift backend issue in Glance), the snapshot disappears from the dashboard completely with no error message. Not sure exactly how to fix it as it involves nova and glance too. On the nova-compute node that the instance being snapshotted is running on, an error is caught, e.g.:
2012-05-22 05:14:31 ERROR nova.rpc.amqp [req-aa82909d-8fa2-4fae-9443-3db1411f9897 a4a0066852fe4073b65818c883b8625a 2e91ae6fe1334a8480cbb391f376db15] Exception during message handling
2012-05-22 05:14:31 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
2012-05-22 05:14:31 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-05-22 05:14:31 TRACE nova.rpc.amqp return f(*args, **kw)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
2012-05-22 05:14:31 TRACE nova.rpc.amqp sys.exc_info())
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-05-22 05:14:31 TRACE nova.rpc.amqp self.gen.next()
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in decorated_function
2012-05-22 05:14:31 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 946, in snapshot_instance
2012-05-22 05:14:31 TRACE nova.rpc.amqp self.driver.snapshot(context, instance_ref, image_id)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-05-22 05:14:31 TRACE nova.rpc.amqp return f(*args, **kw)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 711, in snapshot
2012-05-22 05:14:31 TRACE nova.rpc.amqp image_file)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 306, in update
2012-05-22 05:14:31 TRACE nova.rpc.amqp _reraise_translated_image_exception(image_id)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 304, in update
2012-05-22 05:14:31 TRACE nova.rpc.amqp image_meta = client.update_image(image_id, image_meta, data)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/glance/client.py", line 195, in update_image
2012-05-22 05:14:31 TRACE nova.rpc.amqp res = self.do_request("PUT", "/images/%s" % image_id, body, headers)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 58, in wrapped
2012-05-22 05:14:31 TRACE nova.rpc.amqp return func(self, *args, **kwargs)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 420, in do_request
2012-05-22 05:14:31 TRACE nova.rpc.amqp headers=headers)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 75, in wrapped
2012-05-22 05:14:31 TRACE nova.rpc.amqp return func(self, method, url, body, headers)
2012-05-22 05:14:31 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 542, in _do_request
2012-05-22 05:14:31 TRACE nova.rpc.amqp raise exception.Invalid(res.read())
2012-05-22 05:14:31 TRACE nova.rpc.amqp Invalid: Data supplied was not valid.
2012-05-22 05:14:31 TRACE nova.rpc.amqp Details: 400 Bad Request
2012-05-22 05:14:31 TRACE nova.rpc.amqp
2012-05-22 05:14:31 TRACE nova.rpc.amqp The server could not comply with the request since it is either malformed or otherwise incorrect.
2012-05-22 05:14:31 TRACE nova.rpc.amqp
2012-05-22 05:14:31 TRACE nova.rpc.amqp E
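One way a client could avoid the silent failure described above is to poll the snapshot image's status and raise when it errors out or vanishes, rather than trusting the list view. A hedged sketch with a stubbed status lookup (the callable stands in for a glance client; this is not Horizon's code):

```python
def wait_for_snapshot(get_status, image_id, attempts=10):
    """Poll an image's status; raise instead of failing silently.

    get_status is any callable returning 'queued', 'saving', 'active',
    'error', or None (image vanished) -- a stand-in for a glance client.
    """
    for _ in range(attempts):
        status = get_status(image_id)
        if status == "active":
            return "active"
        if status in ("error", None):
            raise RuntimeError(
                "snapshot %s failed (status=%s)" % (image_id, status))
    raise RuntimeError("snapshot %s still not active" % image_id)

# Happy path: the stub walks through the usual status transitions.
statuses = iter(["queued", "saving", "active"])
print(wait_for_snapshot(lambda _id: next(statuses), "img-1"))  # active
```

With a stub that returns None mid-save, the same call raises, which is exactly the signal the dashboard currently swallows.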
[Yahoo-eng-team] [Bug 1018253] Re: No error message prompt during attaching when mountpoint is occupied
Expiring the Horizon bug also. See previous comment. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1018253 Title: No error message prompt during attaching when mountpoint is occupied Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Compute (nova): Expired Bug description: Correct me if I am wrong. When we attach a volume to an instance at the mountpoint /dev/vdb, I expect that there should be an error message prompt in horizon if /dev/vdb is already occupied by, for example, another instance. Currently there is no error message prompt. How to reproduce this bug: 1. Launch one instance. 2. Create a first volume and a second volume. 3. Attach the first volume to the instance at the mountpoint /dev/vdb; this succeeds. 4. Attach the second volume to the same instance at the same mountpoint /dev/vdb. Expected output: A message should tell the user that the mountpoint is occupied, not available, or something similar. Actual output: No message shows. The second volume is still available. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1018253/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
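The pre-attach check the reporter expects could look roughly like this on the validation side. A hypothetical sketch, not nova's or Horizon's actual code; the dict stands in for an instance's block-device list:

```python
def validate_mountpoint(existing_attachments, device):
    """Raise if the requested device name is already taken on the instance.

    existing_attachments maps device names (e.g. '/dev/vdb') to volume IDs.
    """
    if device in existing_attachments:
        raise ValueError("mountpoint %s is already occupied by volume %s"
                         % (device, existing_attachments[device]))

attached = {"/dev/vdb": "vol-1"}
validate_mountpoint(attached, "/dev/vdc")  # ok, device is free
try:
    validate_mountpoint(attached, "/dev/vdb")
except ValueError as exc:
    print(exc)  # mountpoint /dev/vdb is already occupied by volume vol-1
```

Surfacing that ValueError's message in the attach form is the kind of prompt the report asks for.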
[Yahoo-eng-team] [Bug 884478] Re: Rows affected by last action should get visual distinction
This bug was created nearly 6 years ago and is questionable (it even still has unanswered questions). If you feel this should still be done, please submit a blueprint for it. ** Changed in: horizon Importance: Low => Wishlist ** Changed in: horizon Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/884478 Title: Rows affected by last action should get visual distinction Status in OpenStack Dashboard (Horizon): Won't Fix Bug description: Right now, unless a row happens to poll on some intermediate state, it is not evident which rows were created, edited, etc. by the previous action. In most cases we *do* have enough information available to highlight these rows in some fashion in order to make them distinct. This would improve the overall informativeness of the table. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/884478/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1211741] Re: Create Volume form shows incorrect Total Gigabytes
This bug was last updated over 2 years ago, and as there have been many changes to the code that retrieves the cinder limits, this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1211741 Title: Create Volume form shows incorrect Total Gigabytes Status in OpenStack Dashboard (Horizon): Invalid Bug description: commit fe659b231a9542e4703c13e7e1173aa7e0767cfc On the Create Volume form, Volume Limits section, the Total Gigabytes currently displayed is only the total size used by all volumes. It should also include the total size used by volume snapshots. Volume Limits: Total Gigabytes = (Total volume size + Total snapshot size) usages.gigabytesUsed = usages.volumesGigabytesUsed + usages.snapshotsGigabytesUsed Similarly, this computation should be made for the Create Snapshot form. Please see the attached screenshot. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1211741/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
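The computation the report proposes is simple enough to state directly. A sketch with made-up sizes in GB (this only illustrates the arithmetic, not Horizon's usage-retrieval code):

```python
def gigabytes_used(volume_sizes, snapshot_sizes):
    """Total storage counted against the quota: volumes plus snapshots."""
    return sum(volume_sizes) + sum(snapshot_sizes)

# Two 2 GB volumes and one 2 GB snapshot: the form should report 6 GB
# used, not the volumes-only figure of 4.
print(gigabytes_used([2, 2], [2]))  # 6
```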
[Yahoo-eng-team] [Bug 1283281] Re: Volume snapshots do not count towards the storage usage in Create Volume / Extend Volume / Create Volume Snapshot modal
*** This bug is a duplicate of bug 1211741 *** https://bugs.launchpad.net/bugs/1211741 ** This bug has been marked a duplicate of bug 1211741 Create Volume form shows incorrect Total Gigabytes -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1283281 Title: Volume snapshots do not count towards the storage usage in Create Volume / Extend Volume / Create Volume Snapshot modal Status in OpenStack Dashboard (Horizon): In Progress Bug description: There is a single storage quota value for both volumes and volume snapshots. This is reflected in Project Overview panel where the total sizes of all volumes and volume snapshots is displayed on the pie chart. However, in Create Volume, Extend Volume, and Create Volume Snapshot modals only the total sizes of all volumes is displayed. Even though the actual used storage includes snapshot usage as well. Storage quota applies to the total sizes of volumes and volume snapshots. To reproduce the issue within one project: 1. Set project quota for Gigabytes to 5 2. Create a volume "vol1" of size 2 GB 3. Create a snapshot of "vol1" called "snap1" 4. Create a volume "vol2" of size 2 GB. You'll notice the "Create Volume" modal shows 2 / 5 GB storage is used. But the volume creation fails. (if you change the quota to 6 volume creation succeeds). 5. Similarly create a second snapshot of "vol1" called "snap2". Again the "Create Volume Snapshot" modal shows 2 / 5 GB storage is used. But snapshot creation fails. (if you change the quota to 6 snapshot creation succeeds). Suggestion is to show the total sizes of all volumes and volume snapshots in those modals. 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1283281/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
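The reproduction steps above boil down to a single accounting rule. A sketch of the corrected check under the reporter's numbers (this models the backend's quota behavior, not Horizon code):

```python
def can_allocate(quota_gb, volume_sizes, snapshot_sizes, new_size):
    """True if a new volume or snapshot of new_size fits in the quota,
    counting both volumes and snapshots, as the backend does."""
    used = sum(volume_sizes) + sum(snapshot_sizes)
    return used + new_size <= quota_gb

# Quota 5 GB, vol1 = 2 GB, snap1 = 2 GB: a 2 GB vol2 must be refused,
# even though a volumes-only view would show 2 / 5 GB used.
print(can_allocate(5, [2], [2], 2))  # False
print(can_allocate(6, [2], [2], 2))  # True
```

A modal computing `used` this way would show 4 / 5 GB in step 4 and make the subsequent failure unsurprising.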
[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr
Reviewed: https://review.openstack.org/503561 Committed: https://git.openstack.org/cgit/openstack/networking-sfc/commit/?id=f9ea384efc86321e90b1bb93f6e1c09ffa868d62 Submitter: Jenkins Branch: master commit f9ea384efc86321e90b1bb93f6e1c09ffa868d62 Author: Bernard Cafarelli Date: Wed Sep 13 10:28:37 2017 +0200 Fix unit tests and test configuration Fix post gate hook to accommodate the new os-testr: new versions now use .stestr instead of the previous .testrepository directory. Fix ml2 plugin config import: commit Ibc5a9ab268578c243ef13f7e0041bacd6c0c410b moved the ml2 plugin config file, which breaks unit tests. Change-Id: Ib7541abb0bee619b87fbcc7fee5d5095255e1ec7 Closes-Bug: #1716746 ** Changed in: networking-sfc Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1716746 Title: functional job broken by new os-testr Status in networking-bgpvpn: In Progress Status in BaGPipe: Fix Released Status in networking-sfc: Fix Released Status in neutron: Fix Released Bug description: functional job fails with:
2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67: testr_exit_code=0
2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68: set -e
2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71: generate_testr_results
2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12: sudo -H -u stack chmod o+rw .
2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13: sudo -H -u stack chmod o+rw -R .testrepository
2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access '.testrepository': No such file or directory
This is because the new os-testr switched to stestr, which uses a different directory name (.stestr). To manage notifications about this bug go to: https://bugs.launchpad.net/bgpvpn/+bug/1716746/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
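A post-hook that tolerates both os-testr layouts can simply probe for whichever results directory exists. A sketch in Python of the same idea the shell fix uses (the two directory names are the only details carried over from the log above):

```python
import os
import tempfile

def results_dir(candidates=(".stestr", ".testrepository")):
    """Return whichever test-results directory exists, newest layout first."""
    for d in candidates:
        if os.path.isdir(d):
            return d
    raise FileNotFoundError("no test results directory found")

# Demo against a scratch layout that only has the old-style directory.
base = tempfile.mkdtemp()
old = os.path.join(base, ".testrepository")
os.makedirs(old)
found = results_dir((os.path.join(base, ".stestr"), old))
print(os.path.basename(found))  # .testrepository
```

The equivalent shell guard in the gate hook would test for each directory before running chmod on it.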
[Yahoo-eng-team] [Bug 1278066] Re: live migrate requires shared storage
This bug was last updated over 3 years ago, and as it appears to have been addressed in nova, this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: Triaged => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1278066 Title: live migrate requires shared storage Status in OpenStack Dashboard (Horizon): Invalid Bug description: I have a setup of 1 controller and 2 computes. I have an instance which I try to live-migrate from one compute server to another, and it gives the error "failed to live migrate". I looked at /var/log/nova/compute.log and it shows the following exception:
2014-02-09 12:05:14.072 19968 ERROR nova.openstack.common.rpc.amqp [req-1722e387-ab40-4228-b1c5-27164f0b45d5 55ebc145eb954c6f933f588407af2753 e433ae32fee7468fad9e80d96f4e92a0] Exception during message handling
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp **args)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/exception.py", line 90, in wrapped
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp payload)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp six.reraise(self.type_, self.value, self.tb)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/exception.py", line 73, in wrapped
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4083, in check_can_live_migrate_destination
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp block_migration, disk_over_commit)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3968, in check_can_live_migrate_destination
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp self._compare_cpu(source_cpu_info)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4110, in _compare_cpu
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp LOG.error(m, {'ret': ret, 'u': u})
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp six.reraise(self.type_, self.value, self.tb)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4106, in _compare_cpu
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp ret = self._conn.compareCPU(cpu.to_xml(), 0)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 187, in doit
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 147, in proxy_call
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp rv = execute(f,*args,**kwargs)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 76, in tworker
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp rv = meth(*args,**kwargs)
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2889, in compareCPU
2014-02-09 12:05:14.072 19968 TRACE nova.openstack.common.rpc.amqp if ret == -1: raise libvirtError ('virConnectCompareC
[Yahoo-eng-team] [Bug 1282179] Re: Performance degradation on Horizon UI
This bug was last updated nearly 3 years ago. It may have been addressed by bug fixes in the notes above, and since there is no easy way for devs to reproduce this, it is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1282179 Title: Performance degradation on Horizon UI Status in OpenStack Dashboard (Horizon): Invalid Bug description: Pre-condition: Have 25 users operating in parallel for 24 hours. Stress the UI with 25 parallel user logins. The script just navigates through various web pages on the Horizon UI iteratively and records the page load time. The concern is the rise in response time over the 24-hour run (~200%). Please find attached the graph of the average page load response time of Horizon UI components over 24-odd hours, which gradually rises by up to 150%. We use HP Load Runner scripts to stress the UI. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1282179/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1715451] Re: Castellan 0.13.0 doesn't work with ConfKeyManager due to missing list() abstract method
Reviewed: https://review.openstack.org/502580 Committed: https://git.openstack.org/cgit/openstack/castellan/commit/?id=ffd9f484df8d4c6c7de36313beb3a76f0daa8296 Submitter: Jenkins Branch:master commit ffd9f484df8d4c6c7de36313beb3a76f0daa8296 Author: Kaitlin Farr Date: Mon Sep 11 16:54:28 2017 -0400 Makes list method not abstract Any implementations of key_manager that don't have "list" defined (i.e. ConfKeyManager in Nova and Cinder) will not be instantiable if they try to use a version of Castellan that was released after "list" was added. Adds a default implementation of "list" that returns nothing for backwards compatibility. Closes-Bug: #1715451 Change-Id: I1e413831163bffaed3a2580f039e242da7d303f8 ** Changed in: castellan Status: New => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1715451 Title: Castellan 0.13.0 doesn't work with ConfKeyManager due to missing list() abstract method Status in castellan: Fix Released Status in Cinder: Fix Released Status in OpenStack Compute (nova): In Progress Status in OpenStack Global Requirements: New Bug description: Seen here: https://review.openstack.org/#/c/500770/ http://logs.openstack.org/70/500770/7/check/gate-tempest-dsvm-neutron- full-ubuntu- xenial/b813494/logs/screen-c-api.txt.gz?level=TRACE#_Sep_06_17_25_08_182255 This change in castellan 0.13.0 breaks cinder's ConfKeyManager: https://github.com/openstack/castellan/commit/1a13c2b2030390e3c0a5d498da486d92ddd1152c Because the Cinder ConfKeyManager extends the abstract KeyManager class in castellan but doesn't implement the list() method. 
To manage notifications about this bug go to: https://bugs.launchpad.net/castellan/+bug/1715451/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1322722] Re: Document guideline and best practices for pluggable content
This is covered in the configuration guide, https://docs.openstack.org/horizon/latest/configuration/index.html . If there are specific gaps that you still see in the documentation, please create bugs for them. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1322722 Title: Document guideline and best practices for pluggable content Status in OpenStack Dashboard (Horizon): Invalid Bug description: https://etherpad.openstack.org/p/juno-summit-horizon-widgets To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1322722/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1717302] [NEW] Tempest floatingip scenario tests failing on DVR Multinode setup with HA
Public bug reported: neutron.tests.tempest.scenario.test_floatingip.FloatingIpSameNetwork and neutron.tests.tempest.scenario.test_floatingip.FloatingIpSeparateNetwork are failing on every patch. This trace is seen on the node-2 l3-agent.

Sep 13 07:16:43.404250 ubuntu-xenial-2-node-rax-dfw-10909819-895688 neutron-keepalived-state-change[5461]: ERROR neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 172.24.5.3 on qg-bf79c157-e2 in namespace qrouter-796b8715-ca01-43ad-bc08-f81a0b4db8cc: Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address : ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address
ERROR neutron.agent.linux.ip_lib Traceback (most recent call last):
ERROR neutron.agent.linux.ip_lib   File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 1082, in _arping
ERROR neutron.agent.linux.ip_lib     ip_wrapper.netns.execute(arping_cmd, extra_ok_codes=[1])
ERROR neutron.agent.linux.ip_lib   File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 901, in execute
ERROR neutron.agent.linux.ip_lib     log_fail_as_error=log_fail_as_error, **kwargs)
ERROR neutron.agent.linux.ip_lib   File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 151, in execute
ERROR neutron.agent.linux.ip_lib     raise ProcessExecutionError(msg, returncode=returncode)
ERROR neutron.agent.linux.ip_lib ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address
ERROR neutron.agent.linux.ip_lib

If this is a DVR router, then the GARP should not go through the qg interface for the floatingIP. More information can be seen here. 
http://logs.openstack.org/43/500143/5/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/0a58fce/logs/subnode-2/screen-q-l3.txt.gz?level=TRACE#_Sep_13_07_16_47_864052 ** Affects: neutron Importance: Undecided Status: New ** Tags: l3-dvr-backlog l3-ha ** Summary changed: - Tempest floatingip scenario tests failing on DVR Multinode setup + Tempest floatingip scenario tests failing on DVR Multinode setup with HA -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1717302 Title: Tempest floatingip scenario tests failing on DVR Multinode setup with HA Status in neutron: New Bug description: neutron.tests.tempest.scenario.test_floatingip.FloatingIpSameNetwork and neutron.tests.tempest.scenario.test_floatingip.FloatingIpSeparateNetwork are failing on every patch. This trace is seen on the node-2 l3-agent. Sep 13 07:16:43.404250 ubuntu-xenial-2-node-rax-dfw-10909819-895688 neutron-keepalived-state-change[5461]: ERROR neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 172.24.5.3 on qg-bf79c157-e2 in namespace qrouter-796b8715-ca01-43ad-bc08-f81a0b4db8cc: Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address : ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address ERROR neutron.agent.linux.ip_lib Traceback (most recent call last): ERROR neutron.agent.linux.ip_lib File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 1082, in _arping ERROR neutron.agent.li
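The reporter's point — that a DVR router should not send gratuitous ARPs for a floating IP out the qg- gateway device — can be sketched as a simple guard. Everything below is hypothetical illustration, not neutron's actual l3-agent code: the function name, the router-type strings, and the device-name check are all assumptions made for the sketch.

```python
# Hypothetical sketch: in a DVR setup the floating IP is advertised
# from the FIP-namespace plumbing on the compute node, so GARPs sent
# via the router's qg- gateway device should be skipped. These names
# are illustrative only.

DVR = "dvr"
LEGACY = "legacy"


def should_send_garp(router_type, device_name):
    """Return False for the DVR + qg- combination the report flags."""
    if router_type == DVR and device_name.startswith("qg-"):
        return False
    return True


print(should_send_garp(LEGACY, "qg-bf79c157-e2"))  # True
print(should_send_garp(DVR, "qg-bf79c157-e2"))     # False
```

Under this guard, the failing `_arping` call from the trace above would simply never be issued on the qg- device of a distributed router.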
[Yahoo-eng-team] [Bug 1137060] Re: RPC timeouts (when using raw image backend for example)
This bug was last updated over 4 years ago, and as there have been many changes to both nova and horizon since then, this is getting marked as Invalid. If the issue still exists, please feel free to reopen it. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1137060 Title: RPC timeouts (when using raw image backend for example) Status in OpenStack Dashboard (Horizon): Invalid Bug description: Release found: Folsom Environment: A single node running all services Ubuntu 12.04 Using Ubuntu cloud archive packages Steps to reproduce: 1. Upload a big image, for example a windows 7 image of 8GB (I guess any big image will do, but we verified this with a big win7 image) 2. Add the following flag to nova.conf: libvirt_images_type=raw and restart nova-compute 3. Use horizon to launch 20 instances at once, using the win7 image Current result: Only 3..7 out of 20 instances are able to launch successfully; all other instances go into an error state because of an RPC timeout from nova-network Expected result: Higher ratio of success, preferably 20, maybe by limiting the number of instances that are allowed to start when the raw backend is being used. Or maybe this should be done by limiting system I/O for the spawn-instance process? Additional info: If we change to libvirt_images_type=default the exact same hardware/setup can launch 20 instances without any RPC timeouts. It seems that the big image copy is causing heavy load; when using qcow2 the instance disk uses a backing file, so the whole image doesn't have to be copied. Related error log which shows the RPC timeout: http://paste.openstack.org/show/32687/ When you use nova-network in multi_host mode the same issue may be experienced when launching lots of instances. 
So another way to reproduce is to deploy several nodes all running nova-network multi_host and then launch 20 x N instances where N == amount of compute nodes. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1137060/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1547142] Re: A shelved_offload VM's volumes are still attached to a host
** Tags added: shelve ** Also affects: nova/ocata Importance: Undecided Status: New ** Also affects: nova/pike Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1547142 Title: A shelved_offload VM's volumes are still attached to a host Status in OpenStack Compute (nova): Confirmed Status in OpenStack Compute (nova) ocata series: New Status in OpenStack Compute (nova) pike series: New Bug description: When shelve_offloading a VM, the VM loses its connection to a host. However, the connections from that host to its volumes are not terminated, so the volumes are still attached to the host. Afterwards, when the VM is unshelved, nova calls initialize_connection to the new host for its volumes, and they are now connected to 2 hosts. The correct behaviour is to call terminate_connection on the VM's volumes when it is being shelved_offloaded. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1547142/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
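The behaviour the report asks for — terminate the old host's volume connections on shelve-offload so that unshelve leaves only the new host attached — can be sketched with a stand-in volume API. Every name here is a placeholder for illustration (a fake Cinder-client shim, not nova's compute manager); only the call sequence reflects the report.

```python
class FakeVolumeAPI:
    """Stand-in for a Cinder client; tracks (volume, host) connections."""

    def __init__(self):
        self.connections = set()

    def initialize_connection(self, volume_id, host):
        self.connections.add((volume_id, host))

    def terminate_connection(self, volume_id, host):
        self.connections.discard((volume_id, host))


def shelve_offload(volume_api, volume_ids, host):
    # The reported bug: without this loop the volumes stay attached
    # to `host` even though the instance no longer runs anywhere.
    for vol in volume_ids:
        volume_api.terminate_connection(vol, host)


def unshelve(volume_api, volume_ids, new_host):
    for vol in volume_ids:
        volume_api.initialize_connection(vol, new_host)


api = FakeVolumeAPI()
api.initialize_connection("vol-1", "host-a")   # instance running on host-a
shelve_offload(api, ["vol-1"], "host-a")       # proposed fix: detach here
unshelve(api, ["vol-1"], "host-b")             # instance lands on host-b
print(api.connections)                          # only host-b remains
```

Without the `terminate_connection` loop, the final state would contain both `("vol-1", "host-a")` and `("vol-1", "host-b")`, which is the double-attachment the bug describes.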
[Yahoo-eng-team] [Bug 1597077] Re: token 'expires' padding differs between POST and GET/HEAD on Fernet tokens
The v2.0 GET /v2.0/tokens API is being removed this release [0]. Marking this as Invalid since we won't be supporting that API anymore. [0] https://review.openstack.org/#/c/499784/ ** Changed in: keystone Status: Triaged => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1597077 Title: token 'expires' padding differs between POST and GET/HEAD on Fernet tokens Status in OpenStack Identity (keystone): Invalid Bug description: We are using fernet tokens and found that with Mitaka the 'expires' values returned by the token POST and token GET/HEAD differ when one would expect these to be the same. POST /v2.0/tokens Response: {"access": { "token":{ "issued_at": "2016-06-28T18:48:56.00Z", "expires": "2016-06-28T20:48:56Z", "id": "gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ", "audit_ids": ["OGGd2bYeTQOi-ZHZ5vYqVw"] }, "serviceCatalog": [], "user":{ "username": "account1", "roles_links": [], "id": "af4012992a154f158201f0590013bc32", "roles": [], "name": "account1" }, "metadata":{ "is_admin": 0, "roles": [] } }} GET /v2.0/tokens/gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ Response: {"access": { "token":{ "issued_at": "2016-06-28T18:48:56.00Z", "expires": "2016-06-28T20:48:56.00Z", "id": "gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ", "audit_ids": ["OGGd2bYeTQOi-ZHZ5vYqVw"] }, "serviceCatalog": [], "user":{ "username": "account1", "roles_links": [], "id": "af4012992a154f158201f0590013bc32", "roles": [], "name": "account1" }, "metadata":{ "is_admin": 0, "roles": [] } }} The POST response: "expires": 
"2016-06-28T20:48:56Z", The GET response: "expires": "2016-06-28T20:48:56.00Z", To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1597077/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
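The two 'expires' shapes in the report can be reproduced with plain datetime formatting. This is only a sketch of the observable difference between the two responses, not keystone's actual code path; the variable names are made up for illustration.

```python
from datetime import datetime

expires = datetime(2016, 6, 28, 20, 48, 56)

# Whole-second form, as seen in the POST /v2.0/tokens response:
post_style = expires.strftime("%Y-%m-%dT%H:%M:%SZ")

# Form with zero-padded hundredths of a second, as seen in GET/HEAD.
# This reproduces the *shape* of the response only.
get_style = expires.strftime("%Y-%m-%dT%H:%M:%S") + ".%02dZ" % (
    expires.microsecond // 10000)

print(post_style)  # 2016-06-28T20:48:56Z
print(get_style)   # 2016-06-28T20:48:56.00Z
```

A client that parses the field with a fixed format string (e.g. `datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")`) accepts the POST form but raises on the GET form, which is why the inconsistency mattered in practice.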
[Yahoo-eng-team] [Bug 1717266] [NEW] VPNaaS: VPN creation not working in case of distributed virtual routers (Pike)
Public bug reported: I have manually set up a fresh OpenStack Pike HA environment based on Ubuntu 16.04.3 in conjunction with DVR. VPN creation works fine in case of centralized routers, but when a VPN gets created in the context of distributed routers, all VPN services and connections turn their state to ACTIVE, but a connection between different clients connected via VPN is not possible. The error log does not contain any errors. My environment comprises 2 controller nodes (also functioning as network nodes) and 3 compute nodes. Each controller node runs a neutron-vpn-agent, whereas each compute node runs a neutron-l3-agent which is unaware of any VPN settings.

Controller/Network node:

# vpn_agent.ini #
[ipsec]
enable_detailed_logging = true
ipsec_status_check_interval = 60
[vpnagent]
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver

neutron.conf
[DEFAULT]
allow_overlapping_ips = true
auth_strategy = keystone
base_mac = 02:05:69:00:00:00
bind_host = 10.30.200.101
bind_port = 9696
core_plugin = ml2
debug = true
dhcp_agents_per_network = 2
dns_domain = openstack.mycompany.com.
dvr_base_mac = 0A:05:69:00:00:00
endpoint_type = internalURL
host = os-network01
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
l3_ha = true
l3_ha_net_cidr = 169.254.192.0/18
log_dir = /var/log/neutron
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2
notify_nova_on_port_data_changes = true
notify_nova_on_port_status_changes = true
router_distributed = true
service_plugins = router,firewall,qos,lbaasv2,vpnaas
state_path = /var/lib/neutron
transport_url = rabbit://neutron:neutronpass@os-rabbit01:5672,neutron:neutronpass@os-rabbit02:5672/openstack
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[database]
connection = mysql+pymysql://neutron:neutronDBpass@os-controller/neutron
max_retries = -1
[keystone_authtoken]
auth_type = password
auth_uri = https://os-cloud.mycompany.com:5000
auth_url = http://os-identity:35357
memcached_servers = os-memcache:11211
password = neutronpass
project_domain_name = default
project_name = service
user_domain_name = default
username = neutron
[nova]
auth_type = password
auth_url = http://os-identity:35357
endpoint_type = internal
password = novapass
project_domain_name = default
project_name = service
region_name = RegionOne
user_domain_name = default
username = nova
[oslo_concurrency]
lock_path = /var/lock/neutron
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
amqp_durable_queues = true
rabbit_ha_queues = true
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
[oslo_middleware]
enable_proxy_headers_parsing = true
[service_providers]
service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = VPN:strongswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

$ ext-list | grep vpn
neutron CLI is deprecated and will be removed in the
future. Use openstack CLI instead.
| vpnaas              | VPN service                  |
| vpn-endpoint-groups | VPN Endpoint Groups          |
| vpn-flavors         | VPN Service Flavor Extension |
"usr.lib.ipsec.charon" and "usr.lib.ipsec.stroke" have been disabled:
ln -sf /etc/apparmor.d/usr.lib.ipsec.charon /etc/apparmor.d/disable/
ln -sf /etc/apparmor.d/usr.lib.ipsec.stroke /etc/apparmor.d/disable/
Any ideas? ** Affects: neutron Importance: Undecided Status: New ** Tags: vpnaas ** Summary changed: - VPNaaS: VPN creating not working in case of distributed routers (Pike) + VPNaaS: VPN creation not working in case of distributed virtual routers (Pike) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1717266 Title: VPNaaS: VPN creation not working in case of distributed virtual routers (Pike) Status in neutron: New Bug description: I have manually setup a fresh OpenStack Pike HA environment based on Ubuntu 16.04.3 in conjunction with DVR. VPN creation works fine in case of centralized routers, but when a VPN gets created in the context of distributed routers, all VPN services and connections turn their state to ACTIVE, but a connection between different clients connected via VPN is not possible. The error log does not contain any errors. My environment comprises 2 controller nodes (also functioning as network nodes) and 3 compute node. Each controller node runs a neutron-vpn-agent, whereas each
[Yahoo-eng-team] [Bug 1717245] [NEW] openstack router set --external-gateway creates port without tenant-id
Public bug reported: Adding an external gateway to a router creates a port without a tenant-id. It also creates a security group without a tenant-id, and its rules are also without a tenant-id. http://paste.openstack.org/show/621108/ The port needs to have a tenant-id. The security group is not used anywhere, so there's no point in creating it. Version: Ocata Linux distro: CentOS Linux release 7.3.1611 Deployment: Apex from OPNFV, which uses RDO ** Affects: neutron Importance: Undecided Status: New ** Description changed: Adding an external gateway to router creates port without tenant-id. It also creates a security group without tenant-id and it's rules are also without tenant-id. http://paste.openstack.org/show/621108/ - The port needs to have tenant-id (the reply from rest call contains - that, but it's not in the database). The security group is not used + The port needs to have tenant-id. The security group is not used anywhere, so there's no point in creating it. Version: Ocata Linux distro: CentOS Linux release 7.3.1611 Deployment: Apex from OPNFV, which uses RDO -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1717245 Title: openstack router set --external-gateway creates port without tenant-id Status in neutron: New Bug description: Adding an external gateway to a router creates a port without a tenant-id. It also creates a security group without a tenant-id, and its rules are also without a tenant-id. http://paste.openstack.org/show/621108/ The port needs to have a tenant-id. The security group is not used anywhere, so there's no point in creating it. 
Version: Ocata Linux distro: CentOS Linux release 7.3.1611 Deployment: Apex from OPNFV, which uses RDO To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1717245/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1422046] Re: cinder backup-list is always listing all tenants's bug for admin
** Changed in: ospurge Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1422046 Title: cinder backup-list is always listing all tenants's bug for admin Status in OpenStack Dashboard (Horizon): New Status in ospurge: Fix Released Status in OpenStack Security Advisory: Won't Fix Status in python-cinderclient: Fix Released Status in python-cinderclient package in Ubuntu: Fix Released Bug description: cinder backup-list doesn't support the '--all-tenants' argument for admin right now. This leads to admin always getting all tenants' backups. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1422046/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1620967] Re: Neutron API behind SSL terminating haproxy returns http version URL's instead of https
** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1620967 Title: Neutron API behind SSL terminating haproxy returns http version URL's instead of https Status in neutron: Fix Released Bug description: This is a re-post of an issue that was reported for an older OpenStack version. Unfortunately, I am confronted with the same problem in OpenStack Mitaka. Keystone has proper support for the case when you use SSL termination via HAProxy. Have a look here: https://bugzilla.redhat.com/show_bug.cgi?id=1259351 Description of problem: When using haproxy with SSL termination in front of neutron, neutron will return version URLs with an http:// prefix instead of https://. This causes API clients to fail. How reproducible: Steps to Reproduce: 1. Configure HAproxy in front of Neutron with SSL termination (so the client talks to neutron over SSL, HAproxy talks to Neutron over plain HTTP) 2. curl https://openstack-api.example.com:9696 Actual results: {"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://openstack-api.example.com:9696/v2.0", "rel": "self"}]}]} Expected results: {"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "https://openstack-api.example.com:9696/v2.0", "rel": "self"}]}]} Additional info: I patched this issue in /usr/lib/python2.7/site-packages/neutron/api/views/versions.py:

def get_view_builder(req):
    base_url = req.application_url
    if req.environ.get('HTTP_X_FORWARDED_PROTO', None) != None:
        base_url = base_url.replace('http://', 'https://')
    return ViewBuilder(base_url)

Then neutron returns the proper https URL. The X-Forwarded-Proto header is inserted by haproxy. Note: this issue is present in other openstack APIs as well but can be worked around by setting public_endpoint explicitly. This option is not available in neutron however. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1620967/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
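For reference, the X-Forwarded-Proto header that the reporter's workaround keys on is typically injected at the haproxy frontend. A minimal fragment of such a configuration follows; the frontend/backend names, certificate path, and server address are illustrative assumptions, not taken from the report.

```
frontend neutron-api
    bind :9696 ssl crt /etc/haproxy/certs/api.pem
    # Tell the backend the client connection was TLS-terminated here
    http-request set-header X-Forwarded-Proto https
    default_backend neutron-api-servers

backend neutron-api-servers
    server neutron01 10.0.0.10:9696 check
```

With this header set, the patched get_view_builder in the bug description rewrites the version links from http:// to https:// before returning them to clients.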