[Yahoo-eng-team] [Bug 1595864] Re: live_migration() takes exactly 7 arguments (6 given) if upgrade_levels compute=kilo
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595864

Title:
  live_migration() takes exactly 7 arguments (6 given) if upgrade_levels
  compute=kilo

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I have a Liberty controller (nova-api, etc.) with [upgrade_levels]
  compute=kilo and a Liberty compute node. When I try a live migration I
  see "live_migration() takes exactly 7 arguments (6 given)" in
  nova-compute.log. I cannot completely remove compatibility with kilo,
  because I still have kilo computes in my environment. I also tried
  adding "upgrade_levels" on the compute node, but with no luck.

  Environment
  ===========
  Libvirt+KVM, Ceph for VMs
  Liberty - Mirantis OpenStack 8.0 (2015.2)

  Steps to reproduce
  ==================
  1) Install the Liberty control plane (api, conductor, scheduler, etc.)
  2) Install Liberty computes
  3) Add to nova.conf on the controller:
     [upgrade_levels]
     compute=kilo
  4) Try "nova live-migration VM"

  Expected result
  ===============
  Migration succeeds.

  Actual result
  =============
  Traceback on the compute node: http://paste.openstack.org/show/521871/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1595864/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
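The error is the classic symptom of RPC version pinning: with [upgrade_levels] compute=kilo, the client sends the kilo-era call signature, while the Liberty server-side method expects one more argument, and under Python 2 that surfaces as "takes exactly 7 arguments (6 given)". A minimal sketch of the failure mode (class names, argument names, and version numbers here are illustrative, not nova's actual RPC API):

```python
class ComputeManager(object):
    # Newer server side: a required parameter was added to the method,
    # so with self it now takes 7 arguments.
    def live_migration(self, ctxt, dest, instance, block_migration,
                       migration, migrate_data):
        return "migrating %s to %s" % (instance, dest)


class ComputeAPI(object):
    """Client side; version_cap mimics [upgrade_levels] compute=kilo."""

    def __init__(self, server, version_cap):
        self.server = server
        self.version_cap = version_cap

    def live_migration(self, ctxt, dest, instance, block_migration,
                       migration, migrate_data):
        if self.version_cap < (4, 2):
            # Pinned to the old wire format: the newer 'migration'
            # argument is dropped, so the server-side method is invoked
            # with too few arguments and raises TypeError.
            return self.server.live_migration(ctxt, dest, instance,
                                              block_migration,
                                              migrate_data)
        return self.server.live_migration(ctxt, dest, instance,
                                          block_migration, migration,
                                          migrate_data)
```

Running the pinned client against the newer server reproduces the TypeError; with the cap lifted, the same call succeeds.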
[Yahoo-eng-team] [Bug 1588244] Re: When we delete the user "admin", it will cause the resource unavailable
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
       Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1588244

Title:
  When we delete the user "admin", it will cause the resource
  unavailable

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  According to policy, admin can disable or delete the admin user
  itself. However, deleting admin makes the resources owned by admin
  unavailable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1588244/+subscriptions
[Yahoo-eng-team] [Bug 1589221] Re: Nova-compute: "ImageNotFound: Image could not be found."
[Expired for nova-compute (Juju Charms Collection) because there has
been no activity for 60 days.]

** Changed in: nova-compute (Juju Charms Collection)
       Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1589221

Title:
  Nova-compute: "ImageNotFound: Image could not be found."

Status in OpenStack Compute (nova):
  Invalid
Status in nova-compute package in Juju Charms Collection:
  Expired

Bug description:
  Spawning a new instance from a newly created image fails most of the
  time; after 2-3 attempts the problem resolves itself. The configs I'm
  using in my HA environment are:

  nova-compute:
    enable-live-migration: "True"
    enable-resize: "True"
    openstack-origin: "cloud:trusty-mitaka"
    migration-auth-type: "ssh"
    manage-neutron-plugin-legacy-mode: False
  glance:
    openstack-origin: "cloud:trusty-mitaka"
    region: "serverstack"
    vip:
  nova-cloud-controller:
    network-manager: "Neutron"
    console-access-protocol: "novnc"
    openstack-origin: "cloud:trusty-mitaka"
    region: "serverstack"
    vip:

  I have one compute. Compute logs: http://paste.ubuntu.com/17026140/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589221/+subscriptions
[Yahoo-eng-team] [Bug 1600268] Re: Upgrading from Liberty to Mitaka erased passwords from SQL backend
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
       Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1600268

Title:
  Upgrading from Liberty to Mitaka erased passwords from SQL backend

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  This bug was reported in IRC on July 7, 2016 by jmlowe. Creating this
  for tracking purposes.

  IRC log: http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
  /%23openstack-keystone.2016-07-07.log.html

  Environment:
  - 3 total controllers (2 RDO and 2 Ubuntu)
  - all installed from vendor packages
  - 3 node Galera cluster
  - installed Liberty circa December 2015
  - upgraded to recent Mitaka version

  jmlowe found that after the migration LDAP users could log in fine,
  but SQL users could not. Upon further investigation the password
  hashes were no longer in the database (the new `password` table was
  empty).

  The lack of records in the password table and the fact that the
  password column was removed from the user table leads me to believe
  that migration 091 is to blame. So far I have not been able to
  reproduce the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1600268/+subscriptions
[Yahoo-eng-team] [Bug 1608015] Re: When I use curl command to send requests, why the OS_TENANT_NAME must be set to the project_id.
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1608015

Title:
  When I use curl command to send requests, why the OS_TENANT_NAME must
  be set to the project_id.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I execute the commands:

    export OS_TENANT_NAME=admin
    curl -s -H "X-Auth-Token: $OS_TOKEN" http://10.0.0.11:8774/v2/$OS_TENANT_NAME/flavors

  When I execute the above command, the error says that the URL's
  project_id 'admin' doesn't match the Context's project_id
  '811a6248e6334cbb9b68e9dde5c7ce0'. When OS_TENANT_NAME is set to
  '811a6248e6334cbb9b68e9dde5c7ce0', the result is correct. What is the
  reason for this behaviour?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608015/+subscriptions
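For context: nova's v2 endpoint embeds the project (tenant) *ID* in the URL path and compares it with the project the token is scoped to; the tenant name plays no role there. A rough sketch of that comparison (a hypothetical helper, not nova's actual code):

```python
# Hypothetical helper illustrating the check behind the error: the v2
# URL carries a project ID segment, and the API compares it with the
# project_id the token context is scoped to.

def validate_url_project(url_project_id, context_project_id):
    if url_project_id != context_project_id:
        raise ValueError(
            "Malformed request URL: URL's project_id '%s' doesn't match "
            "Context's project_id '%s'"
            % (url_project_id, context_project_id))
    return True
```

So the request works once the path segment carries the project ID (for example, the `id` column from `openstack token issue`), regardless of what the tenant happens to be named.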
[Yahoo-eng-team] [Bug 1607729] Re: tempest API test fail on live migration
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1607729

Title:
  tempest API test fail on live migration

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I'm submitting this bug as it was advised in the traceback to do so
  and it does not appear to have been filed yet.

  2016-07-27 21:28:48.049962 | ==
  2016-07-27 21:28:48.050005 | Failed 1 tests - output below:

  tempest.api.compute.test_live_block_migration_negative.LiveBlockMigrationNegativeTestJSON.test_invalid_host_for_migration[id-7fb7856e-ae92-44c9-861a-af62d7830bcb,negative]
  --

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "tempest/api/compute/test_live_block_migration_negative.py", line 57, in test_invalid_host_for_migration
      server_id, target_host)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 485, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 496, in assertThat
      mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 547, in _matchHelper
      mismatch = matcher.match(matchee)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
      mismatch = self.exception_matcher.match(exc_info)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
      mismatch = matcher.match(matchee)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 475, in match
      reraise(*matchee)
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
      result = matchee()
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 1049, in __call__
      return self._callable_object(*self._args, **self._kwargs)
    File "tempest/api/compute/test_live_block_migration_negative.py", line 45, in _migrate_server_to
      disk_over_commit=False)
    File "tempest/lib/services/compute/servers_client.py", line 394, in live_migrate_server
      return self.action(server_id, 'os-migrateLive', **kwargs)
    File "tempest/lib/services/compute/servers_client.py", line 163, in action
      post_body)
    File "tempest/lib/common/rest_client.py", line 270, in post
      return self.request('POST', url, extra_headers, headers, body, chunked)
    File "tempest/lib/services/compute/base_compute_client.py", line 48, in request
      method, url, extra_headers, headers, body, chunked)
    File "tempest/lib/common/rest_client.py", line 664, in request
      resp, resp_body)
    File "tempest/lib/common/rest_client.py", line 828, in _error_checker
      message=message)
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.

  Logs:
  http://logs.openstack.org/26/303626/31/check/gate-tempest-dsvm-neutron-dvr-multinode-full/2d4c18c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607729/+subscriptions
[Yahoo-eng-team] [Bug 1607400] Re: UEFI not supported on SLES
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1607400

Title:
  UEFI not supported on SLES

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Launching an image with a UEFI bootloader on a SLES 12 SP1 instance
  gives the following traceback (each line of the log carries the
  prefix "2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager
  [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800]"):

  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources
      yield resources
    File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance
      block_device_info=block_device_info)
    File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2777, in spawn
      write_to_disk=True)
    File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4730, in _get_guest_xml
      context)
    File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4579, in _get_guest_config
      root_device_name)
    File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4401, in _configure_guest_by_virt_type
      raise exception.UEFINotSupported()
  UEFINotSupported: UEFI is not supported

  This is because the function probes for files that are in different
  locations on SLES: it looks for /usr/share/OVMF/OVMF_CODE.fd and
  /usr/share/AAVMF/AAVMF_CODE.fd, which are the documented upstream
  defaults. The SLES libvirt, however, is compiled to default to
  different paths, which do exist.

  One possibility would be to introspect domCapabilities from libvirt,
  which works just fine. An alternative patch is to simply add the
  alternative paths for now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607400/+subscriptions
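Leaving the domCapabilities introspection aside, the stop-gap of probing a list of candidate firmware paths could look like this (only the first two paths are the documented upstream defaults; the third is a hypothetical distro location, named purely for illustration):

```python
import os

# Stop-gap sketch: probe a list of candidate firmware locations instead
# of a single hard-coded upstream path.
UEFI_FIRMWARE_CANDIDATES = [
    "/usr/share/OVMF/OVMF_CODE.fd",          # upstream x86_64 default
    "/usr/share/AAVMF/AAVMF_CODE.fd",        # upstream aarch64 default
    "/usr/share/qemu/ovmf-x86_64-code.bin",  # hypothetical distro path
]


def find_uefi_firmware(candidates=UEFI_FIRMWARE_CANDIDATES):
    """Return the first firmware image that exists, or None."""
    for path in candidates:
        if os.path.exists(path):
            return path
    return None
```

The driver would then raise UEFINotSupported only when the whole list comes up empty, instead of when the first hard-coded path is missing.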
[Yahoo-eng-team] [Bug 1629507] [NEW] VMware: Enforce minimum vCenter version of 5.5
Public bug reported:

https://review.openstack.org/253666

Dear bug triager. This bug was created since a commit was marked with
DOCIMPACT. Your project "openstack/nova" is set up so that we directly
report the documentation bugs against it. If this needs changing, the
docimpact-group option needs to be added for the project. You can ask
the OpenStack infra team (#openstack-infra on freenode) for help if you
need to.

commit 2851ceaed3010c19f42e308be637b952edab092a
Author: Eric Brown
Date:   Fri Dec 4 11:39:18 2015 -0800

    VMware: Enforce minimum vCenter version of 5.5

    As of Ocata, the minimum version of VMware vCenter will be enforced
    at 5.1.0, and any version less than 5.5.0 will be warned as
    deprecated. In two releases, 5.5.0 will become the new minimum
    version. The VMware driver CI has already migrated to vCenter 6.0.

    DocImpact: Need version updates on the minimum vCenter included in
    http://docs.openstack.org/mitaka/config-reference/compute/hypervisor-vmware.html

    Change-Id: I9f13e6cd6a49699f2b3cdce892fbf02634bf7618

** Affects: nova
   Importance: Undecided
       Status: New

** Tags: doc nova

https://bugs.launchpad.net/bugs/1629507

Title:
  VMware: Enforce minimum vCenter version of 5.5

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1629507/+subscriptions
[Yahoo-eng-team] [Bug 1629503] [NEW] docs: update in-tree docs in stable/newton
Public bug reported:

Some commits updating the api-ref and dev-docs didn't make it into the
Newton RC. Update stable/newton with the following commits for a point
release:

0a9bbd33c0295d972ad90b0bc118f3b1b472c885
cd65bfd0c7f8527a551050f1a9b275cfa75a97a1
5ac85fdd75076c59844f058978111bd2a6a6c292

** Affects: glance
   Importance: Undecided
     Assignee: Brian Rosmaita (brian-rosmaita)
       Status: In Progress

https://bugs.launchpad.net/bugs/1629503

Title:
  docs: update in-tree docs in stable/newton

Status in Glance:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1629503/+subscriptions
[Yahoo-eng-team] [Bug 1617299] Re: NFS based Nova Live Migration erratically fails
** Changed in: nova
     Assignee: Matt Riedemann (mriedem) => Tom Patzig (tom-patzig)

** Also affects: nova/newton
   Importance: Undecided
       Status: New

** Changed in: nova/newton
       Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => Medium

https://bugs.launchpad.net/bugs/1617299

Title:
  NFS based Nova Live Migration erratically fails

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Hello, in our production OpenStack environment we found in recent
  weeks that Nova VM live migrations erratically fail. Currently this
  is only visible in our automated test environment: every 15 minutes
  an automated test is started, and it fails 3-4 times a day. On the
  Nova instance path we have mounted a central NetApp NFS share to
  support real live migrations between different hypervisors.

  When we analysed the issue we found this error message and trace:

  BadRequest: is not on shared storage: Live migration can not be used
  without shared storage except a booted from volume VM which does not
  have a local disk. (HTTP 400) (Request-ID:
  req-8e709fd1-9d72-453b-b4b1-1f26112ea3d3)

  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/rally/task/runner.py", line 66, in _run_scenario_once
      getattr(scenario_inst, method_name)(**scenario_kwargs)
    File "/usr/lib/python2.7/site-packages/rally/plugins/openstack/scenarios/nova/servers.py", line 640, in boot_and_live_migrate_server
      block_migration, disk_over_commit)
    File "/usr/lib/python2.7/site-packages/rally/task/atomic.py", line 84, in func_atomic_actions
      f = func(self, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/rally/plugins/openstack/scenarios/nova/utils.py", line 721, in _live_migrate
      disk_over_commit=disk_over_commit)
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 433, in live_migrate
      disk_over_commit)
    File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 370, in substitution
      return methods[-1].func(obj, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1524, in live_migrate
      'disk_over_commit': disk_over_commit})
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1691, in _action
      info=info, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1702, in _action_return_resp_and_body
      return self.api.client.post(url, body=body)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 461, in post
      return self._cs_request(url, 'POST', **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 436, in _cs_request
      resp, body = self._time_request(url, method, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 409, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 403, in request
      raise exceptions.from_response(resp, body, url, method)
  BadRequest: is not on shared storage: Live migration can not be used
  without shared storage except a booted from volume VM which does not
  have a local disk. (HTTP 400) (Request-ID:
  req-8e709fd1-9d72-453b-b4b1-1f26112ea3d3)

  We examined the respective hypervisors for problems with the NFS
  share/mount, but everything looks good, and the message log file
  shows no issues during the test timeframe.

  The next step was to examine the Nova code for a hint as to why Nova
  raises this error. We found the procedure Nova uses to check whether
  there is a shared filesystem between the source and destination
  hypervisors, in nova/virt/libvirt/driver.py. In the function
  check_can_live_migrate_destination a temporary file is created on the
  destination hypervisor:

    # Create file on storage, to be checked on source host
    filename = self._create_shared_storage_test_file()

  After that, in the same class, the function
  check_can_live_migrate_source checks whether the temporary file
  exists:

    dest_check_data.is_shared_instance_path = (
        self._check_shared_storage_test_file(
            dest_check_data.filename))

  This sometimes fails, and the migration returns the error above,
  because the file is not yet visible on the source hypervisor:

    elif not (dest_check_data.is_shared_block_storage or
              dest_check_data.is_shared_instance_path or
              (booted_from_volume and not has_local_disk)):
        reason = _("Live migration can not be used "
                   "without shared storage except "
                   "a booted from volume VM which
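The probe described above can be sketched in miniature: the destination host drops a marker file on the shared instance path, and the source host checks for it. The retry loop below is our own mitigation idea for NFS attribute caching, not nova's actual implementation, which checks only once:

```python
import os
import time


def create_shared_storage_test_file(instances_path):
    """Destination side: create the marker file (returns its path)."""
    filename = os.path.join(instances_path, "migration-probe")
    open(filename, "w").close()
    return filename


def check_shared_storage_test_file(filename, retries=5, delay=0.2):
    """Source side: poll for the marker, giving the NFS attribute cache
    a moment to catch up before declaring 'not on shared storage'."""
    for _ in range(retries):
        if os.path.exists(filename):
            return True
        time.sleep(delay)
    return False
```

With a single existence check, an NFS client whose attribute cache has not yet refreshed can report the file missing even though it is on the share, which matches the erratic failure pattern seen here.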
[Yahoo-eng-team] [Bug 1629484] [NEW] Key error when try to assign a floating ip to a VM
Public bug reported:

We have an environment with designate and the DNS plugin enabled. After
upgrading to Newton we cannot assign a floating IP to a VM because of a
KeyError. Here is the error from neutron-server.log (each line carries
the prefix "2016-09-30 22:00:36.797 527 ERROR
neutron.api.v2.resource"):

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 79, in resource
    result = method(request=request, **args)
  File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 604, in update
    return self._update(request, id, body, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in wrapped
    setattr(e, '_RETRY_EXCEEDED', True)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
    ectxt.value = e.inner_exc
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped
    traceback.format_exc())
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 119, in wrapped
    return f(*dup_args, **dup_kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 652, in _update
    obj = obj_updater(request.context, id, **kwargs)
  File
[Yahoo-eng-team] [Bug 1618666] Re: deprecated warning for SafeConfigParser
Reviewed:  https://review.openstack.org/368617
Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=1f690df60c0ce7b627c4ebceaecaa5370ff10042
Submitter: Jenkins
Branch:    master

commit 1f690df60c0ce7b627c4ebceaecaa5370ff10042
Author: Luong Anh Tuan
Date:   Mon Sep 12 15:01:29 2016 +0700

    Use ConfigParser instead of SafeConfigParser

    SafeConfigParser supports interpolation on top of ConfigParser in
    Python 2, but SafeConfigParser is deprecated in Python 3.2 and logs
    a warning like: "DeprecationWarning: The SafeConfigParser class has
    been renamed to ConfigParser in Python 3.2. This alias will be
    removed in future versions. Use ConfigParser directly instead." So
    we can use ConfigParser if we don't need interpolation.

    Change-Id: I7e399b3cb90ded909e0d777a4d10c44f1e9299da
    Closes-Bug: #1618666

** Changed in: swift
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1618666

Title:
  deprecated warning for SafeConfigParser

Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in Ironic:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in PBR:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  tox -e py34 is reporting a deprecation warning for SafeConfigParser:

  /octavia/.tox/py34/lib/python3.4/site-packages/pbr/util.py:207:
  DeprecationWarning: The SafeConfigParser class has been renamed to
  ConfigParser in Python 3.2. This alias will be removed in future
  versions. Use ConfigParser directly instead.
    parser = configparser.SafeConfigParser()

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1618666/+subscriptions
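The fix itself is a one-line rename. On Python 3, ConfigParser already performs the "%(name)s" interpolation that SafeConfigParser was named for, so the deprecated alias can be dropped with no change in behaviour (a minimal Python 3 illustration):

```python
import configparser

# ConfigParser on Python 3 does basic "%(name)s" interpolation by
# default, so nothing is lost by dropping the SafeConfigParser alias.
parser = configparser.ConfigParser()  # no DeprecationWarning emitted
parser.read_string(
    "[db]\n"
    "host = localhost\n"
    "url = mysql://%(host)s/nova\n"
)
value = parser.get("db", "url")  # "mysql://localhost/nova"
```

Passing `raw=True` to `get()` returns the uninterpolated string for the rare case where interpolation is unwanted.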
[Yahoo-eng-team] [Bug 1525163] Re: add-router-interface can not recover when failing create_port_post_commit
** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1525163 Title: add-router-interface can not recover when failing create_port_post_commit Status in neutron: Fix Released Bug description: When the following situation, a port created that no one can delete it: 1. Configure ML2 plugin. 2. Execute router-interface-add with subnet_id and failed in create_port_post_commit After that, undeletable port is created. Even if the admin tries to delete the port, he cannot delete it. case1: delete-port with port_id => Port {port_id} cannot be deleted directly via the port API: has device owner network:router_interface. case2: router-interface-delete with subnet_id or port_id => Router {router_id} has no interface on subnet {subnet_id} => Router {router_id} does not have an interface with id {port_id} [How to fix] create_port_post_commit has a recovery process. When failed, then calls delete_port. However, in this case, 'device_owner' and 'device_id' have already registered at port's DB. Therefore, delete_port fails due to device_owner check. Hence, I'll add 'l3_port_check=False' into delete_port's argument. [Environment] trunk(devstack all-in-one with ML2 plugin(openvswitch)) [How to reproduce] * You have to arrange create_port_post_commit shuld be failed. source devstack/openrc admin admin export TOKEN=`openstack token issue | grep ' id ' | get_field 2` curl -s -X GET -H "x-auth-token:$TOKEN" 192.168.122.253:9696/v2.0/ports | jq "." 
{ "ports": [] }

curl -i -X PUT -d '{"subnet_id":"214ebeb5-2d08-4ae5-9d60-3c7a76d56746"}' -H "x-auth-token:$TOKEN" 192.168.122.253:9696/v2.0/routers/7d1561d1-71f9-4355-9248-5ac313de8ee3/add_router_interface
HTTP/1.1 409 Conflict
Content-Type: application/json; charset=UTF-8
Content-Length: 204
X-Openstack-Request-Id: req-ec3bad1f-84a2-4865-9cac-e63723c0a3bb
Date: Fri, 11 Dec 2015 10:11:11 GMT
{"NeutronError": {"message": "Port 570a4166-d463-4ee6-894b-f8aab6cc63b2 cannot be deleted directly via the port API: has device owner network:router_interface.", "type": "ServicePortInUse", "detail": ""}}

$ curl -s -X GET -H "x-auth-token:$TOKEN" 192.168.122.253:9696/v2.0/ports/570a4166-d463-4ee6-894b-f8aab6cc63b2 | jq "."
{ "port": { "mac_address": "fa:16:3e:c3:1c:8d", "tenant_id": "4c0b8881d3e24a1cb1afe9ea6b07d946", "binding:vif_type": "unbound", "binding:vnic_type": "normal", "binding:vif_details": {}, "binding:profile": {}, "port_security_enabled": false, "device_owner": "network:router_interface", "dns_assignment": [ { "fqdn": "host-172-16-1-1.openstacklocal.", "ip_address": "172.16.1.1", "hostname": "host-172-16-1-1" } ], "extra_dhcp_opts": [], "allowed_address_pairs": [], "binding:host_id": "", "status": "DOWN", "fixed_ips": [ { "ip_address": "172.16.1.1", "subnet_id": "214ebeb5-2d08-4ae5-9d60-3c7a76d56746" } ], "id": "570a4166-d463-4ee6-894b-f8aab6cc63b2", "security_groups": [], "device_id": "7d1561d1-71f9-4355-9248-5ac313de8ee3", "name": "", "admin_state_up": true, "network_id": "11515598-c20e-4e8a-94d0-1fef56f4607d", "dns_name": "" } }

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1525163/+subscriptions
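The recovery path described in the bug above can be sketched as follows. This is a stand-in illustration, not neutron code: the names (create_port, delete_port, l3_port_check, ServicePortInUse) mirror the report, but the implementations here are hypothetical stubs.

```python
# Hypothetical sketch of the delete_port recovery path from the bug
# report above; FakePlugin is a stand-in, not the real ML2 plugin.

class ServicePortInUse(Exception):
    pass

class FakePlugin:
    def __init__(self):
        self.ports = {}

    def create_port(self, port_id, device_owner):
        self.ports[port_id] = device_owner

    def delete_port(self, port_id, l3_port_check=True):
        # The l3 check rejects direct deletion of router-interface ports.
        if l3_port_check and self.ports[port_id] == "network:router_interface":
            raise ServicePortInUse(port_id)
        del self.ports[port_id]

plugin = FakePlugin()
plugin.create_port("p1", "network:router_interface")
try:
    plugin.delete_port("p1")  # recovery without the fix: rejected
except ServicePortInUse:
    plugin.delete_port("p1", l3_port_check=False)  # the proposed fix
assert "p1" not in plugin.ports
```

The point of the fix is exactly the second call: once device_owner is already recorded, cleanup must bypass the l3 ownership check.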
[Yahoo-eng-team] [Bug 1482773] Re: H405 violations: multi line docstring summary not separated with an empty line
** No longer affects: cinder -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1482773 Title: H405 violations: multi line docstring summary not separated with an empty line Status in OpenStack Identity (keystone): Fix Released Status in keystoneauth: Fix Released Status in keystonemiddleware: Fix Released Status in python-keystoneclient: Fix Released Bug description: Keystone's tox.ini contains an "ignore" entry for H405. All violations of H405 should be fixed so that H405 can be removed from the ignore list. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1482773/+subscriptions
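For reference, an illustration of what the H405 check in the bug above flags: a multi-line docstring whose summary line is not followed by a blank line. The function names are illustrative.

```python
# Illustration of hacking check H405: a multi-line docstring summary
# must be separated from the rest of the docstring by an empty line.

def bad():
    """Summary line runs straight into
    the rest of the docstring, which violates H405.
    """

def good():
    """Summary line on its own.

    Details follow after a blank separator line, satisfying H405.
    """

# The blank separator shows up as consecutive newlines after the summary.
assert "\n\n" in good.__doc__
assert "\n\n" not in bad.__doc__
```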
[Yahoo-eng-team] [Bug 1411719] Re: The full log of gate-horizon-docs job contains a lot of errors, yet it succeeds
Reviewed: https://review.openstack.org/374056
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f880f6c723cabf3eb2bf9053d4560e6c1dc6844b
Submitter: Jenkins
Branch: master

commit f880f6c723cabf3eb2bf9053d4560e6c1dc6844b
Author: Akihiro Motoki
Date: Wed Sep 21 07:08:28 2016 +0900

    Turn on docs warning check in document generation

    Use the -W (turn warnings into errors) option of sphinx-build in the command line of the 'docs' tox target so that developers can easily check sphinx warnings. Also run the same documentation check in the 'pep8' tox target to detect sphinx warnings in the gate. The current 'docs' job in the gate intentionally does not use 'tox -edocs' and instead calls build_sphinx via 'tox -evenv' [1], so sphinx warnings are not detected in the 'docs' job. Note that we no longer generate the whole code reference, so this change does not increase the time of 'tox -epep8' much, while we can prevent new sphinx warnings.

    [1] https://github.com/openstack-infra/project-config/blob/6b50d7e3a69c49d1e035c6d026d8e3910e62a981/jenkins/scripts/run-docs.sh#L16-L19

    Closes-Bug: #1411719
    Closes-Bug: #1486222
    Change-Id: Idc6e8a1c5762eba113b2d110d5fa223ab7406be3

** Changed in: horizon
   Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1411719

Title: The full log of gate-horizon-docs job contains a lot of errors, yet it succeeds

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: If you look at e.g. https://jenkins05.openstack.org/job/gate-horizon-docs/2699/consoleFull you'll find there are a lot of WARNING/SEVERE/ERROR messages due to sphinx failing to recognize some attributes/methods/whatever it is. Yet the gate job succeeds in Jenkins. Putting aside the actual problems Sphinx encounters, the problem that should be solved first is making the gate-horizon-docs job more honest. In case you are not able to open the specified link, you can open the full log of any other horizon commit by clicking the gate-horizon-docs link inside any of the horizon commits at http://status.openstack.org/zuul/

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1411719/+subscriptions
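The commit message above describes wiring sphinx-build's -W flag into tox. A hedged sketch of what such a tox.ini docs environment might look like; the paths and deps here are illustrative, not Horizon's actual configuration:

```ini
[testenv:docs]
deps = -r{toxinidir}/test-requirements.txt
commands =
    sphinx-build -W -b html doc/source doc/build/html
```

With -W, any sphinx WARNING fails the build, so the gate job can no longer succeed while the log fills with warnings.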
[Yahoo-eng-team] [Bug 1486222] Re: lots of docs warning showing up
Reviewed: https://review.openstack.org/374056
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f880f6c723cabf3eb2bf9053d4560e6c1dc6844b
Submitter: Jenkins
Branch: master

commit f880f6c723cabf3eb2bf9053d4560e6c1dc6844b
Author: Akihiro Motoki
Date: Wed Sep 21 07:08:28 2016 +0900

    Turn on docs warning check in document generation

    Use the -W (turn warnings into errors) option of sphinx-build in the command line of the 'docs' tox target so that developers can easily check sphinx warnings. Also run the same documentation check in the 'pep8' tox target to detect sphinx warnings in the gate. The current 'docs' job in the gate intentionally does not use 'tox -edocs' and instead calls build_sphinx via 'tox -evenv' [1], so sphinx warnings are not detected in the 'docs' job. Note that we no longer generate the whole code reference, so this change does not increase the time of 'tox -epep8' much, while we can prevent new sphinx warnings.

    [1] https://github.com/openstack-infra/project-config/blob/6b50d7e3a69c49d1e035c6d026d8e3910e62a981/jenkins/scripts/run-docs.sh#L16-L19

    Closes-Bug: #1411719
    Closes-Bug: #1486222
    Change-Id: Idc6e8a1c5762eba113b2d110d5fa223ab7406be3

** Changed in: horizon
   Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1486222

Title: lots of docs warning showing up

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: There are a high number of warnings in the doc generation process. These should all be fixed, and errors should block merging.

/home/david-lyle/horizon_test/doc/source/contributing.rst:299: WARNING: Enumerated list ends without a blank line; unexpected unindent.
/home/david-lyle/horizon_test/horizon/forms/fields.py:docstring of horizon.forms.fields.SelectWidget:33: ERROR: Unexpected indentation.
/home/david-lyle/horizon_test/horizon/forms/fields.py:docstring of horizon.forms.fields.SelectWidget:34: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/david-lyle/horizon_test/horizon/forms/fields.py:docstring of horizon.forms.fields.SelectWidget:37: SEVERE: Unexpected section title or transition.
/home/david-lyle/horizon_test/horizon/forms/fields.py:docstring of horizon.forms.fields.SelectWidget:38: SEVERE: Unexpected section title or transition.
/home/david-lyle/horizon_test/horizon/forms/fields.py:docstring of horizon.forms.fields.SelectWidget:42: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/david-lyle/horizon_test/horizon/forms/fields.py:docstring of horizon.forms.fields.SelectWidget:48: WARNING: Definition list ends without a blank line; unexpected unindent.
/home/david-lyle/horizon_test/doc/source/ref/run_tests.rst:190: WARNING: Title underline too short. ESLint
/home/david-lyle/horizon_test/horizon/tables/base.py:docstring of horizon.tables.Column:124: ERROR: Unexpected indentation.
/home/david-lyle/horizon_test/horizon/tables/base.py:docstring of horizon.tables.Column:125: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/david-lyle/horizon_test/horizon/tables/base.py:docstring of horizon.tables.Column:127: WARNING: Definition list ends without a blank line; unexpected unindent.
/home/david-lyle/horizon_test/horizon/tables/base.py:docstring of horizon.tables.Column:128: SEVERE: Unexpected section title.
...
...
/home/david-lyle/horizon_test/horizon/__init__.py:docstring of horizon.Dashboard:8: WARNING: duplicate object description of horizon.Dashboard.name, other instance in /home/david-lyle/horizon_test/doc/source/ref/horizon.rst, use :noindex: for one of them
/home/david-lyle/horizon_test/horizon/__init__.py:docstring of horizon.Dashboard:14: WARNING: duplicate object description of horizon.Dashboard.slug, other
[Yahoo-eng-team] [Bug 1617262] Re: Some places are displaying unnecessary a bullet point
Reviewed: https://review.openstack.org/365429
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=88878488541befe786c60219988b06180117cf76
Submitter: Jenkins
Branch: master

commit 88878488541befe786c60219988b06180117cf76
Author: Kenji Ishii
Date: Mon Sep 5 12:46:37 2016 +0900

    Fix unnecessary bullet point

    Some places display an unnecessary bullet point:
    - The error message on the login page (this ticket has a screenshot); there is no need to use 'ul'.
    - The 'No users/groups' row in the Project Member/Group tab of the Create/Update Project modal.
    - The 'No projects found' row in the Flavor Access tab of the Edit Flavor modal.
    These have the same structure as a membership, so a nav class needs to be added. This patch fixes that.

    Change-Id: I3f906808a53807ebd8c0d6a342b0219383776e91
    Closes-bug: #1617262

** Changed in: horizon
   Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1617262

Title: Some places are displaying unnecessary a bullet point

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: Please see the screenshots. In Change-Id: I805d2e4f9d0ae9703e725f9be9090f8fea5e948c, the ul list-style was removed. Some places still seem to need it.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1617262/+subscriptions
[Yahoo-eng-team] [Bug 1629460] [NEW] python35 values for http status codes not ints
Public bug reported: The values of Python 3.5 http status codes via http_client.OK, for example, are enums, not ints as in py27 and py34. This can cause trouble, and the unit tests are catching the difference.

Python 2.7/3.4
>>> from six.moves import http_client
>>> http_client.BAD_REQUEST
400

Python 3.5
>>> from six.moves import http_client
>>> http_client.BAD_REQUEST
<HTTPStatus.BAD_REQUEST: 400>

https://bitbucket.org/gutworth/six/issues/161/inconsistent-value-returned-from
https://bugs.python.org/issue26123

** Affects: keystone
   Importance: Undecided
   Assignee: Eric Brown (ericwb)
   Status: New

** Changed in: keystone
   Assignee: (unassigned) => Eric Brown (ericwb)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1629460

Title: python35 values for http status codes not ints

Status in OpenStack Identity (keystone): New

Bug description: The values of Python 3.5 http status codes via http_client.OK, for example, are enums, not ints as in py27 and py34. This can cause trouble, and the unit tests are catching the difference.

Python 2.7/3.4
>>> from six.moves import http_client
>>> http_client.BAD_REQUEST
400

Python 3.5
>>> from six.moves import http_client
>>> http_client.BAD_REQUEST
<HTTPStatus.BAD_REQUEST: 400>

https://bitbucket.org/gutworth/six/issues/161/inconsistent-value-returned-from
https://bugs.python.org/issue26123

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1629460/+subscriptions
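A stdlib check of the behaviour described above (six.moves.http_client maps to http.client on Python 3). On 3.5+ the constants are HTTPStatus enum members, but HTTPStatus derives from IntEnum, so numeric comparisons still hold even though the repr differs from a plain 400:

```python
import http.client

status = http.client.BAD_REQUEST
assert status == 400            # enum members compare equal to ints
assert isinstance(status, int)  # HTTPStatus is an IntEnum subclass
assert int(status) == 400       # explicit conversion for string formatting
```

The failures the bug refers to typically come from string formatting (repr/str of the enum member), not from numeric comparison.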
[Yahoo-eng-team] [Bug 1628883] Re: Minimum requirements too low on oslo.log for keystone
This bug was fixed in the package keystone - 2:10.0.0~rc2-0ubuntu2

---
keystone (2:10.0.0~rc2-0ubuntu2) yakkety; urgency=medium

  * d/control: oslo.log min version level in global-requirements is too low, so set min version to upper-constraints level (LP: #1628883).

 -- Corey Bryant  Thu, 29 Sep 2016 07:54:36 -0400

** Changed in: keystone (Ubuntu)
   Status: Triaged => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1628883

Title: Minimum requirements too low on oslo.log for keystone

Status in OpenStack Identity (keystone): Fix Released
Status in keystone package in Ubuntu: Fix Released

Bug description: After upgrading keystone from mitaka to newton-rc1 on Xenial I am getting this error:

$ keystone-manage db_sync
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 6, in <module>
    from keystone.cmd.manage import main
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in <module>
    from keystone.cmd import cli
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in <module>
    from keystone.cmd import doctor
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", line 13, in <module>
    from keystone.cmd.doctor import caching
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", line 13, in <module>
    import keystone.conf
  File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, in <module>
    from keystone.conf import default
  File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, in <module>
    deprecated_since=versionutils.deprecated.NEWTON,
AttributeError: type object 'deprecated' has no attribute 'NEWTON'

It seems due to the fact that the installed version of oslo.log is not updated properly:

python-oslo.log:
  Installed: 3.2.0-2
  Candidate: 3.16.0-0ubuntu1~cloud0
  Version table:
     3.16.0-0ubuntu1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/newton/main amd64 Packages
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/newton/main i386 Packages
 *** 3.2.0-2 500
        500 http://mirror/ubuntu xenial/main amd64 Packages
        100 /var/lib/dpkg/status

But looking at the requirements.txt in stable/newton, even oslo.log>=1.14.0 is claimed to work.

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1628883/+subscriptions
[Yahoo-eng-team] [Bug 1629446] [NEW] 500 when a user logins in using federation
Public bug reported: A user who is part of a group in auth0 logs in just fine using the mapping below:

[
    {
        "local": [
            { "user": { "name": "{1}::{0}" } },
            { "domain": { "id": "default" }, "groups": "{1}" }
        ],
        "remote": [
            { "type": "HTTP_OIDC_CLAIM_EMAIL" },
            { "type": "HTTP_OIDC_CLAIM_GROUPS" }
        ]
    }
]

Once the user is removed from the group in auth0 and tries to log in:

Expected Result: Logging on to horizon as a federation user using the OpenID Connect protocol fails with a 401 code: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}

Actual Result: Got 500 instead of 401: {"error": {"message": "An unexpected error prevented the server from fulfilling your request.", "code": 500, "title": "Internal Server Error"}}

Error in keystone-all.logs:

2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi [req-f5f27f59-788b-494b-9719-bcdbb6b628c0 - - - - -] unexpected EOF while parsing (<unknown>, line 0)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi Traceback (most recent call last):
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py", line 249, in __call__
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     result = method(context, **params)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/controllers.py", line 329, in federated_idp_specific_sso_auth
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     res = self.federated_authentication(context, idp_id, protocol_id)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/controllers.py", line 302, in federated_authentication
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     return self.authenticate_for_token(context, auth=auth)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/controllers.py", line 396, in authenticate_for_token
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     self.authenticate(context, auth_info, auth_context)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/controllers.py", line 520, in authenticate
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     auth_context)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py", line 65, in authenticate
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     self.identity_api)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py", line 141, in handle_unscoped_token
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     federation_api, identity_api)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py", line 194, in apply_mapping_filter
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     identity_provider, protocol, assertion)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in wrapped
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     __ret_val = __f(*args, **kwargs)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/core.py", line 98, in evaluate
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     mapped_properties = rule_processor.process(assertion_data)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/utils.py", line 544, in process
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     mapped_properties = self._transform(identity_values)
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File "/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/utils.py", line 647, in _transform
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi     identity_value['groups'])
2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File
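The "unexpected EOF while parsing" in the log above is the SyntaxError that ast.literal_eval raises when handed an empty string. A plausible trigger, consistent with the log but an assumption about the exact keystone code path, is the groups claim arriving empty once the user has been removed from the auth0 group:

```python
import ast

# literal_eval("") raises SyntaxError because there is no expression to
# parse -- the same "unexpected EOF" message seen in the keystone log.
error = None
try:
    ast.literal_eval("")
except SyntaxError as exc:
    error = exc

assert isinstance(error, SyntaxError)
```

An unguarded literal_eval on mapped claim values turns a missing/empty claim into an unhandled SyntaxError, which the WSGI layer reports as a 500 rather than the expected 401.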
[Yahoo-eng-team] [Bug 1585100] Re: lbaas-poolmember: subnet is optional according to docs, but actually required
** Also affects: neutron
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1585100

Title: lbaas-poolmember: subnet is optional according to docs, but actually required

Status in neutron: New
Status in openstack-api-site: New

Bug description: I have devstack master with neutron lbaas with octavia. And I tried to create a pool member with this Heat template:

heat_template_version: 2013-05-23
resources:
  port:
    type: OS::Neutron::LBaaS::PoolMember
    properties:
      pool: 7abe4251-c643-414a-8776-7346b9c09e71
      address: 5.5.5.5
      protocol_port:

And then I got an error:

2016-05-24 11:20:26.239 INFO heat.engine.resource [req-3734d004-fe1d-4e55-8978-15c9251d5f30 None demo] CREATE: PoolMember "port" Stack "test_pool" [0ae40413-9d58-4840-8915-67dbc43f9035]
2016-05-24 11:20:26.239 TRACE heat.engine.resource Traceback (most recent call last):
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/opt/stack/heat/heat/engine/resource.py", line 715, in _action_recorder
2016-05-24 11:20:26.239 TRACE heat.engine.resource     yield
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/opt/stack/heat/heat/engine/resource.py", line 795, in _do_action
2016-05-24 11:20:26.239 TRACE heat.engine.resource     yield self.action_handler_task(action, args=handler_args)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/opt/stack/heat/heat/engine/scheduler.py", line 329, in wrapper
2016-05-24 11:20:26.239 TRACE heat.engine.resource     step = next(subtask)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/opt/stack/heat/heat/engine/resource.py", line 763, in action_handler_task
2016-05-24 11:20:26.239 TRACE heat.engine.resource     done = check(handler_data)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/opt/stack/heat/heat/engine/resources/openstack/neutron/lbaas/pool_member.py", line 168, in check_create_complete
2016-05-24 11:20:26.239 TRACE heat.engine.resource     self.pool_id, {'member': properties})['member']
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1110, in create_lbaas_member
2016-05-24 11:20:26.239 TRACE heat.engine.resource     return self.post(self.lbaas_members_path % lbaas_pool, body=body)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 347, in post
2016-05-24 11:20:26.239 TRACE heat.engine.resource     headers=headers, params=params)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 282, in do_request
2016-05-24 11:20:26.239 TRACE heat.engine.resource     self._handle_fault_response(status_code, replybody, resp)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 257, in _handle_fault_response
2016-05-24 11:20:26.239 TRACE heat.engine.resource     exception_handler_v20(status_code, error_body)
2016-05-24 11:20:26.239 TRACE heat.engine.resource   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 84, in exception_handler_v20
2016-05-24 11:20:26.239 TRACE heat.engine.resource     request_ids=request_ids)
2016-05-24 11:20:26.239 TRACE heat.engine.resource BadRequest: Failed to parse request. Required attribute 'subnet_id' not specified
2016-05-24 11:20:26.239 TRACE heat.engine.resource Neutron server returns request_ids: ['req-307da359-cccb-4b57-a9cb-0a29186e62cb']

I also tried to create a pool member with the CLI, and got a message that subnet is required to create a pool member. But according to the docs at http://developer.openstack.org/api-ref-networking-v2-ext.html#createMemberv2 subnet is optional.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1585100/+subscriptions
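The server-side behaviour hit above, required-attribute validation rejecting a body without subnet_id, can be sketched as follows. The names here (validate_member, REQUIRED_MEMBER_ATTRS, BadRequest) are illustrative stand-ins, not neutron-lbaas code; only the error text mirrors the traceback.

```python
# Stand-in sketch of the validation that produced the BadRequest above.

class BadRequest(Exception):
    pass

REQUIRED_MEMBER_ATTRS = ("address", "protocol_port", "subnet_id")

def validate_member(body):
    # Reject any member body missing a required attribute, as the
    # neutron server does despite the API docs marking subnet optional.
    for attr in REQUIRED_MEMBER_ATTRS:
        if attr not in body:
            raise BadRequest(
                "Failed to parse request. "
                "Required attribute '%s' not specified" % attr)
    return body

caught = None
try:
    validate_member({"address": "5.5.5.5", "protocol_port": 80})
except BadRequest as exc:
    caught = str(exc)

assert caught is not None and "subnet_id" in caught
```

The mismatch in the bug is precisely between this server-side required list and the API documentation, which lists subnet as optional.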
[Yahoo-eng-team] [Bug 1463525] Re: There is no volume encryption support for rbd-backed volumes
This appears to have been fixed on the Cinder side with Matt's Cinder patch referenced above. If there is more work to do on the Cinder side, please reopen.

** Changed in: cinder
   Status: Triaged => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1463525

Title: There is no volume encryption support for rbd-backed volumes

Status in Cinder: Fix Released
Status in OpenStack Compute (nova): Confirmed

Bug description: This came up as a discussion point in the nova IRC channel today because someone was talking about adding encryption support to Ceph in Nova, and I pointed out that there is already a ceph job that runs the tempest luks/cryptsetup encrypted volume tests successfully, so why aren't those failing if it's not supported today? We looked at the code and logs and found that when nova tries to get volume encryption metadata from cinder for rbd-backed instances, nothing comes back, so nova isn't doing anything with volume encryption using its providers (luks / cryptsetup). Change https://review.openstack.org/#/c/189799/ in nova adds logging to see this: Confirmed that for LVM-backed Cinder we get something back: http://logs.openstack.org/99/189799/2/check/check-tempest-dsvm-full/c3ee602/logs/screen-n-cpu.txt.gz#_2015-06-09_18_18_18_078 For Ceph we don't: http://logs.openstack.org/99/189799/2/check/check-tempest-dsvm-full-ceph/353db23/logs/screen-n-cpu.txt.gz#_2015-06-09_18_21_16_723 This might be working as designed, I'm not sure, but I'm opening the bug to track the effort, since if you think you have encrypted volumes when using ceph and nova, you probably don't, so there is a false sense of security here, which is a bug.
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1463525/+subscriptions
[Yahoo-eng-team] [Bug 1626402] Re: ERROR (ClientException): Unexpected API Error
** Package changed: nova (Ubuntu) => nova

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1626402

Title: ERROR (ClientException): Unexpected API Error

Status in OpenStack Compute (nova): New

Bug description: Hi, I was going through the openstack docs and doing hands-on practice as well. link: http://docs.openstack.org/admin-guide/compute-networking-nova.html Heading: Using multinic I got the error below after running the command:

stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$ nova network-create first-net --fixed-range-v4 20.20.0.0/24 --project-id nova
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-8ee17d86-7d8b-438a-a80e-26389fbf565a)

I am using the mitaka version in devstack.

Thanks and Regards, Suraj

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1626402/+subscriptions
[Yahoo-eng-team] [Bug 1629133] Re: New neutron subnet pool support breaks multinode testing.
** Also affects: manila
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1629133

Title: New neutron subnet pool support breaks multinode testing.

Status in devstack: In Progress
Status in Ironic: New
Status in Manila: New
Status in neutron: New

Bug description: The new subnet pool support in devstack breaks multinode testing because it results in the route for 10.0.0.0/8 being set via br-ex when the host has interfaces that are actually on 10.0.0.0/8 networks, and that is where we need the routes to go out. Multinode testing is affected because it uses these 10.0.0.0/8 addresses to run the vxlan overlays between hosts. For many years devstack-gate has set FIXED_RANGE to ensure that the networks devstack uses do not interfere with the underlying test host's networking; however, this setting was completely ignored when setting up the subnet pools. I think the correct way to fix this is to use a much smaller subnet pool range to avoid conflicting with every possible 10.0.0.0/8 network in the wild, possibly by defaulting to the existing FIXED_RANGE information. Using the existing information will help ensure that anyone with networks in 10.0.0.0/8 will continue to work if they have specified a non-conflicting range via this variable. In addition to this, we need to clearly document what this new devstack behavior does and how people can work around it should they conflict with the new defaults we end up choosing. I have proposed https://review.openstack.org/379543, which mostly works except that it fails one tempest test that apparently depends on this new subnet pool support; specifically, the v6 range isn't large enough, as I understand it.
To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1629133/+subscriptions
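The conflict described above is easy to check with the stdlib: a /8 default subnet pool overlaps any host interface on a 10.x network, while a narrower pool can be chosen to miss it. The specific prefixes below are illustrative, not the values devstack or devstack-gate actually use.

```python
import ipaddress

host_net = ipaddress.ip_network("10.4.0.0/16")       # test-host overlay net (example)
wide_pool = ipaddress.ip_network("10.0.0.0/8")       # a /8 default pool
narrow_pool = ipaddress.ip_network("10.100.0.0/16")  # a non-conflicting choice

# The /8 pool shadows the host's 10.x routes via br-ex; a narrower
# pool avoids the collision entirely.
assert wide_pool.overlaps(host_net)
assert not narrow_pool.overlaps(host_net)
```

This is the shape of the proposed fix: derive the pool from FIXED_RANGE (or another small, known-free prefix) so overlaps(host_net) is false by construction.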
[Yahoo-eng-team] [Bug 1629347] Re: test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip fails with "(sqlite3.OperationalError) table floatingips already exists"
Reviewed: https://review.openstack.org/380371
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=438d4a83b0967f97bef2fc4c86f956e5fa6ca685
Submitter: Jenkins
Branch: master

commit 438d4a83b0967f97bef2fc4c86f956e5fa6ca685
Author: Ihar Hrachyshka
Date: Thu Sep 29 20:11:07 2016 +

    TestSanityCheck: drop test tables during cleanup

    The tests were creating tables in memory but were not cleaning them up, making them clash on the attempt to create the floatingips table.

    Change-Id: If28ef4ce76cd36e7240d106b75e43d16f43e2c16
    Closes-Bug: #1629347

** Changed in: neutron
   Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1629347

Title: test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip fails with "(sqlite3.OperationalError) table floatingips already exists"

Status in neutron: Fix Released

Bug description: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip%5C%22

Traceback (most recent call last):
  File "neutron/tests/base.py", line 125, in func
    return f(self, *args, **kwargs)
  File "neutron/tests/functional/db/test_migrations.py", line 432, in test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip
    floatingips.create(conn)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 742, in create
    checkfirst=checkfirst)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1532, in _run_visitor
    **kwargs).traverse_single(element)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 121, in traverse_single
    return meth(obj, **kw)
  File
"/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 767, in visit_table include_foreign_key_constraints=include_foreign_key_constraints File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute return meth(self, multiparams, params) File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl compiled File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context context) File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception exc_info File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause reraise(type(exception), exception, tb=exc_tb, cause=cause) File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context context) File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 462, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table floatingips already exists [SQL: u'\nCREATE TABLE floatingips (\n\tfloating_network_id VARCHAR(36), 
\n\tfixed_port_id VARCHAR(36), \n\tfixed_ip_address VARCHAR(64)\n)\n\n'] 7 hits since the test landed in the tree. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1629347/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
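The clash the patch fixes can be reproduced in miniature with plain sqlite3 (an illustrative sketch, not the neutron test harness): without cleanup, a second creation of the same table raises the reported OperationalError, and dropping the table during teardown makes re-creation succeed.

```python
# Illustrative reproduction with plain sqlite3 (not the neutron test code):
# creating the same table twice without cleanup raises the reported error;
# dropping it during cleanup avoids the clash on the next test run.
import sqlite3

conn = sqlite3.connect(':memory:')
ddl = ('CREATE TABLE floatingips ('
       'floating_network_id VARCHAR(36), '
       'fixed_port_id VARCHAR(36), '
       'fixed_ip_address VARCHAR(64))')
conn.execute(ddl)

try:
    conn.execute(ddl)          # a second test creating the same table
    err = None
except sqlite3.OperationalError as exc:
    err = str(exc)             # "table floatingips already exists"

# The fix from the commit, in miniature: drop the table on cleanup
# (addCleanup in the real test class) so the next creation succeeds.
conn.execute('DROP TABLE floatingips')
conn.execute(ddl)
```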
[Yahoo-eng-team] [Bug 1626402] [NEW] ERROR (ClientException): Unexpected API Error
You have been subscribed to a public bug: Hi, I was going through the openstack link and doing hands-on practice as well. link: http://docs.openstack.org/admin-guide/compute-networking-nova.html Heading: Using multinic I got the error below after running the command: stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$ nova network-create first-net --fixed-range-v4 20.20.0.0/24 --project-id nova ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-8ee17d86-7d8b-438a-a80e-26389fbf565a) stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$ stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$ I am using the Mitaka version in devstack. Thanks and Regards, Suraj ** Affects: nova Importance: Undecided Status: New -- ERROR (ClientException): Unexpected API Error https://bugs.launchpad.net/bugs/1626402 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1629396] [NEW] create images requires admin role ignoring policy.json
Public bug reported: Set up a default OpenStack environment using keystone's sample_data.sh This gives user "glance" the "_member_" role for project "service". Couple this with a policy.json containing the following: { "context_is_admin": "role:admin", "default": "", "add_image": "", "delete_image": "", . . } If you attempt to create a new image as the "glance" user, it fails with the following error: 403 Forbidden: You are not authorized to complete this action. (HTTP 403) Delving into the code, you can see is_admin is enforced: api/authorization.py:new_image(): if not self.context.is_admin: if owner is None or owner != self.context.owner: message = _("You are not permitted to create images " "owned by '%s'.") raise exception.Forbidden(message % owner) Thus indicating that the user creating images must have the "admin" role for this project. However, this same user can successfully delete images, as delete uses policy enforcement only and adheres to whatever is defined within policy.json: api/policy.py:delete(): def delete(self): self.policy.enforce(self.context, 'delete_image', self.target) return self.image.delete() This seems inconsistent; image creation should probably use policy enforcement and not have a hard-coded requirement for the admin role. ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1629396 Title: create images requires admin role ignoring policy.json Status in Glance: New Bug description: Set up a default OpenStack environment using keystone's sample_data.sh This gives user "glance" the "_member_" role for project "service". Couple this with a policy.json containing the following: { "context_is_admin": "role:admin", "default": "", "add_image": "", "delete_image": "", . . } If you attempt to create a new image as the "glance" user, it fails with the following error: 403 Forbidden: You are not authorized to complete this action. (HTTP 403) Delving into the code, you can see is_admin is enforced: api/authorization.py:new_image(): if not self.context.is_admin: if owner is None or owner != self.context.owner: message = _("You are not permitted to create images " "owned by '%s'.") raise exception.Forbidden(message % owner) Thus indicating that the user creating images must have the "admin" role for this project. However, this same user can successfully delete images, as delete uses policy enforcement only and adheres to whatever is defined within policy.json: api/policy.py:delete(): def delete(self): self.policy.enforce(self.context, 'delete_image', self.target) return self.image.delete() This seems inconsistent; image creation should probably use policy enforcement and not have a hard-coded requirement for the admin role. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1629396/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
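The inconsistency described above reduces to a small toy model (illustrative Python only, not glance code; the 'rules' dict stands in for policy.json): the delete path consults the rule table, while the create path short-circuits on a hard-coded admin check.

```python
# Illustrative toy, not glance code: 'rules' stands in for policy.json.
# An empty rule string means "allow anyone", mirroring the reporter's file.
rules = {'add_image': '', 'delete_image': ''}

def policy_allows(action, context):
    rule = rules.get(action, 'role:admin')  # unknown actions need admin
    return rule == '' or 'admin' in context['roles']

def delete_image(context):
    # The delete path consults the policy file only.
    return policy_allows('delete_image', context)

def add_image(context):
    # The create path as reported: a hard-coded admin requirement that
    # ignores the permissive 'add_image' rule above.
    return 'admin' in context['roles']

member = {'roles': ['_member_']}  # e.g. the "glance" service user
```

With this model, delete succeeds for the member while create fails, even though policy.json would allow both.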
[Yahoo-eng-team] [Bug 1621161] Re: api-ref: need version history on versions page
Reviewed: https://review.openstack.org/366944 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=5ac85fdd75076c59844f058978111bd2a6a6c292 Submitter: Jenkins Branch: master commit 5ac85fdd75076c59844f058978111bd2a6a6c292 Author: Brian Rosmaita Date: Wed Sep 7 15:55:41 2016 -0400 api-ref: add versions history The 'versions' response contains the status of the Images API versions, but does not indicate the releases when the statuses went into effect. This patch adds a Version History to the 'versions' api-ref page. The language used (for example, "Bexar changes") is consistent with what's been adopted for other parts of the images api-ref (see the discussion on https://review.openstack.org/#/c/356693/ ) This patch is current for Newton: - v1 deprecation: already merged as commit 63e6dbb1eb006758fbcf7cae83e1d2eacf46b4ab - v2 minor version bump: the dependency stated below Depends-On: I5d1c4380682efa4c15ff0f294f269c800fe6762a Change-Id: Id920a3284a4be23032cc4a23e04726fab6d24361 Closes-bug: #1621161 Partial-bug: #1618495 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1621161 Title: api-ref: need version history on versions page Status in Glance: Fix Released Bug description: Need release name/API version/status info on 'versions' api-ref page. The 'versions' response contains the status of the Images API versions, but does not indicate the releases when the statuses went into effect. Address by adding a Version History to the versions page. 
Use language consistent with https://review.openstack.org/#/c/356693/ To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1621161/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1629368] Re: Assert fails if Subnet has extra fields
I don't think it's a neutron issue to solve. Also, please add details about the tempest failure you see. It's not clear where you hit it. ** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1629368 Title: Assert fails if Subnet has extra fields Status in tempest: New Bug description: Why are the assertion checks done in such a way that if a subnet has a couple more fields other than {u'name', u'enable_dhcp', u'network_id', u'tenant_id', u'dns_nameservers' , u'ipv6_ra_mode', u'allocation_pools',u'gateway_ip', u'ipv6_address_mode', u'ip_version', u'host_routes', u'cidr', u'id', u'subnetpool_id'} the assertions fail? The tests should check that all these fields are present and should pass regardless of whether the subnet has more fields. For example, subnet-show outputs: the assertion between [1] and [2] should pass. [1] {u'name': u'', u'enable_dhcp': True, u'network_id': u'047e846a-abb9-4174-a9f7-daef5c125003', u'tenant_id': u'14793945c3674cd28f6b795c991c6091', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'id': u'f9747d21-e2c7-417b-aab8-ad32e2322c56', u'subnetpool_id': None} [2] {u'name': u'', u'enable_dhcp': True, u'network_id': u'047e846a-abb9-4174-a9f7-daef5c125003', u'tenant_id': u'14793945c3674cd28f6b795c991c6091', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'id': u'f9747d21-e2c7-417b-aab8-ad32e2322c56', u'subnetpool_id': None, u'XYZ': u'ABC', ...etc} Thanks. 
To manage notifications about this bug go to: https://bugs.launchpad.net/tempest/+bug/1629368/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1629374] [NEW] Instance Filter misidentified with equal sign
Public bug reported: GUI Bug. The Horizon Instances page/frame shows a filter at the top, and the default filter is to filter by instance name (and this bug likely applies to other filters as well). It shows Instance Name= (enter name here) However, it really filters on something akin to %FILTER%, i.e., the term just has to APPEAR in the name, not be equal to it. The equal sign sends the wrong message and I'd suggest omitting it. I'll submit a patch when I can find this area in the code. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: "screenshot" https://bugs.launchpad.net/bugs/1629374/+attachment/4751847/+files/Screenshot%202016-09-30%2009.55.01.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1629374 Title: Instance Filter misidentified with equal sign Status in OpenStack Dashboard (Horizon): New Bug description: GUI Bug. The Horizon Instances page/frame shows a filter at the top, and the default filter is to filter by instance name (and this bug likely applies to other filters as well). It shows Instance Name= (enter name here) However, it really filters on something akin to %FILTER%, i.e., the term just has to APPEAR in the name, not be equal to it. The equal sign sends the wrong message and I'd suggest omitting it. I'll submit a patch when I can find this area in the code. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1629374/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
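The reported mismatch boils down to substring containment versus equality; a few lines of Python make it concrete (illustrative only, with made-up names; Horizon's actual filtering happens server-side with SQL-LIKE-style matching).

```python
# Toy contrast between what the "Instance Name =" label suggests (exact
# match) and what the filter actually does (substring containment,
# i.e. '%FILTER%' semantics). Instance names are made up for illustration.
instances = ['web-1', 'web-2', 'db-1']

def filter_exact(term, names):
    # What the equal sign implies.
    return [n for n in names if n == term]

def filter_contains(term, names):
    # What actually happens: the term just has to appear in the name.
    return [n for n in names if term in n]
```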
[Yahoo-eng-team] [Bug 1629371] [NEW] libvirt: volume snapshot stacktraces if the qemu guest agent is not available
Public bug reported: The libvirt driver attempts to quiesce the filesystem when doing a volume snapshot and will stacktrace if the qemu guest agent is not available: http://logs.openstack.org/86/147186/41/experimental/gate-tempest-dsvm- full-devstack-plugin-nfs- nv/fe58d89/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-30_10_36_32_039 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [req-93930c3c-4482-4c30-97ad-4dbcd251a9ba nova service] [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] Unable to create quiesced VM snapshot, attempting again with quiescing disabled. 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] Traceback (most recent call last): 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1827, in _volume_snapshot_create 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] reuse_ext=True, quiesce=True) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 508, in snapshot 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] self._domain.snapshotCreateXML(conf.to_xml(), flags=flags) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] result = proxy_call(self._autowrap, f, *args, **kwargs) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call 2016-09-30 
10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] rv = execute(f, *args, **kwargs) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] six.reraise(c, e, tb) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] rv = meth(*args, **kwargs) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 2592, in snapshotCreateXML 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', dom=self) 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] libvirtError: The volume snapshot method then goes on to snapshot the guest without quiescing first. We shouldn't stacktrace an error on this, we should dump a warning in the logs at most. ** Affects: nova Importance: Medium Assignee: Matt Riedemann (mriedem) Status: Triaged ** Tags: libvirt snapshot ** Changed in: nova Assignee: (unassigned) => Matt Riedemann (mriedem) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1629371 Title: libvirt: volume snapshot stacktraces if the qemu guest agent is not available Status in OpenStack Compute (nova): Triaged Bug description: The libvirt driver attempts to quiesce the filesystem when doing a volume snapshot and will stacktrace if the qemu guest agent is not available: http://logs.openstack.org/86/147186/41/experimental/gate-tempest-dsvm- full-devstack-plugin-nfs- nv/fe58d89/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-30_10_36_32_039 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [req-93930c3c-4482-4c30-97ad-4dbcd251a9ba nova service] [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] Unable to create quiesced VM snapshot, attempting again with quiescing disabled. 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] Traceback (most recent call last): 2016-09-30 10:36:32.039 10790 ERROR nova.virt.libvirt.driver [instance: 01211c7e-b3ac-4d17-80bb-30fbe8494c08] File
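The behaviour suggested in the report (log a warning and fall back, rather than dump an ERROR traceback for an expected condition) can be sketched as follows; do_snapshot and QemuAgentUnavailable are stand-ins for illustration, not nova or libvirt APIs.

```python
# Hedged sketch of the suggested behaviour: attempt a quiesced snapshot,
# and on failure (e.g. guest agent unavailable) log a warning and retry
# without quiescing. 'do_snapshot' and 'QemuAgentUnavailable' are
# stand-ins, not real nova/libvirt names.
import logging

LOG = logging.getLogger(__name__)

class QemuAgentUnavailable(Exception):
    pass

def do_snapshot(quiesce):
    # Simulates the failing quiesced path from the bug report.
    if quiesce:
        raise QemuAgentUnavailable('QEMU guest agent is not connected')
    return 'snapshot-ok'

def snapshot_with_fallback():
    try:
        return do_snapshot(quiesce=True)
    except QemuAgentUnavailable as exc:
        # LOG.warning, not LOG.exception: the fallback path is expected,
        # so no traceback should be dumped at ERROR level.
        LOG.warning('Quiesced snapshot failed (%s); retrying without '
                    'quiescing.', exc)
        return do_snapshot(quiesce=False)
```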
[Yahoo-eng-team] [Bug 1628301] Re: SR-IOV not working in Mitaka and Intel X series NIC
Adding Neutron since I believe the issue is the neutron-sriov-nic-agent not building the port so that nova can allocate it for the instance. ** Also affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1628301 Title: SR-IOV not working in Mitaka and Intel X series NIC Status in neutron: New Status in OpenStack Compute (nova): New Bug description: The SR-IOV functionality in Mitaka seems broken; all configuration options we evaluated lead to NovaException: Unexpected vif_type=binding_failed errors, stack trace following. We are currently using this code base, along with the SR-IOV configuration posted here Nova SHA 611efbe77c712d9ac35904f659d28dd0f0c1b3ff # HEAD of "stable/mitaka" as of 08.09.2016 Neutron SHA c73269fa480a8a955f440570fc2fa6c347e3bb3c # HEAD of "stable/mitaka" as of 08.09.2016 Stack : 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] Traceback (most recent call last): 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] File "/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] yield resources 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] File "/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] block_device_info=block_device_info) 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] File 
"/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2776, in spawn 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] write_to_disk=True) 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] File "/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4729, in _get_guest_xml 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] context) 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] File "/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4595, in _get_guest_config 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] flavor, virt_type, self._host) 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] File "/openstack/venvs/nova-13.3.4/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 447, in get_config 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] _("Unexpected vif_type=%s") % vif_type) 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] NovaException: Unexpected vif_type=binding_failed 2016-09-27 16:09:09.156 10248 ERROR nova.compute.manager [instance: 00c620f0-1b5d-43c2-89f6-d5a5c4ce98fa] Interestingly the nova resource tracker seem to be able to create a list of all available sriov devices and they show up correctly inside the database as pci_device table entries 2016-09-27 16:13:52.175 10248 INFO nova.compute.resource_tracker [req-284a7832-3794-4597-b939-273ea75d45f7 - - - - -] Total usable vcpus: 32, total allocated vcpus: 0 2016-09-27 16:13:52.175 10248 INFO nova.compute.resource_tracker 
[req-284a7832-3794-4597-b939-273ea75d45f7 - - - - -] Final resource view: name=compute01 phys_ram=25 MB used_ram=2048MB phys_disk=1935GB used_disk=2GB total_vcpus=32 used_vcpus=0 pci_stats=[PciDevicePool(count=15,numa_node=None,product_id='10ed',tags={dev_type='type-VF',physical_network='physnet1'},vendor_id='8086'), PciDevicePool(count=2,numa_node=None,product_id='10fb',tags={dev_type='type-PF',physical_network='physnet1'},vendor_id='8086')] Available ports inside DB:

+-----------------+----------+------------+-----------+----------+--------------+-----------+
| compute_node_id | address  | product_id | vendor_id | dev_type | dev_id       | status    |
+-----------------+----------+------------+-----------+----------+--------------+-----------+
| 5               | :88:10.1 | 10ed       | 8086      | type-VF  | pci__88_10_1 | available |
| 5               |
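For reference, typical Mitaka-era SR-IOV wiring looks roughly like the fragment below; all values are illustrative and must match the actual environment (8086:10ed is the Intel X-series VF from the report). A gap in any of these pieces commonly surfaces as vif_type=binding_failed.

```ini
# nova.conf on the compute node (illustrative device name)
[DEFAULT]
pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet1"}

# ml2_conf.ini on the controller
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

# sriov_agent.ini on the compute node, read by neutron-sriov-nic-agent
[sriov_nic]
physical_device_mappings = physnet1:eth3
```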
[Yahoo-eng-team] [Bug 1629368] [NEW] Assert fails if Subnet has extra fields
Public bug reported: Why are the assertion checks done in such a way that if a subnet has a couple more fields other than {u'name', u'enable_dhcp', u'network_id', u'tenant_id', u'dns_nameservers' , u'ipv6_ra_mode', u'allocation_pools',u'gateway_ip', u'ipv6_address_mode', u'ip_version', u'host_routes', u'cidr', u'id', u'subnetpool_id'} the assertions fail? The tests should check that all these fields are present and should pass regardless of whether the subnet has more fields. For example, subnet-show outputs: the assertion between [1] and [2] should pass. [1] {u'name': u'', u'enable_dhcp': True, u'network_id': u'047e846a-abb9-4174-a9f7-daef5c125003', u'tenant_id': u'14793945c3674cd28f6b795c991c6091', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'id': u'f9747d21-e2c7-417b-aab8-ad32e2322c56', u'subnetpool_id': None} [2] {u'name': u'', u'enable_dhcp': True, u'network_id': u'047e846a-abb9-4174-a9f7-daef5c125003', u'tenant_id': u'14793945c3674cd28f6b795c991c6091', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'id': u'f9747d21-e2c7-417b-aab8-ad32e2322c56', u'subnetpool_id': None, u'XYZ': u'ABC', ...etc} Thanks. ** Affects: neutron Importance: Undecided Status: New ** Affects: tempest Importance: Undecided Status: New ** Also affects: tempest Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1629368 Title: Assert fails if Subnet has extra fields Status in neutron: New Status in tempest: New Bug description: Why are the assertion checks done in such a way that if a subnet has a couple more fields other than {u'name', u'enable_dhcp', u'network_id', u'tenant_id', u'dns_nameservers' , u'ipv6_ra_mode', u'allocation_pools',u'gateway_ip', u'ipv6_address_mode', u'ip_version', u'host_routes', u'cidr', u'id', u'subnetpool_id'} the assertions fail? The tests should check that all these fields are present and should pass regardless of whether the subnet has more fields. For example, subnet-show outputs: the assertion between [1] and [2] should pass. [1] {u'name': u'', u'enable_dhcp': True, u'network_id': u'047e846a-abb9-4174-a9f7-daef5c125003', u'tenant_id': u'14793945c3674cd28f6b795c991c6091', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'id': u'f9747d21-e2c7-417b-aab8-ad32e2322c56', u'subnetpool_id': None} [2] {u'name': u'', u'enable_dhcp': True, u'network_id': u'047e846a-abb9-4174-a9f7-daef5c125003', u'tenant_id': u'14793945c3674cd28f6b795c991c6091', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'id': u'f9747d21-e2c7-417b-aab8-ad32e2322c56', u'subnetpool_id': None, u'XYZ': u'ABC', ...etc} Thanks. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1629368/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
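The looser check the reporter asks for (verify the expected fields are present and equal, tolerating extras) can be sketched with an illustrative helper; this is not tempest code, and the field values are trimmed for brevity.

```python
# Illustrative helper (not tempest code): assert that every expected
# field is present in the actual subnet dict with a matching value,
# while tolerating any extra fields the backend adds.
def assert_subnet_matches(expected, actual):
    missing = {k for k in expected if k not in actual}
    assert not missing, 'missing fields: %s' % sorted(missing)
    mismatched = {k: (expected[k], actual[k])
                  for k in expected if actual[k] != expected[k]}
    assert not mismatched, 'mismatched fields: %s' % mismatched

# Trimmed example data in the spirit of [1] and [2] above.
expected = {'name': '', 'enable_dhcp': True, 'ip_version': 4}
actual = dict(expected, subnetpool_id=None, XYZ='ABC')  # extra fields
```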
[Yahoo-eng-team] [Bug 1596473] Re: Packet loss with DVR, router MAC learned and flapping
I've probably found the problem. On the compute nodes, I did not have enable_distributed_routing = True in the [agent] section of the ml2_conf.ini. As a result, the mechanism that prevents MAC address conflicts was disabled. It is interesting that it worked that well without it. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1596473 Title: Packet loss with DVR, router MAC learned and flapping Status in neutron: Invalid Bug description: Already posted on the Operator mailing list without an answer http://comments.gmane.org/gmane.comp.cloud.openstack.operators/5920 I've stumbled upon a weird condition in Neutron and couldn't find a bug filed for it. So even if it is happening with the Kilo release, it could still be relevant. I've also read the commit logs without finding anything relevant. The setup has 3 network nodes and 1 compute node currently hosting a virtual network (GRE based). DVR is enabled. I have just added IPv6 to this network and to the external network (VLAN based). The virtual network is set to SLAAC. Now, all four mentioned nodes have spawned a radvd process and VMs are getting globally routable addresses. Traffic has been statically routed to the subnet so reachability is OK in both directions. However, the link-local router address and associated MAC address is the same in all 4 qr namespaces. About 16% of packets get lost in randomly occurring bursts. Open vSwitch forwarding tables are flapping and I think that the packet loss occurs at the moment when all 4 switches learn the MAC address from another machine through a GRE tunnel simultaneously. With a second VM on the network on another compute node, the packet loss is 12%. Another router address and the external gateway address reside in a snat namespace, which exists in only one copy. When I tell the VM to route through that, there is no packet loss. 
My best solution for this so far is passing a script to the VM through user-data that changes the gateway and adds an rc script to do the same on reboot. Is there any way to change the behavior to get rid of the MAC address conflict? I have determined that pushing a host route to the VMs is not supported for IPv6. Therefore, the workaround is not feasible if uninformed users will be launching VMs. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1596473/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
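For reference, the setting identified in the resolution is a fragment of the OVS agent's ml2_conf.ini on each compute node; without it the agent does not install the flows that keep the shared distributed-router MAC from being learned over the tunnels.

```ini
[agent]
enable_distributed_routing = True
```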
[Yahoo-eng-team] [Bug 1511589] Re: maas provider, hwclock out of sync means juju will not work
MAAS now provides NTP and keeps the MAAS servers as well as the machines in sync. As such, we are closing this one. Thanks. ** Changed in: maas Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1511589 Title: maas provider, hwclock out of sync means juju will not work Status in cloud-init: Confirmed Status in curtin: Triaged Status in falkor: Fix Released Status in juju-core: Invalid Status in MAAS: Fix Released Bug description: MAAS provides no means to ensure the hardware clock is set, and juju relies on accurate clocks, leading to errors like this when you bootstrap on machines that otherwise work fine: "ERROR juju.cmd supercommand.go:430 gomaasapi: got error back from server: 401 OK (Authorization Error: \'Expired timestamp: given 1446087606 and now 1446094822 has a greater difference than threshold 300\')\nERROR failed to bootstrap environment: subprocess encountered error code 1\n\')'), 1), (u'waiting', 179), (u'succeeded', 10)]" The only thing a user can do is touch each machine, sometimes booting them into an OS to fix their hwclock (which can still drift from that point, of course). This error path is exposed when the stock 'ntpdate' from ubuntu does not work, for instance, if your lab is behind a proxy. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1511589/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1629347] [NEW] test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip fails with "(sqlite3.OperationalError) table floatingips already exists"
Public bug reported: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip%5C%22

Traceback (most recent call last):
  File "neutron/tests/base.py", line 125, in func
    return f(self, *args, **kwargs)
  File "neutron/tests/functional/db/test_migrations.py", line 432, in test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip
    floatingips.create(conn)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 742, in create
    checkfirst=checkfirst)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1532, in _run_visitor
    **kwargs).traverse_single(element)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 121, in traverse_single
    return meth(obj, **kw)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 767, in visit_table
    include_foreign_key_constraints=include_foreign_key_constraints
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl
    compiled
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
    exc_info
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 462, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table floatingips already exists [SQL: u'\nCREATE TABLE floatingips (\n\tfloating_network_id VARCHAR(36), \n\tfixed_port_id VARCHAR(36), \n\tfixed_ip_address VARCHAR(64)\n)\n\n']

7 hits since the test landed in the tree. ** Affects: neutron Importance: High Status: Confirmed ** Tags: db functional-tests gate-failure newton-rc-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1629347 Title: test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip fails with "(sqlite3.OperationalError) table floatingips already exists" Status in neutron: Confirmed Bug description: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip%5C%22

Traceback (most recent call last):
  File "neutron/tests/base.py", line 125, in func
    return f(self, *args, **kwargs)
  File "neutron/tests/functional/db/test_migrations.py", line 432, in test_check_sanity_6b461a21bcfc_dup_on_no_fixed_ip
    floatingips.create(conn)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 742, in create
    checkfirst=checkfirst)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1532, in _run_visitor
    **kwargs).traverse_single(element)
  File "/var/lib/jenkins/workspace/openstack/sqla_branch/rel_1_1/neutron/.tox/sqla_py27/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 121, in traverse_single
    return meth(obj, **kw)
  File
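The failure mode above can be reproduced with plain sqlite3, independent of SQLAlchemy: creating the same table twice in one schema raises exactly this OperationalError, and guarding the DDL (which is what SQLAlchemy's `Table.create(checkfirst=True)` does at a higher level) makes it idempotent. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ddl = """CREATE TABLE floatingips (
    floating_network_id VARCHAR(36),
    fixed_port_id VARCHAR(36),
    fixed_ip_address VARCHAR(64)
)"""
conn.execute(ddl)

# A second create, e.g. from a test that did not drop the table it made
# earlier, raises the error seen in the traceback above.
try:
    conn.execute(ddl)
except sqlite3.OperationalError as exc:
    print(exc)  # table floatingips already exists

# Guarding the statement makes repeated creation a no-op.
conn.execute(ddl.replace("CREATE TABLE", "CREATE TABLE IF NOT EXISTS"))
```

This suggests the functional test is leaking schema state between runs against the same sqlite database.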
[Yahoo-eng-team] [Bug 1424728] Re: Remove old rpc alias(es) from nova.conf and code
** Also affects: neutron Importance: Undecided Status: New ** Tags added: deprecation oslo ** Changed in: neutron Importance: Undecided => Low ** Changed in: neutron Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka) ** Changed in: neutron Status: New => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1424728 Title: Remove old rpc alias(es) from nova.conf and code Status in grenade: New Status in neutron: In Progress Status in OpenStack Compute (nova): Confirmed Status in oslo.messaging: Confirmed Bug description: We have several TRANSPORT_ALIASES entries from way back (Essex, Havana): http://git.openstack.org/cgit/openstack/nova/tree/nova/rpc.py#n48 We need a way to warn end users that they need to fix their nova.conf, so these aliases can be removed in a later release (a full cycle?). To manage notifications about this bug go to: https://bugs.launchpad.net/grenade/+bug/1424728/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
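The warning mechanism the report asks for can be sketched with the stdlib `warnings` module. The alias table and function name below are illustrative stand-ins, not nova's actual code; the idea is to emit a DeprecationWarning the moment a legacy alias is resolved, so operators see it in their logs for a full cycle before removal:

```python
import warnings

# Illustrative alias table in the spirit of nova's TRANSPORT_ALIASES
# (the real table lives in nova/rpc.py); keys and values are examples.
TRANSPORT_ALIASES = {
    "nova.rpc.impl_kombu": "rabbit",
    "nova.rpc.impl_qpid": "qpid",
}

def resolve_transport(name):
    """Map a legacy transport name to its modern value, warning if used."""
    if name in TRANSPORT_ALIASES:
        new = TRANSPORT_ALIASES[name]
        warnings.warn(
            "transport %r is deprecated; use %r in nova.conf" % (name, new),
            DeprecationWarning,
        )
        return new
    return name
```

Non-aliased names pass through untouched, so existing correct configurations stay silent.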
[Yahoo-eng-team] [Bug 1575998] Re: The actual value of 'request_spec' should be reported when a MaxRetriesExceeded exception is raised
Reviewed: https://review.openstack.org/310639 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=1a80c8899d541fe451afb030d5011cf8c7543a3c Submitter: Jenkins Branch: master commit 1a80c8899d541fe451afb030d5011cf8c7543a3c Author: Wenzhi Yu Date: Thu Apr 28 10:32:44 2016 +0800 Report actual request_spec when MaxRetriesExceeded raised If a MaxRetriesExceeded exception is raised by scheduler_utils.populate_retry, request_spec will be empty in the exception handler[1], and the _set_vm_state_and_notify method will just put an empty dict as request_spec into the payload of the notification[2]. It would make more sense if we reported the actual value of request_spec in the notification. [1]https://github.com/openstack/nova/blob/13.0.0.0rc3/nova/conductor/manager.py#L382 [2]https://github.com/openstack/nova/blob/13.0.0.0rc3/nova/scheduler/utils.py#L109 Simply moving the initialization of request_spec up one line before the call to populate_retry should fix the issue. Change-Id: I7c51f635d52f368c8df549f62024cbdf64a032b3 Closes-Bug: #1575998 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1575998 Title: The actual value of 'request_spec' should be reported when a MaxRetriesExceeded exception is raised Status in OpenStack Compute (nova): Fix Released Bug description: If a MaxRetriesExceeded exception is raised by scheduler_utils.populate_retry, request_spec will be empty in the exception handler[1], and the _set_vm_state_and_notify method will just put an empty dict as request_spec into the payload of the notification[2]. It would make more sense if we reported the actual value of request_spec in the notification. 
[1]https://github.com/openstack/nova/blob/13.0.0.0rc3/nova/conductor/manager.py#L382 [2]https://github.com/openstack/nova/blob/13.0.0.0rc3/nova/scheduler/utils.py#L109 Simply moving the initialization of request_spec up one line before the call to populate_retry should fix the issue. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1575998/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
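The ordering bug described above can be sketched in a few lines. The function and variable names below are simplified stand-ins for nova's conductor code, not the real implementation: because `request_spec` was only assigned *after* the call that raises, the exception handler only ever saw the initial empty dict, and moving the assignment up one line fixes it:

```python
class MaxRetriesExceeded(Exception):
    pass

def populate_retry(filter_properties):
    # Stand-in for scheduler_utils.populate_retry hitting the retry cap.
    raise MaxRetriesExceeded("Exceeded max scheduling attempts")

def build_request_spec(image, instances):
    return {"image": image, "num_instances": len(instances)}

def schedule_buggy(image, instances, filter_properties):
    request_spec = {}
    try:
        populate_retry(filter_properties)                    # raises first...
        request_spec = build_request_spec(image, instances)  # ...never runs
    except MaxRetriesExceeded:
        return request_spec  # notification payload is {} -- the bug

def schedule_fixed(image, instances, filter_properties):
    request_spec = build_request_spec(image, instances)  # moved up one line
    try:
        populate_retry(filter_properties)
    except MaxRetriesExceeded:
        return request_spec  # payload now carries the real spec
```

Here the return value stands in for what `_set_vm_state_and_notify` would put into the notification payload.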
[Yahoo-eng-team] [Bug 1624757] Re: i18n: string concatenations in JavaScript code
Reviewed: https://review.openstack.org/371987 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=64362120733682859e734b2f869b267226100760 Submitter: Jenkins Branch: master commit 64362120733682859e734b2f869b267226100760 Author: Akihiro Motoki Date: Sat Sep 17 21:41:16 2016 + i18n: Avoid string concatenations to make translation life happier Change-Id: Iea0cef814f212662dc4403f62a2c6bea02ab1390 Closes-Bug: #1624757 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1624757 Title: i18n: string concatenations in JavaScript code Status in OpenStack Dashboard (Horizon): Fix Released Bug description: String concatenation makes translation difficult in some cases. There are some string concatenations in JS code.

openstack_dashboard/contrib/developer/static/dashboard/developer/resource-browser/resource-browser-item.controller.js
    toastService.add('error', gettext("resource load failed: " + reason));

openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/keypair/create-keypair.controller.js
    var errorMessage = gettext('Unable to generate') + ' "' + ctrl.keypair + '". ' + gettext('Please try again.');

openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/keypair/keypair.controller.js
    toastService.add('success', gettext('Created keypair: ' + newKeypair.name));

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1624757/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
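Why concatenation defeats translation: gettext looks the *whole* string up in the message catalog, so a value glued on at runtime (or a sentence split into fragments) can never match a catalog entry. The same logic applies to the JS snippets above; this Python sketch uses a toy in-memory catalog (the German entry is illustrative) rather than real gettext machinery:

```python
# Toy message catalog standing in for a compiled .mo file.
CATALOG = {"Created keypair: %s": "Schluesselpaar erstellt: %s"}

def gettext(msgid):
    # Like real gettext: return the translation if the *exact* msgid is
    # in the catalog, else fall back to the msgid itself.
    return CATALOG.get(msgid, msgid)

name = "mykey"

# Broken: the msgid includes the runtime value, so the lookup always
# misses and the user sees untranslated English.
bad = gettext("Created keypair: " + name)

# Good: translate a fixed msgid with a placeholder, then interpolate.
good = gettext("Created keypair: %s") % name
```

The fix in the commit above applies the second pattern in the JavaScript code (with `interpolate()` playing the role of `%`).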
[Yahoo-eng-team] [Bug 1626302] Re: Boot source description in the Launch Instance dialog is missing the Volume Snapshot option
Reviewed: https://review.openstack.org/375904 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=49c942d9cb9cd2db28b42b8e868029159d1f8879 Submitter: Jenkins Branch: master commit 49c942d9cb9cd2db28b42b8e868029159d1f8879 Author: Akihiro Motoki Date: Sun Sep 25 01:12:31 2016 + Add volume snapshot to boot source description Also sorts boot source options in the order of the dropdown menu of "Select Boot Source". Change-Id: Icc9796123afe84af9dc1e8fc27cb56dd83a7a5d3 Closes-Bug: #1626302 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1626302 Title: Boot source description in the Launch Instance dialog is missing the Volume Snapshot option Status in OpenStack Dashboard (Horizon): Fix Released Bug description: openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/source/source.html The following description text, which appears on the Source tab in the Launch Instance dialog, is missing the Volume Snapshot option that is actually available in the dropdown list: "Instance source is the template used to create an instance. You can use a snapshot of an existing instance, an image, or a volume (if enabled). You can also choose to use persistent storage by creating a new volume." To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1626302/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1403020] Re: Kwarg 'filter_class_names' is never passed to HostManager#get_filtered_hosts
Cinder's FilterScheduler has a few places that call get_filtered_hosts(), but none of these invocations passes filter_class_names. However, this doesn't necessarily mean the current host manager or filter scheduler needs fixing: deployers may have an out-of-tree scheduler driver that does pass in customized filter_class_names and can make use of this flexibility. Therefore, I don't think we need to fix this 'bug' in Cinder. ** Changed in: cinder Status: New => Won't Fix ** Changed in: cinder Importance: Undecided => Wishlist -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1403020 Title: Kwarg 'filter_class_names' is never passed to HostManager#get_filtered_hosts Status in Cinder: Won't Fix Status in Manila: New Status in OpenStack Compute (nova): Fix Released Bug description: The parameter filter_class_names of the function get_filtered_hosts is never assigned a value, so we always use the filters from CONF.scheduler_default_filters. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1403020/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
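The pattern the triage comment defends can be sketched as follows. The names are illustrative, not Cinder's actual API: a kwarg that no in-tree caller passes still serves as an extension point, because the default filter list only applies when the caller stays silent:

```python
# Stand-in for the filters configured via CONF.scheduler_default_filters.
DEFAULT_FILTERS = ["AvailabilityZoneFilter", "CapacityFilter"]

def get_filtered_hosts(hosts, spec, filter_class_names=None):
    """Apply scheduler filters; fall back to the configured defaults."""
    filters = filter_class_names if filter_class_names is not None else DEFAULT_FILTERS
    # A real implementation would instantiate each filter class and run it
    # over the hosts; here we just report which filter set was chosen.
    return hosts, filters

# In-tree callers omit the kwarg, so the defaults apply:
hosts, used = get_filtered_hosts(["h1", "h2"], {})

# An out-of-tree driver can still override them:
hosts, used = get_filtered_hosts(["h1"], {}, filter_class_names=["MyFilter"])
```

So the unused kwarg is deliberate API surface rather than dead code, which is why the bug was closed as Won't Fix.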
[Yahoo-eng-team] [Bug 1629227] [NEW] Floating-ip-creation
Public bug reported: On the network node a floating IP is created by assigning a range of floating IP addresses to the external network pool on routers, where gateway addresses and DHCP are disabled. Here neutronclient/v2_0/client.py returns an empty list to Ceilometer, which is wrong; it should return a floating IP status or something meaningful. ** Affects: neutron Importance: Undecided Status: New ** Tags: ceilometer floating ip neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1629227 Title: Floating-ip-creation Status in neutron: New Bug description: On the network node a floating IP is created by assigning a range of floating IP addresses to the external network pool on routers, where gateway addresses and DHCP are disabled. Here neutronclient/v2_0/client.py returns an empty list to Ceilometer, which is wrong; it should return a floating IP status or something meaningful. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1629227/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622684] Re: Keycode error using novnc and Horizon console
** Changed in: nova Status: In Progress => Invalid ** Changed in: nova Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1622684 Title: Keycode error using novnc and Horizon console Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Compute (nova): Invalid Bug description: When using Newton or Mitaka versions of OpenStack Horizon, I am unable to talk to the vm in the Horizon console window. I am using noVNC and I see the following in the console whenever pressing any key on the keyboard:

atkbd serio0: Use 'setkeycodes 00 ' to make it known.
[ 41.750245] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0).
[ 41.750591] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
[ 41.815590] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0).
[ 41.816087] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
[ 41.945017] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0).
[ 41.945848] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
[ 42.393227] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0).

This appears to be related to recent code changes in noVNC. If I revert noVNC to the sha 4e0c36dda708628836dc6f5d68fc40d05c7716d9, everything works. This sha's commit date is August 26, 2016. Phil To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1622684/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible for python 3
Reviewed: https://review.openstack.org/378203 Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=52473ea39277f6b3d8e7d77564189da6b1aec7c3 Submitter: Jenkins Branch: master commit 52473ea39277f6b3d8e7d77564189da6b1aec7c3 Author: Hanxi Liu Date: Wed Sep 28 10:58:30 2016 +0800 Use six.moves.urllib.parse instead of urllib six's urllib parse module covers py3's urllib.parse and py2's urllib. Replace urllib with six.moves.urllib.parse to keep compatibility. Change-Id: Ie67987e4ffb981c2ee70360f7fa9b3fe873c2a96 Closes-bug: 1280105 ** Changed in: swift Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1280105 Title: urllib/urllib2 is incompatible for python 3 Status in Ceilometer: Fix Released Status in Cinder: Fix Released Status in Fuel for OpenStack: Fix Released Status in Glance: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in Magnum: Fix Released Status in Manila: Fix Committed Status in neutron: Fix Released Status in python-troveclient: Fix Released Status in refstack: Fix Released Status in Sahara: Fix Released Status in OpenStack Object Storage (swift): Fix Released Status in tacker: In Progress Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): In Progress Status in Zuul: In Progress Bug description: urllib/urllib2 is incompatible for python 3 To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
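The `six.moves.urllib.parse` shim papers over the fact that py2 kept `quote`, `urlencode`, etc. in `urllib` while py3 moved them to `urllib.parse`. The same effect without six is a guarded import, shown here as a sketch:

```python
# Compatibility shim: try the Python 3 location first, fall back to the
# Python 2 one. six.moves.urllib.parse does this selection internally.
try:
    from urllib.parse import quote, urlencode  # Python 3
except ImportError:
    from urllib import quote, urlencode        # Python 2

# Either way, callers get the same names with the same behavior.
assert quote("a b") == "a%20b"
assert urlencode({"q": "x y"}) == "q=x+y"
```

Using six keeps a single import line per module instead of repeating this try/except everywhere, which is why the commit standardizes on it.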
[Yahoo-eng-team] [Bug 1494358] Re: topology view does not support multi-networked instances where 1 network is outside of tenant
Reviewed: https://review.openstack.org/48 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=42ae3dbf058c7a02dbb9d61a8e924ec1e06af3c3 Submitter: Jenkins Branch: master commit 42ae3dbf058c7a02dbb9d61a8e924ec1e06af3c3 Author: eric Date: Thu Sep 10 09:26:13 2015 -0600 Topology filter out non tenant ports This change filters out ports presented to the topology view which are connected to networks that are not visible to the tenant. Such ports can be present when an admin creates an instance and ties it into two networks. Change-Id: Ib0e7ea38b42b580a65455c344f100e5aad67954e Closes-bug: #1494358 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1494358 Title: topology view does not support multi-networked instances where 1 network is outside of tenant Status in OpenStack Dashboard (Horizon): Fix Released Bug description: We have instances which have multiple ports: one port goes into the tenant network, and the other port goes into a network which is not visible to the tenant. (This other network is used as a control plane for lbaas management.) These instances only *sometimes* show up in the topology view. I believe this is due to the order of the ports on the instance. If the first port listed is attached to a tenant network, the instance shows up in topology; if the first port is attached to that outside network, then the instance will not show up. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1494358/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
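The idea of the fix can be sketched in a few lines. The function and field names below are illustrative, not Horizon's actual code: before building the topology graph, drop any port whose network is not in the set of networks the tenant can see, so an instance's position no longer depends on port ordering:

```python
def filter_tenant_ports(ports, visible_networks):
    """Keep only ports attached to networks visible to the tenant."""
    visible_ids = {net["id"] for net in visible_networks}
    return [p for p in ports if p["network_id"] in visible_ids]

ports = [
    {"id": "p1", "network_id": "tenant-net"},
    {"id": "p2", "network_id": "lbaas-mgmt-net"},  # admin-only network
]
nets = [{"id": "tenant-net"}]
# filter_tenant_ports(ports, nets) keeps only p1
```

With the admin-only port filtered out, the remaining ports all reference networks the topology view knows about, regardless of which port happened to be listed first.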
[Yahoo-eng-team] [Bug 1629198] [NEW] Router name is not displayed in the delete confirmation message window
Public bug reported: Reproduced in master. Steps to reproduce: 1. Go to Project/Network/Routers 2. Create a router with some name 3. Click on the created router; it takes you to the router details page. 4. Try to delete the router using the "Delete Router" option found in the top right-hand drop-down. Expected Result: Router name should be displayed in the delete confirmation message window. Actual Result: Router name is not displayed. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1629198 Title: Router name is not displayed in the delete confirmation message window Status in OpenStack Dashboard (Horizon): New Bug description: Reproduced in master. Steps to reproduce: 1. Go to Project/Network/Routers 2. Create a router with some name 3. Click on the created router; it takes you to the router details page. 4. Try to delete the router using the "Delete Router" option found in the top right-hand drop-down. Expected Result: Router name should be displayed in the delete confirmation message window. Actual Result: Router name is not displayed. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1629198/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp