[Yahoo-eng-team] [Bug 1697588] Re: update tempest plugin after removal of cred manager aliases
Reviewed:  https://review.openstack.org/506983
Committed: https://git.openstack.org/cgit/openstack/ironic/commit/?id=5ef15a4dc376d19743f8ded07912faf0b6907923
Submitter: Jenkins
Branch:    master

commit 5ef15a4dc376d19743f8ded07912faf0b6907923
Author: Luong Anh Tuan
Date:   Mon Sep 25 10:58:17 2017 +0700

    Update after recent removal of cred manager aliases

    In tempest, alias 'admin_manager' has been moved to 'os_admin',
    'alt_manager' to 'os_alt', and 'manager' to 'os_primary' in version
    Pike, and they will be removed in version Queens [1].

    [1] I5f7164f7a7ec5d4380ca22885000caa0183a0bf7

    Change-Id: Ifec0e661031647555dbc03ad1000c50c590afa8c
    Closes-bug: 1697588

** Changed in: ironic
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697588

Title:
  update tempest plugin after removal of cred manager aliases

Status in Ironic:
  Fix Released
Status in Ironic Inspector:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in tap-as-a-service:
  Fix Released

Bug description:
  Update after tempest change I5f7164f7a7ec5d4380ca22885000caa0183a0bf7

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1697588/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
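The rename described in the commit above (admin_manager -> os_admin, alt_manager -> os_alt, manager -> os_primary) can be sketched as a small compatibility shim in a tempest plugin. This is a hypothetical sketch, not the actual ironic/tempest patch; the helper name and fallback logic are assumptions.

```python
# Mapping from the deprecated tempest alias to its Pike-era replacement.
ALIAS_RENAMES = {
    'admin_manager': 'os_admin',
    'alt_manager': 'os_alt',
    'manager': 'os_primary',
}


def resolve_cred_manager(test_case, alias):
    """Return the credential manager, preferring the new attribute name.

    Falls back to the deprecated alias only when the new attribute is
    absent (i.e. running against a pre-Pike tempest).
    """
    new_name = ALIAS_RENAMES.get(alias, alias)
    if hasattr(test_case, new_name):
        return getattr(test_case, new_name)
    return getattr(test_case, alias)
```

A shim like this lets a plugin keep working across the Pike/Queens boundary, rather than pinning on either attribute name.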
[Yahoo-eng-team] [Bug 1721686] [NEW] [Performance] Make image name column configurable
Public bug reported:

This is more of an RFE than a functional bug, aimed at improving the performance of the instances panel. There are several reasons I think we can either remove the column or at least make it configurable.

#1 AWS is not doing this :)

#2 It's useless in most cases: I'm pretty sure most OpenStack deployments (65%+) use Ceph, and most of those use volume-based instances instead of image-based ones, so with volume-based instances you can't see any image info on the list anyway.

#3 The API call to glance is very heavy and not cached:

(10:31:29) flwang1: https://api.mycloud.com:9292/v2/images?limit=1000&sort_key=created_at&sort_dir=desc

A cloud provider may have hundreds of public images, and each time Horizon will try to list all of those public images plus the private images of the current tenant, even though all the instances are volume-based.

And I got some feedback from Rob and David:

(02:04:26) robcresswell: flwang1: I disagree with removing the image column, personally.
(02:04:45) robcresswell: Just axing all the functionality of the panel is not a good way to make it fast :p
(04:00:46) david-lyle: robcresswell, you still have the image_id if you want to leave the column
(04:01:04) david-lyle: what do you use the image name in that table for?
(04:06:23) robcresswell: david-lyle: I would imagine there's some value in knowing what image its using, no?
(04:06:36) david-lyle: again, there is the ID
(04:06:52) david-lyle: but it's a very expensive call to map that ID to a name
(04:06:52) robcresswell: Hey if you can remember images by UUID, by all means, remove it...
(04:07:09) robcresswell: Yeah, true
(04:07:27) david-lyle: same for the project name, but I think that's far more useful
(04:07:45) david-lyle: maybe make image name column configurable
(04:07:55) david-lyle: the ability to turn it off
(04:08:27) david-lyle: so if you care you can leave it turned on, but if you want to optimize, enable a setting to skip it
(04:08:52) robcresswell: I prefer that
(04:09:03) david-lyle: leave the column actually (because that would get messy) but change the value between ID and name
(04:09:09) robcresswell: yes
(04:09:10) david-lyle: based on the setting
(04:09:14) robcresswell: ID is free anyway
(04:09:25) david-lyle: flwang1, ^^^ proposed compromise
(04:09:30) david-lyle: when you're on

** Affects: horizon
   Importance: Undecided
     Assignee: Feilong Wang (flwang)
       Status: New

** Changed in: horizon
     Assignee: (unassigned) => Feilong Wang (flwang)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1721686

Title:
  [Performance] Make image name column configurable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is more of an RFE than a functional bug, aimed at improving the performance of the instances panel. There are several reasons I think we can either remove the column or at least make it configurable.

  #1 AWS is not doing this :)

  #2 It's useless in most cases: I'm pretty sure most OpenStack deployments (65%+) use Ceph, and most of those use volume-based instances instead of image-based ones, so with volume-based instances you can't see any image info on the list anyway.

  #3 The API call to glance is very heavy and not cached:

  (10:31:29) flwang1: https://api.mycloud.com:9292/v2/images?limit=1000&sort_key=created_at&sort_dir=desc

  A cloud provider may have hundreds of public images, and each time Horizon will try to list all of those public images plus the private images of the current tenant, even though all the instances are volume-based.

  And I got some feedback from Rob and David:

  (02:04:26) robcresswell: flwang1: I disagree with removing the image column, personally.
  (02:04:45) robcresswell: Just axing all the functionality of the panel is not a good way to make it fast :p
  (04:00:46) david-lyle: robcresswell, you still have the image_id if you want to leave the column
  (04:01:04) david-lyle: what do you use the image name in that table for?
  (04:06:23) robcresswell: david-lyle: I would imagine there's some value in knowing what image its using, no?
  (04:06:36) david-lyle: again, there is the ID
  (04:06:52) david-lyle: but it's a very expensive call to map that ID to a name
  (04:06:52) robcresswell: Hey if you can remember images by UUID, by all means, remove it...
  (04:07:09) robcresswell: Yeah, true
  (04:07:27) david-lyle: same for the project name, but I think that's far more useful
  (04:07:45) david-lyle: maybe make image name column configurable
  (04:07:55) david-lyle: the ability to turn it off
  (04:08:27) david-lyle: so if you care you can leave it turned on, but if you want to optimize, enable a setting to skip it
  (04:08:52) robcresswell: I prefer that
  (04:09:03) david-lyle: leave the column actually (because that would get messy)
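The compromise proposed in the IRC log above (keep the column, but let a setting decide whether it shows the free image ID or the expensive image name) could be sketched like this. The setting flag and helper are assumptions for illustration, not Horizon's actual code.

```python
def image_column_value(instance, image_names_by_id, show_name=False):
    """Return what the Instances table should display for the image column.

    image_names_by_id only needs to be populated when show_name is True,
    so the heavy glance image listing can be skipped entirely otherwise.
    """
    image_id = instance.get('image_id')
    if image_id is None:
        # Volume-backed instance: no image info is available either way.
        return '-'
    if show_name:
        # Fall back to the ID if the name lookup missed.
        return image_names_by_id.get(image_id, image_id)
    return image_id
```

With `show_name=False` (the optimized default in this sketch), the table renders the image ID that is already present on the server record and never calls glance.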
[Yahoo-eng-team] [Bug 1721670] [NEW] Build notification in conductor fails to send due to InstanceNotFound
Public bug reported:

I found this issue while working on improving the CellDatabases fixture by defaulting untargeted DB access to 'cell0' instead of 'cell1'.

While building an instance in conductor, nova sends a notification about the changed state using notifications.send_update_with_states. One of the arguments to notifications.send_update_with_states is an Instance. If an attribute needs to be lazy-loaded (example: tags, as part of the InstanceUpdatePayload) and the load method contains a _check_instance_exists_in_project call in the DB layer, InstanceNotFound is raised at that point because the context wasn't targeted to the instance's cell.

We need to target the context in case the notification call needs to load something from the instance's cell database.

** Affects: nova
   Importance: Undecided
     Assignee: melanie witt (melwitt)
       Status: New

** Tags: cells

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1721670

Title:
  Build notification in conductor fails to send due to InstanceNotFound

Status in OpenStack Compute (nova):
  New

Bug description:
  I found this issue while working on improving the CellDatabases fixture by defaulting untargeted DB access to 'cell0' instead of 'cell1'.

  While building an instance in conductor, nova sends a notification about the changed state using notifications.send_update_with_states. One of the arguments to notifications.send_update_with_states is an Instance. If an attribute needs to be lazy-loaded (example: tags, as part of the InstanceUpdatePayload) and the load method contains a _check_instance_exists_in_project call in the DB layer, InstanceNotFound is raised at that point because the context wasn't targeted to the instance's cell.

  We need to target the context in case the notification call needs to load something from the instance's cell database.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1721670/+subscriptions
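The "target the context" fix described above can be illustrated with a generic context manager that points a request context at a cell for the duration of a block and then restores it. This is an illustrative sketch, not nova's actual target_cell implementation; the Ctx class and attribute name are stand-ins.

```python
from contextlib import contextmanager


@contextmanager
def targeted(context, cell):
    """Point context at cell for the duration of the block, then restore."""
    previous = getattr(context, 'cell', None)
    context.cell = cell
    try:
        yield context
    finally:
        context.cell = previous


class Ctx:
    """Stand-in for a request context; untargeted by default."""
    cell = None


ctx = Ctx()
with targeted(ctx, 'cell1'):
    # send_update_with_states(ctx, instance, ...) would run here, so any
    # lazy-loaded field (e.g. tags) reads from the instance's cell DB.
    assert ctx.cell == 'cell1'
assert ctx.cell is None  # untargeted again after the notification
```

The point of the bug is that without such targeting, the lazy-load inside the notification path hits the untargeted (wrong) database and raises InstanceNotFound.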
[Yahoo-eng-team] [Bug 1721660] [NEW] Bugs in the openstack vendordata docs
Public bug reported:

There are two issues with the docs in this page:
http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

1. http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

   The link at the top to http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service no longer works. That should point here now: https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service

2. The "Vendor Data" link at the bottom of the page goes back to the same page: http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html#vendor-data

   I assume it was intended to go here: http://cloudinit.readthedocs.io/en/latest/topics/vendordata.html

** Affects: cloud-init
   Importance: Undecided
       Status: New

** Tags: docs

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1721660

Title:
  Bugs in the openstack vendordata docs

Status in cloud-init:
  New

Bug description:
  There are two issues with the docs in this page:
  http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

  1. http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

     The link at the top to http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service no longer works. That should point here now: https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service

  2. The "Vendor Data" link at the bottom of the page goes back to the same page: http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html#vendor-data

     I assume it was intended to go here: http://cloudinit.readthedocs.io/en/latest/topics/vendordata.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1721660/+subscriptions
[Yahoo-eng-team] [Bug 1721659] [NEW] Bugs in the openstack vendordata docs
Public bug reported:

There are two issues with the docs in this page:
http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

1. http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

   The link at the top to http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service no longer works. That should point here now: https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service

2. The "Vendor Data" link at the bottom of the page goes back to the same page: http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html#vendor-data

   I assume it was intended to go here: http://cloudinit.readthedocs.io/en/latest/topics/vendordata.html

** Affects: cloud-init
   Importance: Undecided
       Status: New

** Tags: docs

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1721659

Title:
  Bugs in the openstack vendordata docs

Status in cloud-init:
  New

Bug description:
  There are two issues with the docs in this page:
  http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

  1. http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

     The link at the top to http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service no longer works. That should point here now: https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service

  2. The "Vendor Data" link at the bottom of the page goes back to the same page: http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html#vendor-data

     I assume it was intended to go here: http://cloudinit.readthedocs.io/en/latest/topics/vendordata.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1721659/+subscriptions
[Yahoo-eng-team] [Bug 1721652] [NEW] Evacuate cleanup fails at _delete_allocation_for_moved_instance
Public bug reported:

Description
===========

After an evacuation, when nova-compute is restarted on the source host, the clean up of the old instance on the source host fails. The traceback in nova-compute.log ends with:

2017-10-04 05:32:18.725 5575 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 679, in _destroy_evacuated_instances
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service     instance, migration.source_node)
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 1216, in delete_allocation_for_evacuated_instance
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service     instance, node, 'evacuated', node_type)
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 1227, in _delete_allocation_for_moved_instance
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service     cn_uuid = self.compute_nodes[node].uuid
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service KeyError: u''
2017-10-04 05:32:18.725 5575 ERROR oslo_service.service

Steps to reproduce
==================

Deploy instance on Host A.
Shut down Host A.
Evacuate instance to Host B.
Turn back on Host A.
Wait for cleanup of old instance allocation to occur.

Expected result
===============

Clean up of old instance from Host A is successful.

Actual result
=============

Old instance clean up appears to work but there's a traceback in the log and the allocation is not cleaned up.

Environment
===========

(pike) nova-compute/now 10:16.0.0-201710030907

Additional info:

The problem seems to come from this change:
https://github.com/openstack/nova/commit/0de806684f5d670dd5f961f7adf212961da3ed87
at:

    rt = self._get_resource_tracker()
    rt.delete_allocation_for_evacuated_instance

That is called very early in the init_host flow to clean up the allocations. The problem is that at this point in the startup the resource tracker's self.compute_node is still None. That makes delete_allocation_for_evacuated_instance blow up with a key error at:

    cn_uuid = self.compute_nodes[node].uuid

The resource tracker's self.compute_node is actually initialized later on in the startup process via update_available_resources() -> _update_available_resources() -> _init_compute_node(). It isn't initialized when the tracker is first created, which appears to be the assumption made by the referenced commit.

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1721652

Title:
  Evacuate cleanup fails at _delete_allocation_for_moved_instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========

  After an evacuation, when nova-compute is restarted on the source host, the clean up of the old instance on the source host fails. The traceback in nova-compute.log ends with:

  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 679, in _destroy_evacuated_instances
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service     instance, migration.source_node)
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 1216, in delete_allocation_for_evacuated_instance
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service     instance, node, 'evacuated', node_type)
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 1227, in _delete_allocation_for_moved_instance
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service     cn_uuid = self.compute_nodes[node].uuid
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service KeyError: u''
  2017-10-04 05:32:18.725 5575 ERROR oslo_service.service

  Steps to reproduce
  ==================

  Deploy instance on Host A.
  Shut down Host A.
  Evacuate instance to Host B.
  Turn back on Host A.
  Wait for cleanup of old instance allocation to occur.

  Expected result
  ===============

  Clean up of old instance from Host A is successful.

  Actual result
  =============

  Old instance clean up appears to work but there's a traceback in the log and the allocation is not cleaned up.

  Environment
  ===========

  (pike) nova-compute/now 10:16.0.0-201710030907

  Additional info:

  The problem seems to come from this change:
  https://github.com/openstack/nova/commit/0de806684f5d670dd5f961f7adf212961da3ed87
  at:

      rt = self._get_resource_tracker()
      rt.delete_allocation_for_evacuated_instance

  That is called very early in the init_host flow to clean up the allocations. The problem is that at this point in the startup the resource tracker's self.compute_node is still None.
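The failure mode above reduces to indexing a dict that has not been populated yet. A minimal sketch (assumed names, not nova's actual code) of the uninitialized-tracker problem and a defensive guard:

```python
class ResourceTracker:
    """Minimal stand-in for nova's resource tracker."""

    def __init__(self):
        # Empty until _init_compute_node() runs later in startup via
        # update_available_resources(); indexing it before then is the
        # KeyError seen in the traceback above.
        self.compute_nodes = {}

    def delete_allocation_for_moved_instance(self, node):
        cn = self.compute_nodes.get(node)
        if cn is None:
            # Defensive guard only: the real fix must make sure cleanup
            # runs after the node map is initialized (or look the node
            # up another way), not silently skip the allocation delete.
            return None
        return cn.uuid
```

Using `.get()` instead of `[]` turns the startup-ordering bug into a detectable condition rather than an unhandled KeyError, but the ordering itself is the actual defect.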
[Yahoo-eng-team] [Bug 1662626] Re: live-migrate left in migrating as domain not found
** Also affects: cloud-archive
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662626

Title:
  live-migrate left in migrating as domain not found

Status in Ubuntu Cloud Archive:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in OpenStack Compute (nova) ocata series:
  Fix Committed

Bug description:
  A live-migration stress test was working fine when suddenly a VM stopped migrating. It failed with this error:

  ERROR nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Error from libvirt during undefine. Code=42 Error=Domain not found: no domain with matching uuid '62034d78-3144-4efd-9c2c-8a792aed3d6b' (instance-0431)

  The full stack trace:

  2017-02-05 02:33:41.787 19770 INFO nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Migration running for 240 secs, memory 9% remaining; (bytes processed=15198240264, remaining=1680875520, total=17314955264)
  2017-02-05 02:33:45.795 19770 INFO nova.compute.manager [req-abff9c69-5f82-4ed6-af8a-fd1dc81a72a6 - - - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] VM Paused (Lifecycle Event)
  2017-02-05 02:33:45.870 19770 INFO nova.compute.manager [req-abff9c69-5f82-4ed6-af8a-fd1dc81a72a6 - - - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] During sync_power_state the instance has a pending task (migrating). Skip.
  2017-02-05 02:33:45.883 19770 INFO nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Migration operation has completed
  2017-02-05 02:33:45.884 19770 INFO nova.compute.manager [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] _post_live_migration() is started..
  2017-02-05 02:33:46.156 19770 INFO os_vif [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] Successfully unplugged vif VIFBridge(active=True,address=fa:16:3e:a2:90:55,bridge_name='brq476ab6ba-b3',has_traffic_filtering=True,id=98d476b3-0ead-4adb-ad54-1dff63edcd65,network=Network(476ab6ba-b32e-409e-9711-9412e8475ea0),plugin='linux_bridge',port_profile=,preserve_on_delete=True,vif_name='tap98d476b3-0e')
  2017-02-05 02:33:46.189 19770 INFO nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Deleting instance files /var/lib/nova/instances/62034d78-3144-4efd-9c2c-8a792aed3d6b_del
  2017-02-05 02:33:46.195 19770 INFO nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Deletion of /var/lib/nova/instances/62034d78-3144-4efd-9c2c-8a792aed3d6b_del complete
  2017-02-05 02:33:46.334 19770 ERROR nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Error from libvirt during undefine. Code=42 Error=Domain not found: no domain with matching uuid '62034d78-3144-4efd-9c2c-8a792aed3d6b' (instance-0431)
  2017-02-05 02:33:46.363 19770 WARNING nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Error monitoring migration: Domain not found: no domain with matching uuid '62034d78-3144-4efd-9c2c-8a792aed3d6b' (instance-0431)
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b] Traceback (most recent call last):
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b]   File "/openstack/venvs/nova-14.0.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6345, in _live_migration
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b]     finish_event, disk_paths)
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 62034d78-3144-4efd-9c2c-8a792aed3d6b]   File "/openstack/venvs/nova-14.0.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
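The cleanup race above (the domain is already gone when undefine runs) is the kind of failure usually handled by tolerating libvirt's "no domain" error code, which is 42 (VIR_ERR_NO_DOMAIN). The sketch below is illustrative, not nova's actual fix; the exception class is a stand-in for libvirt.libvirtError so the example is self-contained.

```python
VIR_ERR_NO_DOMAIN = 42  # libvirt's "Domain not found" error code


class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError, which carries an error code."""

    def __init__(self, code):
        super().__init__('libvirt error %d' % code)
        self._code = code

    def get_error_code(self):
        return self._code


def undefine_domain(domain):
    """Undefine the domain; treat an already-gone domain as success.

    Returns True if the domain was undefined here, False if it had
    already vanished (e.g. undefined by migration completion first).
    """
    try:
        domain.undefine()
    except FakeLibvirtError as exc:
        if exc.get_error_code() == VIR_ERR_NO_DOMAIN:
            return False
        raise
    return True
```

Swallowing only the specific VIR_ERR_NO_DOMAIN code keeps the cleanup idempotent without hiding unrelated libvirt failures.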
[Yahoo-eng-team] [Bug 1721644] [NEW] VolumesExtendAttachedTest.test_extend_attached_volume fails cells v1 job 100% A+
Public bug reported:

This just merged: https://review.openstack.org/#/c/480778/

And the cells v1 API doesn't proxy the os-server-external-events API, so it's going to permafail that job. Need to blacklist the test.

** Affects: nova
   Importance: Critical
     Assignee: Matt Riedemann (mriedem)
       Status: In Progress

** Changed in: nova
       Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Critical

** Changed in: nova
     Assignee: (unassigned) => Matt Riedemann (mriedem)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1721644

Title:
  VolumesExtendAttachedTest.test_extend_attached_volume fails cells v1 job 100% A+

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This just merged: https://review.openstack.org/#/c/480778/

  And the cells v1 API doesn't proxy the os-server-external-events API, so it's going to permafail that job. Need to blacklist the test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1721644/+subscriptions
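A blacklist entry for the cells v1 job might look like the fragment below. The exact blacklist file path is not named in the bug, so this is an assumed example; the regex anchors on the class and test names with a wildcard rather than guessing the tempest module path.

```
# Hypothetical tempest test blacklist entry for the cells v1 job:
# skip the test since cells v1 cannot proxy os-server-external-events.
.*VolumesExtendAttachedTest\.test_extend_attached_volume
```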
[Yahoo-eng-team] [Bug 1716448] Re: Enable GVRP for vlan interfaces with linuxbridge agent option
Aside from the specific use case, I worry that enabling this option globally may pose a security risk where two tenant networks get accidentally cross-connected.

** Changed in: neutron
       Status: New => Won't Fix

** Changed in: neutron
       Status: Won't Fix => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716448

Title:
  Enable GVRP for vlan interfaces with linuxbridge agent option

Status in neutron:
  Confirmed

Bug description:
  GARP VLAN Registration Protocol (GVRP) exchanges network VLAN information to allow switches to dynamically forward frames for one or more VLANs. By enabling gvrp on vlan interfaces created by the linuxbridge agent, operators will be able to dynamically create and destroy vlan-based tenant networks. No additional switch configuration or software-defined networking is required, and this brings the features of linuxbridge more in line with openvswitch-based clouds.

  This should be enabled via an option in the linuxbridge agent config; however, there are no serious consequences for having it wrongly enabled.

  The changes required in the agent are checking the option and, if true, appending 'gvrp on' to the 'ip link add' command that creates the vlan interface. For example, 'ip link add link bond0 name bond0.365 type vlan id 365 gvrp on' creates a sub-interface for vlan 365 on interface bond0 with gvrp enabled.

  Adding this capability greatly simplifies switch configuration and deployment of linuxbridge-based clouds with minimal impact.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716448/+subscriptions
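The agent change described in the bug report is small: conditionally append 'gvrp on' when building the 'ip link add' command for the vlan sub-interface. A sketch under assumed names (not neutron's actual agent code), using the example command from the report:

```python
def vlan_add_cmd(physical_interface, vlan_id, gvrp=False):
    """Build the 'ip link add' argv for a vlan sub-interface.

    When gvrp is True, the switch learns the VLAN via GVRP instead of
    requiring manual switch-port configuration.
    """
    cmd = ['ip', 'link', 'add', 'link', physical_interface,
           'name', '%s.%s' % (physical_interface, vlan_id),
           'type', 'vlan', 'id', str(vlan_id)]
    if gvrp:
        cmd += ['gvrp', 'on']
    return cmd
```

For the report's example, `vlan_add_cmd('bond0', 365, gvrp=True)` reproduces `ip link add link bond0 name bond0.365 type vlan id 365 gvrp on`.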
[Yahoo-eng-team] [Bug 1717597] Re: Bad arping call in DVR centralized floating IP code
Reviewed:  https://review.openstack.org/504252
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=92f1052d3c5d5bc7b89e49fdfff8f0ef24832a1c
Submitter: Jenkins
Branch:    master

commit 92f1052d3c5d5bc7b89e49fdfff8f0ef24832a1c
Author: Brian Haley
Date:   Thu Sep 14 15:34:03 2017 -0600

    DVR: Fix bad arping call in centralized floating IP code

    When the centralized floating IP code was added, the call to
    send_ip_addr_adv_notif() had an incorrect argument, leading to this
    failure in the l3-agent log:

      TypeError: range() integer end argument expected, got ConfigOpts.

    Closes-bug: #1717597
    Change-Id: Ib0a5162912caac0508e19996fb13e431af39cfc4

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1717597

Title:
  Bad arping call in DVR centralized floating IP code

Status in neutron:
  Fix Released

Bug description:
  When the centralized floating IP code was added, the call to send_ip_addr_adv_notif() had an incorrect argument, leading to this failure in the l3-agent log:

  TypeError: range() integer end argument expected, got ConfigOpts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1717597/+subscriptions
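The TypeError quoted above comes from passing the whole config object where an integer count was expected, so the `range()` inside the ARP-send loop blows up. A minimal Python reduction of that bug class (stand-in names, not neutron's actual code):

```python
class Conf:
    """Stand-in for neutron's oslo.config CONF object."""
    send_arp_for_ha = 3  # how many gratuitous ARPs to send


def arping_attempts(count):
    """One entry per arping invocation; range() needs an int."""
    return list(range(count))


conf = Conf()
# The buggy call passed the config object itself, i.e.
# arping_attempts(conf), which raises the TypeError seen in the log.
# The fixed call passes the integer option value instead:
assert arping_attempts(conf.send_arp_for_ha) == [0, 1, 2]
```

Unpacking the specific option value at the call site (rather than handing the config object down) is what the one-line fix in the commit amounts to.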
[Yahoo-eng-team] [Bug 1721514] Re: test_driver_spawn_fail_when_unshelving_instance fluctuates
Reviewed:  https://review.openstack.org/509759
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b5dca17f74b7a789e16b78178ae5874a30f3e0da
Submitter: Jenkins
Branch:    master

commit b5dca17f74b7a789e16b78178ae5874a30f3e0da
Author: Balazs Gibizer
Date:   Thu Oct 5 14:01:35 2017 +0200

    fix unstable shelve offload functional tests

    The functional tests that shelve offload instances and assert that
    the resource allocations of the instance are freed were unstable.
    These tests only waited for the instance state to become
    SHELVED_OFFLOADED before checking the allocations. However the
    compute manager sets the instance state to SHELVED_OFFLOADED before
    deleting the allocations [1]. Therefore these tests were racy. With
    this patch the tests will wait not only for the instance status to
    change but also for the instance host to be nulled, as that happens
    after the resources are freed.

    [1] https://github.com/openstack/nova/blob/e4f89ed5dd4259188d020749fa8fb1c77be2c03a/nova/compute/manager.py#L4502-L4521

    Change-Id: Ibb90571907cafcb649284e4ea30810a307f1737e
    Closes-Bug: #1721514

** Changed in: nova
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1721514

Title:
  test_driver_spawn_fail_when_unshelving_instance fluctuates

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  The functional test test_driver_spawn_fail_when_unshelving_instance fluctuates at the following place:

  Captured traceback:
  2017-10-04 17:29:33.464455 | ~~~
  2017-10-04 17:29:33.464483 | b'Traceback (most recent call last):'
  2017-10-04 17:29:33.464577 | b'  File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/tests/functional/test_servers.py", line 2664, in test_driver_spawn_fail_when_unshelving_instance'
  2017-10-04 17:29:33.464618 | b"    {'vcpus': 0, 'ram': 0, 'disk': 0}, usages)"
  2017-10-04 17:29:33.464695 | b'  File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/tests/functional/test_servers.py", line 1117, in assertFlavorMatchesAllocation'
  2017-10-04 17:29:33.464733 | b"    self.assertEqual(flavor['vcpus'], allocation['VCPU'])"
  2017-10-04 17:29:33.464816 | b'  File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py", line 411, in assertEqual'
  2017-10-04 17:29:33.464849 | b'    self.assertThat(observed, matcher, message)'
  2017-10-04 17:29:33.464931 | b'  File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py", line 498, in assertThat'
  2017-10-04 17:29:33.464953 | b'    raise mismatch_error'
  2017-10-04 17:29:33.464985 | b'testtools.matchers._impl.MismatchError: 0 != 1'
  2017-10-04 17:29:33.464998 | b''

  It is because the test waits for the instance state to be set to SHELVED_OFFLOADED and then asserts that the allocation of the instance is deleted in Placement. But the compute manager sets the instance state _before_ it deletes that allocation, so the test is racy.

  [1] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_driver_spawn_fail_when_unshelving_instance%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1721514/+subscriptions
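The stabilized wait described in the fix can be sketched as a poll that requires both conditions: the status is SHELVED_OFFLOADED *and* the host has been nulled, since the compute manager clears the host only after the allocations are freed. `get_server` is a stand-in for the functional test's API helper, and a real test would also sleep between polls.

```python
def wait_for_offloaded(get_server, attempts=50):
    """Poll until the server is fully shelve offloaded.

    Waiting on status alone is racy (the race in this bug); the host
    being nulled is the signal that the allocations were already freed.
    """
    for _ in range(attempts):
        server = get_server()
        if (server['status'] == 'SHELVED_OFFLOADED'
                and server['OS-EXT-SRV-ATTR:host'] is None):
            return server
    raise AssertionError('server never finished shelve offloading')
```

Only after this returns is it safe to assert that the instance's Placement allocations are gone.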
[Yahoo-eng-team] [Bug 1708252] Re: resource tracker updates its map of aggregates too often
Reviewed: https://review.openstack.org/489633 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=fc6caeeb5122ad4577ebd98f23309a68b8e9e738 Submitter: Jenkins Branch:master commit fc6caeeb5122ad4577ebd98f23309a68b8e9e738 Author: Chris Dent Date: Tue Aug 1 16:07:03 2017 +0100 Update RT aggregate map less frequently The _provider_aggregate_map was being updated every time _ensure_resource_provider was called. This is inefficient as aggregates are not updated that often. This change only updates the aggregate map for any single resource provider if it has not been updated in the last 300 seconds. The handling of the update has been moved out to its own method (instead of two calls in _ensure_resource_provider) so that if we want to move it to the periodic job or something like that, that will be easy. When a resource provider is new to the resource tracker, we always update the aggregate map for that resouce provider uuid. The initial pass of this used a single time for last update, instead of a time per resource provider uuid, but that would lead to some upredictability. The aggregate_refresh_map (which maps rp uuids to last update times) is public so that if we want to make it so the reset() on the compute manager can clear it, we have that option. Closes-Bug: #1708252 Change-Id: Ida7c79a3130ba1c159a37c984d382c789f46e111 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1708252 Title: resource tracker updates its map of aggregates too often Status in OpenStack Compute (nova): Fix Released Bug description: As of late in the Pike cycle, the resource tracker updates its map of aggregates associated with the resource provider it knows about everytime it calls `_ensure_resource_provider`. 
This method is called quite often, increasingly so as we do more stuff with resource providers from both the resource tracker and scheduler (both of which use the report client). This results in a lot of useless work that could create undue load on both client and server. There is a long standing TODO to have some kind of cache or timeout so that we update the aggregate map less often, as updates of those on the placement server side are relatively infrequent. We need to balance between doing the updates too often and there being a gap between when an aggregate change does happen and the map getting updated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1708252/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
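The refresh policy described in the commit above can be sketched as a small time-gated cache: a new provider uuid is always refreshed, a known one at most once per interval. This is an illustrative sketch; the class and method names are invented, not nova's actual code.

```python
import time

AGGREGATE_REFRESH_INTERVAL = 300  # seconds, matching the commit message above


class ReportClientSketch:
    """Hypothetical sketch: refresh a provider's aggregate map at most
    once per interval, always refreshing providers seen for the first time."""

    def __init__(self):
        # maps resource provider uuid -> time of its last refresh
        self.aggregate_refresh_time = {}

    def _should_refresh(self, rp_uuid, now=None):
        now = time.monotonic() if now is None else now
        last = self.aggregate_refresh_time.get(rp_uuid)
        # unseen providers are always refreshed; known ones only after the interval
        return last is None or (now - last) >= AGGREGATE_REFRESH_INTERVAL

    def refresh_aggregate_map(self, rp_uuid, fetch, now=None):
        now = time.monotonic() if now is None else now
        if not self._should_refresh(rp_uuid, now):
            return None
        aggregates = fetch(rp_uuid)  # placeholder for the placement API call
        self.aggregate_refresh_time[rp_uuid] = now
        return aggregates
```

Keeping the timestamp per uuid (rather than one global timestamp) is what avoids the "unpredictability" the commit mentions: each provider's refresh clock ticks independently.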
[Yahoo-eng-team] [Bug 1721084] Re: openvswitch firewall driver is dropping packets when router migrated from DVR to HA
Reviewed: https://review.openstack.org/509228 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=0456515a7a06ee96c2929c684a82737a1067ce72 Submitter: Jenkins Branch:master commit 0456515a7a06ee96c2929c684a82737a1067ce72 Author: Jakub Libosvar Date: Tue Oct 3 16:58:32 2017 + br_int: Make removal of DVR flows more strict As ingress traffic to instance ports when using DVR uses the same matching OpenFlow rule as the openvswitch firewall driver, it happens that setting a router's admin_state_up deletes firewall rules. This patch makes the deletion more strict because DVR and ovs-firewall flows differ in priority. Thus using the priority when removing DVR flows won't affect ovs-firewall flows. Closes-bug: #1721084 Change-Id: I4eb61b2824579a4f8ba219cd1b1dcf57d38ebc89 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1721084 Title: openvswitch firewall driver is dropping packets when router migrated from DVR to HA Status in neutron: Fix Released Bug description: The openvswitch firewall driver is dropping packets when a router is migrated from DVR to HA. I see the packet is dropped at table 72: cookie=0x6b90d3f7582969b5, duration=62.044s, table=72, n_packets=7, n_bytes=518, idle_age=1, priority=50,ct_state=+inv+trk actions=drop The complete br-int flows are at http://paste.openstack.org/show/622528/ and the output of "ovs-ofctl show br-int" at http://paste.openstack.org/show/622530/. With the iptables firewall driver this works fine. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1721084/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
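The essence of the fix, matching on priority as well when deleting DVR flows so that firewall flows sharing the table survive, can be illustrated with a small filter. The flow dictionaries and priority values below are invented for illustration; they are not neutron's real flows.

```python
# Two flows sharing a table: one installed by DVR, one by the OVS firewall
# driver. Priorities here are illustrative, not neutron's real values.
FLOWS = [
    {"table": 1, "priority": 4, "owner": "dvr", "match": "dl_vlan=10"},
    {"table": 1, "priority": 50, "owner": "ovsfw",
     "match": "ct_state=+inv+trk"},
]


def select_dvr_flows_for_removal(flows, table, dvr_priority):
    """Return delete candidates matched on table *and* priority, so that
    firewall flows that happen to share the table are left alone."""
    return [f for f in flows
            if f["table"] == table and f["priority"] == dvr_priority]
```

Deleting by table alone would sweep up both flows; adding the priority match leaves the firewall's ct_state flow in place, which is what the patch's "more strict" removal achieves.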
[Yahoo-eng-team] [Bug 1717477] Re: cloud-init generates ordering cycle via After=cloud-init in systemd-fsck
This bug was fixed in the package cloud-init - 0.7.9-233-ge586fe35-0ubuntu1~16.04.2 --- cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) xenial-proposed; urgency=medium * cherry-pick a2f8ce9c: Do not provide systemd-fsck drop-in which could cause systemd ordering loops (LP: #1717477). -- Scott Moser Fri, 15 Sep 2017 15:23:38 -0400 ** Changed in: cloud-init (Ubuntu Xenial) Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1717477 Title: cloud-init generates ordering cycle via After=cloud-init in systemd-fsck Status in cloud-init: Fix Released Status in cloud-init package in Ubuntu: Fix Released Status in cloud-init source package in Xenial: Fix Released Status in cloud-init source package in Zesty: Fix Released Status in cloud-init source package in Artful: Fix Released Bug description: http://pad.lv/1717477 https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1717477 === Begin SRU Template === [Impact] Cloud-init's inclusion of a systemd drop-in file /lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf caused a regression on systems that had entries in /etc/fstab that were not authored by cloud-init (specifically, entries that did not have something like 'x-systemd.requires=cloud-init.service' in their filesystem options). [Test Case] The test can be done on any cloud that has space to put a non-root filesystem. a.) launch instance b.) upgrade cloud-init to the -updates pocket c.) create a filesystem and put it in /etc/fstab bdev="/dev/sdb1" mkdir -p /mnt mkfs.ext4 -F "$bdev" echo "$bdev /mnt auto defaults 0 2" >> /etc/fstab reboot d.) 
see mention of 'ordering cycle' in journal $ journalctl -o short-precise | grep -i ordering.cycle Sep 15 14:08:48.331033 xenial-20170911-174122 systemd[1]: local-fs.target: Found ordering cycle on local-fs.target/start Sep 15 14:08:48.331097 xenial-20170911-174122 systemd[1]: local-fs.target: Breaking ordering cycle by deleting job mnt.mount/start Sep 15 14:08:48.331108 xenial-20170911-174122 systemd[1]: mnt.mount: Job mnt.mount/start deleted to break ordering cycle starting with local-fs.target/start e.) upgrade to proposed f.) reboot g.) expect no mention of ordering cycle as seen in 'd' $ journalctl -o short-precise | grep -i ordering.cycle || echo "no cycles" no cycles [Regression Potential] This change will mean that bug 1691489 is present again. That bug is much less severe and affects a much smaller set of users. [Other Info] Upstream commit at https://git.launchpad.net/cloud-init/commit/?id=a2f8ce9c80 === End SRU Template === We're running several machines with cloud-init_0.7.9-153-g16a7302f-0ubuntu1~16.04.2 without problems. Just upgraded all machines to cloud-init_0.7.9-233-ge586fe35-0ubuntu1~16.04.1 and rebooted them all. All machines report ordering cycles in their dmesg, resulting in systemd breaking the loop by NOT starting some important services, e.g. 
mounting local filesystems: Sep 14 15:43:52.487945 noname systemd[1]: networking.service: Found ordering cycle on networking.service/start Sep 14 15:43:52.487952 noname systemd[1]: networking.service: Found dependency on local-fs.target/start Sep 14 15:43:52.487960 noname systemd[1]: networking.service: Found dependency on home.mount/start Sep 14 15:43:52.487968 noname systemd[1]: networking.service: Found dependency on systemd-fsck@dev-disk-by\x2dlabel-Home.service/start Sep 14 15:43:52.487975 noname systemd[1]: networking.service: Found dependency on cloud-init.service/start Sep 14 15:43:52.487982 noname systemd[1]: networking.service: Found dependency on networking.service/start Sep 14 15:43:52.488297 noname systemd[1]: networking.service: Breaking ordering cycle by deleting job local-fs.target/start Sep 14 15:43:52.488306 noname systemd[1]: local-fs.target: Job local-fs.target/start deleted to break ordering cycle starting with networking.service/start % cat /etc/fstab LABEL=cloudimg-rootfs / ext4 defaults,discard 0 1 LABEL=Home /home xfs defaults,logbufs=8 0 2 In this case /home isn't mounted as a result of systemd breaking the loop, resulting in services depending on /home not being started. 1. Tell us your cloud provider AWS 2. dpkg-query -W -f='${Version}' cloud-init 0.7.9-233-ge586fe35-0ubuntu1~16.04.1 3. Any appropriate cloud-init configuration you can provide us Nothing special - worked with 0.7.9-153-g16a7302f-0ubuntu1~16.04.2 on all machines without hassle. The problem is this change: diff -uaNr 153/lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf 233/lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf --- 153/lib/sys
[Yahoo-eng-team] [Bug 1719055] Re: Add unnamed argument on translation file.
As far as I checked the current horizon code base in the master branch, I don't see any strings which violate the guideline above. I am marking this bug as Invalid. If you see any strings which do not follow the guideline, please point out the exact string (or place). If you find such strings in horizon plugins, please file a bug against the corresponding project. ** Changed in: horizon Status: Incomplete => Invalid ** Changed in: horizon Assignee: Sungjin Kang (sungjin) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1719055 Title: Add unnamed argument on translation file. Status in OpenStack Dashboard (Horizon): Invalid Bug description: This is a refactoring bug. The following message is easy to translate because its %s arguments are named: ``` gettext('The selected %(sourceType)s source requires a flavor with at least %(minRam)s MB of RAM. Select a flavor with more RAM or use a different %(sourceType)s source.') ``` Some messages are difficult to translate when their `%s` placeholders are unnamed. So I will change the text used in all the plugins used in `Horizon`. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1719055/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
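For reference, the point of the guideline: positional %s placeholders lock translators into the source argument order, while named placeholders can be reordered or repeated freely in a translation. A minimal Python illustration (the first message is adapted from the one quoted in the bug report; the "translation" is invented):

```python
# Named placeholders: the substitution is keyed by name, not position,
# so a translator can reorder or repeat the arguments.
msg = ('The selected %(sourceType)s source requires a flavor with at least '
       '%(minRam)s MB of RAM.')
rendered = msg % {'sourceType': 'image', 'minRam': 1024}

# A hypothetical translation that moves the RAM amount to the front
# still works, because each placeholder carries its own name.
translated = '%(minRam)s MB RAM needed for the %(sourceType)s source.'
reordered = translated % {'sourceType': 'image', 'minRam': 1024}
```

With bare %s placeholders the same reordering would silently swap the arguments, which is exactly why the guideline asks for named ones.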
[Yahoo-eng-team] [Bug 1721157] Re: netplan render drops bridge_stp setting
** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New ** Changed in: cloud-init (Ubuntu) Status: New => Confirmed ** Changed in: cloud-init (Ubuntu) Importance: Undecided => Medium ** Changed in: cloud-init Status: Confirmed => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1721157 Title: netplan render drops bridge_stp setting Status in cloud-init: Fix Committed Status in cloud-init package in Ubuntu: Confirmed Bug description: cloud-init tip fails to render a netplan bridge configuration with an stp value. % cat network_configs/bridge-simple.yaml showtrace: true network: version: 1 config: # Physical interfaces. - type: physical name: eth0 mac_address: "52:54:00:12:34:00" subnets: - type: dhcp4 - type: physical name: eth1 mac_address: "52:54:00:12:34:02" # Bridge - type: bridge name: br0 bridge_interfaces: - eth1 params: bridge_fd: 150 bridge_stp: 'off' Currently renders a netplan config of: network: version: 2 ethernets: eth0: dhcp4: true match: macaddress: '52:54:00:12:34:00' set-name: eth0 eth1: match: macaddress: '52:54:00:12:34:02' set-name: eth1 bridges: br0: interfaces: - eth1 parameters: forward-delay: 150 netplan in artful with bridge stp support enables STP by default (unless disabled). This prevents setting some values of forward-delay. In any case, we should translate the bridge_stp value to the netplan boolean. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1721157/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
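The translation suggested in the last sentence, mapping the v1 bridge_stp value onto the boolean that netplan's stp key expects, amounts to something like the helper below. This is a hypothetical sketch, not cloud-init's actual implementation.

```python
def bridge_stp_to_netplan(value):
    """Hypothetical sketch: map v1 bridge_stp values ('on'/'off',
    booleans, 0/1) to the boolean netplan expects for its stp key."""
    if isinstance(value, bool):
        return value
    # v1 configs commonly carry the quoted strings 'on' / 'off'
    return str(value).strip().lower() in ('on', 'true', '1', 'yes')
```

With this mapping, the `bridge_stp: 'off'` in the example config would be rendered as `stp: false`, explicitly disabling STP instead of being silently dropped and falling back to netplan's enabled-by-default behavior.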
[Yahoo-eng-team] [Bug 1721579] [NEW] azure: remove sriov device configuration
Public bug reported: Microsoft has requested that we remove the VF/sriov specific changes we made to the Azure datasource under private bug 1695119. There is another solution in place now in the linux kernel that will suffice. Essentially, this means backing out much of the commit ebc9ecbc8a76bdf511a456fb72339a7eb4c20568 https://git.launchpad.net/cloud-init/commit/?id=ebc9ecbc8a7 ** Affects: cloud-init Importance: High Status: Confirmed ** Affects: cloud-init (Ubuntu) Importance: Medium Status: Confirmed ** Affects: cloud-init (Ubuntu Xenial) Importance: Medium Status: Confirmed ** Affects: cloud-init (Ubuntu Zesty) Importance: Medium Status: Confirmed ** Affects: cloud-init (Ubuntu Artful) Importance: Medium Status: Confirmed ** Changed in: cloud-init Status: New => Confirmed ** Changed in: cloud-init Importance: Undecided => High ** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New ** Also affects: cloud-init (Ubuntu Artful) Importance: Undecided Status: New ** Also affects: cloud-init (Ubuntu Zesty) Importance: Undecided Status: New ** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New ** Changed in: cloud-init (Ubuntu Xenial) Status: New => Confirmed ** Changed in: cloud-init (Ubuntu Zesty) Status: New => Confirmed ** Changed in: cloud-init (Ubuntu Artful) Status: New => Confirmed ** Changed in: cloud-init (Ubuntu Xenial) Importance: Undecided => Medium ** Changed in: cloud-init (Ubuntu Zesty) Importance: Undecided => Medium ** Changed in: cloud-init (Ubuntu Artful) Importance: Undecided => Medium ** Summary changed: - revert + azure: remove sriov device configuration -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. 
https://bugs.launchpad.net/bugs/1721579 Title: azure: remove sriov device configuration Status in cloud-init: Confirmed Status in cloud-init package in Ubuntu: Confirmed Status in cloud-init source package in Xenial: Confirmed Status in cloud-init source package in Zesty: Confirmed Status in cloud-init source package in Artful: Confirmed Bug description: Microsoft has requested that we remove the VF/sriov specific changes we made to the Azure datasource under private bug 1695119. There is another solution in place now in the linux kernel that will suffice. Essentially, this means backing out much of the commit ebc9ecbc8a76bdf511a456fb72339a7eb4c20568 https://git.launchpad.net/cloud-init/commit/?id=ebc9ecbc8a7 To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1721579/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1717477] Re: cloud-init generates ordering cycle via After=cloud-init in systemd-fsck
This bug was fixed in the package cloud-init - 0.7.9-233-ge586fe35-0ubuntu1~17.04.2 --- cloud-init (0.7.9-233-ge586fe35-0ubuntu1~17.04.2) zesty; urgency=medium * cherry-pick a2f8ce9c: Do not provide systemd-fsck drop-in which could cause systemd ordering cycles (LP: #1717477). -- Scott Moser Fri, 15 Sep 2017 15:30:01 -0400 ** Changed in: cloud-init (Ubuntu Zesty) Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1717477 Title: cloud-init generates ordering cycle via After=cloud-init in systemd-fsck Status in cloud-init: Fix Released Status in cloud-init package in Ubuntu: Fix Released Status in cloud-init source package in Xenial: Fix Committed Status in cloud-init source package in Zesty: Fix Released Status in cloud-init source package in Artful: Fix Released Bug description: http://pad.lv/1717477 https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1717477 === Begin SRU Template === [Impact] Cloud-init's inclusion of a systemd drop-in file /lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf caused a regression on systems that had entries in /etc/fstab that were not authored by cloud-init (specifically, entries that did not have something like 'x-systemd.requires=cloud-init.service' in their filesystem options). [Test Case] The test can be done on any cloud that has space to put a non-root filesystem. a.) launch instance b.) upgrade cloud-init to the -updates pocket c.) create a filesystem and put it in /etc/fstab bdev="/dev/sdb1" mkdir -p /mnt mkfs.ext4 -F "$bdev" echo "$bdev /mnt auto defaults 0 2" >> /etc/fstab reboot d.) 
see mention of 'ordering cycle' in journal $ journalctl -o short-precise | grep -i ordering.cycle Sep 15 14:08:48.331033 xenial-20170911-174122 systemd[1]: local-fs.target: Found ordering cycle on local-fs.target/start Sep 15 14:08:48.331097 xenial-20170911-174122 systemd[1]: local-fs.target: Breaking ordering cycle by deleting job mnt.mount/start Sep 15 14:08:48.331108 xenial-20170911-174122 systemd[1]: mnt.mount: Job mnt.mount/start deleted to break ordering cycle starting with local-fs.target/start e.) upgrade to proposed f.) reboot g.) expect no mention of ordering cycle as seen in 'd' $ journalctl -o short-precise | grep -i ordering.cycle || echo "no cycles" no cycles [Regression Potential] This change will mean that bug 1691489 is present again. That bug is much less severe and affects a much smaller set of users. [Other Info] Upstream commit at https://git.launchpad.net/cloud-init/commit/?id=a2f8ce9c80 === End SRU Template === We're running several machines with cloud-init_0.7.9-153-g16a7302f-0ubuntu1~16.04.2 without problems. Just upgraded all machines to cloud-init_0.7.9-233-ge586fe35-0ubuntu1~16.04.1 and rebooted them all. All machines report ordering cycles in their dmesg, resulting in systemd breaking the loop by NOT starting some important services, e.g. 
mounting local filesystems: Sep 14 15:43:52.487945 noname systemd[1]: networking.service: Found ordering cycle on networking.service/start Sep 14 15:43:52.487952 noname systemd[1]: networking.service: Found dependency on local-fs.target/start Sep 14 15:43:52.487960 noname systemd[1]: networking.service: Found dependency on home.mount/start Sep 14 15:43:52.487968 noname systemd[1]: networking.service: Found dependency on systemd-fsck@dev-disk-by\x2dlabel-Home.service/start Sep 14 15:43:52.487975 noname systemd[1]: networking.service: Found dependency on cloud-init.service/start Sep 14 15:43:52.487982 noname systemd[1]: networking.service: Found dependency on networking.service/start Sep 14 15:43:52.488297 noname systemd[1]: networking.service: Breaking ordering cycle by deleting job local-fs.target/start Sep 14 15:43:52.488306 noname systemd[1]: local-fs.target: Job local-fs.target/start deleted to break ordering cycle starting with networking.service/start % cat /etc/fstab LABEL=cloudimg-rootfs / ext4 defaults,discard 0 1 LABEL=Home /home xfs defaults,logbufs=8 0 2 In this case /home isn't mounted as a result of systemd breaking the loop, resulting in services depending on /home not being started. 1. Tell us your cloud provider AWS 2. dpkg-query -W -f='${Version}' cloud-init 0.7.9-233-ge586fe35-0ubuntu1~16.04.1 3. Any appropriate cloud-init configuration you can provide us Nothing special - worked with 0.7.9-153-g16a7302f-0ubuntu1~16.04.2 on all machines without hassle. The problem is this change: diff -uaNr 153/lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf 233/lib/systemd/system/systemd-fsck@.service.d/cloud-init.conf --- 153/lib/systemd/syste
[Yahoo-eng-team] [Bug 1721573] [NEW] ntp unit tests broken if no package program available in test environment
Public bug reported: apply this diff to show the error, just make 'ntp_installable' not find a package installer. At minimum, we just need to mock that out. $ git diff diff --git a/cloudinit/config/cc_ntp.py b/cloudinit/config/cc_ntp.py index 15ae1ecd..5ebdd461 100644 --- a/cloudinit/config/cc_ntp.py +++ b/cloudinit/config/cc_ntp.py @@ -147,6 +147,7 @@ def ntp_installable(): if util.system_is_snappy(): return False +return False if any(map(util.which, ['apt-get', 'dnf', 'yum', 'zypper'])): return True $ tox-venv py3 python3 -m nose tests/unittests/test_handler/test_handler_ntp.py:TestNtp ...EE == ERROR: Test ntp handler renders the shipped distro ntp.conf templates. -- Traceback (most recent call last): File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_handler/test_handler_ntp.py", line 272, in test_ntp_handler_real_distro_templates cc_ntp.handle('notimportant', cfg, mycloud, None, None) File "/home/smoser-public/src/cloud-init/cloud-init/cloudinit/config/cc_ntp.py", line 128, in handle write_ntp_config_template(ntp_cfg, cloud, confpath, template=template_name) File "/home/smoser-public/src/cloud-init/cloud-init/cloudinit/config/cc_ntp.py", line 204, in write_ntp_config_template "not rendering %s"), path) RuntimeError: ('No template found, not rendering %s', '/etc/systemd/timesyncd.conf.d/cloud-init.conf') == ERROR: Ntp schema validation allows for an empty ntp: configuration. -- Traceback (most recent call last): File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_handler/test_handler_ntp.py", line 305, in test_ntp_handler_schema_validation_allows_empty_ntp_config with open(ntp_conf) as stream: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ci-TestNtp.nmgu_ja4/ntp.conf' == ERROR: Ntp schema validation warns of invalid keys present in ntp config. 
-- Traceback (most recent call last): File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_handler/test_handler_ntp.py", line 374, in test_ntp_handler_schema_validation_warns_invalid_key_present with open(ntp_conf) as stream: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ci-TestNtp.zj7sj7n8/ntp.conf' == ERROR: Ntp schema validation warns of non-strings in pools or servers. -- Traceback (most recent call last): File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_handler/test_handler_ntp.py", line 331, in test_ntp_handler_schema_validation_warns_non_string_item_type with open(ntp_conf) as stream: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ci-TestNtp.agut528j/ntp.conf' == ERROR: Ntp schema validation warns of duplicates in servers or pools. -- Traceback (most recent call last): File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_handler/test_handler_ntp.py", line 400, in test_ntp_handler_schema_validation_warns_of_duplicates with open(ntp_conf) as stream: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ci-TestNtp.ykjfhbx7/ntp.conf' == ERROR: Ntp schema validation warns of non-array pools or servers types. -- Traceback (most recent call last): File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_handler/test_handler_ntp.py", line 352, in test_ntp_handler_schema_validation_warns_of_non_array_type with open(ntp_conf) as stream: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ci-TestNtp.4f3ycmp1/ntp.conf' -- Ran 21 tests in 0.085s FAILED (errors=6) ** Affects: cloud-init Importance: Low Status: Confirmed ** Changed in: cloud-init Status: New => Confirmed ** Changed in: cloud-init Importance: Undecided => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. 
https://bugs.launchpad.net/bugs/1721573 Title: ntp unit tests broken if no package program available in test environment Status in cloud-init: Confirmed Bug description: apply this diff to show the error, j
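Pinning the probe with a mock, as the report suggests ("At minimum, we just need to mock that out"), looks roughly like the sketch below. The cc_ntp namespace here is a stand-in for the real cloudinit.config.cc_ntp module; in an actual test the patch target would be that module's ntp_installable.

```python
import shutil
import types
from unittest import mock

# Stand-in namespace playing the role of the cc_ntp module. Its probe,
# like the real one, checks PATH for package managers and therefore
# gives host-dependent answers.
cc_ntp = types.SimpleNamespace(
    ntp_installable=lambda: any(
        shutil.which(p) for p in ('apt-get', 'dnf', 'yum', 'zypper')),
)


def run_check_with_pinned_probe():
    # Patch the attribute so the outcome no longer depends on which
    # package managers exist in the test environment.
    with mock.patch.object(cc_ntp, 'ntp_installable',
                           return_value=True) as probe:
        result = cc_ntp.ntp_installable()
    return result, probe.call_count
```

Pinning return_value to True makes the handler take its package-manager path deterministically; a second test would pin it to False to cover the systemd-timesyncd path.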
[Yahoo-eng-team] [Bug 1721514] Re: test_driver_spawn_fail_when_unshelving_instance fluctuates
** Changed in: nova Importance: Undecided => Medium ** Also affects: nova/pike Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1721514 Title: test_driver_spawn_fail_when_unshelving_instance fluctuates Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) pike series: New Bug description: The functional test test_driver_spawn_fail_when_unshelving_instance fluctuates at the following place: Captured traceback: 2017-10-04 17:29:33.464455 | ~~~ 2017-10-04 17:29:33.464483 | b'Traceback (most recent call last):' 2017-10-04 17:29:33.464577 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/tests/functional/test_servers.py", line 2664, in test_driver_spawn_fail_when_unshelving_instance' 2017-10-04 17:29:33.464618 | b"{'vcpus': 0, 'ram': 0, 'disk': 0}, usages)" 2017-10-04 17:29:33.464695 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/tests/functional/test_servers.py", line 1117, in assertFlavorMatchesAllocation' 2017-10-04 17:29:33.464733 | b"self.assertEqual(flavor['vcpus'], allocation['VCPU'])" 2017-10-04 17:29:33.464816 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py", line 411, in assertEqual' 2017-10-04 17:29:33.464849 | b'self.assertThat(observed, matcher, message)' 2017-10-04 17:29:33.464931 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py", line 498, in assertThat' 2017-10-04 17:29:33.464953 | b'raise mismatch_error' 2017-10-04 17:29:33.464985 | b'testtools.matchers._impl.MismatchError: 0 != 1' 2017-10-04 17:29:33.464998 | b'' 2017-10-04 17:29:33.465008 | It is because the test waits for the instance state to be 
set to SHELVED_OFFLOADED and then asserts that the allocation of the instance is deleted in Placement. But the compute/manager sets the instance state _before_ it deletes that allocation, so the test is racy. [1] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_driver_spawn_fail_when_unshelving_instance%5C%22 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1721514/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
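A common way to remove this kind of race in a functional test is to poll for the condition (here, the allocation disappearing from Placement) instead of asserting immediately after the instance state changes. A generic sketch of such a helper, not nova's actual test utility:

```python
import time


def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll predicate() until it returns truthily or timeout expires.

    The race in the bug goes away if the test waits for the allocation
    to be gone rather than asserting right after SHELVED_OFFLOADED is
    observed, since the state is set before the allocation is deleted.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())  # one final check at the deadline
```

The test would then call something like `wait_for(lambda: not get_allocations(server_uuid))` before asserting usages, tolerating the window between the state change and the allocation delete.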
[Yahoo-eng-team] [Bug 1712565] Re: Instance disappears on admin panel
Reviewed: https://review.openstack.org/496699 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=65baa5fa6dbe55da06411f7e0c17af5dacfce68e Submitter: Jenkins Branch:master commit 65baa5fa6dbe55da06411f7e0c17af5dacfce68e Author: Ivan Kolodyazhny Date: Wed Aug 23 15:57:21 2017 +0300 Do not fail on AdminUpdateRow if tenant is not found We can still show instance info on the admin/instances page even if the tenant is deleted or we can't retrieve the tenant's information. Change-Id: Idb1a5ffbb4103cce5258657d559bf4fe784b98d6 Closes-Bug: #1712565 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1712565 Title: Instance disappears on admin panel Status in OpenStack Dashboard (Horizon): Fix Released Bug description: If somebody deletes the tenant where an instance was created, the instance won't be shown on the admin/instances page To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1712565/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
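The shape of the fix, tolerating a failed tenant lookup instead of failing the whole row update, can be sketched like this. The helper below is hypothetical, not Horizon's actual code:

```python
def lookup_tenant_name(tenant_getter, tenant_id):
    """Hypothetical sketch of the fix's shape: fall back to rendering
    the row with no tenant name rather than failing the whole update."""
    try:
        return tenant_getter(tenant_id).name
    except Exception:
        # tenant deleted or keystone unreachable: still render the row,
        # leaving the tenant id visible and the name column empty
        return None
```

The instance row keeps its tenant id either way; only the friendly name degrades to empty when the project no longer exists.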
[Yahoo-eng-team] [Bug 1721423] Re: [Performance] Bad performance on instances list panel
Reviewed: https://review.openstack.org/509676 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=fa2e8327b9d0a10f29a4fbe3094a4533914b8ce3 Submitter: Jenkins Branch:master commit fa2e8327b9d0a10f29a4fbe3094a4533914b8ce3 Author: Feilong Wang Date: Thu Oct 5 14:37:34 2017 +1300 Add cache for get_microversion() against Nova The "Lock" and "Unlock" instance actions on the instances table call api.nova.is_feature_available() to check whether the feature is supported by the current Nova server. Unfortunately, the function get_microversion() called by is_feature_available() is not cached, which is causing about 40 unnecessary REST API calls. If Nova's version is older than Mitaka, it could be even worse, about 80 unnecessary API calls; see openstack_dashboard/api/nova.py#L60 and novaclient/v2/versions.py#L47 for more details. Closes-Bug: #1721423 Change-Id: Ie96b1a35e379d4cf407bfd53b1ee734178f9cb07 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1721423 Title: [Performance] Bad performance on instances list panel Status in OpenStack Dashboard (Horizon): Fix Released Bug description: I just found there is no cache for the microversion check, which may cause almost 40 unnecessary API calls. There are two actions in the instances table that need to check the microversion in their allowed() function, see https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/tables.py#L870 but unfortunately, there is no cache for this check, see https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/nova.py#L60 The function get_microversion(request, feature) is also used by other functions. Based on my test, after adding a cache for this function, about 3+ seconds are saved. 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1721423/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
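The caching the commit above adds can be illustrated with simple memoization. The function body and the returned version below are placeholders, not Horizon's implementation (the real fix scopes its cache to the request rather than the process):

```python
import functools

CALLS = []  # records how many "real" round trips were made


@functools.lru_cache(maxsize=None)
def get_microversion(endpoint):
    """Memoize the expensive version negotiation so repeated allowed()
    checks during one table render hit the cache instead of the API."""
    CALLS.append(endpoint)  # stands in for the real API round trip
    return '2.53'           # illustrative value only
```

In this sketch, the roughly forty allowed() checks during one table render cost a single round trip instead of forty, which matches the several-second saving the reporter measured.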
[Yahoo-eng-team] [Bug 1721522] [NEW] encrypted volumes: Cannot format device /dev/sdb which is still in use
Public bug reported: Hi, I followed the guide here - https://docs.openstack.org/cinder/pike/configuration/block-storage/volume-encryption.html I also use Barbican and for that I added [barbican] auth_endpoint = http://controller:5000 to cinder.conf and nova.conf. Creation of LUKS disks is successful. I also created normal disks and could easily attach them to an instance. Cinder disks are on LVM. Attaching LUKS disks fails with the following trace: 2017-10-05 11:44:57.445 1 INFO nova.compute.manager [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] Attaching volume 5c03f92f-470a-4f15-aaca-49d9232512a8 to /dev/vdc 2017-10-05 11:44:57.835 1 INFO oslo.privsep.daemon [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpKwgHpn/privsep.sock'] 2017-10-05 11:44:58.598 1 INFO oslo.privsep.daemon [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] Spawned new privsep daemon via rootwrap 2017-10-05 11:44:58.548 80 INFO oslo.privsep.daemon [-] privsep daemon starting 2017-10-05 11:44:58.553 80 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0 2017-10-05 11:44:58.558 80 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none 2017-10-05 11:44:58.558 80 INFO oslo.privsep.daemon [-] privsep daemon running as pid 80 2017-10-05 11:45:01.468 1 INFO os_brick.initiator.connectors.iscsi [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] Trying to 
connect to iSCSI portal 10.10.245.211:3260 2017-10-05 11:45:05.762 1 WARNING os_brick.encryptors [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] Use of the in tree encryptor class nova.volume.encryptors.luks.LuksEncryptor by directly referencing the implementation class will be blocked in the Queens release of os-brick. 2017-10-05 11:45:07.431 1 WARNING os_brick.encryptors.luks [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] isLuks exited abnormally (status 1): Device /dev/sdb is not a valid LUKS device. Command failed with code 22: Invalid argument : ProcessExecutionError: Unexpected error while running command. Command: cryptsetup isLuks --verbose /dev/sdb Exit code: 1 Stdout: u'' Stderr: u'Device /dev/sdb is not a valid LUKS device.\nCommand failed with code 22: Invalid argument\n' 2017-10-05 11:45:07.432 1 INFO os_brick.encryptors.luks [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] /dev/sdb is not a valid LUKS device; formatting device for first use 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 6a4c6d8b18714579a6e448e754d8838f - default default] [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] Failed to attach volume at mountpoint: /dev/vdc: ProcessExecutionError: Unexpected error while running command. 
Command: cryptsetup --batch-mode luksFormat --key-file=- --cipher aes-xts-plain64 --key-size 256 /dev/sdb Exit code: 5 Stdout: u'' Stderr: u'Cannot format device /dev/sdb which is still in use.\n' 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] Traceback (most recent call last): 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1250, in attach_volume 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] encryptor.attach_volume(context, **encryption) 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] File "/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/encryptors/luks.py", line 160, in attach_volume 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] self._format_volume(passphrase, **kwargs) 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a-bda6-eeed8e70cc8d] File "/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/encryptors/luks.py", line 87, in _format_volume 2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 90376ab6-d553-477a
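The trace boils down to two cryptsetup exit codes: isLuks returns 1 (device is not LUKS), so the encryptor tries luksFormat, which returns 5 (device busy) and the attach fails. The following is a hypothetical sketch of that decision, not os-brick's actual code; the exit-code meanings are taken from the cryptsetup man page:

```python
# Sketch only: model the attach-flow decision implied by the trace above.
# Exit-code meanings per the cryptsetup(8) man page.
CRYPTSETUP_EXIT_CODES = {
    0: "success",
    1: "wrong parameters / not a LUKS device (isLuks)",
    2: "no permission (bad passphrase)",
    3: "out of memory",
    4: "wrong device specified",
    5: "device already exists or device is busy",
}

def next_action(is_luks_rc, luks_format_rc=None):
    """Decide what the attach flow does next, mirroring the trace:
    isLuks rc 1 -> device is unformatted, so try luksFormat;
    luksFormat rc 5 -> device busy, the attach must fail."""
    if is_luks_rc == 0:
        return "open existing LUKS volume"
    if is_luks_rc == 1 and luks_format_rc is None:
        return "format device for first use"
    if luks_format_rc == 5:
        return "abort: " + CRYPTSETUP_EXIT_CODES[5]
    return "abort: unexpected error"
```

The "device is still in use" on luksFormat is consistent with something else (multipath, a stale iSCSI session, or another holder) already having /dev/sdb open when the encryptor tries to format it.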
[Yahoo-eng-team] [Bug 1721514] [NEW] test_driver_spawn_fail_when_unshelving_instance fluctuates
Public bug reported: The functional test test_driver_spawn_fail_when_unshelving_instance fluctuates at the following place:

Captured traceback:
2017-10-04 17:29:33.464455 | ~~~
2017-10-04 17:29:33.464483 | b'Traceback (most recent call last):'
2017-10-04 17:29:33.464577 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/tests/functional/test_servers.py", line 2664, in test_driver_spawn_fail_when_unshelving_instance'
2017-10-04 17:29:33.464618 | b"{'vcpus': 0, 'ram': 0, 'disk': 0}, usages)"
2017-10-04 17:29:33.464695 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/tests/functional/test_servers.py", line 1117, in assertFlavorMatchesAllocation'
2017-10-04 17:29:33.464733 | b"self.assertEqual(flavor['vcpus'], allocation['VCPU'])"
2017-10-04 17:29:33.464816 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py", line 411, in assertEqual'
2017-10-04 17:29:33.464849 | b'self.assertThat(observed, matcher, message)'
2017-10-04 17:29:33.464931 | b' File "/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py", line 498, in assertThat'
2017-10-04 17:29:33.464953 | b'raise mismatch_error'
2017-10-04 17:29:33.464985 | b'testtools.matchers._impl.MismatchError: 0 != 1'
2017-10-04 17:29:33.464998 | b''
2017-10-04 17:29:33.465008 |

It is because the test waits for the instance state to be set to SHELVED_OFFLOADED and then asserts that the allocation of the instance is deleted in Placement. But the compute manager sets the instance state _before_ it deletes that allocation, so the test is racy.
[1] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_driver_spawn_fail_when_unshelving_instance%5C%22 ** Affects: nova Importance: Undecided Assignee: Balazs Gibizer (balazs-gibizer) Status: New ** Tags: testing ** Changed in: nova Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer) ** Tags added: testing -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1721514 Title: test_driver_spawn_fail_when_unshelving_instance fluctuates Status in OpenStack Compute (nova): New Bug description: The functional test test_driver_spawn_fail_when_unshe
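The usual cure for this kind of race is to poll for the final condition (allocations gone from Placement) rather than asserting right after the state change. A minimal, self-contained sketch of that pattern (FakePlacement and the names here are illustrative, not nova's test helpers):

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.1):
    """Poll predicate() until it returns True or the timeout expires.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one last check at the deadline

# Simulated placement client: allocations disappear shortly after the
# instance state changes, like the compute manager deleting them *after*
# setting SHELVED_OFFLOADED.
class FakePlacement:
    def __init__(self, clears_at):
        self._clears_at = clears_at
    def get_allocations(self):
        return {} if time.monotonic() >= self._clears_at else {"VCPU": 1}

placement = FakePlacement(clears_at=time.monotonic() + 0.3)
# Polling tolerates the window between the state change and the delete.
assert wait_for(lambda: placement.get_allocations() == {})
```

With an immediate assertEqual, the test only passes when the delete happens to win the race; the polling version passes whenever the delete completes within the timeout.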
[Yahoo-eng-team] [Bug 1717917] Re: test_resize_server_error_and_reschedule_was_failed failing due to missing notification
** Changed in: nova Status: Fix Released => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1717917 Title: test_resize_server_error_and_reschedule_was_failed failing due to missing notification Status in OpenStack Compute (nova): In Progress Bug description: The test_resize_server_error_and_reschedule_was_failed case has failed in Jenkins a couple of times [1]. It seems that the test only waits for the instance to go to the ERROR state, but the compute.exception notification is emitted after that, which makes the test racy. [1] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%202%20!%3D%201%3A%20Unexpected%20number%20of%20notifications%3A%5C%22 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1717917/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
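A hypothetical sketch of the fix this race suggests (names are illustrative, not nova's actual notification test fixture): instead of counting notifications right after the server reaches ERROR, poll the captured notifications until the expected compute.exception event appears.

```python
import time

def wait_for_notifications(get_notifications, event_type, count, timeout=5.0):
    """Poll the captured notification log until at least `count`
    notifications of `event_type` have arrived, or fail."""
    matched = []
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        matched = [n for n in get_notifications()
                   if n.get("event_type") == event_type]
        if len(matched) >= count:
            return matched
        time.sleep(0.05)
    raise AssertionError("expected %d %r notifications, got %d"
                         % (count, event_type, len(matched)))

# Usage against a fake, append-only notification log:
log = [{"event_type": "instance.resize.error"},
       {"event_type": "compute.exception"}]
assert len(wait_for_notifications(lambda: log, "compute.exception", 1)) == 1
```

This turns "2 != 1: Unexpected number of notifications" failures into a bounded wait: the assertion only fires if the notification genuinely never arrives.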
[Yahoo-eng-team] [Bug 1721503] [NEW] salt module not able to be used on FreeBSD
Public bug reported: Unfortunately the salt module is not working on FreeBSD, as the service name is not salt-minion but salt_minion. In addition, the package is named differently on FreshPorts; see: https://www.freshports.org/sysutils/py-salt/ ** Affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1721503 Title: salt module not able to be used on FreeBSD Status in cloud-init: New Bug description: Unfortunately the salt module is not working on FreeBSD, as the service name is not salt-minion but salt_minion. In addition, the package is named differently on FreshPorts; see: https://www.freshports.org/sysutils/py-salt/ To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1721503/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
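A fix would have the salt module pick the package and service names per platform instead of hard-coding salt-minion. This is a hypothetical sketch (not cloud-init's actual cc_salt_minion code); the FreeBSD names come from the bug report and the linked FreshPorts page:

```python
# Illustrative mapping: distro family -> (package name, service name).
# "py-salt" / "salt_minion" are the FreeBSD names per this bug report.
SALT_NAMES = {
    "freebsd": ("py-salt", "salt_minion"),
    "default": ("salt-minion", "salt-minion"),
}

def salt_names(distro_name):
    """Return the (package, service) pair to install and start for
    the given distro, falling back to the Linux defaults."""
    return SALT_NAMES.get(distro_name.lower(), SALT_NAMES["default"])
```

The module would then call its package installer and service manager with these names rather than the literal "salt-minion".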
[Yahoo-eng-team] [Bug 1715437] Re: No docs for 'emulator_threads_policy', 'cpu_realtime', 'cpu_realtime_mask'
Reviewed: https://review.openstack.org/502056 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=bd3a4d242f7b0208d7ddb2571b31975ced91946e Submitter: Jenkins Branch:master commit bd3a4d242f7b0208d7ddb2571b31975ced91946e Author: Stephen Finucane Date: Fri Sep 8 14:05:38 2017 +0100 doc: Add documentation for cpu_realtime, cpu_realtime_mask This wasn't documented anywhere but the spec [1]. Fix this. We may want to provide a more in-depth overview of using RT features of OpenStack, but that's a future work item. [1] https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/libvirt-real-time.html Change-Id: Id30bc8447a6b482ad114ec6ebd3d5dab20ca0e3a Closes-Bug: #1715437 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1715437 Title: No docs for 'emulator_threads_policy', 'cpu_realtime', 'cpu_realtime_mask' Status in OpenStack Compute (nova): Fix Released Bug description: There is no documentation for the following extra spec properties: - 'emulator_threads_policy' - 'cpu_realtime' - 'cpu_realtime_mask' These should be included in [1]. [1] https://docs.openstack.org/nova/pike/admin/flavors.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1715437/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1721502] [NEW] Date formats for qos panel are unintelligible
Public bug reported: The date format for created_at and updated_at is just a raw string, and it is not clear to the user what it actually means. It should be a clear format that shows the date and time of creation and update. ** Affects: horizon Importance: Undecided Assignee: Beth Elwell (bethelwell) Status: In Progress ** Changed in: horizon Assignee: (unassigned) => Beth Elwell (bethelwell) ** Changed in: horizon Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1721502 Title: Date formats for qos panel are unintelligible Status in OpenStack Dashboard (Horizon): In Progress Bug description: The date format for created_at and updated_at is just a raw string, and it is not clear to the user what it actually means. It should be a clear format that shows the date and time of creation and update. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1721502/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
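A minimal sketch of the kind of fix this bug asks for: turn the raw ISO-8601 created_at / updated_at strings the API returns into a human-readable date and time. The function name and output format here are illustrative, not Horizon's actual filter:

```python
from datetime import datetime

def humanize(iso_ts):
    """Render an ISO-8601 timestamp like '2017-10-04T15:37:34' as
    '04 Oct 2017, 15:37 UTC'. Fractional seconds, if present, are
    dropped by slicing to the first 19 characters."""
    dt = datetime.strptime(iso_ts[:19], "%Y-%m-%dT%H:%M:%S")
    return dt.strftime("%d %b %Y, %H:%M UTC")
```

In Horizon itself this would more likely be done with Django's existing date template filter and timezone support rather than a hand-rolled formatter.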
[Yahoo-eng-team] [Bug 1484160] Re: Final decomposition of ML2 Cisco NCS driver
** Changed in: networking-cisco Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1484160 Title: Final decomposition of ML2 Cisco NCS driver Status in networking-cisco: Fix Released Status in neutron: Fix Released Bug description: Fully decompose the ncs driver from neutron. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-cisco/+bug/1484160/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1721286] Re: Create Volume dialog on Instances tab displays incorrect AZs for cinder
** Also affects: horizon Importance: Undecided Status: New ** Tags added: sts -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1721286 Title: Create Volume dialog on Instances tab displays incorrect AZs for cinder Status in OpenStack openstack-dashboard charm: Invalid Status in OpenStack Dashboard (Horizon): New Bug description: Running openstack-origin cloud:ocata-xenial on stable/17.08 charms, we have an environment with the following Nova availability zones: west-java-1a west-java-1b west-java-1c When going to the Images tab of the project in Horizon, and selecting the far-right drop-down menu for any image and selecting "Create Volume" we are presented with a dialog which includes an Availability Zone drop-down which lists the above three AZs, none of which have a cinder-api or cinder-volume host residing within. When trying to create a volume from an instance on this dashboard, we get the error: Invalid input received: Availability zone 'west-java-1a' is invalid. (HTTP 400) When using Launch Instance with Create New Image = Yes, from same Image on the Images tab, we still get the same AZ dropdowns, but the system initializes the new volume and attaches to a running instance in that AZ properly. Also, when using the Volumes tab and pressing the Create New Volume button, we can create a volume from any image, and the Availability Zone in this dialog only shows the "nova" AZ. To re-create, build openstack ocata-xenial with three computes, one in each of 3 new AZs, cinder-api, cinder-ceph, and a minimal ceph cluster, all with defaults and load image into glance either with glance-simplestreams-sync or other method. Click into Horizon's Images tab of admin project and click the drop-down of an image and select Create Volume. 
Fill out the form; you should only see the 3 new AZs but no nova AZ for creation of the volume, and it should give the 400 error. You'll notice that you might have the following availability zones:

openstack availability zone list
+--------------+-------------+
| Zone Name    | Zone Status |
+--------------+-------------+
| internal     | available   |
| west-java-1a | available   |
| west-java-1b | available   |
| west-java-1c | available   |
| nova         | available   |
| nova         | available   |
| nova         | available   |
+--------------+-------------+

This 400 error is coming from the cinder API and has nothing to do with glance/images. It's simply that cinder's availability zone is "nova" and the nova aggregate-based availability zones should not be used in a cinder availability zone pull-down on the Images tab Create Volume dialog.

jujumanage@cmg01z00infr001:~/charms/cinder$ openstack volume create --availability-zone nova --size 50 foo
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2017-10-04T15:37:34.804855           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | ca32eb14-60f8-42c8-a5ef-d7687d25d606 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | foo                                  |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 50                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | b327544aba2a482b9f12f1e6e615c394     |
+---------------------+--------------------------------------+

jujumanage@cmg01z00infr001:~/charms/cinder$ openstack volume create --availability-zone west-java-1a --size 50 foo
Invalid input received: Availability zone 'west-java-1a' is invalid. (HTTP 400) (Request-ID: req-2f7d7d00-f361-4772-9f71-66e4ebaefdc3)

To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1721286/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-
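The fix the report points at is to populate the Create Volume dialog's dropdown from cinder's availability zone listing rather than nova's. A hypothetical sketch (function name is illustrative; the zoneName/zoneState keys follow the Block Storage availability-zone API shape):

```python
def volume_az_choices(cinder_azs):
    """Build the Create Volume AZ dropdown from the zones cinder reports
    as available, ignoring nova's aggregate-based AZ list entirely."""
    return sorted({az["zoneName"] for az in cinder_azs
                   if az["zoneState"].get("available")})

# With the deployment described above, cinder only reports "nova", so
# the dialog would never offer west-java-1a/b/c for volume creation.
cinder_azs = [{"zoneName": "nova", "zoneState": {"available": True}}]
assert volume_az_choices(cinder_azs) == ["nova"]
```

This matches the behavior already seen on the Volumes tab, whose Create Volume dialog correctly shows only the "nova" AZ.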