[Yahoo-eng-team] [Bug 1718125] Re: Missing some contents for install prerequisites
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1718125

Title:
  Missing some contents for install prerequisites

Status in Glance:
  Fix Released

Bug description:
  NOTE: Anyone redirected here from a duplicate, please read through.
  This has been fixed in Queens and the fix was backported to
  stable/pike.

  Description
  ===========
  Installed glance following
  https://docs.openstack.org/glance/pike/install/install-rdo.html, but
  the page is missing the content for creating the glance database. It
  stops at: "To create the database, complete these steps: Use the
  database access client to connect to the database server as the root
  user: $ mysql -u root -p"

  Environment
  ===========
  $ git log
  commit f8426378f892f250391b3d1004e27725d462481f
  Author: OpenStack Proposal Bot
  Date:   Fri Sep 15 07:16:27 2017 +

      Imported Translations from Zanata

      For more information about this automatic import see:
      https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

      Change-Id: Ie31a9ea996d8e42530a37ed9a9616cc44ebe65c8

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1718125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
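For reference, the content the page was missing is the standard database-creation snippet used by the other OpenStack service install guides; the fixed section reads roughly as follows (GLANCE_DBPASS is a placeholder password):

```
$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
```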
[Yahoo-eng-team] [Bug 1951669] [NEW] The 'flavor' field is Not available after launching vm
Public bug reported:

After launching a VM, while its status is already ACTIVE the 'flavor'
field displays "Not available". After refreshing the page, the correct
flavor is shown.

$ git log
commit 9d1bb3626bc1dbcf29a55aeb094f4350067317cd (HEAD -> master, tag: 20.2.0, origin/master, origin/HEAD)
Author: Akihiro Motoki
Date:   Tue Oct 26 09:18:16 2021 +0900

    Allow both Django 2.2 and 3.2 for smooth transition

    I believe we need the following steps and it is what I did in the
    past when we bumped the Django minimum version.

    1. (already done) update global-requirements.txt to allow horizon to
       update requirements.txt to include Django 3.2.
    2. specify the required Django version which includes both 2.2 and
       3.2 (at this point upper-constraints uses 2.2)
    3. update upper-constraints.txt in the requirements repo to use
       Django 3.2
    4. bump the min version of Django in horizon
       (optionally) update non-primary-django tests to include
       non-primary versions of Django. It seems 2.2 support was dropped
       together with adding 3.2 support, so perhaps this step is not the
       case though.

    https://review.opendev.org/c/openstack/horizon/+/811412 directly
    updated the min version to Django 3.2, which is incompatible with
    the global upper-constraints.txt. To avoid this,
    https://review.opendev.org/c/openstack/horizon/+/815206 made almost
    all tests non-voting. I am not a fan of such an approach and believe
    there is a way to make the Django version transition smoother.

    ---

    This commit reverts the zuul configuration changes in
    https://review.opendev.org/c/openstack/horizon/+/815206 and
    https://review.opendev.org/c/openstack/horizon/+/811412.
    horizon-tox-python3-django32 is voting now as we are making it the
    default version.

    Change-Id: I60bb672ef1b197e657a8b3bd86d07464bcb1759f

** Affects: horizon
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1951669

Title:
  The 'flavor' field is Not available after launching vm

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1951669/+subscriptions
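The symptom above can be sketched as a lookup problem: the flavor id of a freshly launched instance is not yet in the cached flavor list the table was rendered from. A minimal illustration (hypothetical names, not Horizon's actual code) of the kind of fallback that would avoid showing "Not available":

```python
# Hypothetical sketch: resolve an instance's flavor name from a cached
# id -> flavor mapping, falling back to a direct API lookup (e.g. a
# nova flavor-get call) before giving up with "Not available".
def resolve_flavor_name(instance_flavor_id, cached_flavors, fetch_flavor=None):
    """Return a flavor name, preferring the cache, then a direct fetch."""
    flavor = cached_flavors.get(instance_flavor_id)
    if flavor is None and fetch_flavor is not None:
        try:
            flavor = fetch_flavor(instance_flavor_id)  # stand-in API call
        except Exception:
            flavor = None
    return flavor["name"] if flavor else "Not available"
```

The point of the sketch is that the placeholder string should only appear after the direct fetch also fails, not merely because the cache is stale.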
[Yahoo-eng-team] [Bug 1946534] [NEW] Failed to shutoff instance when in 'rescue'
Public bug reported:

In nova, an instance cannot be stopped while it is in 'rescue' status,
but the instance's action drop-down still offers 'Shut Off Instance'.
When the end user clicks it, an error like "Unable to shut off
instance" is returned. Horizon should block the action in this state
to avoid the failing operation.

** Affects: horizon
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1946534

Title:
  Failed to shutoff instance when in 'rescue'

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1946534/+subscriptions
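The guard the report asks for can be sketched as a status check like the `allowed()` predicate Horizon table actions use. This is an illustrative stand-in, not the real Horizon class, and the blocked-state set is an assumption (states nova is known to reject a stop request from):

```python
# Illustrative sketch: hide "Shut Off Instance" for statuses where nova
# would reject the stop request anyway. The set of blocked states is an
# assumption for illustration.
SHUTOFF_BLOCKED_STATES = {"rescue", "shutoff"}

def shut_off_allowed(instance_status):
    """Mimics a Horizon table-action allowed() check for the Stop action."""
    return instance_status.lower() not in SHUTOFF_BLOCKED_STATES
```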
[Yahoo-eng-team] [Bug 1942977] [NEW] Escape characters in the returned exception prompt
Public bug reported:

# curl -H "X-Auth-Token: $token" http://glance.openstack.svc.region-native-test.myinspurcloud.com//v2/images?protected=true11
400 Bad Request

Invalid value 'true11' for 'protected' filter. Valid values are 'true'
or 'false'.

The "'" escape sequences in the returned message should be removed.

Glance version:
commit ad39c12c64c8ff017918a8790d69d5278ac379da (HEAD -> stable/rocky, tag: rocky-em, tag: 17.0.1, origin/stable/rocky)
Merge: 8d9ff5f5 f992a0b2
Author: Zuul
Date:   Fri Sep 20 18:25:41 2019 +

    Merge "Fix manpage building and remove glance-cache-manage" into
    stable/rocky

** Affects: glance
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1942977

Title:
  Escape characters in the returned exception prompt

Status in Glance:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1942977/+subscriptions
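The validation behind the 400 above can be sketched as follows; this is an illustration (not Glance's code) of checking a boolean query filter and building the error message as plain text, so no quoting or escaping artifacts survive into the response body:

```python
# Illustrative sketch of boolean filter validation with a plain-text
# error message (no quote characters that could end up HTML-escaped).
def validate_bool_filter(name, value):
    """Return True/False for 'true'/'false', else raise ValueError."""
    normalized = value.strip().lower()
    if normalized not in ("true", "false"):
        raise ValueError(
            "Invalid value %s for %s filter. Valid values are true or false."
            % (value, name))
    return normalized == "true"
```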
[Yahoo-eng-team] [Bug 1934412] [NEW] Find none logs when not fitting numa topology in nova-compute log
Public bug reported:

Description
===========
When 'host_topology' cannot satisfy 'requested_topology', nova-compute
returns the message to nova-conductor but writes nothing to
nova-compute.log. There are some debug logs, but the `debug`
configuration option is not set to true in production environments, so
a warning-level log should be added.

Steps to reproduce
==================
* Set a flavor with hugepages and total RAM that exceeds the limit
  configured on the node

  openstack flavor create --vcpus 2 --ram 3000 --disk 10 --property hw:mem_page_size=2MB flv-2MB-huge

  On the node: vm.nr_hugepages = 20480

* Create one VM with the flavor

  openstack server create --image 1512209c-83ca-4a64-aab5-973120e61718 --flavor flv-2MB-huge --network 44dc3e3e-5146-458b-b9b9-6b65e5282efc --availability-zone ::compute01 test-vm

Expected result
===============
Failure logs appear in nova-compute.log

Actual result
=============
No logs in nova-compute.log; the failure can only be found in
nova-conductor.log

Environment
===========
Libvirt + KVM

nova:
$ git log
commit 90455cdae3fae5289b07ae284db0f96e0544d9d2 (HEAD -> stable/wallaby, origin/stable/wallaby)
Merge: 97c3517e7e 5d65680095
Author: Zuul
Date:   Sun Jun 27 07:27:03 2021 +

    Merge "libvirt: Set driver_iommu when attaching virtio devices to
    SEV instance" into stable/wallaby

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1934412

Title:
  Find none logs when not fitting numa topology in nova-compute log

Status in OpenStack Compute (nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1934412/+subscriptions
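The requested change can be sketched like this; the function and the toy fitting logic are hypothetical stand-ins (nova's real NUMA fitting lives elsewhere), the point being only that the failure path logs at WARNING so it is visible without `debug = true`:

```python
# Sketch (hypothetical names): log at WARNING, not just DEBUG, when the
# host NUMA topology cannot satisfy the request, so the failure shows up
# in production nova-compute logs.
import logging

LOG = logging.getLogger("nova.compute.numa_sketch")

def _try_fit(host_topology, requested_topology):
    # Toy stand-in: "fit" only if every requested cell has enough memory.
    if all(req <= host_topology.get(cell, 0)
           for cell, req in requested_topology.items()):
        return requested_topology
    return None

def fit_instance_to_host(host_topology, requested_topology):
    fitted = _try_fit(host_topology, requested_topology)
    if fitted is None:
        LOG.warning("Instance NUMA topology %s cannot fit host topology %s",
                    requested_topology, host_topology)
    return fitted
```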
[Yahoo-eng-team] [Bug 1933062] [NEW] Got 404 not found in 'secure-live-migration-with-qemu-native-tls'
Public bug reported:

In
https://docs.openstack.org/nova/latest/admin/secure-live-migration-with-qemu-native-tls.html#prerequisites,
opening the "TLS everywhere" link
(https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/tls_everywhere.html)
returns the error "404 Not Found". The page has actually moved to
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/tls-everywhere.html.

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1933062

Title:
  Got 404 not found in 'secure-live-migration-with-qemu-native-tls'

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1933062/+subscriptions
[Yahoo-eng-team] [Bug 1932126] [NEW] Cannot restore soft-deleted instance when node is failure
Public bug reported:

Description
===========
So far, a soft-deleted instance cannot be evacuated. When the node
cannot be fixed before the reclaim window expires, the instance can
never be restored.

Steps to reproduce
==================
* Set `reclaim_instance_interval` to 300s, and restart nova-api and
  nova-compute
* Delete instance A
* Shut down the compute node for 500s
* Run `openstack server evacuate xxx`; the instance cannot be
  evacuated.

Expected result
===============
The instance can be evacuated and restored

Actual result
=============
Evacuation is not allowed

Environment
===========
# git log
commit 22830d78b91946b108defe26b3a8ddefc2247363 (HEAD -> master, origin/master, origin/HEAD)
Merge: fb020b360b 4d8bf15fec
Author: Zuul
Date:   Wed Jun 16 00:53:08 2021 +

    Merge "libvirt: Set driver_iommu when attaching virtio devices to
    SEV instance"

commit fb020b360b13faa53f64222bc81be0d965b47358
Merge: 7f83cbe9e2 0ac74f4e00
Author: Zuul
Date:   Tue Jun 15 21:47:10 2021 +

    Merge "Remove references to 'inst_type'

Logs & Configs
==============
nova evacuate a59b2915-5e2e-4541-8bba-235e05ab83dc
ERROR (Conflict): Cannot 'evacuate' instance a59b2915-5e2e-4541-8bba-235e05ab83dc while it is in vm_state soft-delete (HTTP 409) (Request-ID: req-ab2bcfab-4a2e-41d8-

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1932126

Title:
  Cannot restore soft-deleted instance when node is failure

Status in OpenStack Compute (nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1932126/+subscriptions
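The 409 above comes from a vm_state precondition on the evacuate action. A toy illustration (not nova's code; the allowed-state set is an assumption for the sketch) of why a soft-deleted instance on a dead host gets stuck:

```python
# Toy model of the state check behind the 409: evacuate is only permitted
# from a fixed set of vm_states, and 'soft-delete' is not among them, so
# a soft-deleted instance on a failed node cannot be recovered this way.
EVACUATE_ALLOWED_STATES = {"active", "stopped", "error"}

class InstanceInvalidState(Exception):
    pass

def check_evacuate_allowed(vm_state):
    if vm_state not in EVACUATE_ALLOWED_STATES:
        raise InstanceInvalidState(
            "Cannot 'evacuate' instance while it is in vm_state %s" % vm_state)
```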
[Yahoo-eng-team] [Bug 1929360] [NEW] Fix typo of "Instance ID" in Chinese
Public bug reported:

In Chinese, 'Instance ID' should be translated as "实例 ID" ("instance"),
not "示例 ID" ("example").

** Affects: horizon
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1929360

Title:
  Fix typo of "Instance ID" in Chinese

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1929360/+subscriptions
[Yahoo-eng-team] [Bug 1894975] [NEW] Cannot overwrite policy rule for 'os_compute_api:servers:create:forced_host:'
Public bug reported:

Description
===========
Changed the rule for 'os_compute_api:servers:create:forced_host' to
'rule:admin_or_owner' in the policy file, but when creating a server
with the member role, still got "Policy doesn't allow
os_compute_api:servers:create:forced_host to be performed. (HTTP 403)
(Request-ID: req-199cb105-4c4d-405d-89cf-9059182ec745)"

Steps to reproduce
==================
* Change the policy file:
  os_compute_api:servers:create:forced_host: rule:admin_or_owner
* Restart the nova-api service
* Create a server on a specified host with the member role:
  openstack server create --image cirros051 --network cps_pxe --flavor m1.tiny --availability-zone :compute01: vm-0909-1

Expected result
===============
The server is created successfully

Actual result
=============
Got "Policy doesn't allow os_compute_api:servers:create:forced_host to
be performed. (HTTP 403) (Request-ID:
req-199cb105-4c4d-405d-89cf-9059182ec745)"

Environment
===========
git log
commit 0d1fd02b301bbc25c75cb2476b24f3be5d7cda77 (HEAD -> stable/rocky, origin/stable/rocky)
Merge: 837baac9fd c438fd9a0e
Author: Zuul
Date:   Thu Sep 3 15:15:47 2020 +

    Merge "libvirt: Provide VIR_MIGRATE_PARAM_PERSIST_XML during live
    migration" into stable/rocky

Logs & Configs
==============
/etc/nova/policy.yaml
os_compute_api:servers:create:forced_host: rule:admin_or_owner

/etc/nova/nova.conf
[oslo_policy]
policy_file = /etc/nova/policy.yaml

root@mgt01:~# openstack server create --image cirros051 --network cps_pxe --flavor m1.tiny --availability-zone :compute01: vm-0909-1
Policy doesn't allow os_compute_api:servers:create:forced_host to be performed. (HTTP 403) (Request-ID: req-199cb105-4c4d-405d-89cf-9059182ec745)

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1894975

Title:
  Cannot overwrite policy rule for
  'os_compute_api:servers:create:forced_host:'

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1894975/+subscriptions
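The expected override semantics can be illustrated with a minimal stand-in for policy resolution (this is not oslo.policy itself; the rule expressions are simplified lambdas). The operator's policy-file entry is supposed to win over the in-code default, which is exactly what the report says did not happen for this rule:

```python
# Minimal stand-in for policy resolution: a rule from the operator's
# policy file should override the in-code default.
RULES = {
    "admin": lambda creds: "admin" in creds["roles"],
    "admin_or_owner": lambda creds: ("admin" in creds["roles"]
                                     or creds.get("is_owner", False)),
}
DEFAULTS = {"os_compute_api:servers:create:forced_host": "admin"}

def enforce(action, creds, file_overrides=None):
    # File overrides (policy.yaml) take precedence over DEFAULTS.
    rule = (file_overrides or {}).get(action, DEFAULTS[action])
    return RULES[rule](creds)
```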
[Yahoo-eng-team] [Bug 1892176] [NEW] Inconsistent result between index and show of instance api
Public bug reported:

Description
===========
The index (list) API returns the instance, but the show API returns
nothing.

Steps to reproduce
==================
1. Launch a baremetal instance
2. Kill the ironic-api process
3. List servers and grep for the uuid
4. Show the server by uuid

Expected result
===============
Steps 3 and 4 both return the instance

Actual result
=============
Step 3 returns the instance.
Step 4 returns: "No server with a name or ID of
'0614445f-6af0-40a7-802d-f66f81944544' exists."

Environment
===========
stable/rocky

Logs & Configs
==============
# openstack server list --all | grep 0614445f-6af0-40a7-802d-f66f81944544
| 0614445f-6af0-40a7-802d-f66f81944544 | CPS-202073110420 | BUILD | | cps-centos7.4-x64-20190109 | CPS_STD_BASIC |

# openstack server show 0614445f-6af0-40a7-802d-f66f81944544
No server with a name or ID of '0614445f-6af0-40a7-802d-f66f81944544' exists.

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1892176

Title:
  Inconsistent result between index and show of instance api

Status in OpenStack Compute (nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1892176/+subscriptions
[Yahoo-eng-team] [Bug 1892033] [NEW] Failed to start nova-compute with libvirt-xen
Public bug reported:

Description
===========
I deployed a Ussuri environment from ubuntu-cloud:ussuri and
configured one compute node with Xen and libvirt; the nova-compute
service cannot be started. Got the error 'libvirt.libvirtError: this
function is not supported by the connection driver: virNodeGetCPUMap'.

Steps to reproduce
==================
1. Install nova-compute
2. Configure nova.conf as below:
   [libvirt]
   virt_type = xen
3. Start the nova-compute service

Expected result
===============
nova-compute starts successfully

Actual result
=============
Got the error above

Environment
===========
root@xen-cmp01:~# dpkg -l | grep nova-compute
ii nova-compute          2:21.0.0-0ubuntu0.20.04.1~cloud0  all    OpenStack Compute - compute node base
ii nova-compute-kvm      2:21.0.0-0ubuntu0.20.04.1~cloud0  all    OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt  2:21.0.0-0ubuntu0.20.04.1~cloud0  all    OpenStack Compute - compute node libvirt support

root@xen-cmp01:~# dpkg -l | grep libvirt
ii libvirt-clients                    6.0.0-0ubuntu8.2~cloud0           amd64  Programs for the libvirt library
ii libvirt-daemon                     6.0.0-0ubuntu8.2~cloud0           amd64  Virtualization daemon
ii libvirt-daemon-driver-qemu         6.0.0-0ubuntu8.2~cloud0           amd64  Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd  6.0.0-0ubuntu8.2~cloud0           amd64  Virtualization daemon RBD storage driver
ii libvirt-daemon-driver-xen          6.0.0-0ubuntu8.2~cloud0           amd64  Virtualization daemon Xen connection driver
ii libvirt-daemon-system              6.0.0-0ubuntu8.2~cloud0           amd64  Libvirt daemon configuration files
ii libvirt-daemon-system-systemd      6.0.0-0ubuntu8.2~cloud0           amd64  Libvirt daemon configuration files (systemd)
ii libvirt0:amd64                     6.0.0-0ubuntu8.2~cloud0           amd64  library for interfacing with different virtualization systems
ii nova-compute-libvirt               2:21.0.0-0ubuntu0.20.04.1~cloud0  all    OpenStack Compute - compute node libvirt support
ii python3-libvirt                    6.1.0-1~cloud0                    amd64  libvirt Python 3 bindings

root@xen-cmp01:~# dpkg -l | grep xen
ii grub-xen-bin               2.02-2ubuntu8.17         amd64  GRand Unified Bootloader, version 2 (Xen binaries)
ii grub-xen-host              2.02-2ubuntu8.17         amd64  GRand Unified Bootloader, version 2 (Xen host version)
ii libvirt-daemon-driver-xen  6.0.0-0ubuntu8.2~cloud0  amd64  Virtualization daemon Xen connection driver
ii libxen-4.9:amd64           4.9.2-0ubuntu1           amd64  Public libs for Xen
ii libxenstore3.0:amd64       4.9.2-0ubuntu1           amd64  Xenstore communications library for Xen
ii python3-os-xenapi          0.3.4-0ubuntu3~cloud0    all    XenAPI library for OpenStack projects - Python 3.x
ii xen-hypervisor-4.9-amd64   4.9.2-0ubuntu1           amd64  Xen Hypervisor on AMD64
ii xen-utils-4.9              4.9.2-0ubuntu1           amd64  XEN administrative tools
ii xen-utils-common           4.9.2-0ubuntu1           all    Xen administrative tools - common files
ii xenstore-utils             4.9.2-0ubuntu1           amd64  Xenstore command line utilities for Xen

Logs & Configs
==============
2020-08-18 12:23:30.739 12029 ERROR nova.compute.manager [req-81171101-de82-430a-a8e9-32d295706cae - - - - -] Error updating resources for node xen-cmp01.: libvirt.libvirtError: this function is not supported by the connection driver: virNodeGetCPUMap
2020-08-18 12:23:30.739 12029 ERROR nova.compute.manager Traceback (most recent call last):
2020-08-18 12:23:30.739 12029 ERROR nova.compute.manager   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 9685, in _update_available_resource_for_node
2020-08-18 12:23:30.739 12029 ERROR nova.compute.manager     startup=startup)
2020-08-18 12:23:30.739 12029 ERROR nova.compute.manager   File "/usr/lib/python3/dist-packag
[Yahoo-eng-team] [Bug 1890219] [NEW] nova-compute can not boot cause of old resource provider
Public bug reported:

Description
===========
The nova-compute service registers a resource provider in placement
when it starts. But if an old provider exists with the same name, the
nova-compute service cannot start successfully.

Steps to reproduce
==================
* Start nova-compute with hostname 'host1'
* Create one instance placed on the compute node
* Change the hostname to 'host2' and start the nova-compute service
* Roll back the hostname to 'host1' and start the nova-compute service

Expected result
===============
The nova-compute service starts successfully

Actual result
=============
Got the error 'Failed to create resource provider'

Environment
===========
1. nova: stable/rocky
$ git log
commit e3093d42f46af810f316421a9b59eafe94039807 (HEAD -> stable/rocky, origin/stable/rocky)
Author: Luigi Toscano
Date:   Fri Jul 10 13:26:48 2020 +0200

    zuul: remove legacy-tempest-dsvm-neutron-dvr-multinode-full

    The job was part of the neutron experimental queue but then removed
    during the ussuri lifecycle.

    See https://review.opendev.org/#/c/693630/

    Conflicts:
        .zuul.yaml

    The content of .zuul.yaml changed slightly.

    Change-Id: I04717b95dd44ae89f24bd74525d1c9607e3bc0fc
    (cherry picked from commit bce4a3ab97320bdc2a6a43e2a961a0aa0b8ffb63)
    (cherry picked from commit cf399a363ca530151895c4b7cf49ad7b2a79e01b)
    (cherry picked from commit b1ead1fb2adf25493e5cab472d529fde31f985f0)
    (cherry picked from commit 7b005f37853a56e3ec6da455008fa5ef0d03c21b)

2. Which hypervisor did you use?
libvirt + KVM

Logs & Configs
==============
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager [req-52534aeb-4dd3-4f83-83f1-e6e47e1aa13e - - - - -] Error updating resources for node compute01.: ResourceProviderCreationFailed: Failed to create resource provider compute01
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager Traceback (most recent call last):
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 8157, in _update_available_resource_for_node
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 724, in update_available_resource
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     return f(*args, **kwargs)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 801, in _update_available_resource
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     self._update(context, cn)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 49, in wrapped_f
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 206, in call
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 247, in get
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 200, in call
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 963, in _update
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     self._update_to_placement(context, compute_node)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 915, in _update_to_placement
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 35, in __run_method
2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2020-08-03 08:38:01.296 21734 ERROR no
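The failure mode above can be modeled in a few lines; this is a toy stand-in for placement's behavior (not its real API), showing why the restart after the hostname round-trip fails: resource provider names are unique, so creating a provider for the node's new uuid conflicts with the stale provider left behind under the old registration:

```python
# Toy model of placement's uniqueness constraint on provider names.
class ResourceProviderCreationFailed(Exception):
    pass

def create_resource_provider(providers, uuid, name):
    """providers: dict of uuid -> name, a stand-in for the placement DB."""
    if uuid in providers:
        return  # already registered with this uuid; nothing to do
    if name in providers.values():
        # The real API returns 409 Conflict for a duplicate provider name.
        raise ResourceProviderCreationFailed(
            "Failed to create resource provider %s" % name)
    providers[uuid] = name
```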
[Yahoo-eng-team] [Bug 1887588] [NEW] Should add user's domain when using cinder as store backend
Public bug reported: When using cinder as the store backend, there are several configuration options for the user that calls cinder's API: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_name cinder_os_region_name cinder_catalog_info = volumev3:cinderv3:internalURL In a multi-domain deployment, this user may not belong to the 'Default' domain, so authentication fails. ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1887588 Title: Should add user's domain when using cinder as store backend Status in Glance: New Bug description: When using cinder as the store backend, there are several configuration options for the user that calls cinder's API: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_name cinder_os_region_name cinder_catalog_info = volumev3:cinderv3:internalURL In a multi-domain deployment, this user may not belong to the 'Default' domain, so authentication fails. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1887588/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
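A sketch of what a fix could look like in glance-api.conf; the two domain option names below are assumptions modeled on keystone v3 conventions, not confirmed glance_store settings:

```ini
[glance_store]
stores = cinder
cinder_store_auth_address = http://controller:5000/v3
cinder_store_user_name = glance
cinder_store_password = secret
cinder_store_project_name = service
# Hypothetical additions for multi-domain deployments; without something
# like them the store can only authenticate users in the 'Default' domain.
cinder_store_user_domain_name = mydomain
cinder_store_project_domain_name = mydomain
```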
[Yahoo-eng-team] [Bug 1881557] [NEW] Cannot resize to same host for libvirt driver
Public bug reported: Description === As before, an instance could be resized or cold-migrated to the same host for the libvirt driver if CONF.allow_resize_to_same_host was set to true. With the latest source, this fails with "UnableToMigrateToSelf: Unable to migrate instance". Steps to reproduce == * Configure CONF.allow_resize_to_same_host to true * Create one instance * Cold-migrate the instance Expected result === The instance can be cold-migrated on the same host Actual result = Got error "UnableToMigrateToSelf: Unable to migrate instance" Environment === $ git log commit f571151e79dbd87a76ae3222a9f5b507d85648b1 Merge: 3233392 236f1b2 Author: Zuul Date: Sat May 30 06:55:18 2020 + Merge "zuul: Make devstack-plugin-ceph-tempest-py3 a voting check job again" libvirt + KVM Logs & Configs == [DEFAULT] allow_resize_to_same_host = true nova-compute 2020-06-01 06:53:24.367 28545 ERROR nova.compute.manager [instance: 982f9273-eb50-443a-8bbc-fa728ceac8e4] UnableToMigrateToSelf: Unable to migrate instance (982f9273-eb50-443a-8bbc-fa728ceac8e4) to current host (compute04). ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1881557 Title: Cannot resize to same host for libvirt driver Status in OpenStack Compute (nova): New Bug description: Description === As before, an instance could be resized or cold-migrated to the same host for the libvirt driver if CONF.allow_resize_to_same_host was set to true. With the latest source, this fails with "UnableToMigrateToSelf: Unable to migrate instance".
Steps to reproduce == * Configure CONF.allow_resize_to_same_host to true * Create one instance * Cold-migrate the instance Expected result === The instance can be cold-migrated on same host Actual result = Got error "UnableToMigrateToSelf: Unable to migrate instance" Environment === $ git log commit f571151e79dbd87a76ae3222a9f5b507d85648b1 Merge: 3233392 236f1b2 Author: Zuul Date: Sat May 30 06:55:18 2020 + Merge "zuul: Make devstack-plugin-ceph-tempest-py3 a voting check job again" libvirt + KVM Logs & Configs == [DEFAULT] allow_resize_to_same_host = true nova-compute 2020-06-01 06:53:24.367 28545 ERROR nova.compute.manager [instance: 982f9273-eb50-443a-8bbc-fa728ceac8e4] UnableToMigrateToSelf: Unable to migrate instance (982f9273-eb50-443a-8bbc-fa728ceac8e4) to current host (compute04). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1881557/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
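For context, the rejection in the log is nova's same-host guard; a simplified sketch of that check (names approximate, not the actual nova code):

```python
class UnableToMigrateToSelf(Exception):
    pass

def check_destination(instance_host, dest_host, allow_resize_to_same_host):
    """Sketch of the guard applied when choosing a cold-migration target:
    the instance's current host is only allowed when the operator opts in
    via allow_resize_to_same_host."""
    if dest_host == instance_host and not allow_resize_to_same_host:
        raise UnableToMigrateToSelf(
            "Unable to migrate instance to current host (%s)" % dest_host)
    return dest_host

check_destination("compute04", "compute05", False)  # different host: allowed
check_destination("compute04", "compute04", True)   # same host, opted in: allowed
```

The bug report says this guard now fires even with allow_resize_to_same_host = true, i.e. the opt-in flag is no longer consulted on that code path.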
[Yahoo-eng-team] [Bug 1870558] [NEW] Server's host not changed but actually on dest node after live-migrating
Public bug reported: Description === The instance had been migrating for more than two hours, then got the error 'Unauthorized'. The host in the output of the CLI `openstack server show` was still the old one, but the instance was already running on the dest node. Steps to reproduce == 1. Create one instance with large mem 2. Run some application which consumes mem, like `memtester` 3. Execute live-migrate Expected result === Rollback instance to old one, or update instance's host to dest node Actual result = Instance on dest node but the host is src node in DB Environment === $ git log -1 commit ee6af34437069a23284f4521330057a95f86f9b7 (HEAD -> stable/rocky, origin/stable/rocky) Author: Luigi Toscano Date: Wed Dec 18 00:28:15 2019 +0100 Zuul v3: use devstack-plugin-nfs-tempest-full ... and replace its legacy ancestor. Change-Id: Ifd4387a02b3103e1258e146e63c73be1ad10030c (cherry picked from commit e7e39b8c2e20f5d7b5e70020f0e42541dc772e68) (cherry picked from commit e82e1704caa1c2baea29f05e8d426337e8de7a3c) (cherry picked from commit 99aa8ebc12949f9bba76f22e877b07d02791bf5b) Logs & Configs == 2020-04-02 21:08:32,890.890 6358 INFO nova.virt.libvirt.driver [req-b8d694f5-f60a-4866-bcd2-c107b2caa809 bdb83637364c4db4ba1a01f6ea879ff1 496db91424254a85a4130a26801447c9 - default default] [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration running for 30 secs, memory 80% remaining; (bytes processed=3503551373, remaining=27653689344, total=34364792832) 2020-04-02 23:08:05,165.165 6358 INFO nova.virt.libvirt.driver [req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 496db91424254a85a4130a26801447c9 - default default] [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration operation has completed 2020-04-02 23:08:05,166.166 6358 INFO nova.compute.manager [req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 496db91424254a85a4130a26801447c9 - default default] [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] _post_live_migration() is
started.. 2020-04-02 23:08:05,535.535 6358 WARNING nova.virt.libvirt.driver [req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 496db91424254a85a4130a26801447c9 - default default] [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] Error monitoring migration: The request you have made requires authentication. (HTTP 401): Unauthorized: The request you have made requires authentication. (HTTP 401) 2020-04-02 23:08:05,537.537 6358 ERROR nova.compute.manager [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] Unauthorized: The request you have made requires authentication. (HTTP 401) ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1870558 Title: Server's host not changed but actually on dest node after live-migrating Status in OpenStack Compute (nova): New Bug description: Description === The instance had been migrating for more than two hours, then got the error 'Unauthorized'. The host in the output of the CLI `openstack server show` was still the old one, but the instance was already running on the dest node. Steps to reproduce == 1. Create one instance with large mem 2. Run some application which consumes mem, like `memtester` 3. Execute live-migrate Expected result === Rollback instance to old one, or update instance's host to dest node Actual result = Instance on dest node but the host is src node in DB Environment === $ git log -1 commit ee6af34437069a23284f4521330057a95f86f9b7 (HEAD -> stable/rocky, origin/stable/rocky) Author: Luigi Toscano Date: Wed Dec 18 00:28:15 2019 +0100 Zuul v3: use devstack-plugin-nfs-tempest-full ... and replace its legacy ancestor.
Change-Id: Ifd4387a02b3103e1258e146e63c73be1ad10030c (cherry picked from commit e7e39b8c2e20f5d7b5e70020f0e42541dc772e68) (cherry picked from commit e82e1704caa1c2baea29f05e8d426337e8de7a3c) (cherry picked from commit 99aa8ebc12949f9bba76f22e877b07d02791bf5b) Logs & Configs == 2020-04-02 21:08:32,890.890 6358 INFO nova.virt.libvirt.driver [req-b8d694f5-f60a-4866-bcd2-c107b2caa809 bdb83637364c4db4ba1a01f6ea879ff1 496db91424254a85a4130a26801447c9 - default default] [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration running for 30 secs, memory 80% remaining; (bytes processed=3503551373, remaining=27653689344, total=34364792832) 2020-04-02 23:08:05,165.165 6358 INFO nova.virt.libvirt.driver [req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 496db91424254a85a4130a26801447c9 - default default] [instance: 8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration opera
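The timestamps in the log above suggest why the 401 appears: the migration monitor ran for roughly two hours, far longer than keystone's default token lifetime of 3600 seconds (the TTL below assumes that default; a deployment may configure a different [token]/expiration value):

```python
from datetime import datetime, timedelta

# Timestamps taken from the log: monitoring was running at ~21:08:32 and
# the HTTP 401 surfaced at ~23:08:05, so the request's user token had long
# expired if keystone used its default 3600-second expiration.
monitor_running = datetime(2020, 4, 2, 21, 8, 32)
failure = datetime(2020, 4, 2, 23, 8, 5)
token_lifetime = timedelta(seconds=3600)  # assumed keystone default

elapsed = failure - monitor_running
assert elapsed > token_lifetime  # the migration outlived the token
```

Since _post_live_migration() then fails on the expired token, the instance's host field is never flipped to the destination, matching the observed DB state.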
[Yahoo-eng-team] [Bug 1812335] Re: Cannot connect to neutron when clicking compute - instances in horizon
** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1812335 Title: Cannot connect to neutron when clicking compute - instances in horizon Status in OpenStack Dashboard (Horizon): Invalid Bug description: When I click Project - Compute - Instances, I get this error. Error: Unable to connect to Neutron. LOG: Unable to connect to Neutron: 'frozenset' object has no attribute '__getitem__' Version: # apt list --installed | grep horizon WARNING: apt does not have a stable CLI interface. Use with caution in scripts. python-django-horizon/2018.4.0,now 3:12.0.2-1~u16.04 all [installed,automatic] To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1812335/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1861552] Re: Failed to unset space character tag
** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1861552 Title: Failed to unset space character tag Status in Glance: Invalid Bug description: Reproduce: 1. Set tag openstack image set 1efee0bd-d53e-4e57-b069-30a554e5c523 --tag " " 2. Unset tag openstack image unset --tag " " 1efee0bd-d53e-4e57-b069-30a554e5c523 The output of the `unset` command: $ openstack image unset --tag " " 1efee0bd-d53e-4e57-b069-30a554e5c523 tag unset failed, ' ' is a nonexistent tag Failed to unset 1 of 1 tags. Version: commit ad39c12c64c8ff017918a8790d69d5278ac379da (HEAD -> stable/rocky) Merge: 8d9ff5f5 f992a0b2 Author: Zuul Date: Fri Sep 20 18:25:41 2019 + Merge "Fix manpage building and remove glance-cache-manage" into stable/rocky To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1861552/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
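One plausible mechanism, offered here as an assumption rather than something traced through the glance code: if any layer (client or API) normalizes tag values with a whitespace strip, a tag consisting only of a space can never be matched again for deletion:

```python
# Hypothetical illustration of the failure mode: the tag was stored with
# its whitespace intact, but a later lookup normalizes the value first.
stored_tags = {" "}            # tag as originally set on the image
requested = " ".strip()        # tag after a hypothetical normalization step

assert requested == ""                # the space collapses to an empty string
assert requested not in stored_tags   # -> "' ' is a nonexistent tag"
```

Whichever layer is responsible, set and unset must apply the same normalization (or none) for whitespace-only tags to round-trip.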
[Yahoo-eng-team] [Bug 1866269] [NEW] Testcase 'test_encrypted_cinder_volumes_luks' is broken
Public bug reported: CI job:https://zuul.opendev.org/t/openstack/job/nova-next == Failed 1 tests - output below: == tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks[compute,id-79165fb4-5534-4b9d-8429-97ccffb8f86e,image,slow,volume] --- Captured traceback: ~~~ Traceback (most recent call last): File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper return f(*func_args, **func_kwargs) File "/opt/stack/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in test_encrypted_cinder_volumes_luks self.attach_detach_volume(server, volume) File "/opt/stack/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 53, in attach_detach_volume attached_volume = self.nova_volume_attach(server, volume) File "/opt/stack/tempest/tempest/scenario/manager.py", line 640, in nova_volume_attach volume['id'], 'in-use') File "/opt/stack/tempest/tempest/common/waiters.py", line 215, in wait_for_volume_resource_status raise lib_exc.TimeoutException(message) tempest.lib.exceptions.TimeoutException: Request timed out Details: volume 201ccef3-07a9-4b5e-b726-e31c922d068d failed to reach in-use status (current available) within the required time (196 s). ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1866269 Title: Testcase 'test_encrypted_cinder_volumes_luks' is broken Status in OpenStack Compute (nova): New Bug description: CI job:https://zuul.opendev.org/t/openstack/job/nova-next == Failed 1 tests - output below: == tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks[compute,id-79165fb4-5534-4b9d-8429-97ccffb8f86e,image,slow,volume] --- Captured traceback: ~~~ Traceback (most recent call last): File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper return f(*func_args, **func_kwargs) File "/opt/stack/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in test_encrypted_cinder_volumes_luks self.attach_detach_volume(server, volume) File "/opt/stack/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 53, in attach_detach_volume attached_volume = self.nova_volume_attach(server, volume) File "/opt/stack/tempest/tempest/scenario/manager.py", line 640, in nova_volume_attach volume['id'], 'in-use') File "/opt/stack/tempest/tempest/common/waiters.py", line 215, in wait_for_volume_resource_status raise lib_exc.TimeoutException(message) tempest.lib.exceptions.TimeoutException: Request timed out Details: volume 201ccef3-07a9-4b5e-b726-e31c922d068d failed to reach in-use status (current available) within the required time (196 s). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1866269/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1861749] [NEW] Instance stays in rebuilding status when using an 'arm64' architecture image
-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/compute/api.py", line 3306, in rebuild 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi request_spec.image = objects.ImageMeta.from_dict(image) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/objects/image_meta.py", line 98, in from_dict 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi image_meta.get("properties", {})) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/objects/image_meta.py", line 591, in from_dict 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi obj._set_attr_from_current_names(image_props) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/objects/image_meta.py", line 563, in _set_attr_from_current_names 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi setattr(self, key, image_props[key]) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 72, in setter 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi field_value = field.coerce(self, name, value) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line 201, in coerce 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return self._type.coerce(obj, attr, value) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/objects/fields.py", line 209, in coerce 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi raise ValueError(msg) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi ValueError: Architecture name 'arm64' is not valid ** Affects: nova Importance: Undecided Assignee: 
Eric Xie (eric-xie) Status: New ** Changed in: nova Assignee: (unassigned) => Eric Xie (eric-xie) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1861749 Title: Instance stays in rebuilding status when using an 'arm64' architecture image Status in OpenStack Compute (nova): New Bug description: Description === Got 'Unexpected API Error' when I used a new image with the property 'hw_architecture=arm64' to rebuild an instance, and the instance status stays 'REBUILD'. Steps to reproduce == 1. On x86 env, boot instance 'test' 2. Set new image with 'hw_architecture=arm64' openstack image set --property hw_architecture=arm64 cirros-test 3. Use the image to rebuild instance openstack server rebuild --image cirros-test test Expected result === Got BadRequest; the instance status rolls back to active Actual result = Got 'Unexpected API Error'; the instance status stays REBUILD Environment === $ git log commit ee6af34437069a23284f4521330057a95f86f9b7 (HEAD -> stable/rocky, origin/stable/rocky) Author: Luigi Toscano Date: Wed Dec 18 00:28:15 2019 +0100 Zuul v3: use devstack-plugin-nfs-tempest-full ... and replace its legacy ancestor. Change-Id: Ifd4387a02b3103e1258e146e63c73be1ad10030c (cherry picked from commit e7e39b8c2e20f5d7b5e70020f0e42541dc772e68) (cherry picked from commit e82e1704caa1c2baea29f05e8d426337e8de7a3c) (cherry picked from commit 99aa8ebc12949f9bba76f22e877b07d02791bf5b) Logs & Configs == # openstack server rebuild --image cirros-test sjt-test-1 Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
(HTTP 500) (Request-ID: req-383bbffc-5a85-40e7-86ff-ac7c8d563dfa) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi [req-383bbffc-5a85-40e7-86ff-ac7c8d563dfa 40e7b8c3d59943e08a52acd24fe30652 d13f1690c08d41ac854d720ea510a710 - default default] Unexpected exception in API method: ValueError: Architecture name 'arm64' is not valid 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi Traceback (most recent call last): 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 801, in wrapped 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return f(*args, **kwargs) 2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/v
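Nova validates hw_architecture against the fixed set of names in nova.objects.fields.Architecture, where the canonical 64-bit ARM value is 'aarch64'; 'arm64' is simply rejected, as the traceback shows. A hedged sketch of a pre-validation alias step (the alias table and the subset of valid names below are illustrative, not nova's actual code):

```python
# Illustrative subset of nova's valid architecture names.
VALID_ARCHITECTURES = {"aarch64", "x86_64", "i686", "ppc64le", "s390x"}

# Hypothetical alias map; nova itself rejects unknown names outright.
ALIASES = {"arm64": "aarch64", "amd64": "x86_64", "x64": "x86_64"}

def canonicalize(name):
    """Map a common alias to its canonical name, or raise like nova does."""
    name = ALIASES.get(name.lower(), name.lower())
    if name not in VALID_ARCHITECTURES:
        raise ValueError("Architecture name '%s' is not valid" % name)
    return name

canonicalize("arm64")  # -> 'aarch64'
```

Setting the image property to hw_architecture=aarch64 instead of arm64 should avoid the 500; the separate problem is that the API lets the ValueError escape as an unexpected error and leaves the instance stuck in REBUILD instead of returning a 400.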
[Yahoo-eng-team] [Bug 1861552] [NEW] Failed to unset space character tag
Public bug reported: Reproduce: 1. Set tag openstack image set 1efee0bd-d53e-4e57-b069-30a554e5c523 --tag " " 2. Unset tag openstack image unset --tag " " 1efee0bd-d53e-4e57-b069-30a554e5c523 The output of the `unset` command: $ openstack image unset --tag " " 1efee0bd-d53e-4e57-b069-30a554e5c523 tag unset failed, ' ' is a nonexistent tag Failed to unset 1 of 1 tags. Version: commit ad39c12c64c8ff017918a8790d69d5278ac379da (HEAD -> stable/rocky) Merge: 8d9ff5f5 f992a0b2 Author: Zuul Date: Fri Sep 20 18:25:41 2019 + Merge "Fix manpage building and remove glance-cache-manage" into stable/rocky ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1861552 Title: Failed to unset space character tag Status in Glance: New Bug description: Reproduce: 1. Set tag openstack image set 1efee0bd-d53e-4e57-b069-30a554e5c523 --tag " " 2. Unset tag openstack image unset --tag " " 1efee0bd-d53e-4e57-b069-30a554e5c523 The output of the `unset` command: $ openstack image unset --tag " " 1efee0bd-d53e-4e57-b069-30a554e5c523 tag unset failed, ' ' is a nonexistent tag Failed to unset 1 of 1 tags. Version: commit ad39c12c64c8ff017918a8790d69d5278ac379da (HEAD -> stable/rocky) Merge: 8d9ff5f5 f992a0b2 Author: Zuul Date: Fri Sep 20 18:25:41 2019 + Merge "Fix manpage building and remove glance-cache-manage" into stable/rocky To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1861552/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1861264] [NEW] Help information should be returned when executing 'placement-status' alone
Public bug reported: Running `placement-manage` alone prints help information, but running `placement-status` alone does not. # placement-status usage: placement-status [-h] [--config-dir DIR] [--config-file PATH] [--debug] [--log-config-append PATH] [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR] [--log-file PATH] [--nodebug] [--nouse-journal] [--nouse-json] [--nouse-syslog] [--nowatch-log-file] [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal] [--use-json] [--use-syslog] [--watch-log-file] {upgrade} ... placement-status: error: the following arguments are required: command I think it should print help information too. ** Affects: keystone Importance: Undecided Status: Invalid ** Changed in: keystone Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1861264 Title: Help information should be returned when executing 'placement-status' alone Status in OpenStack Identity (keystone): Invalid Bug description: Running `placement-manage` alone prints help information, but running `placement-status` alone does not. # placement-status usage: placement-status [-h] [--config-dir DIR] [--config-file PATH] [--debug] [--log-config-append PATH] [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR] [--log-file PATH] [--nodebug] [--nouse-journal] [--nouse-json] [--nouse-syslog] [--nowatch-log-file] [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal] [--use-json] [--use-syslog] [--watch-log-file] {upgrade} ... placement-status: error: the following arguments are required: command I think it should print help information too. 
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1861264/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
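The error message above is plain argparse behavior for a required subcommand. A small sketch (not the actual placement code) of a CLI that falls back to printing help when no subcommand is given, which is the behavior the report asks for:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="placement-status")
    # Subparsers are optional by default in Python 3; a tool that passes
    # required=True gets the "arguments are required: command" error instead.
    sub = parser.add_subparsers(dest="command")
    sub.add_parser("upgrade")
    return parser

def chosen_command(argv):
    parser = build_parser()
    args = parser.parse_args(argv)
    if args.command is None:
        parser.print_help()  # fall back to help instead of a usage error
    return args.command

chosen_command([])           # prints the help text, returns None
chosen_command(["upgrade"])  # returns "upgrade"
```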
[Yahoo-eng-team] [Bug 1858410] [NEW] Got error 'NoneType' when executing unittest on stable/rocky
Public bug reported: On the 'master' branch, all unit tests pass, but on the 'stable/rocky' branch one test case fails. ## log == Failed 1 tests - output below: == keystone.tests.unit.test_hacking_checks.TestBlockCommentsBeginWithASpace.test - Captured traceback: ~~~ b'Traceback (most recent call last):' b' File "/home/src/keystone-master/keystone/tests/unit/test_hacking_checks.py", line 80, in test' b'self.assert_has_errors(code, expected_errors=errors)' b' File "/home/src/keystone-master/keystone/tests/unit/test_hacking_checks.py", line 57, in assert_has_errors' b'actual_errors = [e[:3] for e in self.run_check(code)]' b' File "/home/src/keystone-master/keystone/tests/unit/test_hacking_checks.py", line 52, in run_check' b'checker.check_all()' b' File "/home/src/keystone-master/.tox/py3/lib/python3.6/site-packages/pep8.py", line 1438, in check_all' b'self.check_logical()' b' File "/home/src/keystone-master/.tox/py3/lib/python3.6/site-packages/pep8.py", line 1328, in check_logical' b' (start_row, start_col) = mapping[0][1]' b"TypeError: 'NoneType' object is not subscriptable" b'' ** Affects: keystone Importance: Undecided Assignee: Eric Xie (eric-xie) Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1858410 Title: Got error 'NoneType' when executing unittest on stable/rocky Status in OpenStack Identity (keystone): New Bug description: On the 'master' branch, all unit tests pass, but on the 'stable/rocky' branch one test case fails. 
## log == Failed 1 tests - output below: == keystone.tests.unit.test_hacking_checks.TestBlockCommentsBeginWithASpace.test - Captured traceback: ~~~ b'Traceback (most recent call last):' b' File "/home/src/keystone-master/keystone/tests/unit/test_hacking_checks.py", line 80, in test' b'self.assert_has_errors(code, expected_errors=errors)' b' File "/home/src/keystone-master/keystone/tests/unit/test_hacking_checks.py", line 57, in assert_has_errors' b'actual_errors = [e[:3] for e in self.run_check(code)]' b' File "/home/src/keystone-master/keystone/tests/unit/test_hacking_checks.py", line 52, in run_check' b'checker.check_all()' b' File "/home/src/keystone-master/.tox/py3/lib/python3.6/site-packages/pep8.py", line 1438, in check_all' b'self.check_logical()' b' File "/home/src/keystone-master/.tox/py3/lib/python3.6/site-packages/pep8.py", line 1328, in check_logical' b'(start_row, start_col) = mapping[0][1]' b"TypeError: 'NoneType' object is not subscriptable" b'' To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1858410/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1855883] [NEW] Cannot migrate server on aarch64
Public bug reported: Description === We set up an OpenStack env on aarch64 KylinOS. Live migration of an instance failed because of 'This operating system kernel does not support vITS migration'. Steps to reproduce == 1. Setup OpenStack on aarch64 servers with openstack-helm 2. Live migrate instance from compute02 to compute03 Expected result === Success; instance located on compute03 Actual result = Failed; instance remains on compute02 Environment === 1. Exact version of OpenStack you are running. See the following stable/rocky # apt list --installed |egrep "libvirt|qemu" ipxe-qemu/now 1.0.0+git-20180124.fbe8c52d-0ubuntu2.2~cloud0 all [installed,local] ipxe-qemu-256k-compat-efi-roms/now 1.0.0+git-20150424.a25a16d-0ubuntu2~cloud0 all [installed,local] libvirt-bin/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt-clients/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt-daemon/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt-daemon-system/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt0/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] qemu/now 1:2.11+dfsg-1ubuntu7.15~cloud1 arm64 [installed,local] # uname -a Linux compute03 4.4.131-20190726.kylin.server-generic #kylin SMP Tue Jul 30 16:44:09 CST 2019 aarch64 aarch64 aarch64 GNU/Linux 2. Which hypervisor did you use? 
libvirt+kvm Logs & Configs == nova-compute: File "/var/lib/openstack/local/lib/python2.7/site-packages/libvirt.py", line 1745, in migrateToURI3 if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self) libvirtError: internal error: unable to execute QEMU command 'migrate': This operating system kernel does not support vITS migration libvirt: 2019-12-07 05:34:34.820+: 57546: error : qemuMonitorJSONCheckError:392 : internal error: unable to execute QEMU command 'migrate': This operating system kernel does not support vITS migration 2019-12-07 05:34:35.226+: 57546: error : virNetClientProgramDispatchError:177 : internal error: qemu unexpectedly closed the monitor: 2019-12-07T05:34:29.355638Z qemu-system-aarch64: Not a migration stream 2019-12-07T05:34:29.355781Z qemu-system-aarch64: load of migration failed: Invalid argument ** Affects: nova Importance: Undecided Assignee: Eric Xie (eric-xie) Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1855883 Title: Cannot migrate server on aarch64 Status in OpenStack Compute (nova): New Bug description: Description === We set up an OpenStack env on aarch64 KylinOS. Live migration of an instance failed because of 'This operating system kernel does not support vITS migration'. Steps to reproduce == 1. Setup OpenStack on aarch64 servers with openstack-helm 2. Live migrate instance from compute02 to compute03 Expected result === Success; instance located on compute03 Actual result = Failed; instance remains on compute02 Environment === 1. Exact version of OpenStack you are running. 
See the following stable/rocky # apt list --installed |egrep "libvirt|qemu" ipxe-qemu/now 1.0.0+git-20180124.fbe8c52d-0ubuntu2.2~cloud0 all [installed,local] ipxe-qemu-256k-compat-efi-roms/now 1.0.0+git-20150424.a25a16d-0ubuntu2~cloud0 all [installed,local] libvirt-bin/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt-clients/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt-daemon/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt-daemon-system/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] libvirt0/now 4.0.0-1ubuntu8.11~cloud0 arm64 [installed,local] qemu/now 1:2.11+dfsg-1ubuntu7.15~cloud1 arm64 [installed,local] # uname -a Linux compute03 4.4.131-20190726.kylin.server-generic #kylin SMP Tue Jul 30 16:44:09 CST 2019 aarch64 aarch64 aarch64 GNU/Linux 2. Which hypervisor did you use? libvirt+kvm Logs & Configs == nova-compute: File "/var/lib/openstack/local/lib/python2.7/site-packages/libvirt.py", line 1745, in migrateToURI3 if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self) libvirtError: internal error: unable to execute QEMU command 'migrate': This operating system kernel does not support vITS migration libvirt: 2019-12-07 05:34:34.820+: 57546: error : qemuMonitorJSONCheckError:392 : internal error: unable to execute QEMU command 'migrate': This operating system kernel does not support vITS migration 2019-12-07 05:34:35.226+: 57546: error : virNetClientProgramDispatchError:177 : internal error: qemu unexpectedly closed the monitor: 2019-12-07T05:34:29.
[Yahoo-eng-team] [Bug 1854628] [NEW] Got 'Duplicate entry' error when archive_deleted_rows
Public bug reported: Description === When archiving deleted rows of the nova database, the first run archives successfully, but running it again fails with a duplicate-entry error. Steps to reproduce == * Execute `nova-manage db archive_deleted_rows --all-cells` * Create some instances, stop, reboot, and delete them * Execute `nova-manage db archive_deleted_rows --all-cells` Expected result === Archive successfully Actual result = Failed Environment === nova: # git log commit ab6834145f3fe1d33ce7f292727a6bc2be50efd9 (HEAD -> stable/train, origin/stable/train) Merge: 66585e8af1 821506a50c Author: Zuul Date: Mon Oct 21 18:50:20 2019 + Merge "Fix exception translation when creating volume" into stable/train log: oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry '1' for key 'PRIMARY'") [SQL: INSERT INTO shadow_instance_actions_events (created_at, updated_at, deleted_at, deleted, id, event, action_id, start_time, finish_time, result, traceback, host, details) SELECT instance_actions_events.created_at, instance_actions_events.updated_at, instance_actions_events.deleted_at, instance_actions_events.deleted, instance_actions_events.id, instance_actions_events.event, instance_actions_events.action_id, instance_actions_events.start_time, instance_actions_events.finish_time, instance_actions_events.result, instance_actions_events.traceback, instance_actions_events.host, instance_actions_events.details FROM instance_actions_events, instances, instance_actions WHERE instances.deleted != %(deleted_1)s AND instances.uuid = instance_actions.instance_uuid AND instance_actions.id = instance_actions_events.action_id ORDER BY instance_actions_events.id LIMIT %(param_1)s] [parameters: {'deleted_1': 0, 'param_1': 1000}] (Background on this error at: http://sqlalche.me/e/gkpj) ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1854628
Title: Got 'Duplicate entry' error when archive_deleted_rows
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1854628/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
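The failure mode in the log above can be reproduced in miniature: the archive step copies rows into the shadow table with an `INSERT ... SELECT`, and if the same source rows are selected again on a later run (here because they were never purged from the source table), the shadow table's primary key rejects the second copy. The sketch below uses sqlite3 and toy tables; it illustrates only the constraint violation, not nova's actual archival code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instance_actions_events (id INTEGER PRIMARY KEY, event TEXT)")
conn.execute("CREATE TABLE shadow_instance_actions_events (id INTEGER PRIMARY KEY, event TEXT)")
conn.execute("INSERT INTO instance_actions_events VALUES (1, 'delete')")

# The archive step: copy matching rows into the shadow table.
archive = ("INSERT INTO shadow_instance_actions_events "
           "SELECT id, event FROM instance_actions_events")

conn.execute(archive)        # first run succeeds: row 1 lands in the shadow table
err = None
try:
    conn.execute(archive)    # second run: row 1 was never removed from the source
except sqlite3.IntegrityError as exc:
    err = exc                # UNIQUE constraint failure -> nova's DBDuplicateEntry
print(err)
```

The fix direction implied is to delete (or skip) already-archived rows so the second `INSERT ... SELECT` never re-selects them.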
[Yahoo-eng-team] [Bug 1853926] [NEW] Failed to build docs cause of InvocationError
Public bug reported:

Description
===========
When I run `tox -e docs`, I get:

ERROR: InvocationError for command /home/src/bug_1853745/.tox/docs/bin/sphinx-build -W --keep-going -b html -d doc/build/doctrees doc/source doc/build/html (exited with code 1)
summary
ERROR: docs: commands failed

Environment
===========
git log
commit 3ead7d00a58c445fee8403ef3df41eec586b250d (origin/master, origin/HEAD, gerrit/master)
Merge: 12e0c04dc0 83baeaa9f2
Author: Zuul
Date: Sun Nov 24 00:31:49 2019 +
    Merge "Remove nova-manage network, floating commands"

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1853926
Title: Failed to build docs cause of InvocationError
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1853926/+subscriptions
[Yahoo-eng-team] [Bug 1853745] [NEW] Doc 'isolate-aggregates' has some incorrect examples
Public bug reported:

At https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html:

* The command `host_id=$(openstack resource provider show )` cannot retrieve the uuid.
* The command `traits=$(openstack --os-placement-api-version 1.6 resource provider trait list -f value $server_id | sed 's/^/--trait /')` uses 'server_id', not 'host_id'.

# git log
commit ab6834145f3fe1d33ce7f292727a6bc2be50efd9 (HEAD -> stable/train, origin/stable/train)
Merge: 66585e8af1 821506a50c
Author: Zuul
Date: Mon Oct 21 18:50:20 2019 +
    Merge "Fix exception translation when creating volume" into stable/train

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1853745
Title: Doc 'isolate-aggregates' has some incorrect examples
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1853745/+subscriptions
[Yahoo-eng-team] [Bug 1852853] [NEW] Got an error when executing `nova-manage db`
Public bug reported:

Description
===========
Running `nova-manage db` with no subcommand raises an error.

Steps to reproduce
==================
1. Execute `nova-manage db`

Expected result
===============
Usage help, as in Queens:

# nova-manage db
usage: nova-manage db [-h] {archive_deleted_rows,ironic_flavor_migration,null_instance_uuid_scan,online_data_migrations,sync,version} ...
nova-manage db: error: too few arguments

Actual result
=============
An error has occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 3179, in __getattr__
    return getattr(self._conf._namespace, name)
AttributeError: '_Namespace' object has no attribute 'action_fn'

Environment
===========
# git log
commit ab6834145f3fe1d33ce7f292727a6bc2be50efd9 (HEAD -> stable/train, origin/stable/train)
Merge: 66585e8af1 821506a50c
Author: Zuul
Date: Mon Oct 21 18:50:20 2019 +
    Merge "Fix exception translation when creating volume" into stable/train

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1852853
Title: Got an error when executing `nova-manage db`
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1852853/+subscriptions
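The traceback is consistent with argparse behaviour in Python 3, where sub-commands are optional by default: `nova-manage db` parses successfully without an action, so no `action_fn` default is ever set and the later attribute lookup fails. A minimal, hypothetical sketch (not nova-manage's actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog="nova-manage")
categories = parser.add_subparsers(dest="category")
db = categories.add_parser("db")
actions = db.add_subparsers(dest="action")
sync = actions.add_parser("sync")
sync.set_defaults(action_fn=lambda: "synced")   # set only when an action is parsed

args = parser.parse_args(["db"])                # no action supplied
print(hasattr(args, "action_fn"))               # False: a later lookup would fail

# Marking the action sub-parser as required restores the usage error
# instead of the AttributeError:
actions.required = True
```

With `actions.required = True`, parsing `["db"]` exits with a "the following arguments are required" usage error, matching the Queens behaviour the reporter expects.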
[Yahoo-eng-team] [Bug 1848400] [NEW] Can not change domain of role
Public bug reported:

openstack --debug role set --domain default 707f0cc1809944c89c063420ccc0436f
BadRequest: {} does not have enough properties
Failed validating 'minProperties' in schema:
    {'additionalProperties': True,
     'minProperties': 1,
     'properties': {'name': {'maxLength': 255,
                             'minLength': 1,
                             'pattern': '[\\S]+',
                             'type': 'string'}},
     'type': 'object'}
On instance:
    {}
(HTTP 400) (Request-ID: req-7cd7-e6d5-4cc0-abfc-6d2c18aed525)
END return value: 1

journalctl -f -u devstack@keystone.service
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: DEBUG keystone.common.authorization [None req-7cd7-e6d5-4cc0-abfc-6d2c18aed525 None admin] RBAC: Authorization granted {{(pid=1718198) check_policy /opt/stack/keystone/keystone/common/authorization.py:165}}
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: WARNING keystone.common.wsgi [None req-7cd7-e6d5-4cc0-abfc-6d2c18aed525 None admin] {} does not have enough properties
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: Failed validating 'minProperties' in schema:
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: {'additionalProperties': True,
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: 'minProperties': 1,
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: 'properties': {'name': {'maxLength': 255,
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: 'minLength': 1,
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: 'pattern': '[\\S]+',
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: 'type': 'string'}},
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: 'type': 'object'}
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: On instance:
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: {}: SchemaValidationError: {} does not have enough properties
Oct 15 08:39:50 openstack1 devstack@keystone.service[1718188]: [pid: 1718198|app: 0|req: 21145/169188] 10.110.56.114 () {64 vars in 1335 bytes} [Tue Oct 15 08:39:50 2019] PATCH /identity/v3/roles/707f0cc1809944c89c063420ccc0436f => generated 452 bytes in 18 msecs (HTTP/1.1 400) 5 headers in 186 bytes (1 switches on core 0)

Version:
# git log
commit 79ed42ee67915383242541329dd5aa186f087ff2
Author: Raildo Mascena
Date: Wed Jul 24 10:20:17 2019 -0300
    Fix python3 compatibility on LDAP search DN from id
    In Python 3, python-ldap no longer allows bytes for some fields (DNs, RDNs, attribute names, queries). Instead, text values are represented as str, the Unicode text type. [1]
    More details about byte/str usage in python-ldap can be found at: http://www.python-ldap.org/en/latest/bytes_mode.html#bytes-mode
    Change-Id: I63e3715032cd8edb11fbff7651f5ba1af506dc9d
    Related-Bug: #1798184
    (cherry picked from commit 03531a56910b12922afde32b40e270b7d68a334b)

** Affects: keystone
   Importance: Undecided
   Status: New
https://bugs.launchpad.net/bugs/1848400
Title: Can not change domain of role
Status in OpenStack Identity (keystone): New
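What the server log shows is a JSON-schema `minProperties` failure: `role set --domain` apparently sends a PATCH whose body contains no updatable role properties, so keystone validates `{}` against a schema requiring at least one property. A toy re-implementation of just that check (keystone uses jsonschema; this is only an illustration, not keystone's validation code):

```python
# Shape mirrors the schema shown in the log output above.
ROLE_UPDATE_SCHEMA = {
    "type": "object",
    "minProperties": 1,
    "additionalProperties": True,
    "properties": {"name": {"type": "string", "minLength": 1,
                            "maxLength": 255, "pattern": "[\\S]+"}},
}

def validate(body, schema):
    """Check only the minProperties constraint from the schema."""
    if len(body) < schema.get("minProperties", 0):
        raise ValueError("%r does not have enough properties" % (body,))
    return body

validate({"name": "reader"}, ROLE_UPDATE_SCHEMA)   # a real update passes
err = None
try:
    validate({}, ROLE_UPDATE_SCHEMA)               # the body the client sent
except ValueError as exc:
    err = exc
print(err)
```

The 400 is therefore correct server behaviour for an empty body; the open question in the bug is why the client builds an empty PATCH for `--domain` in the first place.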
[Yahoo-eng-team] [Bug 1844621] [NEW] Unittest TestBlockCommentsBeginWithASpace not pass
Public bug reported:

Failed 1 tests - output below:

keystone.tests.unit.test_hacking_checks.TestBlockCommentsBeginWithASpace.test

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_hacking_checks.py", line 80, in test
    self.assert_has_errors(code, expected_errors=errors)
  File "keystone/tests/unit/test_hacking_checks.py", line 57, in assert_has_errors
    actual_errors = [e[:3] for e in self.run_check(code)]
  File "keystone/tests/unit/test_hacking_checks.py", line 52, in run_check
    checker.check_all()
  File "/opt/jenkins_work/workspace/unittest-keystone/.tox/py27/local/lib/python2.7/site-packages/pep8.py", line 1438, in check_all
    self.check_logical()
  File "/opt/jenkins_work/workspace/unittest-keystone/.tox/py27/local/lib/python2.7/site-packages/pep8.py", line 1328, in check_logical
    (start_row, start_col) = mapping[0][1]
TypeError: 'NoneType' object has no attribute '__getitem__'

Version: Rocky 14.1.0

** Affects: keystone
   Importance: Undecided
   Status: New
https://bugs.launchpad.net/bugs/1844621
Title: Unittest TestBlockCommentsBeginWithASpace not pass
Status in OpenStack Identity (keystone): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1844621/+subscriptions
[Yahoo-eng-team] [Bug 1838385] [NEW] Volume reserved after instance deleted
Public bug reported:

Description
===========
I created an instance booted from a volume. After deleting the instance while it was still building, the volume's status remained 'reserved'.

Steps to reproduce
==================
* Create an instance: `openstack server create --volume vol-for-vm1-50G --flavor ecs_2C4G50G_general --network xtt-net-1 vm1`
* Delete the instance while its status is BUILD:
  | 3aa10ae1-2f3f-4c52-8f27-a557cf82de9e | vm1 | BUILD | None | NOSTATE | | | | ecs_2C4G50G_general | 5af94a8a-aab9-4ba5-bf52-ddb815218e61 | cn-north-3a | None | |
* Show the status of the volume

Expected result
===============
The volume's status is 'available'.

Actual result
=============
The volume's status is 'reserved'.

Environment
===========
1. Exact version of OpenStack you are running. See the following list for all releases: http://docs.openstack.org/releases/

# apt list --installed | grep nova
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
nova-api/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-common/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-conductor/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-consoleauth/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-consoleproxy/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-doc/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-placement-api/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-scheduler/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
python-nova/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
python-novaclient/xenial,xenial,now 2:9.1.1-1~u16.04+mcp6 all [installed,automatic]

# apt list --installed | grep cinder
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
cinder-backup/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed]
cinder-common/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed,automatic]
cinder-scheduler/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed]
cinder-volume/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed]
python-cinder/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed,automatic]
python-cinderclient/xenial,xenial,now 1:3.5.0-1.0~u16.04+mcp5 all [installed,automatic]

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1838385
Title: Volume reserved after instance deleted
Status in OpenStack Compute (nova): New
[Yahoo-eng-team] [Bug 1837681] [NEW] Failed to create vgpu cause of IOError
Public bug reported:

Description
===========
I used a 'Tesla V100' to create a vm with a vgpu and got an error.

Steps to reproduce
==================
* Create a flavor with resources:VGPU='1'
* Create a vm: `openstack server create --image 27dc8e63-6d28-4f80-a6f4-e5a855a02e46 --flavor 224e1385-7de4-4c0b-931d-a7431d329f78 --network net-1 ins-vgpu-t`

Expected result
===============
The vm is created successfully.

Actual result
=============
The vm goes to ERROR.

Environment
===========
1. Exact version of OpenStack you are running:
# apt list --installed | grep nova
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
nova-common/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-compute/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
nova-compute-kvm/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
python-nova/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
python-novaclient/xenial,xenial,now 2:9.1.1-1~u16.04 all [installed]

2. Which hypervisor did you use? Libvirt + KVM

Logs & Configs
==============
2019-07-22 08:12:18,500.500 21346 ERROR nova.virt.libvirt.driver [req-4053b3df-ae7d-4378-b3c4-1c26e8482e24 4c31323efa7e4abf824399b63a687ff8 187e1165ec2a40e9a72efab673e940d9 - default default] [instance: c9737cde-af6c-40b5-b719-2190428a0a03] Failed to start libvirt guest: libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-07-22T00:12:18.186786Z qemu-system-x86_64: -device vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/78c27f7b-e2ed-4fe8-afcf-84c6107620b9,bus=pci.0,addr=0x7: vfio error: 78c27f7b-e2ed-4fe8-afcf-84c6107620b9: error getting device from group 0: Input/output error

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1837681
Title: Failed to create vgpu cause of IOError
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1837681/+subscriptions
[Yahoo-eng-team] [Bug 1829854] Re: volume_type not supported in version 2.72
** Project changed: nova => python-novaclient

https://bugs.launchpad.net/bugs/1829854
Title: volume_type not supported in version 2.72
Status in python-novaclient: Confirmed
Status in python-novaclient stein series: Confirmed

Bug description:

Description
===========
From https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server, I see 'volume_type' supported in version 2.67. From CLI `nova help boot`:

    volume_type=type of volume to create (either ID or name) when source is blank, image or snapshot and dest is volume (optional). (Supported by API versions '2.67' - '2.latest')

But when I tested with 2.latest, it was reported as unsupported in 2.72:

# nova --os-compute-api-version 2.latest boot --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --nic net-id=ec4a4958-7666-4bc0-9329-ce4d571d39a5 --block-device source=blank,dest=volume,volume_type=lvmdriver-1,size=1 test-t
ERROR (CommandError): 'volume_type' in block device mapping is not supported in API version 2.72.

Steps to reproduce
==================
* Set up a devstack env
* nova --os-compute-api-version 2.latest boot --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --nic net-id=ec4a4958-7666-4bc0-9329-ce4d571d39a5 --block-device source=blank,dest=volume,volume_type=lvmdriver-1,size=1 test-t

Expected result
===============
The instance is created successfully.

Actual result
=============
ERROR (CommandError): 'volume_type' in block device mapping is not supported in API version 2.72.

Environment
===========
# git log
commit fc3890667e4971e3f0f35ac921c2a6c25f72adec
Author: OpenDev Sysadmins
Date: Fri Apr 19 19:45:52 2019 +
    OpenDev Migration Patch
    This commit was bulk generated and pushed by the OpenDev sysadmins as a part of the Git hosting and code review systems migration detailed in these mailing list posts:
    http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
    http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html
    Attempts have been made to correct repository namespaces and hostnames based on simple pattern matching, but it's possible some were updated incorrectly or missed entirely. Please reach out to us via the contact information listed at https://opendev.org/ with any questions you may have.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1829854/+subscriptions
[Yahoo-eng-team] [Bug 1829854] Re: volume_type not supported in version 2.72
@melanie, thanks for your reply. I will try to fix it.

** Project changed: python-novaclient => nova

https://bugs.launchpad.net/bugs/1829854
Title: volume_type not supported in version 2.72
Status in OpenStack Compute (nova): Confirmed
Status in python-novaclient stein series: Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829854/+subscriptions
[Yahoo-eng-team] [Bug 1829854] Re: volume_type not supported in version 2.72
I tried 2.68 and above, and found it only supported in 2.67. So I think the description of `nova help boot` should be modified.

** Changed in: nova
   Assignee: (unassigned) => Eric Xie (eric-xie)
** Project changed: nova => python-novaclient
** Changed in: python-novaclient
   Status: New => Confirmed

https://bugs.launchpad.net/bugs/1829854
Title: volume_type not supported in version 2.72
Status in python-novaclient: Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1829854/+subscriptions
[Yahoo-eng-team] [Bug 1829854] [NEW] volume_type not supported in version 2.72
Public bug reported:

Description
===========
From https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server, I see 'volume_type' supported in version 2.67.

From CLI 'nova help boot':
volume_type=type of volume to create (either ID or name) when source is blank, image or snapshot and dest is volume (optional). (Supported by API versions '2.67' - '2.latest')

But when I used 2.latest to test, it was reported as unsupported in 2.72:

# nova --os-compute-api-version 2.latest boot --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --nic net-id=ec4a4958-7666-4bc0-9329-ce4d571d39a5 --block-device source=blank,dest=volume,volume_type=lvmdriver-1,size=1 test-t
ERROR (CommandError): 'volume_type' in block device mapping is not supported in API version 2.72.

Steps to reproduce
==================
* Set up a devstack env
* nova --os-compute-api-version 2.latest boot --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --nic net-id=ec4a4958-7666-4bc0-9329-ce4d571d39a5 --block-device source=blank,dest=volume,volume_type=lvmdriver-1,size=1 test-t

Expected result
===============
Instance is created successfully

Actual result
=============
ERROR (CommandError): 'volume_type' in block device mapping is not supported in API version 2.72.

Environment
===========
# git log
commit fc3890667e4971e3f0f35ac921c2a6c25f72adec
Author: OpenDev Sysadmins
Date: Fri Apr 19 19:45:52 2019 +

    OpenDev Migration Patch

    This commit was bulk generated and pushed by the OpenDev sysadmins as a part of the Git hosting and code review systems migration detailed in these mailing list posts:

    http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
    http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

    Attempts have been made to correct repository namespaces and hostnames based on simple pattern matching, but it's possible some were updated incorrectly or missed entirely. Please reach out to us via the contact information listed at https://opendev.org/ with any questions you may have.
** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1829854 Title: volume_type not supported in version 2.72 Status in OpenStack Compute (nova): New Bug description: Description === From https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server, i see 'volume_type' supported in version 2.67. From CLI 'nova help boot' volume_type=type of volume to create (either ID or name) when source is blank, image or snapshot and dest is volume (optional). (Supported by API versions '2.67' - '2.latest') But when i used the 2.latest to test, got unsupported in 2.72 # nova --os-compute-api-version 2.latest boot --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --nic net-id=ec4a4958-7666-4bc0-9329-ce4d571d39a5 --block-device source=blank,dest=volume,volume_type=lvmdriver-1,size=1 test-t ERROR (CommandError): 'volume_type' in block device mapping is not supported in API version 2.72. Steps to reproduce == * Setup devstack env * nova --os-compute-api-version 2.latest boot --flavor m1.tiny --image cirros-0.4.0-x86_64-disk --nic net-id=ec4a4958-7666-4bc0-9329-ce4d571d39a5 --block-device source=blank,dest=volume,volume_type=lvmdriver-1,size=1 test-t Expected result === Create instance successfully Actual result = ERROR (CommandError): 'volume_type' in block device mapping is not supported in API version 2.72. 
Environment === # git log commit fc3890667e4971e3f0f35ac921c2a6c25f72adec Author: OpenDev Sysadmins Date: Fri Apr 19 19:45:52 2019 + OpenDev Migration Patch This commit was bulk generated and pushed by the OpenDev sysadmins as a part of the Git hosting and code review systems migration detailed in these mailing list posts: http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html Attempts have been made to correct repository namespaces and hostnames based on simple pattern matching, but it's possible some were updated incorrectly or missed entirely. Please reach out to us via the contact information listed at https://opendev.org/ with any questions you may have. To manage notifications about this
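The error above comes from client-side microversion gating. A minimal sketch of that kind of check (hypothetical names, not the actual python-novaclient code): the behavior the reporter expects is that 'volume_type' be accepted at every microversion at or above 2.67, i.e. an inclusive lower-bound comparison on numerically parsed versions.

```python
# Minimal sketch (hypothetical names, not the actual python-novaclient code)
# of client-side microversion gating for block-device-mapping keys.

# Feature key -> microversion that introduced it (value taken from the bug).
MIN_VERSION = {"volume_type": "2.67"}

def parse_version(v):
    """Turn a microversion string like '2.67' into a comparable tuple."""
    major, minor = v.split(".")
    return (int(major), int(minor))

def validate_bdm_key(key, requested_version):
    """Accept a key when the requested microversion meets its minimum.

    The report suggests the client rejected 'volume_type' at 2.72 even
    though 2.72 >= 2.67; the correct check is an inclusive lower bound on
    parsed versions (plain string comparison would misorder versions,
    e.g. '2.9' > '2.72' lexically although 9 < 72).
    """
    min_v = MIN_VERSION.get(key)
    if min_v is None:
        return True  # no restriction recorded for this key
    return parse_version(requested_version) >= parse_version(min_v)
```

With an inclusive check like this, a request at 2.72 passes for 'volume_type' while one at 2.66 is rejected.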
[Yahoo-eng-team] [Bug 1829852] [NEW] '--database_connection' changes when update cell with '--transport-url' only
Public bug reported:

Description
===========
I updated an existing cell with '--transport-url' only. Then its 'database_connection' changed to the same value as cell0's.

Steps to reproduce
==================
* I created one cell
  nova-manage cell_v2 create_cell --name cell2 --database_connection mysql+pymysql://nova:XXX@172.16.1.20/nova --transport-url rabbit://openstack:XXX@172.16.1.21:5672/
* I updated cell2
  nova-manage cell_v2 update_cell --cell_uuid 53c5c34d-b3c2-496f-986f-166e1d4d3845 --transport-url rabbit://openstack:XXX@172.16.1.19:5672/
* Check cells
  nova-manage cell_v2 list_cells

Expected result
===============
+-------+--------------------------------------+---------------------------------------+----------------------------------------------+
| Name  | UUID                                 | Transport URL                         | Database Connection                          |
+-------+--------------------------------------+---------------------------------------+----------------------------------------------+
| cell0 | ----                                 | none:/                                | mysql+pymysql://nova:@172.16.1.14/nova_cell0 |
| cell2 | 53c5c34d-b3c2-496f-986f-166e1d4d3845 | rabbit://openstack:@172.16.1.19:5672/ | mysql+pymysql://nova:@172.16.1.20/nova       |
+-------+--------------------------------------+---------------------------------------+----------------------------------------------+

Actual result
=============
+-------+--------------------------------------+---------------------------------------+----------------------------------------------+
| Name  | UUID                                 | Transport URL                         | Database Connection                          |
+-------+--------------------------------------+---------------------------------------+----------------------------------------------+
| cell0 | ----                                 | none:/                                | mysql+pymysql://nova:@172.16.1.14/nova_cell0 |
| cell2 | 53c5c34d-b3c2-496f-986f-166e1d4d3845 | rabbit://openstack:@172.16.1.19:5672/ | mysql+pymysql://nova:@172.16.1.14/nova       |
+-------+--------------------------------------+---------------------------------------+----------------------------------------------+

Environment
===========
# apt list --installed | grep nova
...
python-nova/unknown,unknown,now 2:17.0.7-6~u16.01 all [installed,automatic]

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829852

Title:
  '--database_connection' changes when update cell with '--transport-url' only

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  I updated an existing cell with '--transport-url' only. Then its 'database_connection' changed to the same value as cell0's.
Steps to reproduce == * I created one cell nova-manage cell_v2 create_cell --name cell2 --database_connection mysql+pymysql://nova:XXX@172.16.1.20/nova --transport-url rabbit://openstack:XXX@172.16.1.21:5672/ * I updated cell2 nova-manage cell_v2 update_cell --cell_uuid 53c5c34d-b3c2-496f-986f-166e1d4d3845 --transport-url rabbit://openstack:XXX@172.16.1.19:5672/ * Check cells nova-manage cell_v2 list_cells Expected result === +---+--+---+--+ | Name | UUID | Transport URL | Database Connection| +---+--+---+--+ | cell0 | ---- | none:/ | mysql+pymysql://nova:@172.16.1.14/nova_cell0 | | cell2 | 53c5c34d-b3c2-496f-986f-166e1d4d3845 | rabbit://openstack:@172.16.1.19:5672/ | mysql+pymysql://nova:@172.16.1.20/nova| +---+--+---+--+ Actual result = +---+--+---+--+ | Name | UUID | Transport URL | Database Connection| +---+--+---+--+ | cell0 | ---- | none:/
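The semantics the reporter expected from `nova-manage cell_v2 update_cell` can be sketched as follows (illustrative names, not nova's code): a field omitted on the command line should keep its stored value rather than be reset to a default, which is what the report describes happening to database_connection.

```python
# Sketch of "partial update" semantics for a cell mapping: only fields
# explicitly provided are overwritten. The buggy behavior reported above
# corresponds to an omitted field falling back to a configured default
# (cell0's database_connection) instead of being left alone.
def update_cell(cell, transport_url=None, database_connection=None):
    """Overwrite only the fields that were explicitly passed."""
    if transport_url is not None:
        cell["transport_url"] = transport_url
    if database_connection is not None:
        cell["database_connection"] = database_connection
    return cell
```

With this rule, updating cell2 with only a new transport URL leaves its database_connection pointing at the 172.16.1.20 database, as the reporter expected.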
[Yahoo-eng-team] [Bug 1826379] [NEW] Error: Unable to retrieve image list.
Public bug reported:

When I clicked Project - Compute - Instances, I got the error "Error: Unable to retrieve image list."

Logs:
Recoverable error: 'list' object has no attribute 'id'

Version:
root@prx02:/var/log/horizon# apt list --installed | grep horizon
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
python-django-horizon/xenial,xenial,now 3:13.0.1-9~u16.04 all [installed,automatic]
root@prx02:/var/log/horizon# apt list --installed | grep dashboard
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
openstack-dashboard/xenial,xenial,now 3:13.0.1-9~u16.04 all [installed

** Affects: horizon
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1826379

Title:
  Error: Unable to retrieve image list.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I clicked Project - Compute - Instances, I got the error "Error: Unable to retrieve image list."

  Logs:
  Recoverable error: 'list' object has no attribute 'id'

  Version:
  root@prx02:/var/log/horizon# apt list --installed | grep horizon
  WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
  python-django-horizon/xenial,xenial,now 3:13.0.1-9~u16.04 all [installed,automatic]
  root@prx02:/var/log/horizon# apt list --installed | grep dashboard
  WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
  openstack-dashboard/xenial,xenial,now 3:13.0.1-9~u16.04 all [installed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1826379/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817396] [NEW] 'Namespace' object has no attribute 'os_user_id'
Public bug reported:

Description
===========
In my env, Keycloak is used as the IdP of Keystone. But when I used the rc file to execute `nova force-delete INSTANCE`, I got this error.

Steps to reproduce
==================
* Use Keycloak as the IdP of Keystone
* Then I created the rc file:
  export OS_AUTH_TYPE=v3oidcpassword
  export OS_ACCESS_TOKEN_ENDPOINT=https://xxx/auth/realms/picp/protocol/openid-connect/token
  export OS_IDENTITY_PROVIDER=keycloak
  export OS_PROTOCOL=openid
  export OS_IDENTITY_API_VERSION=3
  export OS_AUTH_URL=http://xxx/v3
  export OS_PROJECT_DOMAIN_NAME=Default
  export OS_REGION_NAME=xxx
  #export OS_REGION_NAME=RegionOne
  export OS_USERNAME="test05"
  #export OS_PASSWORD=123456a?
  export OS_PASSWORD=passowrd
  export OS_CLIENT_ID=keyclient
  export OS_CLIENT_SECRET=4dcd201a-c387-4759-a362-4addb3acbcc8
  export OS_PROJECT_NAME="test"
  export OS_INTERFACE=internal
  export OS_ENDPOINT_TYPE="internal"
  export OS_CACERT="/etc/ssl/certs/ca-certificates.crt"
* Then I sourced this rc file and executed the `nova force-delete` CLI:
  nova force-delete ECS-2019223142413-0008

Expected result
===============
No output

Actual result
=============
Got 'ERROR (AttributeError): 'Namespace' object has no attribute 'os_user_id''

Environment
===========
# apt list --installed | grep novaclient
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
python-novaclient/2018.4.0,now 2:9.1.1-1~u16.04 all [installed,automatic]

Logs & Configs
==============
~# nova --debug force-delete ECS-2019223142413-0008
DEBUG (shell:951) 'Namespace' object has no attribute 'os_user_id'
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 949, in main
    OpenStackComputeShell().main(argv)
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 653, in main
    os_user_id = args.os_user_id
AttributeError: 'Namespace' object has no attribute 'os_user_id'
ERROR (AttributeError): 'Namespace' object has no attribute 'os_user_id'

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1817396

Title:
  'Namespace' object has no attribute 'os_user_id'

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  In my env, Keycloak is used as the IdP of Keystone. But when I used the rc file to execute `nova force-delete INSTANCE`, I got this error.

  Steps to reproduce
  ==================
  * Use Keycloak as the IdP of Keystone
  * Then I created the rc file:
    export OS_AUTH_TYPE=v3oidcpassword
    export OS_ACCESS_TOKEN_ENDPOINT=https://xxx/auth/realms/picp/protocol/openid-connect/token
    export OS_IDENTITY_PROVIDER=keycloak
    export OS_PROTOCOL=openid
    export OS_IDENTITY_API_VERSION=3
    export OS_AUTH_URL=http://xxx/v3
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_REGION_NAME=xxx
    #export OS_REGION_NAME=RegionOne
    export OS_USERNAME="test05"
    #export OS_PASSWORD=123456a?
export OS_PASSWORD=passowrd export OS_CLIENT_ID=keyclient export OS_CLIENT_SECRET=4dcd201a-c387-4759-a362-4addb3acbcc8 export OS_PROJECT_NAME="test" export OS_INTERFACE=internal export OS_ENDPOINT_TYPE="internal" export OS_CACERT="/etc/ssl/certs/ca-certificates.crt" * then I sourced this rc file, executed `nova force-delete` CLI nova force-delete ECS-2019223142413-0008 Expected result === No output Actual result = Got 'ERROR (AttributeError): 'Namespace' object has no attribute 'os_user_id'' Environment === # apt list --installed | grep novaclient WARNING: apt does not have a stable CLI interface. Use with caution in scripts. python-novaclient/2018.4.0,now 2:9.1.1-1~u16.04 all [installed,automatic] Logs & Configs == ~# nova --debug force-delete ECS-2019223142413-0008 DEBUG (shell:951) 'Namespace' object has no attribute 'os_user_id' Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 949, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 653, in main os_user_id = args.os_user_id AttributeError: 'Namespace' object has no attribute 'os_user_id' ERROR (AttributeError): 'Namespace' object has no attribute 'os_user_id' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1817396/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
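The traceback above is the classic argparse failure mode: reading a Namespace attribute that was never registered raises AttributeError. A small sketch of the failure and a defensive read (the option names here are illustrative, not novaclient's full parser):

```python
# Reading an unregistered argparse attribute raises AttributeError, which
# matches the traceback above (`os_user_id = args.os_user_id`). A read via
# getattr with a default avoids the crash when an auth plugin registers a
# different set of options.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--os-username")
# '--os-user-id' is deliberately NOT registered here, mirroring the bug:
# with the v3oidcpassword plugin, that option never makes it into args.
args = parser.parse_args(["--os-username", "test05"])

# args.os_user_id would raise AttributeError, as in the report.
os_user_id = getattr(args, "os_user_id", None)  # safe fallback
```

argparse converts '--os-username' to the attribute name 'os_username', so only registered options appear on the Namespace.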
[Yahoo-eng-team] [Bug 1812335] [NEW] Cannot connect neutron when clicking compute - instances in horizon
Public bug reported: When i click project - compute - instances, got this error. Error: Unable to connect to Neutron. LOG: Unable to connect to Neutron: 'frozenset' object has no attribute '__getitem__' Version: # apt list --installed | grep horizon WARNING: apt does not have a stable CLI interface. Use with caution in scripts. python-django-horizon/2018.4.0,now 3:12.0.2-1~u16.04 all [installed,automatic] ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1812335 Title: Cannot connect neutron when clicking compute - instances in horizon Status in OpenStack Dashboard (Horizon): New Bug description: When i click project - compute - instances, got this error. Error: Unable to connect to Neutron. LOG: Unable to connect to Neutron: 'frozenset' object has no attribute '__getitem__' Version: # apt list --installed | grep horizon WARNING: apt does not have a stable CLI interface. Use with caution in scripts. python-django-horizon/2018.4.0,now 3:12.0.2-1~u16.04 all [installed,automatic] To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1812335/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1810656] [NEW] Missing some configurations when genconfig
Public bug reported:

Description
===========
When I use `tox -egenconfig` to generate nova.conf, many configuration options are missing from nova.conf.

It only includes:
# egrep -v "^#|^$" nova.conf.sample
[DEFAULT]
[oslo_concurrency]
[profiler]

Steps to reproduce
==================
* Git clone the nova source code
* tox -egenconfig
* Check etc/nova/nova.conf.sample

Expected result
===============
The full set of configuration options in nova.conf

Actual result
=============
Only:
# egrep -v "^#|^$" nova.conf.sample
[DEFAULT]
[oslo_concurrency]
[profiler]

Environment
===========
# git log
commit a8e992b1057a3c0c56478baaf6f090aad87438d4
Merge: 8ef3d25 d6c1f6a
Author: Zuul
Date: Fri Jan 4 23:28:29 2019 +

    Merge "libvirt: Add workaround to cleanup instance dir when using rbd"

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1810656

Title:
  Missing some configurations when genconfig

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  When I use `tox -egenconfig` to generate nova.conf, many configuration options are missing from nova.conf.
  It only includes:
  # egrep -v "^#|^$" nova.conf.sample
  [DEFAULT]
  [oslo_concurrency]
  [profiler]

  Steps to reproduce
  ==================
  * Git clone the nova source code
  * tox -egenconfig
  * Check etc/nova/nova.conf.sample

  Expected result
  ===============
  The full set of configuration options in nova.conf

  Actual result
  =============
  Only:
  # egrep -v "^#|^$" nova.conf.sample
  [DEFAULT]
  [oslo_concurrency]
  [profiler]

  Environment
  ===========
  # git log
  commit a8e992b1057a3c0c56478baaf6f090aad87438d4
  Merge: 8ef3d25 d6c1f6a
  Author: Zuul
  Date: Fri Jan 4 23:28:29 2019 +

      Merge "libvirt: Add workaround to cleanup instance dir when using rbd"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1810656/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
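For context: `tox -egenconfig` drives oslo-config-generator, which builds the sample from a list of option namespaces declared in a generator config file; if a namespace fails to import, its options can drop out of the sample, leaving it nearly empty as reported above. A rough illustration of such a file (the exact path and namespace names in the nova tree are assumptions here, not verified against this commit):

```
# etc/nova/nova-config-generator.conf (illustrative)
[DEFAULT]
output_file = etc/nova/nova.conf.sample
namespace = nova.conf
namespace = oslo.concurrency
namespace = osprofiler
```

A sample containing only [DEFAULT], [oslo_concurrency] and [profiler] would be consistent with the nova-specific namespace failing to load while the oslo/osprofiler namespaces resolved.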
[Yahoo-eng-team] [Bug 1808505] [NEW] 'Availability zone' not updated after instance migrated to other node
Public bug reported:

Description
===========
As an admin, I want to live-migrate my instance to a target host which belongs to another availability zone. But after a successful migration, the availability zone is still the old one when I show the info of the instance.

Steps to reproduce
==================
* Create one instance on host1 which belongs to zone1
* Use `nova live-migrate --force ID host1`, host1 belongs to zone2
* Use `nova show ID` to get detailed info

Expected result
===============
The 'availability zone' should be zone2

Actual result
=============
The 'availability zone' is still zone1

Environment
===========
1. Exact version of OpenStack you are running. See the following list for all releases: http://docs.openstack.org/releases/
# dpkg -l | grep nova
ii nova-api 2:16.1.0-1~u16.04 all OpenStack Compute - compute API frontend
ii nova-common 2:16.1.0-1~u16.04 all OpenStack Compute - common files
ii nova-conductor 2:16.1.0-1~u16.04 all OpenStack Compute - conductor service
ii nova-consoleauth 2:16.1.0-1~u16.04 all OpenStack Compute - Console Authenticator
ii nova-consoleproxy 2:16.1.0-1~u16.04 all OpenStack Compute - NoVNC proxy
ii nova-doc 2:16.1.0-1~u16.04 all OpenStack Compute - documentation
ii nova-placement-api 2:16.1.0-1~u16.04 all OpenStack Compute - placement API frontend
ii nova-scheduler 2:16.1.0-1~u16.04 all OpenStack Compute - virtual machine scheduler
ii python-nova 2:16.1.0-1~u16.04 all OpenStack Compute - libraries
ii python-novaclient 2:9.1.1-1~u16.04 all client library for OpenStack Compute API - Python 2.7
2. Which hypervisor did you use? Libvirt + KVM

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1808505

Title:
  'Availability zone' not updated after instance migrated to other node

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  As an admin, I want to live-migrate my instance to a target host which belongs to another availability zone. But after a successful migration, the availability zone is still the old one when I show the info of the instance.

  Steps to reproduce
  ==================
  * Create one instance on host1 which belongs to zone1
  * Use `nova live-migrate --force ID host1`, host1 belongs to zone2
  * Use `nova show ID` to get detailed info

  Expected result
  ===============
  The 'availability zone' should be zone2

  Actual result
  =============
  The 'availability zone' is still zone1

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following list for all releases: http://docs.openstack.org/releases/
  # dpkg -l | grep nova
  ii nova-api 2:16.1.0-1~u16.04 all OpenStack Compute - compute API frontend
  ii nova-common 2:16.1.0-1~u16.04 all OpenStack Compute - common files
  ii nova-conductor 2:16.1.0-1~u16.04 all OpenStack Compute - conductor service
  ii nova-consoleauth 2:16.1.0-1~u16.04 all OpenStack Compute - Console Authenticator
  ii nova-consoleproxy 2:16.1.0-1~u16.04 all OpenStack Compute - NoVNC proxy
  ii nova-doc 2:16.1.0-1~u16.04 all OpenStack Compute - documentation
  ii nova-placement-api 2:16.1.0-1~u16.04 all OpenStack Compute - placement API frontend
  ii nova-scheduler 2:16.1.0-1~u16.04 all OpenStack Compute - virtual machine scheduler
  ii python-nova 2:16.1.0-1~u16.04 all OpenStack Compute - libraries
  ii python-novaclient 2:9.1.1-1~u16.04 all client library for OpenStack Compute API - Python 2.7
  2. Which hypervisor did you use? Libvirt + KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1808505/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.n
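The behavior the reporter expects can be sketched as deriving an instance's availability zone from its *current* host's aggregate membership rather than from a value cached on the instance at boot (data shapes below are illustrative, not nova's internal objects):

```python
# Sketch: resolve an instance's availability zone by looking up which
# AZ-tagged aggregate its current host belongs to. After a live migration
# the lookup follows the host, so the reported AZ stays correct.
def availability_zone_for(host, aggregates, default_az="nova"):
    """Return the AZ of the first AZ-tagged aggregate containing the host."""
    for agg in aggregates:
        az = agg.get("availability_zone")
        if az and host in agg.get("hosts", []):
            return az
    return default_az  # hosts outside any AZ aggregate get the default

# Modeling the scenario in this report:
aggregates = [
    {"availability_zone": "zone1", "hosts": ["host0"]},
    {"availability_zone": "zone2", "hosts": ["host1"]},
]
```

With this lookup, an instance that lands on host1 after a forced live migration reports zone2, matching the expected result above.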
[Yahoo-eng-team] [Bug 1790277] [NEW] Failed to detach volume because of root device attribute
Public bug reported:

Description
===========
At first one volume is used as the bootable device for the instance, then another attached bootable volume is used as the instance's bootable device. But when detaching the first volume from the instance, I got an error.

Steps to reproduce
==================
* Boot instance vm01 from volume vol01
* Create another volume vol02 from an image, update 'bootable' to true
* Shut off vm01
* Attach vol02 to vm01
* Update 'bootable' of vol01 to false
* Start vm01
* Detach vol01 from vm01

Expected result
===============
Detach succeeds

Actual result
=============
Detach fails

Environment
===========
1. Exact version of OpenStack you are running.
# dpkg -l | grep nova
ii nova-api 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - compute API frontend
ii nova-common 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - common files
ii nova-conductor 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - conductor service
ii nova-consoleauth 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - Console Authenticator
ii nova-consoleproxy 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - NoVNC proxy
ii nova-doc 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - documentation
ii nova-placement-api 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - placement API frontend
ii nova-scheduler 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - virtual machine scheduler
ii python-nova 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - libraries
ii python-novaclient 2:9.1.1-1~u16.04+mcp6 all client library for OpenStack Compute API - Python 2.7
2. Which hypervisor did you use? Libvirt + KVM
3. Which storage type did you use? Ceph
4. Which networking type did you use? Neutron with OpenVSwitch

Logs & Configs
==============
Forbidden: Can't detach root device volume (HTTP 403) (Request-ID: req-c6957e61-c375-4110-939e-6ebd6bf37077)

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1790277 Title: Failed to detach volume cause of root device attribute Status in OpenStack Compute (nova): New Bug description: Description === At first use one volume as bootable device for the instance, then use another attached bootable volume as the instance's bootable device. But when detaching the first volume from the instance, got error. Steps to reproduce == * Boot instance vm01 from volume vol01 * Create another volume vol02 from image, update 'bootable' is true * Shutoff vm01 * Attach vol02 to vm01 * Update 'bootable' of vol01 to false * Start vm01 * Detach vol01 from vm01 Expected result === Detach successfully Actual result = Failed to detached Environment === 1. Exact version of OpenStack you are running. # dpkg -l | grep nova ii nova-api 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - compute API frontend ii nova-common 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - common files ii nova-conductor 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - conductor service ii nova-consoleauth 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - Console Authenticator ii nova-consoleproxy2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - NoVNC proxy ii nova-doc 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - documentation ii nova-placement-api 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - placement API frontend ii nova-scheduler 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - virtual machine scheduler ii python-nova 2:16.1.0-1~u16.04+mcp134 all OpenStack Compute - libraries ii python-novaclient2:9.1.1-1~u16.04+mcp6 all client library for OpenStack Compute API - Python 2.7 2. Which hypervisor did you use? Libvirt + KVM 2. Which storage type did you use? Ceph 3. Whi
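The HTTP 403 above comes from a server-side guard that can be sketched like this (names and data shapes are illustrative, not nova's implementation): the API refuses to detach whatever volume is recorded as the instance's root device, and flipping a volume's 'bootable' flag in Cinder does not change that record.

```python
# Sketch of the root-device guard behind the "Can't detach root device
# volume (HTTP 403)" error. The key point for this report: the instance's
# root device record is set at boot and is independent of the Cinder
# 'bootable' flag, so updating vol01's flag does not make it detachable.
class Forbidden(Exception):
    pass

def detach_volume(instance, volume_id):
    """Detach a volume unless it is the instance's recorded root device."""
    if volume_id == instance["root_volume_id"]:
        raise Forbidden("Can't detach root device volume (HTTP 403)")
    instance["volumes"].remove(volume_id)
```

In the scenario above, vm01's root device record still points at vol01 even after vol02 was attached and marked bootable, so detaching vol01 is rejected while detaching vol02 would succeed.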
[Yahoo-eng-team] [Bug 1741001] [NEW] Got an unexpected keyword argument when starting nova-api
Public bug reported:

Description
===========
After upgrading oslo.db, the nova-api service failed to start.

Steps to reproduce
==================
* pip install oslo.db==4.24.0
* Starting the nova-api service is OK
* `pip install --upgrade oslo.db` to 4.32.0
* nova-api fails to start

Expected result
===============
'nova-api' is running OK. In requirements.txt, oslo.db >= 4.24.0, so version 4.32.0 (latest) should be supported.

Actual result
=============
Got the 'unexpected keyword' error; nova-api fails to start.

Environment
===========
1. nova version
# rpm -qa | grep nova
openstack-nova-console-16.0.3-2.el7.noarch
openstack-nova-common-16.0.3-2.el7.noarch
python2-novaclient-9.1.1-1.el7.noarch
openstack-nova-scheduler-16.0.3-2.el7.noarch
openstack-nova-api-16.0.3-2.el7.noarch
openstack-nova-placement-api-16.0.3-2.el7.noarch
python-nova-16.0.3-2.el7.noarch
openstack-nova-conductor-16.0.3-2.el7.noarch
openstack-nova-novncproxy-16.0.3-2.el7.noarch

Logs & Configs
==============
Jan 3 06:59:13 host-172-23-59-134 systemd: Starting OpenStack Nova API Server...
Jan 3 06:59:16 host-172-23-59-134 nova-api: Traceback (most recent call last):
Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/bin/nova-api", line 6, in
Jan 3 06:59:16 host-172-23-59-134 nova-api: from nova.cmd.api import main
Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 29, in
Jan 3 06:59:16 host-172-23-59-134 nova-api: from nova import config
Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/config.py", line 23, in
Jan 3 06:59:16 host-172-23-59-134 nova-api: from nova.db.sqlalchemy import api as sqlalchemy_api
Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 925, in
Jan 3 06:59:16 host-172-23-59-134 nova-api: retry_on_request=True)
Jan 3 06:59:16 host-172-23-59-134 nova-api: TypeError: __init__() got an unexpected keyword argument 'retry_on_request'
Jan 3 06:59:16 host-172-23-59-134 systemd: openstack-nova-api.service: main process exited, code=exited,
status=1/FAILURE

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741001

Title:
  Got an unexpected keyword argument when starting nova-api

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  After upgrading oslo.db, the nova-api service failed to start.

  Steps to reproduce
  ==================
  * pip install oslo.db==4.24.0
  * Starting the nova-api service is OK
  * `pip install --upgrade oslo.db` to 4.32.0
  * nova-api fails to start

  Expected result
  ===============
  'nova-api' is running OK. In requirements.txt, oslo.db >= 4.24.0, so version 4.32.0 (latest) should be supported.

  Actual result
  =============
  Got the 'unexpected keyword' error; nova-api fails to start.

  Environment
  ===========
  1. nova version
  # rpm -qa | grep nova
  openstack-nova-console-16.0.3-2.el7.noarch
  openstack-nova-common-16.0.3-2.el7.noarch
  python2-novaclient-9.1.1-1.el7.noarch
  openstack-nova-scheduler-16.0.3-2.el7.noarch
  openstack-nova-api-16.0.3-2.el7.noarch
  openstack-nova-placement-api-16.0.3-2.el7.noarch
  python-nova-16.0.3-2.el7.noarch
  openstack-nova-conductor-16.0.3-2.el7.noarch
  openstack-nova-novncproxy-16.0.3-2.el7.noarch

  Logs & Configs
  ==============
  Jan 3 06:59:13 host-172-23-59-134 systemd: Starting OpenStack Nova API Server...
Jan 3 06:59:16 host-172-23-59-134 nova-api: Traceback (most recent call last): Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/bin/nova-api", line 6, in Jan 3 06:59:16 host-172-23-59-134 nova-api: from nova.cmd.api import main Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 29, in Jan 3 06:59:16 host-172-23-59-134 nova-api: from nova import config Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/config.py", line 23, in Jan 3 06:59:16 host-172-23-59-134 nova-api: from nova.db.sqlalchemy import api as sqlalchemy_api Jan 3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 925, in Jan 3 06:59:16 host-172-23-59-134 nova-api: retry_on_request=True) Jan 3 06:59:16 host-172-23-59-134 nova-api: TypeError: __init__() got an unexpected keyword argument 'retry_on_request' Jan 3 06:59:16 host-172-23-59-134 systemd: openstack-nova-api.service: main process exited, code=exited, status=1/FAILURE To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1741001/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
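The TypeError above is the general "newer library dropped a keyword argument" incompatibility. One defensive pattern for it can be sketched as follows; this is a general illustration, not nova's actual fix (the practical remedy is pinning oslo.db to a version the installed nova supports, e.g. via upper-constraints):

```python
# Sketch: call a function while silently dropping keyword arguments its
# installed signature no longer accepts, avoiding TypeError across library
# versions. 'wrap_db_retry' here is a stand-in for the changed oslo.db API,
# not the real signature.
import inspect

def call_compat(func, *args, **kwargs):
    """Call func, dropping kwargs its signature does not accept."""
    params = inspect.signature(func).parameters
    has_var_kw = any(p.kind is inspect.Parameter.VAR_KEYWORD
                     for p in params.values())
    if not has_var_kw:
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **kwargs)

def wrap_db_retry(retries=5):  # newer API: 'retry_on_request' was removed
    return retries
```

Callers written against the old API (passing retry_on_request=True) then keep working against the new signature, at the cost of silently ignoring the removed option.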
[Yahoo-eng-team] [Bug 1718125] Re: Missing some contents for glance install prerequisites
** Also affects: neutron Importance: Undecided Status: New ** Changed in: neutron Assignee: (unassigned) => Eric Xie (eric-xie) ** Summary changed: - Missing some contents for glance install prerequisites + Missing some contents for install prerequisites -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1718125 Title: Missing some contents for install prerequisites Status in Glance: In Progress Status in neutron: New Bug description: Description === Install glance followed by https://docs.openstack.org/glance/pike/install/install-rdo.html. But it missed content for create glance database. 'To create the database, complete these steps: Use the database access client to connect to the database server as the root user: $ mysql -u root -p' Environment === $ git log commit f8426378f892f250391b3d1004e27725d462481f Author: OpenStack Proposal Bot Date: Fri Sep 15 07:16:27 2017 + Imported Translations from Zanata For more information about this automatic import see: https://docs.openstack.org/i18n/latest/reviewing-translation-import.html Change-Id: Ie31a9ea996d8e42530a37ed9a9616cc44ebe65c8 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1718125/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1718125] Re: Missing some contents for glance install prerequisites
** Project changed: openstack-manuals => glance -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1718125 Title: Missing some contents for glance install prerequisites Status in Glance: New Bug description: Description === Install glance followed by https://docs.openstack.org/glance/pike/install/install-rdo.html. But it missed content for create glance database. 'To create the database, complete these steps: Use the database access client to connect to the database server as the root user: $ mysql -u root -p' Environment === $ git log commit f8426378f892f250391b3d1004e27725d462481f Author: OpenStack Proposal Bot Date: Fri Sep 15 07:16:27 2017 + Imported Translations from Zanata For more information about this automatic import see: https://docs.openstack.org/i18n/latest/reviewing-translation-import.html Change-Id: Ie31a9ea996d8e42530a37ed9a9616cc44ebe65c8 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1718125/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1697564] [NEW] Failed to resize instance after changing ssh's port
Public bug reported: Description === For security reasons, the default sshd port (22) may be changed. After it is changed, resizing an instance fails. Steps to reproduce == * Modify /etc/ssh/sshd_config with 'Port 22022', and restart sshd; * Resize one instance Expected result === Resize succeeds Actual result = Resize fails Environment === 1. Libvirt + KVM 2. OpenStack Mitaka # rpm -qa | grep nova openstack-nova-conductor-13.1.2-1.el7.noarch openstack-nova-api-13.1.2-1.el7.noarch python-nova-13.1.2-1.el7.noarch openstack-nova-novncproxy-13.1.2-1.el7.noarch openstack-nova-cert-13.1.2-1.el7.noarch openstack-nova-scheduler-13.1.2-1.el7.noarch python2-novaclient-3.3.2-1.el7.noarch openstack-nova-common-13.1.2-1.el7.noarch openstack-nova-console-13.1.2-1.el7.noarch Logs & Configs == 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Command: ssh -o BatchMode=yes 172.23.30.7 mkdir -p /var/lib/nova/instances/67c23674-d6e9-40a2-95f0-5aa521074ff7 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Exit code: 255 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Stdout: u'' 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Stderr: u'ssh: connect to host 172.23.30.7 port 22: Connection refused\r\n' ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1697564 Title: Failed to resize instance after changing ssh's port Status in OpenStack Compute (nova): New Bug description: Description === For security reasons, the default sshd port (22) may be changed. After it is changed, resizing an instance fails.
Steps to reproduce == * Modify the /etc/ssh/sshd_config, 'Port 22022',and restart sshd; * Resize one instance Expected result === Resize successfully Actual result = Resize fails Environment === 1. Libvirt + KVM 2. OpenStack Mitaka # rpm -qa | grep nova openstack-nova-conductor-13.1.2-1.el7.noarch openstack-nova-api-13.1.2-1.el7.noarch python-nova-13.1.2-1.el7.noarch openstack-nova-novncproxy-13.1.2-1.el7.noarch openstack-nova-cert-13.1.2-1.el7.noarch openstack-nova-scheduler-13.1.2-1.el7.noarch python2-novaclient-3.3.2-1.el7.noarch openstack-nova-common-13.1.2-1.el7.noarch openstack-nova-console-13.1.2-1.el7.noarch Logs & Configs == 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command. 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Command: ssh -o BatchMode=yes 172.23.30.7 mkdir -p /var/lib/nova/instances/67c23674-d6e9-40a2-95f0-5aa521074ff7 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Exit code: 255 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Stdout: u'' 2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Stderr: u'ssh: connect to host 172.23.30.7 port 22: Connection refused\r\n' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1697564/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
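[Editor's note] The log shows nova-compute building a hard-coded `ssh ... mkdir -p` command that always connects to port 22. One possible direction, sketched below, is to make the migration ssh port configurable; `build_migration_ssh_cmd` and `ssh_port` are hypothetical names for illustration, not nova's actual code or config option. (A deployment-side workaround with the same effect is a `Port 22022` entry in the nova user's `~/.ssh/config` on each compute node.)

```python
# Hypothetical sketch: build the resize/migration ssh command with a
# configurable port instead of always using the ssh default (22).
# Function and option names are illustrative, not nova's real API.
def build_migration_ssh_cmd(dest_ip, instance_path, ssh_port=22):
    """Return the argv nova-compute would run to create the dest dir."""
    return ["ssh", "-p", str(ssh_port), "-o", "BatchMode=yes",
            dest_ip, "mkdir", "-p", instance_path]

# With sshd listening on 22022 as in the reproduce steps:
cmd = build_migration_ssh_cmd("172.23.30.7",
                              "/var/lib/nova/instances/test",
                              ssh_port=22022)
```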
[Yahoo-eng-team] [Bug 1664135] [NEW] Got none with improper name when querying resource provider list
Public bug reported: Description === I created a resource provider whose name included special symbols such as '$' and '@'. Then I queried the RP list filtering by that name, and it returned an empty list. Steps to reproduce == * POST http://**IP**/placement/resource_classes { "name": "RP_test-dks?#¥@!##" } * GET http://172.23.28.30/placement/resource_providers?name=RP_test-dks?#¥@!## { "resource_providers": [] } Expected result === The 'name' field is validated with strict rules when creating an RP. Actual result = It is only checked to be a string. "name": { "type": "string", "maxLength": 200 }, Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm ** Affects: nova Importance: Undecided Status: New ** Tags: placement -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1664135 Title: Got none with improper name when querying resource provider list Status in OpenStack Compute (nova): New Bug description: Description === I created a resource provider whose name included special symbols such as '$' and '@'. Then I queried the RP list filtering by that name, and it returned an empty list. Steps to reproduce == * POST http://**IP**/placement/resource_classes { "name": "RP_test-dks?#¥@!##" } * GET http://172.23.28.30/placement/resource_providers?name=RP_test-dks?#¥@!## { "resource_providers": [] } Expected result === The 'name' field is validated with strict rules when creating an RP. Actual result = It is only checked to be a string. "name": { "type": "string", "maxLength": 200 }, Environment === 1.
nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1664135/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
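[Editor's note] A minimal sketch of the stricter validation the report asks for: restrict names to a safe character set instead of accepting any string up to 200 characters. The particular pattern below is an assumption for illustration, not the schema placement eventually adopted.

```python
import re

# Hypothetical stricter rule: letters, digits, underscore, dot, hyphen,
# 1-200 characters. This rejects names whose special characters collide
# with URL query syntax (e.g. '?' and '#' in the GET filter above).
RP_NAME_RE = re.compile(r'^[A-Za-z0-9_.\-]{1,200}$')

def is_valid_rp_name(name):
    """Return True if the resource provider name passes the strict rule."""
    return bool(RP_NAME_RE.match(name))
```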
[Yahoo-eng-team] [Bug 1664117] [NEW] Error message should not include SQL command
Public bug reported: Description === When I create a resource provider with the name of an existing one, the returned error message includes the raw SQL statement. Steps to reproduce == * Create one resource provider with name 'RP_test' * Create another resource provider with name 'RP_test' Expected result === "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider RP_test already exists.", I think the message above is detailed enough. Actual result = "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider already exists: (pymysql.err.IntegrityError) (1062, u\"Duplicate entry 'RP_test' for key 'uniq_resource_providers0name'\") [SQL: u'INSERT INTO resource_providers (created_at, updated_at, uuid, name, generation, can_host) VALUES (%(created_at)s, %(updated_at)s, %(uuid)s, %(name)s, %(generation)s, %(can_host)s)'] [parameters: {'uuid': 'cfafc096-4b15-4dc1-bb44-2bad0cd6d9e5', 'generation': 0, 'created_at': datetime.datetime(2017, 2, 13, 5, 27, 41, 686138), 'updated_at': None, 'can_host': 0, 'name': u'RP_test'}] ", Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm ** Affects: nova Importance: Undecided Status: New ** Tags: placement ** Tags added: placement -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1664117 Title: Error message should not include SQL command Status in OpenStack Compute (nova): New Bug description: Description === When I create a resource provider with the name of an existing one, the returned error message includes the raw SQL statement.
Steps to reproduce == * Create one resource provider with name 'RP_test' * Create another resource provider with name 'RP_test' Expected result === "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider RP_test already exists.", I think message above is detailed enough. Actual result = "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider already exists: (pymysql.err.IntegrityError) (1062, u\"Duplicate entry 'RP_test' for key 'uniq_resource_providers0name'\") [SQL: u'INSERT INTO resource_providers (created_at, updated_at, uuid, name, generation, can_host) VALUES (%(created_at)s, %(updated_at)s, %(uuid)s, %(name)s, %(generation)s, %(can_host)s)'] [parameters: {'uuid': 'cfafc096-4b15-4dc1-bb44-2bad0cd6d9e5', 'generation': 0, 'created_at': datetime.datetime(2017, 2, 13, 5, 27, 41, 686138), 'updated_at': None, 'can_host': 0, 'name': u'RP_test'}] ", Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1664117/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
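[Editor's note] The fix direction the report implies can be sketched as follows: keep the database error for server-side logs and return only a sanitized 409 detail to the API client. `conflict_detail` is a hypothetical helper, not the actual placement code.

```python
import logging

LOG = logging.getLogger(__name__)

# Hypothetical: turn a duplicate-entry DB error into the sanitized
# conflict message the reporter expects, logging the raw error only
# on the server side so SQL never reaches the API response.
def conflict_detail(rp_name, db_error):
    LOG.debug("duplicate resource provider %s: %s", rp_name, db_error)
    return ("There was a conflict when trying to complete your request."
            "\n\n Conflicting resource provider %s already exists."
            % rp_name)
```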
[Yahoo-eng-team] [Bug 1663456] [NEW] Field 'updated_at' always 'None' when show aggregate
Public bug reported: Description === When I get the detailed info of a host aggregate with the CLI `openstack aggregate show`, the 'updated_at' field is always 'None'. Steps to reproduce == * Create one host aggregate with CLI `openstack aggregate create t-sh` * Set some properties for the aggregate with CLI `openstack aggregate set --zone tztz --property foo=bar agg-sh` * Get detailed info of the aggregate with CLI `openstack aggregate show agg-sh` Expected result === | updated_at| 2017-02-10T03:27:25.535045 | Actual result = | updated_at| None | Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm Logs == Enable --debug in the openstack command. * Set some properties for the aggregate with '--debug'. RESP BODY: {"aggregate": {"name": "agg-sh", "availability_zone": "tztz", "deleted": false, "created_at": "2017-02-10T03:26:21.00", "updated_at": "2017-02-10T03:27:25.535045", "hosts": [], "deleted_at": null, "id": 4, "metadata": {"foo": "bar", "availability_zone": "tztz"}}} Note: the 'updated_at' field has a valid value. 
* Get detailed info with '--debug' RESP BODY: {"aggregates": [{"name": "agg-1", "availability_zone": "tz1", "deleted": false, "created_at": "2017-02-10T02:09:47.00", "updated_at": null, "hosts": ["controller"], "deleted_at": null, "id": 1, "metadata": {"color": "green", "foo": "bar", "availability_zone": "tz1"}}, {"name": "agg-a", "availability_zone": "tz2", "deleted": false, "created_at": "2017-02-10T02:39:15.00", "updated_at": null, "hosts": [], "deleted_at": null, "id": 2, "metadata": {"foo": "tar", "availability_zone": "tz2"}}, {"name": "t-sh", "availability_zone": "tz3", "deleted": false, "created_at": "2017-02-10T02:39:24.00", "updated_at": null, "hosts": [], "deleted_at": null, "id": 3, "metadata": {"color": "blue", "hello": "world", "availability_zone": "tz3"}}, {"name": "agg-sh", "availability_zone": "tztz", "deleted": false, "created_at": "2017-02-10T03:26:21.00", "updated_at": null, "hosts": [], "deleted_at": null, "id": 4, "metadata": {"foo": "bar", "availability_zone": "tztz"}}]} Note: field 'updated_at' is null. ** Affects: nova Importance: Undecided Assignee: Eric Xie (eric-xie) Status: In Progress ** Tags: host-aggregate ** Changed in: nova Status: New => In Progress ** Changed in: nova Assignee: (unassigned) => Eric Xie (eric-xie) ** Tags added: host-aggregate -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1663456 Title: Field 'updated_at' always 'None' when show aggregate Status in OpenStack Compute (nova): In Progress Bug description: Description === When i got detailed info of one host aggregate with CLI `openstack aggregate show`, the field 'updated_at' always was 'None'. 
Steps to reproduce == * Create one host aggregate with CLI `openstack aggregate create t-sh` * Set some properties for the aggregate with CLI `openstack aggregate set --zone tztz --property foo=bar agg-sh` * Get detailed info of the aggregate with CLI `openstack aggregate show agg-sh` Expected result === | updated_at| 2017-02-10T03:27:25.535045 | Actual result = | updated_at| None | Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack
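[Editor's note] The debug output shows the set call returning a valid 'updated_at' while the list call returns null, which suggests the list path does not account for metadata updates. A sketch of one possible fix direction (illustrative only, not nova's actual implementation): report the latest change across the aggregate row and its metadata rows so list and show agree.

```python
from datetime import datetime

# Hypothetical: compute the effective updated_at of an aggregate from
# the aggregate row itself plus its metadata rows, returning the most
# recent non-null timestamp (or None if nothing was ever updated).
def effective_updated_at(aggregate_updated_at, metadata_updated_ats):
    candidates = [t for t in [aggregate_updated_at] + list(metadata_updated_ats)
                  if t is not None]
    return max(candidates) if candidates else None
```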
[Yahoo-eng-team] [Bug 1663163] [NEW] Improper prompt when update existed resource class
Public bug reported: Description === When I updated the resource class 'CUSTOM_A' to the name 'CUSTOM_B', while a resource class 'CUSTOM_B' already exists, the message returned by the Placement API was 'Resource class already exists: CUSTOM_A'. But it is 'CUSTOM_B' that already exists. Steps to reproduce == * POST http://**IP**/placement/resource_classes { "name": "CUSTOM_A" } * POST http://**IP**/placement/resource_classes { "name": "CUSTOM_B" } * PUT http://172.23.28.30/placement/resource_classes/CUSTOM_A { "name": "CUSTOM_B" } Expected result === Response: { "errors": [ { "status": 409, "request_id": "req-111941ae-839c-4e3e-92fb-eb76a692567c", "detail": "There was a conflict when trying to complete your request.\n\n Resource class already exists: CUSTOM_B ", "title": "Conflict" } ] } Actual result = { "errors": [ { "status": 409, "request_id": "req-111941ae-839c-4e3e-92fb-eb76a692567c", "detail": "There was a conflict when trying to complete your request.\n\n Resource class already exists: CUSTOM_A ", "title": "Conflict" } ] } Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm ** Affects: nova Importance: Undecided Assignee: Eric Xie (eric-xie) Status: In Progress ** Changed in: nova Status: New => In Progress ** Changed in: nova Assignee: (unassigned) => Eric Xie (eric-xie) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1663163 Title: Improper prompt when update existed resource class Status in OpenStack Compute (nova): In Progress Bug description: Description === When i updated the resource class 'CUSTOM_A' with name 'CUSTOM_B', which resource class 'CUSTOM_B' exists, the prompt returned by Placement API was 'Resource class already exists: CUSTOM_A'. But it should be 'CUSTOM_B' that already exists. Steps to reproduce == * POST http://**IP**/placement/resource_classes { "name": "CUSTOM_A" } * POST http://**IP**/placement/resource_classes { "name": "CUSTOM_B" } * PUT http://172.23.28.30/placement/resource_classes/CUSTOM_A { "name": "CUSTOM_B" } Expected result === Response: { "errors": [ { "status": 409, "request_id": "req-111941ae-839c-4e3e-92fb-eb76a692567c", "detail": "There was a conflict when trying to complete your request.\n\n Resource class already exists: CUSTOM_B ", "title": "Conflict" } ] } Actual result = { "errors": [ { "status": 409, "request_id": "req-111941ae-839c-4e3e-92fb-eb76a692567c", "detail": "There was a conflict when trying to complete your request.\n\n Resource class already exists: CUSTOM_A ", "title": "Conflict" } ] } Environment === 1. nova version [root@controller nova]# git log commit 50d402821be7476eb58ccd791c50d8ed801e85eb Author: Matt Riedemann Date: Wed Feb 8 10:23:14 2017 -0500 Consider startup scenario in _get_compute_nodes_in_db 2. Which hypervisor did you use? devstack + libvirt + kvm To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1663163/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
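[Editor's note] The requested behavior can be sketched in a few lines: on rename, the conflict must name the *target* class, not the one being updated. `ResourceClassExists` and the set-based registry below are hypothetical stand-ins for the placement code.

```python
# Hypothetical model of the corrected behavior: renaming CUSTOM_A to
# CUSTOM_B while CUSTOM_B exists must report CUSTOM_B in the conflict.
class ResourceClassExists(Exception):
    pass

def rename_resource_class(classes, old_name, new_name):
    """Rename old_name to new_name in the (set-valued) registry."""
    if new_name in classes:
        # Report the conflicting *new* name, per the expected result.
        raise ResourceClassExists(
            "Resource class already exists: %s" % new_name)
    classes.remove(old_name)
    classes.add(new_name)
```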
[Yahoo-eng-team] [Bug 1648417] Re: Failed to set admin pass
I checked on master, and the issue is gone. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1648417 Title: Failed to set admin pass Status in OpenStack Compute (nova): Invalid Bug description: Description === When I set the admin password of a server, I got an 'AttributeError'. Steps to reproduce == * Upload a Windows image with qemu-guest-agent * Then add metadata 'hw_qemu_guest_agent=yes' to the image * Then boot one server A with this image * Then use ``nova set-password A`` to change the admin pass Expected result === Admin password is set successfully. Actual result = ERROR (Conflict): Failed to set admin password on ba631d0f-bdad-4928-be5e-e52fee05f1e1 because error setting admin password (HTTP 409) (Request-ID: req-52de5986-409e-40d1-ac74-59bed6d3b797) Environment === 1. nova version # rpm -qa | grep nova openstack-nova-api-13.0.0-3.el7.noarch openstack-nova-console-13.0.0-3.el7.noarch python-nova-13.0.0-3.el7.noarch openstack-nova-conductor-13.0.0-3.el7.noarch openstack-nova-scheduler-13.0.0-3.el7.noarch openstack-nova-novncproxy-13.0.0-3.el7.noarch openstack-nova-common-13.0.0-3.el7.noarch python-novaclient-3.3.1-2.el7.noarch 2. Which hypervisor did you use? 
Libvirt + KVM # rpm -qa | grep libvirt libvirt-daemon-1.2.17-13.el7_2.5.x86_64 libvirt-client-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64 libvirt-devel-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64 libvirt-docs-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64 libvirt-python-1.2.18-1.el7.x86_64 # rpm -qa | grep qemu libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64 qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64 qemu-kvm-common-ev-2.3.0-31.el7.16.1.x86_64 ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch qemu-img-ev-2.3.0-31.el7.16.1.x86_64 centos-release-qemu-ev-1.0-1.el7.noarch (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...) What's the version of that? 
Logs & Configs == nova-compute.log 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [req-52de5986-409e-40d1-ac74-59bed6d3b797 455e4c768a414f12927dfed27657c707 bc7b1de930bf428295b69d5627513d9e - - -] [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] set_admin_password failed 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] Traceback (most recent call last): 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3301, in set_admin_password 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] self.driver.set_admin_password(instance, new_pass) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1815, in set_admin_password 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] guest.set_user_password(user, new_pass) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 387, in set_user_password 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] self._domain.setUserPassword(user, new_pass, 0) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 176, in __getattr__ 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] f = getattr(self._obj, attr_name) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] AttributeError: 'virDomain' object has no attribute 'setUserPassword' To 
manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1648417/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
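[Editor's note] The traceback ends in `AttributeError: 'virDomain' object has no attribute 'setUserPassword'`, i.e. the installed libvirt Python bindings are too old to expose that call. A defensive guard like the sketch below (illustrative, not nova's code) would turn the obscure AttributeError into a clear error message; `FakeDomain`-style objects stand in for `virDomain`.

```python
# Hypothetical guard: fail with an actionable message when the libvirt
# bindings lack virDomain.setUserPassword, instead of letting the
# AttributeError bubble up through eventlet's tpool proxy.
def set_user_password(domain, user, new_pass):
    if not hasattr(domain, "setUserPassword"):
        raise RuntimeError(
            "libvirt python bindings do not provide setUserPassword(); "
            "a newer libvirt/libvirt-python is required")
    domain.setUserPassword(user, new_pass, 0)
```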
[Yahoo-eng-team] [Bug 1648417] [NEW] Failed to set admin pass
Public bug reported: Description === When I set the admin password of a server, I got an 'AttributeError'. Steps to reproduce == * Upload a Windows image with qemu-guest-agent * Then add metadata 'hw_qemu_guest_agent=yes' to the image * Then boot one server A with this image * Then use ``nova set-password A`` to change the admin pass Expected result === Admin password is set successfully. Actual result = ERROR (Conflict): Failed to set admin password on ba631d0f-bdad-4928-be5e-e52fee05f1e1 because error setting admin password (HTTP 409) (Request-ID: req-52de5986-409e-40d1-ac74-59bed6d3b797) Environment === 1. nova version # rpm -qa | grep nova openstack-nova-api-13.0.0-3.el7.noarch openstack-nova-console-13.0.0-3.el7.noarch python-nova-13.0.0-3.el7.noarch openstack-nova-conductor-13.0.0-3.el7.noarch openstack-nova-scheduler-13.0.0-3.el7.noarch openstack-nova-novncproxy-13.0.0-3.el7.noarch openstack-nova-common-13.0.0-3.el7.noarch python-novaclient-3.3.1-2.el7.noarch 2. Which hypervisor did you use? 
Libvirt + KVM # rpm -qa | grep libvirt libvirt-daemon-1.2.17-13.el7_2.5.x86_64 libvirt-client-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64 libvirt-devel-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64 libvirt-docs-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64 libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64 libvirt-python-1.2.18-1.el7.x86_64 # rpm -qa | grep qemu libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64 qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64 qemu-kvm-common-ev-2.3.0-31.el7.16.1.x86_64 ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch qemu-img-ev-2.3.0-31.el7.16.1.x86_64 centos-release-qemu-ev-1.0-1.el7.noarch (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...) What's the version of that? 
Logs & Configs == nova-compute.log 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [req-52de5986-409e-40d1-ac74-59bed6d3b797 455e4c768a414f12927dfed27657c707 bc7b1de930bf428295b69d5627513d9e - - -] [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] set_admin_password failed 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] Traceback (most recent call last): 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3301, in set_admin_password 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] self.driver.set_admin_password(instance, new_pass) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1815, in set_admin_password 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] guest.set_user_password(user, new_pass) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 387, in set_user_password 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] self._domain.setUserPassword(user, new_pass, 0) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 176, in __getattr__ 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] f = getattr(self._obj, attr_name) 2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] AttributeError: 'virDomain' object has no attribute 'setUserPassword' ** 
Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1648417 Title: Failed to set admin pass Status in OpenStack Compute (nova): New Bug description: Description === When i set 'admin pass' of one server, got 'AttributeError' error. Steps to reproduce == * Upload windows image with qemu-guest-agent * Then add metadata 'hw_qemu_guest_agent=yes' to the image * Then boot one server A with this image * Then use ``nova set-password A`` to change admin pass Expected result === Set admin password successfully. Actual result = ERROR (Conflict): Failed to set a
[Yahoo-eng-team] [Bug 1642138] [NEW] Jump to home page after updating image info
Public bug reported: Description === On the dashboard, the image list spans several pages. When I modify an image's info with ``Edit image`` from a page other than the first, the view always jumps back to the first page. ``Delete image`` does not have this problem. Steps to reproduce == * On a non-first page, use ``Edit image``. Expected result === Stay on the page where the image is listed. Actual result = Jump to the first page. Environment === 1. horizon version # rpm -qa | grep horizon python-django-horizon-9.0.1-1.el7.noarch 2. httpd version # rpm -qa | grep httpd httpd-2.4.6-40.el7.centos.4.x86_64 httpd-tools-2.4.6-40.el7.centos.4.x86_64 ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1642138 Title: Jump to home page after updating image info Status in OpenStack Dashboard (Horizon): New Bug description: Description === On the dashboard, the image list spans several pages. When I modify an image's info with ``Edit image`` from a page other than the first, the view always jumps back to the first page. ``Delete image`` does not have this problem. Steps to reproduce == * On a non-first page, use ``Edit image``. Expected result === Stay on the page where the image is listed. Actual result = Jump to the first page. Environment === 1. horizon version # rpm -qa | grep horizon python-django-horizon-9.0.1-1.el7.noarch 2. httpd version # rpm -qa | grep httpd httpd-2.4.6-40.el7.centos.4.x86_64 httpd-tools-2.4.6-40.el7.centos.4.x86_64 To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1642138/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
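[Editor's note] The described behavior is consistent with the edit workflow redirecting to a fixed success URL that drops the list's pagination parameter. A minimal sketch of the fix direction (purely hypothetical names; Horizon's real views differ): carry the pagination marker through to the redirect target.

```python
# Hypothetical: preserve the list's pagination marker in the post-edit
# redirect so the user lands back on the page they came from.
from urllib.parse import urlencode

def success_url(base_url, marker=None):
    """Return the redirect target, keeping the pagination marker if any."""
    if marker:
        return "%s?%s" % (base_url, urlencode({"marker": marker}))
    return base_url
```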
[Yahoo-eng-team] [Bug 1638813] [NEW] CLI get-password got nothing
Public bug reported: Description === After an instance was created, the CLI 'nova get-password' failed to return the admin password. Steps to reproduce == * Config nova.conf [libvirt] inject_password=true inject_partition=-1 * Create one instance # nova boot --flavor 1 --image cirros test '| adminPass| cU8bi4mB4TxC ' * Use CLI nova get-password test to get the admin password # nova get-password test Expected result === Get a password like 'cU8bi4mB4TxC'. Actual result = Get nothing. Environment === 1. nova and novaclient version # rpm -qa | grep nova openstack-nova-scheduler-13.1.0-1.el7.noarch openstack-nova-compute-13.1.0-1.el7.noarch openstack-nova-common-13.1.0-1.el7.noarch openstack-nova-conductor-13.1.0-1.el7.noarch python-nova-13.1.0-1.el7.noarch openstack-nova-api-13.1.0-1.el7.noarch python-novaclient-3.3.1-1.el7.noarch openstack-nova-console-13.1.0-1.el7.noarch openstack-nova-novncproxy-13.1.0-1.el7.noarch 2. libvirt+KVM Logs & Configs == [libvirt] inject_password=true inject_partition=-1 ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1638813 Title: CLI get-password got nothing Status in OpenStack Compute (nova): New Bug description: Description === After an instance was created, the CLI 'nova get-password' failed to return the admin password. Steps to reproduce == * Config nova.conf [libvirt] inject_password=true inject_partition=-1 * Create one instance # nova boot --flavor 1 --image cirros test '| adminPass| cU8bi4mB4TxC ' * Use CLI nova get-password test to get the admin password # nova get-password test Expected result === Get a password like 'cU8bi4mB4TxC'. Actual result = Get nothing. Environment === 1. 
nova and novaclient version # rpm -qa | grep nova openstack-nova-scheduler-13.1.0-1.el7.noarch openstack-nova-compute-13.1.0-1.el7.noarch openstack-nova-common-13.1.0-1.el7.noarch openstack-nova-conductor-13.1.0-1.el7.noarch python-nova-13.1.0-1.el7.noarch openstack-nova-api-13.1.0-1.el7.noarch python-novaclient-3.3.1-1.el7.noarch openstack-nova-console-13.1.0-1.el7.noarch openstack-nova-novncproxy-13.1.0-1.el7.noarch 2. libvirt+KVM Logs & Configs == [libvirt] inject_password=true inject_partition=-1 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1638813/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
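[Editor's note] As far as I understand (this is an interpretation, not confirmed by the report), `nova get-password` prints the server's stored encrypted-password field, which is populated by an in-guest agent encrypting the password with the keypair's public key; `inject_password` writes the password into the guest disk but does not populate that field, so the CLI has nothing to decrypt and prints an empty string. A toy model of that behavior:

```python
# Illustrative model of `nova get-password`: decrypt the stored
# encrypted password if present, otherwise print nothing. `decrypt`
# stands in for the RSA private-key decryption the real CLI performs.
def get_password(stored_encrypted, decrypt):
    if not stored_encrypted:
        return ""  # the empty output the reporter observed
    return decrypt(stored_encrypted)
```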
[Yahoo-eng-team] [Bug 1637368] [NEW] nova-compute got exception cause of residual instances
Public bug reported: Description === Nova-compute gets a 'KeyError' exception when there is a residual instance on the compute node. Steps to reproduce == * Stop the nova-compute service on the compute node * Then nova delete the instance on the controller node * Then start the nova-compute service on the compute node Expected result === The nova-compute service can update its info to the DB. Actual result = The nova-compute service gets the 'KeyError' exception. Environment === 1. nova version used: stable/mitaka openstack-nova-scheduler-13.1.0-1.el7.noarch openstack-nova-compute-13.1.0-1.el7.noarch openstack-nova-common-13.1.0-1.el7.noarch openstack-nova-conductor-13.1.0-1.el7.noarch python-nova-13.1.0-1.el7.noarch openstack-nova-api-13.1.0-1.el7.noarch python-novaclient-3.3.1-1.el7.noarch openstack-nova-console-13.1.0-1.el7.noarch openstack-nova-novncproxy-13.1.0-1.el7.noarch 2.Libvirt + KVM libvirt-1.2.17-13.el7_2.5.x86_64 qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64 Logs & Configs == 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager Traceback (most recent call last): 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6449, in update_available_resource_for_node 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager rt.update_available_resource(context) 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 508, in update_available_resource 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager resources = self.driver.get_available_resource(self.nodename) 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5407, in get_available_resource 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager disk_over_committed = self._get_disk_over_committed_size_total() 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7070, in _get_disk_over_committed_size_total 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager local_instances[guest.uuid], bdms[guest.uuid]) 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager KeyError: 'ba0fb33a-bc61-4286-8820-5ee6271ff395' ** Affects: nova Importance: Undecided Status: New ** Tags: mitaka -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1637368 Title: nova-compute got exception cause of residual instances Status in OpenStack Compute (nova): New Bug description: Description === Nova-compute got 'KeyError' exception when there is residual instance on the compute node. Steps to reproduce == * Stop nova-compute service on the compute node * Then nova delete instance on the controller node * Then start nova-compute service on the compute node Expected result === nova-compute service can update its info to DB. Actual result = nova-compute service got the 'KeyError' exception. Environment === 1. 
nova version used: stable/mitaka openstack-nova-scheduler-13.1.0-1.el7.noarch openstack-nova-compute-13.1.0-1.el7.noarch openstack-nova-common-13.1.0-1.el7.noarch openstack-nova-conductor-13.1.0-1.el7.noarch python-nova-13.1.0-1.el7.noarch openstack-nova-api-13.1.0-1.el7.noarch python-novaclient-3.3.1-1.el7.noarch openstack-nova-console-13.1.0-1.el7.noarch openstack-nova-novncproxy-13.1.0-1.el7.noarch 2.Libvirt + KVM libvirt-1.2.17-13.el7_2.5.x86_64 qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64 Logs & Configs == 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager Traceback (most recent call last): 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6449, in update_available_resource_for_node 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager rt.update_available_resource(context) 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 508, in update_available_resource 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager resources = self.driver.get_available_resource(self.nodename) 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5407, in get_available_resource 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager disk_over_committed = self._get_disk_over_committed_size_total() 2016-10-27 16:40:20.936 25310 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7070, in _get_di
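The traceback shows `_get_disk_over_committed_size_total` indexing `local_instances[guest.uuid]` and `bdms[guest.uuid]` for a domain whose instance was deleted from the database while nova-compute was down. A minimal sketch of the tolerant lookup (function and data shapes are illustrative, not nova's actual code):

```python
def disk_over_committed_total(guests, local_instances, bdms):
    """Sum over-committed disk sizes, skipping residual guests.

    A guest left behind on the hypervisor after its instance was
    deleted from the database has no entry in local_instances/bdms;
    indexing with [] raises KeyError, so use .get() and skip it.
    """
    total = 0
    for guest in guests:
        instance = local_instances.get(guest["uuid"])
        bdm = bdms.get(guest["uuid"])
        if instance is None or bdm is None:
            # Residual domain: no matching DB record, skip it.
            continue
        total += instance["disk_over_committed"]
    return total
```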
[Yahoo-eng-team] [Bug 1621755] [NEW] Wrong description for "name" field of rebuild API
Public bug reported:

Description
===========
The description of the "name" field in the rebuild API reference at http://developer.openstack.org/api-ref/compute/?expanded=change-administrative-password-changepassword-action-detail,rebuild-server-rebuild-action-detail#rebuild-server-rebuild-action is wrong.

Steps to reproduce
==================
Check the URL above.

Expected result
===============
SHOULD be "Name for the new server".

Actual result
=============
"The security group name."

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621755
Title: Wrong description for "name" field of rebuild API
Status in OpenStack Compute (nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1621755/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1591604] [NEW] Lack hardware configuration for libosinfo
Public bug reported:

Description
===========
The blueprint https://blueprints.launchpad.net/nova/+spec/libvirt-hardware-policy-from-libosinfo says a new nova.conf setting needs to be added, like this:

[libvirt]
hardware_config=default|fixed|libosinfo

But it cannot be found in the Mitaka nova.conf.

Steps to reproduce
==================
* Check the [libvirt] group of nova.conf

Expected result
===============
'hardware_config' exists.

Actual result
=============
No 'hardware_config'.

Environment
===========
1. Exact version of OpenStack you are running:
# rpm -qa | grep nova
openstack-nova-compute-13.0.0-1.el7.noarch
openstack-nova-novncproxy-13.0.0-1.el7.noarch
openstack-nova-conductor-13.0.0-1.el7.noarch
python-nova-13.0.0-1.el7.noarch
openstack-nova-cert-13.0.0-1.el7.noarch
python-novaclient-3.3.0-1.el7.noarch
openstack-nova-console-13.0.0-1.el7.noarch
openstack-nova-scheduler-13.0.0-1.el7.noarch
openstack-nova-api-13.0.0-1.el7.noarch
openstack-nova-common-13.0.0-1.el7.noarch
2. Which hypervisor did you use? libvirt + kvm
3. Which networking type did you use? Neutron with OpenVSwitch

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591604
Title: Lack hardware configuration for libosinfo
Status in OpenStack Compute (nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1591604/+subscriptions
[Yahoo-eng-team] [Bug 1513464] [NEW] wrong description in developer doc
Public bug reported:

Doc address: http://docs.openstack.org/developer/keystone/key_terms.html#resources

"The Identity portion of keystone includes Projects and Domains, and are commonly stored in an SQL backend."

It is NOT Identity but Resources.

** Affects: keystone
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513464
Title: wrong description in developer doc
Status in OpenStack Identity (keystone): New
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1513464/+subscriptions
[Yahoo-eng-team] [Bug 1513335] [NEW] disk allocation ratio should move to resource tracker
Public bug reported:

1. Version: nova 12.0.0 Liberty
2. As mentioned in https://blueprints.launchpad.net/nova/+spec/allocation-ratio-to-resource-tracker, the cpu/mem allocation ratios have already been moved to the resource tracker. The disk allocation ratio should be moved in the same way.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513335
Title: disk allocation ratio should move to resource tracker
Status in OpenStack Compute (nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1513335/+subscriptions
[Yahoo-eng-team] [Bug 1322921] Re: hypervisor-servers command always search by wildcard as '%hypervisor_hostname%'
** Changed in: nova
   Assignee: Eric Xie (mark-xiett) => (unassigned)

** Changed in: nova
   Status: In Progress => Opinion

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322921
Title: hypervisor-servers command always search by wildcard as '%hypervisor_hostname%'
Status in OpenStack Compute (nova): Opinion

Bug description:
I searched servers by a specific hypervisor. However, the result included other hypervisors matched by wildcard as '%hypervisor_hostname%'. I found this bug with the following commands:

admin@controller:~$ nova hypervisor-servers 10-0-0-1
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| db52fd93-cc80-4d5e-852c-b113dec35fbf | instance-00a0 | 1             | 10-0-0-10           |
| 5b15fa8a-66d8-4db1-bb0e-c52fc3a030f3 | instance-00a1 | 1             | 10-0-0-10           |
| 2b492995-007d-4435-8f6b-037ea57188dc | instance-00a2 | 2             | 10-0-0-11           |
| 45b18880-c0f1-4b8b-a21d-80f9dd2566ff | instance-00a3 | 2             | 10-0-0-11           |
+--------------------------------------+---------------+---------------+---------------------+

admin@controller:~$ nova hypervisor-servers 10-0-0-11
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| 2b492995-007d-4435-8f6b-037ea57188dc | instance-00a2 | 2             | 10-0-0-11           |
| 45b18880-c0f1-4b8b-a21d-80f9dd2566ff | instance-00a3 | 2             | 10-0-0-11           |
+--------------------------------------+---------------+---------------+---------------------+

This bug is contained in the compute API v2 extensions at /v2/{tenant_id}/os-hypervisors/{hypervisor_hostname}/servers:

admin@controller:~$ curl -H "X-Auth-Token:MIIL" "http://localhost:8774/v2/771be698aba4431daf41c8012df97e7b/os-hypervisors/10-0-0-1/servers"
{"hypervisors": [{"id": 1, "hypervisor_hostname": "10-0-0-10", "servers": [{"uuid": "db52fd93-cc80-4d5e-852c-b113dec35fbf", "name": "instance-00a0"}, {"uuid": "5b15fa8a-66d8-4db1-bb0e-c52fc3a030f3", "name": "instance-00a1"}]}, {"id": 2, "hypervisor_hostname": "gtestcompute-172-16-227-11", "servers": [{"uuid": "2b492995-007d-4435-8f6b-037ea57188dc", "name": "instance-00a2"}, {"uuid": "45b18880-c0f1-4b8b-a21d-80f9dd2566ff", "name": "instance-00a3"}]}]}

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1322921/+subscriptions
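The behavior above matches a substring search (SQL `LIKE '%hypervisor_hostname%'`), so '10-0-0-1' also matches '10-0-0-10' and '10-0-0-11'. A plain-Python sketch contrasting the reported wildcard match with an exact match (illustrative, not nova's DB layer):

```python
def search_hypervisors(hypervisors, query, exact=False):
    """Filter a list of {'hostname': ...} dicts by hostname.

    exact=False mimics the reported behavior: the query matches any
    hostname that merely contains it. exact=True returns only the
    hypervisor whose hostname equals the query.
    """
    if exact:
        return [h for h in hypervisors if h["hostname"] == query]
    return [h for h in hypervisors if query in h["hostname"]]
```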
[Yahoo-eng-team] [Bug 1474198] [NEW] task_state not NONE after instance boot failed
Public bug reported:

1. Exact version of Nova:
python-novaclient-2.23.0
openstack-nova-common-2015.1.0
python-nova-2015.1.0
openstack-nova-api-2015.1.0
openstack-nova-scheduler-2015.1.0
openstack-nova-conductor-2015.1.0
openstack-nova-compute-2015.1.0
openstack-nova-2015.1.0

2. Relevant log files:
2015-07-14 11:15:07.559 19984 ERROR nova.compute.manager [req-8b567c49-850a-4f00-a73b-c2879528ef39 - - - - -] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Instance failed to spawn
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Traceback (most recent call last):
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2565, in _build_resources
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     yield resources
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2437, in _build_and_run_instance
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     block_device_info=block_device_info)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2385, in spawn
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     write_to_disk=True)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4232, in _get_guest_xml
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     context)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4103, in _get_guest_config
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     flavor, virt_type)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 374, in get_config
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     _("Unexpected vif_type=%s") % vif_type)
2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] NovaException: Unexpected vif_type=binding_failed
2015-07-14 11:15:07.565 19984 INFO nova.compute.manager [req-a32fae7b-2a26-4d44-ab89-e16db804a9f0 58e88aff70dd4959ba5293dab8f6ceac c45dae15962c4797b70f6c278a232f3c - - -] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Terminating instance
2015-07-14 11:15:07.572 19984 INFO nova.virt.libvirt.driver [-] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] During wait destroy, instance disappeared.

3. Reproduce steps:
* Stop neutron-openvswitch-agent on the compute node
* Boot one instance

Expected result: the task state of the instance should be None
Actual result: the task state of the instance was always 'spawning'

# nova list
+--------------------------------------+--------------------------------+--------+------------+-------------+----------+
| ID                                   | Name                           | Status | Task State | Power State | Networks |
+--------------------------------------+--------------------------------+--------+------------+-------------+----------+
| f0a16736-078a-4476-a56a-abee46fdc5f5 | instance_test_vif_binding_fail | ERROR  | spawning   | NOSTATE     |          |
+--------------------------------------+--------------------------------+--------+------------+-------------+----------+

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: in-stable-kilo

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474198
Title: task_state not NONE after instance boot failed
Status in OpenStack Compute (nova): New
[Yahoo-eng-team] [Bug 1447977] [NEW] swap size not change after instance resized
Public bug reported:

1. My environment:
nova: 2014.1, icehouse-stable
hypervisor: libvirt 1.2.1 + kvm

2. Relevant log files: no

3. Reproduce steps:
* Launch one instance with the default flavor m1.tiny, whose swap size is 0 MB
* Resize the instance from m1.tiny to flv_5_root_2_eph_1_swap, whose swap size is 1 MB, and get the swap info with 'virsh' commands:
virsh # domblklist 30
vdc /var/lib/nova/instances/fa13d27f-3ddd-48a5-86a8-aeaf04c2046d/disk.swap
virsh # domblkinfo 30 vdc
Capacity: 1048576
* Resize the instance from flv_5_root_2_eph_1_swap to flv_40_root_5_eph_4_swap, and get the swap info with 'virsh' commands

Expected result:
virsh # domblkinfo 5 vdc
Capacity: 4194304

Actual result:
virsh # domblkinfo 5 vdc
Capacity: 1048576

4. Possible reason, in nova/virt/libvirt/driver.py:

    def _create_image(self, context, instance, disk_mapping, suffix='',
                      disk_images=None, network_info=None,
                      block_device_info=None, files=None,
                      admin_pass=None, inject_files=True,
                      fallback_from_host=None):
        ...
        if 'disk.swap' in disk_mapping:
            mapping = disk_mapping['disk.swap']
            swap_mb = 0

            swap = driver.block_device_info_get_swap(block_device_info)
            if driver.swap_is_usable(swap):
                swap_mb = swap['swap_size']  # use inst_type['swap']?
            elif (inst_type['swap'] > 0 and
                  not block_device.volume_in_mapping(
                      mapping['dev'], block_device_info)):
                swap_mb = inst_type['swap']

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447977
Title: swap size not change after instance resized
Status in OpenStack Compute (Nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1447977/+subscriptions
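The quoted `_create_image` branch prefers an existing usable swap disk over the flavor's swap size, so a resize keeps the old capacity. A minimal sketch of the selection logic and the direction the reporter's `# use inst_type['swap']?` comment suggests (names and simplification are mine, not nova's code):

```python
def pick_swap_mb(existing_swap_mb, flavor_swap_mb):
    """Choose the swap size in MB for a (re)built disk.

    Reported behavior: when a usable swap disk already exists, its
    size wins, so resizing to a flavor with a bigger swap keeps the
    old size. Preferring the flavor value when it differs would make
    the disk follow the new flavor instead.
    """
    if flavor_swap_mb > 0 and flavor_swap_mb != existing_swap_mb:
        return flavor_swap_mb          # honour the new flavor
    return existing_swap_mb or flavor_swap_mb
```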
[Yahoo-eng-team] [Bug 1441950] [NEW] instance on source host can not be cleaned after evacuating
Public bug reported:

1. Version
nova: 2014.1
hypervisor: rhel7 + libvirt + kvm

2. Description
After one instance was evacuated from hostA to hostB, the instance was deleted. Then the 'nova-compute' service on hostA was started, and nova-compute.log shows:
2015-04-09 10:39:52.201 1977 WARNING nova.compute.manager [-] Found 0 in the database and 1 on the hypervisor.

3. Reproduce steps:
* Launch one instance INST on hostA
* Stop the 'nova-compute' service on hostA and wait for it to go down (use 'nova service-list')
* Evacuate INST to hostB
* After the evacuation succeeds, delete INST
* Start the 'nova-compute' service on hostA

Expected result:
* INST on hostA's hypervisor should be destroyed

Actual result:
* INST was still alive on hostA's hypervisor

4. Tips
I checked the source in nova/compute/manager.py and found:

    def _destroy_evacuated_instances(self, context):
        filters = {'deleted': False}
        # Here the deleted instances are filtered out. Would it be
        # more proper to check the deleted instances as well?
        local_instances = self._get_instances_on_driver(context, filters)

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441950
Title: instance on source host can not be cleaned after evacuating
Status in OpenStack Compute (Nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1441950/+subscriptions
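Because `_destroy_evacuated_instances` filters with `{'deleted': False}`, a domain whose instance was deleted while the source host was down is never matched and never cleaned. A sketch of the comparison the report suggests, with illustrative data structures (this is not nova's implementation):

```python
def find_residual_domains(local_domain_uuids, db_instances, this_host):
    """Return hypervisor domain UUIDs that should be destroyed.

    db_instances maps uuid -> {'deleted': bool, 'host': str}.
    A local domain is residual if its DB record is gone or deleted,
    or if the instance now lives on another host (it was evacuated).
    Filtering out deleted rows up front, as the reported code does,
    misses the deleted case.
    """
    residual = []
    for uuid in local_domain_uuids:
        rec = db_instances.get(uuid)
        if rec is None or rec["deleted"] or rec["host"] != this_host:
            residual.append(uuid)
    return residual
```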
[Yahoo-eng-team] [Bug 1439919] [NEW] 'power-state' should not be 'running' when instance evacuate failed
Public bug reported:

My environment:
1. nova 2014.1
2. novaclient 2.17.0

I checked the source, nova/compute/manager.py:

    def rebuild_instance(self, context, instance, orig_image_ref,
                         image_ref, injected_files, new_pass,
                         orig_sys_metadata, bdms, recreate,
                         on_shared_storage, preserve_ephemeral=False):
        with self._error_out_instance_on_exception(context, instance):
            LOG.info(_LI("Rebuilding instance"), context=context,
                     instance=instance)
            if recreate:
                if not self.driver.capabilities["supports_recreate"]:
                    raise exception.InstanceRecreateNotSupported

If InstanceRecreateNotSupported is raised, only vm_state is set to 'ERROR', but the state shown as 'running' should be set to 'NOSTATE' or similar:

    def _error_out_instance_on_exception(self, context, instance,
                                         quotas=None,
                                         instance_state=vm_states.ACTIVE):
        ...
        except Exception:
            LOG.exception(_LE('Setting instance vm_state to ERROR'),
                          instance_uuid=instance_uuid)
            with excutils.save_and_reraise_exception():
                if quotas:
                    quotas.rollback()
                self._set_instance_error_state(context, instance)

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439919
Title: 'power-state' should not be 'running' when instance evacuate failed
Status in OpenStack Compute (Nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1439919/+subscriptions
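The error path above only flips vm_state to ERROR and leaves the other state fields untouched. A minimal sketch of an error-out context manager that also resets the stale state on failure (plain Python with a dict standing in for the instance object; not nova's implementation):

```python
from contextlib import contextmanager

@contextmanager
def error_out_instance(instance):
    """Set the instance to ERROR and clear stale state on failure.

    Clearing task_state here is the behavior the report asks for;
    the quoted nova code only changes vm_state.
    """
    try:
        yield
    except Exception:
        instance["vm_state"] = "error"
        instance["task_state"] = None   # do not leave it 'rebuilding'
        raise
```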
[Yahoo-eng-team] [Bug 1433404] [NEW] CLI 'nova host-meta' delete some meta failed
Public bug reported:

On one host, some instances have certain metadata items and the others do not. When deleting those metadata items, I got an error:
ERROR: Metadata item was not found (HTTP 404) (Request-ID: req-c85f5212-82ee-4b30-ad0e-48d0e62186c7)

My environment:
1 controller node, 1 compute node
nova - 2014.1, stable-icehouse
novaclient - 2.17.0

Reproduce:
1. Launch instance A with metadata 'color=true' and 'foo=bar'
2. Launch instance B with metadata 'foo=bar'
3. Use 'nova host-meta' to delete the 'foo' metadata, and it worked:
# nova host-meta 2C514_1_10_SBCJ delete foo
4. Use 'nova host-meta' to delete the 'color' metadata, and it did not work.

IMHO, the 'host-meta' CLI should ignore that instance B does not have the 'color' metadata and delete the item from any instance on the host that has it. Running 'nova meta' against every instance is very cumbersome.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433404
Title: CLI 'nova host-meta' delete some meta failed
Status in OpenStack Compute (Nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1433404/+subscriptions
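The tolerant behavior the reporter proposes can be sketched in a few lines of plain Python (illustrative, not novaclient's code): delete the key from every instance that has it and simply skip the ones that never had it, instead of failing the whole host-wide operation with a 404.

```python
def delete_meta_on_host(instances_meta, key):
    """Delete `key` from each instance's metadata dict.

    Instances that never had the key are skipped rather than
    aborting the whole operation. Returns how many instances
    actually had the key removed.
    """
    deleted = 0
    for meta in instances_meta:
        if key in meta:
            del meta[key]
            deleted += 1
    return deleted
```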
[Yahoo-eng-team] [Bug 1433374] [NEW] CLI "nova hypervisor-servers ID" shows all instances not instances of hypervisor "ID "
Public bug reported:

env:
nova - 2014.1
novaclient - 2.17.0

Reproduce:
1. nova hypervisor-list
# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | 2C514_1_10_SBCJ     |
| 2  | 2C519_1_11_SBCJ     |
| 3  | 2C519_1_13_SBCJ     |
+----+---------------------+
2. nova hypervisor-servers 1
# nova hypervisor-servers 1
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| 5068402f-3e1f-4e1a-95b3-744a9916f6e9 | instance-0096 | 1             | 2C514_1_10_SBCJ     |
| f8ee006b-e039-48fe-8673-df54ffbf82ff | instance-0099 | 1             | 2C514_1_10_SBCJ     |
| 1dec8899-2f0f-4627-9eb6-9cfe476baba0 | instance-009a | 1             | 2C514_1_10_SBCJ     |
| 869b561d-d0a0-404a-8a03-3b3bdf8b9f97 | instance-00ac | 1             | 2C514_1_10_SBCJ     |
| d693cde6-540d-495c-806c-17cf80be7382 | instance-00ad | 1             | 2C514_1_10_SBCJ     |
| d8097b96-c08c-420a-99c2-22510b2aa475 | instance-00ae | 1             | 2C514_1_10_SBCJ     |
| 75318765-e5ca-4fca-9d67-3678cd8e7626 | instance-00aa | 2             | 2C519_1_11_SBCJ     |
| fa27d9c8-3020-4690-a14c-ca24ae804921 | instance-00ab | 2             | 2C519_1_11_SBCJ     |
+--------------------------------------+---------------+---------------+---------------------+

But using the hostname gives the right results:
# nova hypervisor-servers 2C514_1_10_SBCJ
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| 5068402f-3e1f-4e1a-95b3-744a9916f6e9 | instance-0096 | 1             | 2C514_1_10_SBCJ     |
| f8ee006b-e039-48fe-8673-df54ffbf82ff | instance-0099 | 1             | 2C514_1_10_SBCJ     |
| 1dec8899-2f0f-4627-9eb6-9cfe476baba0 | instance-009a | 1             | 2C514_1_10_SBCJ     |
| 869b561d-d0a0-404a-8a03-3b3bdf8b9f97 | instance-00ac | 1             | 2C514_1_10_SBCJ     |
| d693cde6-540d-495c-806c-17cf80be7382 | instance-00ad | 1             | 2C514_1_10_SBCJ     |
| d8097b96-c08c-420a-99c2-22510b2aa475 | instance-00ae | 1             | 2C514_1_10_SBCJ     |
+--------------------------------------+---------------+---------------+---------------------+

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433374
Title: CLI "nova hypervisor-servers ID" shows all instances not instances of hypervisor "ID"
Status in OpenStack Compute (Nova): New
[Yahoo-eng-team] [Bug 1426806] [NEW] flavor created successfully when flavorid neither integer nor UUID
Public bug reported:

version: 2014.1 icehouse-stable

# nova help flavor-create
...
Unique ID (integer or UUID) for the new flavor. If specifying 'auto', a UUID will be generated as id
...

"id" should be an integer or a UUID. But when "id" was set to a value that is neither, the flavor was still created successfully.

# nova flavor-create flv-testdddfasdfsfdsfdsf jljdfsfojgnng 512 1 1
+---------------+--------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID            | Name                     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+---------------+--------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| jljdfsfojgnng | flv-testdddfasdfsfdsfdsf | 512       | 1    | 0         |      | 1     | 1.0         | True      |
+---------------+--------------------------+-----------+------+-----------+------+-------+-------------+-----------+

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426806
Title: flavor created successfully when flavorid neither integer nor UUID
Status in OpenStack Compute (Nova): New
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1426806/+subscriptions
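The missing validation can be sketched with Python's stdlib `uuid` module (illustrative; this is not nova's actual validator): accept an integer, a UUID, or the literal 'auto', and reject everything else.

```python
import uuid

def is_valid_flavor_id(value):
    """Return True for an integer, a UUID, or the literal 'auto'."""
    if value == "auto":
        return True
    if value.isdigit():
        return True
    try:
        uuid.UUID(value)   # raises ValueError for non-UUID strings
        return True
    except ValueError:
        return False
```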
[Yahoo-eng-team] [Bug 1420597] Re: metadata missed after aggregate's az updated
@ugvddm, I used the version: 2014.1 icehouse.

** Changed in: nova
   Status: Invalid => Incomplete

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420597

Title: metadata missed after aggregate's az updated
Status in OpenStack Compute (Nova): Incomplete

Bug description:
After changing an aggregate's availability_zone, the aggregate's other metadata is lost.

Reproduce:
1. create one aggregate which belongs to "nova":
# nova aggregate-create hagg-test nova
+-----+-----------+-------------------+-------+--------------------------+
| Id  | Name      | Availability Zone | Hosts | Metadata                 |
+-----+-----------+-------------------+-------+--------------------------+
| 134 | hagg-test | nova              |       | 'availability_zone=nova' |
+-----+-----------+-------------------+-------+--------------------------+
2. set metadata foo=bar:
# nova aggregate-set-metadata hagg-test foo=bar
Metadata has been successfully updated for aggregate 134.
+-----+-----------+-------------------+-------+-------------------------------------+
| Id  | Name      | Availability Zone | Hosts | Metadata                            |
+-----+-----------+-------------------+-------+-------------------------------------+
| 134 | hagg-test | nova              |       | 'availability_zone=nova', 'foo=bar' |
+-----+-----------+-------------------+-------+-------------------------------------+
3. change the availability_zone:
# nova aggregate-update 134 hagg-test az-test
Aggregate 136 has been successfully updated.
+-----+-----------+-------------------+-------+-----------------------------+
| Id  | Name      | Availability Zone | Hosts | Metadata                    |
+-----+-----------+-------------------+-------+-----------------------------+
| 136 | hagg-test | az-test           |       | 'availability_zone=az-test' |
+-----+-----------+-------------------+-------+-----------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420597/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1421086] [NEW] admin_password inject failed with config_drive
Public bug reported:

Injecting "admin_password" into an instance with config_drive failed.

Reproduce:
1. Set "force_config_drive=always" in nova.conf
2. Launch one instance with "admin_pass";

I checked /dev/sr0 from the console of this instance and found the "meta_data.json" file that had the info {"admin_pass": "admin",……}

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421086

Title: admin_password inject failed with config_drive
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421086/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
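For reference, a guest can confirm what actually landed on the config drive by parsing meta_data.json; this is a minimal sketch with an illustrative JSON payload (the values are made up, not taken from a real instance).

```python
import json

# Illustrative meta_data.json content, as it would appear at
# openstack/latest/meta_data.json on the mounted config drive (/dev/sr0).
raw = '{"admin_pass": "admin", "uuid": "eb030d0d-2f39-453b-9faa-4bffaef2ee9c"}'

metadata = json.loads(raw)
admin_pass = metadata.get("admin_pass")
print(admin_pass)  # admin
```

If the password is present in meta_data.json but login still fails, the problem is on the guest side (e.g. cloud-init not applying the password) rather than in the drive contents.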
[Yahoo-eng-team] [Bug 1421083] [NEW] admin_password inject failed in kvm
Public bug reported:

Injected "admin_password" into the instance's image by modifying the image contents (not via config_drive). The instance booted successfully, but I got the log below:

2015-02-12 14:11:28.119 2 WARNING nova.virt.disk.api [req-bebb329c-ee79-48d2-b800-e02e2e7ffbd5 189303f928574600af4f95a00f13c280 94b44a817f9a4c40a21b136de7cefee7] Ignoring error injecting data into image (Error mounting /var/lib/nova/instances/eb030d0d-2f39-453b-9faa-4bffaef2ee9c/disk with libguestfs (/usr/bin/qemu-system-x86_64 exited with error status 1.

Logging in with that password also failed.

Reproduce:
1. prepare environment: libvirt with KVM; in nova.conf:
   inject_partition=-1
   inject_password=true
2. Launch one instance with "admin_pass" configured.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421083

Title: admin_password inject failed in kvm
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421083/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1421084] [NEW] admin_password inject failed in kvm
Public bug reported:

Injected "admin_password" into the instance's image by modifying the image contents (not via config_drive). The instance booted successfully, but I got the log below:

2015-02-12 14:11:28.119 2 WARNING nova.virt.disk.api [req-bebb329c-ee79-48d2-b800-e02e2e7ffbd5 189303f928574600af4f95a00f13c280 94b44a817f9a4c40a21b136de7cefee7] Ignoring error injecting data into image (Error mounting /var/lib/nova/instances/eb030d0d-2f39-453b-9faa-4bffaef2ee9c/disk with libguestfs (/usr/bin/qemu-system-x86_64 exited with error status 1.

Logging in with that password also failed.

Reproduce:
1. prepare environment: libvirt with KVM; in nova.conf:
   inject_partition=-1
   inject_password=true
2. Launch one instance with "admin_pass" configured.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421084

Title: admin_password inject failed in kvm
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421084/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1420597] [NEW] metadata missed after aggregate's az updated
Public bug reported:

After changing an aggregate's availability_zone, the aggregate's other metadata is lost.

Reproduce:
1. create one aggregate which belongs to "nova":
# nova aggregate-create hagg-test nova
+-----+-----------+-------------------+-------+--------------------------+
| Id  | Name      | Availability Zone | Hosts | Metadata                 |
+-----+-----------+-------------------+-------+--------------------------+
| 134 | hagg-test | nova              |       | 'availability_zone=nova' |
+-----+-----------+-------------------+-------+--------------------------+
2. set metadata foo=bar:
# nova aggregate-set-metadata hagg-test foo=bar
Metadata has been successfully updated for aggregate 134.
+-----+-----------+-------------------+-------+-------------------------------------+
| Id  | Name      | Availability Zone | Hosts | Metadata                            |
+-----+-----------+-------------------+-------+-------------------------------------+
| 134 | hagg-test | nova              |       | 'availability_zone=nova', 'foo=bar' |
+-----+-----------+-------------------+-------+-------------------------------------+
3. change the availability_zone:
# nova aggregate-update 134 hagg-test az-test
Aggregate 136 has been successfully updated.
+-----+-----------+-------------------+-------+-----------------------------+
| Id  | Name      | Availability Zone | Hosts | Metadata                    |
+-----+-----------+-------------------+-------+-----------------------------+
| 136 | hagg-test | az-test           |       | 'availability_zone=az-test' |
+-----+-----------+-------------------+-------+-----------------------------+

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420597

Title: metadata missed after aggregate's az updated
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420597/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
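The observed behaviour looks like the aggregate's metadata dict being replaced instead of merged on update. A hypothetical sketch, not Nova's actual code, contrasting the buggy behaviour with the expected one:

```python
def update_aggregate_az_buggy(metadata, new_az):
    # Replaces the whole metadata dict, dropping unrelated keys such as 'foo'.
    return {'availability_zone': new_az}

def update_aggregate_az_fixed(metadata, new_az):
    # Merges the change into the existing metadata, preserving other keys.
    merged = dict(metadata)
    merged['availability_zone'] = new_az
    return merged

meta = {'availability_zone': 'nova', 'foo': 'bar'}
print(update_aggregate_az_buggy(meta, 'az-test'))
print(update_aggregate_az_fixed(meta, 'az-test'))
```

The buggy variant reproduces the report exactly: after the az update, only 'availability_zone=az-test' survives.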
[Yahoo-eng-team] [Bug 1397822] Re: Can't access image after reboot host
*** This bug is a duplicate of bug 1195884 *** https://bugs.launchpad.net/bugs/1195884 ** This bug has been marked a duplicate of bug 1195884 resume guests on libvirt host reboot fails for instances created from multi-part image -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1397822 Title: Can't access image after reboot host Status in OpenStack Compute (Nova): New Bug description: I upload the cirros image which add property (--property architecture=x86_64) with CLI: glance image-update 6886dd80-b48c-4192-98bb-977d5ffa0314 --property architecture=x86_64 Then launch one instance with this image. But when the host with the instance rebooted, got log below: 2014-12-01 13:38:45.761 5845 WARNING nova.compute.utils [-] [instance: 1b72bff6-3f4d-49ac-8d0b-f173a42783f5] Can't access image 6886dd80-b48c-4192-98bb-977d5ffa0314: can't be encoded use nova.2014.1.3 in-stable-icehouse To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1397822/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1406440] Re: unshelve failed when instance attached volume
*** This bug is a duplicate of bug 1404801 *** https://bugs.launchpad.net/bugs/1404801 ** This bug has been marked a duplicate of bug 1404801 Unshelve instance not working if instance is boot from volume -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1406440 Title: unshelve failed when instance attached volume Status in OpenStack Compute (Nova): New Bug description: unshelved instance failed when the instance attached one volume. reproduce: 1. instance boot from image; 2. create 1GB volume in lvm, use local storage; 3. attach volume to instance; 4. shelve instance, then unshelve instance detail logs: 2014-12-30 08:43:17.401 8797 ERROR nova.compute.manager [req-df836400-b68f-4a0b-89a1-055dcad00b70 40dc8656066f432895be13be71e44b86 a20fb3edeab44755a861f510183a679a] [instance: 1632aa3b-5a00-495a-9041-283566592a65] Instance failed block device setup 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] Traceback (most recent call last): 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1890, in _prep_block_device 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] self.driver, self._await_block_device_map_created) + 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 367, in attach_block_devices 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] map(_log_and_attach, block_device_mapping) 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 365, in _log_and_attach 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] bdm.attach(*attach_args, **attach_kwargs) 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 44, in wrapped 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] ret_val = method(obj, context, *args, **kwargs) 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 217, in attach 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] volume_api.check_attach(context, volume, instance=instance) 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 234, in check_attach 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] raise exception.InvalidVolume(reason=msg) 2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] InvalidVolume: Invalid volume: Volume has been attached to the instance To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1406440/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1413833] [NEW] vm_state inaccurate when operate virsh command
Public bug reported: "vm_state" of instance is inaccurate when user operate "virsh pause DOM" command. One case: 1. Use "nova pause INSTANCE", and get: | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | PAUSED | - | Paused | | 2. Use "virsh pause DOM", and get: | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | ACTIVE | - | Paused | | Another case: 1. nova pause INSTANCE; | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | PAUSED | - | Paused | | 2. virsh resume INSTANCE; | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | PAUSED | - | Running | | ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1413833 Title: vm_state inaccurate when operate virsh command Status in OpenStack Compute (Nova): New Bug description: "vm_state" of instance is inaccurate when user operate "virsh pause DOM" command. One case: 1. Use "nova pause INSTANCE", and get: | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | PAUSED | - | Paused | | 2. Use "virsh pause DOM", and get: | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | ACTIVE | - | Paused | | Another case: 1. nova pause INSTANCE; | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | PAUSED | - | Paused | | 2. virsh resume INSTANCE; | e4ce6895-a0db-4b43-9483-c4b8401fdd1c | instance_test | PAUSED | - | Running | | To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1413833/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1411480] [NEW] remain "BUILD" status when instance with force-host failed to create
Public bug reported: The state "BUILD" remained when instance with force-host failed to create. Till the period task "_check_instance_build_time" change the state "BUILD" to "ERROR". # nova list +--+---+++-+---+ | ID | Name | Status | Task State | Power State | Networks | +--+---+++-+---+ | 2d14b1b3-fa57-4953-b226-40076f78e9ac | instanceA | ACTIVE | - | Running | net-test=192.168.0.23 | | fcc745de-a15d-47c4-9167-153eb73a4c9b | instanceB | BUILD | - | NOSTATE | | +--+---+++-+---+ reproduce: 1. create one instance group with policy "anti-affinity", named "group-anti-affinity"; 2. create one instance with "--hint group=group-anti-affinity", hosted on hostA; 3. create another instance with "--hint group=group-anti-affinity", force it to hostA; ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1411480 Title: remain "BUILD" status when instance with force-host failed to create Status in OpenStack Compute (Nova): New Bug description: The state "BUILD" remained when instance with force-host failed to create. Till the period task "_check_instance_build_time" change the state "BUILD" to "ERROR". # nova list +--+---+++-+---+ | ID | Name | Status | Task State | Power State | Networks | +--+---+++-+---+ | 2d14b1b3-fa57-4953-b226-40076f78e9ac | instanceA | ACTIVE | - | Running | net-test=192.168.0.23 | | fcc745de-a15d-47c4-9167-153eb73a4c9b | instanceB | BUILD | - | NOSTATE | | +--+---+++-+---+ reproduce: 1. create one instance group with policy "anti-affinity", named "group-anti-affinity"; 2. create one instance with "--hint group=group-anti-affinity", hosted on hostA; 3. 
create another instance with "--hint group=group-anti-affinity", force it to hostA; To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1411480/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
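The periodic task that eventually rescues the stuck instance can be sketched as follows. The timeout value and data shapes here are illustrative, not Nova's actual _check_instance_build_time implementation:

```python
from datetime import datetime, timedelta

# Illustrative timeout; Nova's is configurable (instance_build_timeout).
INSTANCE_BUILD_TIMEOUT = timedelta(minutes=10)

def check_instance_build_time(instances, now=None):
    """Hypothetical periodic task: flip instances stuck in BUILD
    past the timeout to ERROR."""
    now = now or datetime.utcnow()
    for inst in instances:
        if inst['status'] == 'BUILD' and now - inst['created'] > INSTANCE_BUILD_TIMEOUT:
            inst['status'] = 'ERROR'
    return instances

now = datetime(2015, 1, 16, 12, 0)
stuck = [{'status': 'BUILD', 'created': now - timedelta(minutes=30)}]
print(check_instance_build_time(stuck, now=now)[0]['status'])  # ERROR
```

The complaint in the report is that only this timeout path fixes the state; the forced-host build failure itself should set ERROR immediately.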
[Yahoo-eng-team] [Bug 1406465] [NEW] shelved image remained when delete shelved instance
Public bug reported:

Shelve-offload one instance with one attached volume. There is a bug (https://bugs.launchpad.net/nova/+bug/1406440) that makes the unshelve operation fail. After deleting the instance, the shelved image remained even though it should have been deleted.

# nova image-list
| 52fc907d-838f-4409-8ba4-0ff34c1e5ae5 | vm-shelve-test-shelved | ACTIVE | ab44ff05-9634-4597-bcf8-408bb8deac2c |

When the instance is booted without a volume, unshelve succeeds and the shelved image is deleted when the instance is deleted.

reproduce:
1. boot one instance;
2. create one volume, then attach it to the instance;
3. shelve, shelve-offload, unshelve the instance;
4. delete the instance.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: in-stable-icehouse shelve

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406465

Title: shelved image remained when delete shelved instance
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406465/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1406460] [NEW] anti-affinity property broken when instance unshelve
Public bug reported:

The instance is not scheduled according to the anti-affinity policy while unshelving.

reproduce:
1. create one server group with policy anti-affinity:
   nova server-group-create --policy anti-affinity server-group-test-anti
2. boot two instances with the server group:
   nova boot --flavor 1 --image f83026e6-86a3-4eaf-a24c-d0281217aba6 --nic net-id=3a68a059-3493-41d5-9063-773250e570b0 --hint group=36bc7998-ce69-42fc-a45b-e9130bd36f1e vm-anti-affinity-shelve-1
   nova boot --flavor 1 --image f83026e6-86a3-4eaf-a24c-d0281217aba6 --nic net-id=3a68a059-3493-41d5-9063-773250e570b0 --hint group=36bc7998-ce69-42fc-a45b-e9130bd36f1e vm-anti-affinity-shelve-2
   They were located at:
   vm-anti-affinity-shelve-1  hpc7000-slot10
   vm-anti-affinity-shelve-2  hpc7000-slot4
3. shelve vm-anti-affinity-shelve-2, then shelve-offload, then unshelve;
4. check vm-anti-affinity-shelve-2 location: hpc7000-slot10

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: in-stable-icehouse shelve

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406460

Title: anti-affinity property broken when instance unshelve
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406460/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
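The check that boot-time scheduling applies, and that unshelve evidently skips, is simple to state: a candidate host is acceptable only if no other member of the group already runs there. A hypothetical sketch, not the actual scheduler filter:

```python
def anti_affinity_host_ok(candidate_host, group_hosts):
    """Hypothetical anti-affinity check: reject a host that already
    runs another member of the server group."""
    return candidate_host not in group_hosts

# Host already used by vm-anti-affinity-shelve-1 in the report:
group_hosts = {'hpc7000-slot10'}

print(anti_affinity_host_ok('hpc7000-slot10', group_hosts))  # False
print(anti_affinity_host_ok('hpc7000-slot4', group_hosts))   # True
```

Applying this check on the unshelve path would have kept vm-anti-affinity-shelve-2 off hpc7000-slot10.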
[Yahoo-eng-team] [Bug 1406441] [NEW] A progress bar showed but shelve-offload finished
Public bug reported: Instance shelve-offload finished successfully, but a progress bar still existed in "status" of instance. reproduce: 1. create one instance; 2. shelve, shelve-offload the instance; 3. check the instance page of project on dashboard. ** Affects: horizon Importance: Undecided Status: New ** Tags: in-stable-icehouse shelve -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1406441 Title: A progress bar showed but shelve-offload finished Status in OpenStack Dashboard (Horizon): New Bug description: Instance shelve-offload finished successfully, but a progress bar still existed in "status" of instance. reproduce: 1. create one instance; 2. shelve, shelve-offload the instance; 3. check the instance page of project on dashboard. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1406441/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1406440] [NEW] unshelve failed when instance attached volume
Public bug reported:

Unshelving an instance failed when the instance had one volume attached.

reproduce:
1. instance boot from image;
2. create 1GB volume in lvm, use local storage;
3. attach volume to instance;
4. shelve instance, then unshelve instance

detail logs:
2014-12-30 08:43:17.401 8797 ERROR nova.compute.manager [req-df836400-b68f-4a0b-89a1-055dcad00b70 40dc8656066f432895be13be71e44b86 a20fb3edeab44755a861f510183a679a] [instance: 1632aa3b-5a00-495a-9041-283566592a65] Instance failed block device setup
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] Traceback (most recent call last):
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1890, in _prep_block_device
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]     self.driver, self._await_block_device_map_created)
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 367, in attach_block_devices
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]     map(_log_and_attach, block_device_mapping)
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 365, in _log_and_attach
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]     bdm.attach(*attach_args, **attach_kwargs)
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 44, in wrapped
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]     ret_val = method(obj, context, *args, **kwargs)
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 217, in attach
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]     volume_api.check_attach(context, volume, instance=instance)
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 234, in check_attach
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65]     raise exception.InvalidVolume(reason=msg)
2014-12-30 08:43:17.401 8797 TRACE nova.compute.manager [instance: 1632aa3b-5a00-495a-9041-283566592a65] InvalidVolume: Invalid volume: Volume has been attached to the instance

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: in-stable-icehouse shelve

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406440

Title: unshelve failed when instance attached volume
Status in OpenStack Compute (Nova): New
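The failing check in the traceback can be sketched as follows; this is a simplified, hypothetical version of the check in nova/volume/cinder.py, not the real function:

```python
class InvalidVolume(Exception):
    pass

def check_attach(volume):
    """Hypothetical sketch of the attach precondition: a volume already
    recorded as in-use cannot be attached again."""
    if volume['status'] != 'available':
        raise InvalidVolume('Volume has been attached to the instance')

# During unshelve, the block device mapping still records the volume as
# attached, so the re-attach attempt hits this check and fails.
try:
    check_attach({'status': 'in-use'})
except InvalidVolume as e:
    print(e)
```

This matches the report: the volume never detached logically during shelve-offload, so the unshelve path's re-attach is rejected as a double attach.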
[Yahoo-eng-team] [Bug 1405374] [NEW] Unable to create new image with mini-disk and mini-ram
Public bug reported:

Unable to create a new image from "Image Location" when configuring "Minimum Disk (GB)" and "Minimum RAM (MB)". Without these settings the image is created successfully.

The log of httpd:
[Wed Dec 24 09:31:04.564521 2014] [:error] [pid 4869] Recoverable error:
[Wed Dec 24 09:31:04.564563 2014] [:error] [pid 4869]
[Wed Dec 24 09:31:04.564569 2014] [:error] [pid 4869] 403 Forbidden
[Wed Dec 24 09:31:04.564575 2014] [:error] [pid 4869]
[Wed Dec 24 09:31:04.564579 2014] [:error] [pid 4869]
[Wed Dec 24 09:31:04.564584 2014] [:error] [pid 4869] 403 Forbidden
[Wed Dec 24 09:31:04.564589 2014] [:error] [pid 4869] Access was denied to this resource.
[Wed Dec 24 09:31:04.564594 2014] [:error] [pid 4869]
[Wed Dec 24 09:31:04.564599 2014] [:error] [pid 4869]
[Wed Dec 24 09:31:04.564604 2014] [:error] [pid 4869] (HTTP 403)

Reproduce:
1. Click "Create Image";
2. Choose "Image Location" as "Image Source" and enter the image URL;
3. Set "Minimum Disk (GB)" to 20 and "Minimum RAM (MB)" to 1024;
4. Click "Create Image".

Use 2014.2 juno-stable

** Affects: horizon
   Importance: Undecided
   Status: New

** Tags: in-juno-stable

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405374

Title: Unable to create new image with mini-disk and mini-ram
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405374/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1397822] [NEW] Can't access image after reboot host
Public bug reported: I uploaded the cirros image and added a property (--property architecture=x86_64) with the CLI:

glance image-update 6886dd80-b48c-4192-98bb-977d5ffa0314 --property architecture=x86_64

Then I launched one instance from this image. When the host running the instance was rebooted, the following was logged:

2014-12-01 13:38:45.761 5845 WARNING nova.compute.utils [-] [instance: 1b72bff6-3f4d-49ac-8d0b-f173a42783f5] Can't access image 6886dd80-b48c-4192-98bb-977d5ffa0314: can't be encoded

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397822

Title: Can't access image after reboot host

Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397822/+subscriptions
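The "can't be encoded" text in the warning above is characteristic of a bytes/unicode conversion failing on image data under Python 2. A minimal, hypothetical sketch of that failure mode (the helper name and message shape are assumptions, not nova's actual code):

```python
def safe_encode(text, encoding='ascii'):
    # Hypothetical stand-in for a string-sanitising helper: values that
    # do not fit the target codec are rejected with a message shaped
    # like the one in the nova-compute warning.
    try:
        return text.encode(encoding)
    except UnicodeEncodeError:
        raise ValueError("%s can't be encoded" % text)

print(safe_encode('x86_64'))        # a plain-ASCII property value is fine
try:
    safe_encode('x86\u201164')      # a non-ASCII character trips the check
except ValueError as exc:
    print(exc)
```

Since the architecture=x86_64 value in this report is plain ASCII, the real trigger is unclear from the log alone; the sketch only illustrates the class of error the message names.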
[Yahoo-eng-team] [Bug 1396456] [NEW] instance boot failed when restart host
Public bug reported: After I rebooted the host, the state of one instance became ERROR in the CLI (nova list).

The nova-compute service log:

2014-11-25 12:13:48.095 3848 DEBUG nova.compute.manager [-] [instance: 7d7ec3f2-3709-4bbc-b278-849fd672d284] Current state is 4, state in DB is 1. _init_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:961

But:

2014-11-25 12:14:48.249 3848 ERROR nova.virt.libvirt.driver [-] An error occurred while trying to launch a defined domain with xml: instance-0006 7d7ec3f2-3709-4bbc-b278-849fd672d284 .

Checking the libvirt log:

2014-11-25 04:14:18.997+: 2545: error : qemuMonitorOpenUnix:313 : monitor socket did not show up: No such file or directory

Using stable-icehouse.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: libvirt

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396456

Title: instance boot failed when restart host

Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1396456/+subscriptions
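The "Current state is 4, state in DB is 1" line compares nova power-state codes. A small sketch decoding those numbers (code values taken from nova.compute.power_state as of the Icehouse era; the helper function itself is hypothetical, not nova's code):

```python
# Nova power-state codes (values as defined in nova.compute.power_state).
POWER_STATES = {
    0: 'NOSTATE',
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}

def explain_state_mismatch(current, in_db):
    # Hypothetical helper: render the _init_instance comparison readably.
    return ('hypervisor reports %s but the DB records %s'
            % (POWER_STATES[current], POWER_STATES[in_db]))

# For the log line above: the domain is SHUTDOWN after the host reboot
# while nova still records it as RUNNING, so _init_instance must resync
# the instance -- and here the subsequent relaunch via libvirt fails.
print(explain_state_mismatch(4, 1))
```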
[Yahoo-eng-team] [Bug 1354244] [NEW] instance boot failed when assigned more than one sriov nic
Public bug reported: Our project (OpenCOS) wants to use the PCI SRIOV functions. We merged the code from the blueprint (https://review.openstack.org/#/c/67500/) onto the icehouse release version (nova-2014.1.tar.gz). But when more than one SRIOV NIC is assigned to one instance, the boot fails.

The failure log on the compute node (the first line is truncated in the original report):

ba9ea9a-41e6-46f3-9013-58a4aad9f8b0] Error: Unterminated string starting at: line 1 column 225 (char 224)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0] Traceback (most recent call last):
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1286, in _build_instance
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     with rt.instance_claim(context, instance, limits):
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     return f(*args, **kwargs)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 122, in instance_claim
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     overhead=overhead, limits=limits)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/compute/claims.py", line 95, in __init__
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     self._claim_test(resources, limits)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/compute/claims.py", line 144, in _claim_test
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     self._test_pci()]
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/compute/claims.py", line 171, in _test_pci
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     pci_requests = pci_request.get_instance_pci_requests(self.instance)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/pci/pci_request.py", line 208, in get_instance_pci_requests
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     return jsonutils.loads(pci_requests)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/jsonutils.py", line 164, in loads
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     return json.loads(s)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     return _default_decoder.decode(s)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]   File "/usr/lib64/python2.7/json/decoder.py", line 381, in raw_decode
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0]     obj, end = self.scan_once(s, idx)
2014-08-06 15:50:28.976 2807 TRACE nova.compute.manager [instance: 1ba9ea9a-41e6-46f3-9013-58a4aad9f8b0] ValueError: Unterminated string starting at: line 1 column 225 (char 224)

** Affects: nova
   Importance: Undecided
   Status: New

** Tags removed: any
** Tags added: compute
** Tags removed: compute

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354244

Title: instance boot failed when assigned more than one sriov nic

Status in OpenStack Compute (Nova): New
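The traceback ends with json.loads failing at char 224, i.e. the stored pci_requests JSON is cut off mid-string. One plausible cause (an assumption for illustration, not confirmed by the report) is that serializing requests for multiple NICs overflows a fixed-width database column, so only a truncated prefix survives. A sketch reproducing the decode error on truncated JSON:

```python
import json

# Build a pci_requests-style list long enough that its JSON form exceeds
# 255 characters (the 255-char column width is an assumption).
requests = [{'count': 1,
             'spec': [{'vendor_id': '8086', 'product_id': '10ed',
                       'physical_network': 'physnet%d' % i}]}
            for i in range(3)]
serialized = json.dumps(requests)
assert len(serialized) > 255

truncated = serialized[:255]      # what a too-small column would store
try:
    json.loads(truncated)
except ValueError as exc:         # JSONDecodeError subclasses ValueError
    print('json.loads failed: %s' % exc)
```

A single request fits under the limit, which would match the symptom that one SRIOV NIC boots fine while two or more fail at the claim step.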
[Yahoo-eng-team] [Bug 1341128] [NEW] Several inaccuracies in wiki PCI_passthrough_SRIOV_support
Public bug reported: https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

There are several inaccuracies:

1) Create PCI flavor
# The name bigGPU below should be bigGPU2.
nova pci-flavor-create name 'bigGPU' description 'passthrough Intel's on-die GPU'
nova pci-flavor-update name 'bigGPU2' set 'vendor_id'='8086' 'product_id': '0002'

2) Create flavor and boot with it
nova flavor-key m1.small set pci_passthrough:pci_flavor= '1:bigGPU,bigGPU2;'
nova boot mytest --flavor m1.tiny --image=cirros-0.3.1-x86_64-uec
# The flavor should be the same in both commands (m1.small is configured but m1.tiny is booted).

Should these be treated as one bug? I'm not sure. :)

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341128

Title: Several inaccuracies in wiki PCI_passthrough_SRIOV_support

Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1341128/+subscriptions