Following up on my previous comment: I believe self.detach_device(config, persistent=False, live=live) eventually calls detachDeviceFlags() in /usr/lib64/python2.7/site-packages/libvirt.py, which in turn calls into /usr/lib64/python2.7/site-packages/libvirtmod.so. From there I can't see what happens next. In libvirt.detachDeviceFlags() I see:

    ret = libvirtmod.virDomainDetachDeviceFlags(self._o, xml, flags)
    if ret == -1:
        raise libvirtError('virDomainDetachDeviceFlags() failed', dom=self)
    return ret

but I'm not seeing an error in the libvirt log. The most relevant thing I can find in the log is:

    2016-04-11 15:20:19.756+0000: 38234: debug : virDomainDetachDeviceFlags:8560 : dom=0x3fff9c00b2e0, (VM: name=2_cvj_vm_test-a4806355-0000001c, uuid=a4806355-40df-48c8-9b73-8f8037a3aaa5), xml=<disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native"/>
      <source dev="/dev/disk/by-id/wwn-0x60050768028110a4880000000000bec5"/>
      <target bus="virtio" dev="vdb"/>
      <serial>c16946aa-3018-4bdf-9586-a1cdfe8c8cb5</serial>
    </disk>
    , flags=1
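For anyone else digging into this: flags=1 in that debug entry is VIR_DOMAIN_AFFECT_LIVE, i.e. a live hot-unplug from the running domain. Below is a minimal sketch for driving the same call directly through the libvirt Python bindings, bypassing nova entirely, to check whether the behavior reproduces below nova. The domain name and disk XML are copied from the debug entry above; the qemu:///system URI is an assumption for a local KVM host:

    import libvirt

    # VIR_DOMAIN_AFFECT_LIVE == 1, matching flags=1 in the libvirtd debug log.
    FLAGS = libvirt.VIR_DOMAIN_AFFECT_LIVE

    # Disk XML copied from the virDomainDetachDeviceFlags debug entry above.
    DISK_XML = """<disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native"/>
      <source dev="/dev/disk/by-id/wwn-0x60050768028110a4880000000000bec5"/>
      <target bus="virtio" dev="vdb"/>
      <serial>c16946aa-3018-4bdf-9586-a1cdfe8c8cb5</serial>
    </disk>"""

    conn = libvirt.open('qemu:///system')  # assumed URI for a local KVM host
    try:
        dom = conn.lookupByName('2_cvj_vm_test-a4806355-0000001c')
        # The same call nova reaches via guest.detach_device(..., live=True).
        ret = dom.detachDeviceFlags(DISK_XML, FLAGS)
        print('detachDeviceFlags returned %d' % ret)
    except libvirt.libvirtError as exc:
        print('libvirt error: %s' % exc)
    finally:
        conn.close()

Note that a return value of 0 only means libvirt asked QEMU to unplug the device; for virtio disks the guest OS still has to acknowledge the unplug, so the disk can linger in the domain XML even after a "successful" call.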
I've spoken to a libvirt-on-PowerKVM expert. It appears the problem is that the packages on the VMs that were created were out of date, which prevented the devices from fully detaching. After updating the guest OS I was able to detach the volumes without issue. Invalidating the bug.

** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1565859

Title:
  Can't detach SVC volume from an instance; guest detach device times out

Status in Cinder: Invalid
Status in OpenStack Compute (nova): Invalid

Bug description:

Steps to Reproduce:
1. Set up an environment with a libvirt/KVM host and SVC storage
2. Spawn a VM
3. Attach a volume to the VM
4. Wait for the volume attachment to complete successfully
5. Detach the volume

Expected result:
1. The volume is detached from the VM
2. The volume's status becomes "Available"
3. The volume can be deleted

Actual result:
1. The volume remains attached to the VM (waited over 10 minutes)
2. The volume's status stays "In-Use"
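A note on the DeviceDetachFailed error in the logs that follow: nova does not give up on the first detach attempt. The wrapper around the libvirt call (_do_wait_and_retry_detach in nova/virt/libvirt/guest.py, visible in the tracebacks) re-issues the live detach and polls the domain XML until the target device disappears, raising only after its retries are exhausted. A rough sketch of that poll-and-retry pattern, simplified rather than nova's exact code; the helper names, attempt count, and interval are made up for illustration:

    import time
    import xml.etree.ElementTree as ET

    import libvirt

    def disk_present(dom, target_dev):
        # Illustrative helper: look for <target dev="..."/> in the live domain XML.
        root = ET.fromstring(dom.XMLDesc(0))
        return any(t.get('dev') == target_dev
                   for t in root.findall('./devices/disk/target'))

    def detach_with_retry(dom, disk_xml, target_dev, attempts=8, interval=2):
        # Simplified version of nova's poll-and-retry detach pattern.
        for _ in range(attempts):
            if not disk_present(dom, target_dev):
                return  # the guest completed the hot-unplug
            dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
            time.sleep(interval)
        raise RuntimeError('Device detach failed for %s' % target_dev)

In this bug the loop exhausted its retries because the guest never completed the virtio hot-unplug, which is consistent with the out-of-date guest packages identified above.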
Logs:

2016-03-24 16:34:13.852 143842 INFO nova.compute.resource_tracker [-] Final resource view: name=C387f19U21-KVM phys_ram=260533MB used_ram=4608MB phys_disk=545GB used_disk=40GB total_vcpus=160 used_vcpus=2 pci_stats=[]
2016-03-24 16:34:14.081 143842 INFO nova.compute.resource_tracker [-] Compute_service record updated for C387f19U21_KVM:C387f19U21-KVM
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall._func' failed
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall Traceback (most recent call last):
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 136, in _run_loop
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 377, in _func
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall     return self._sleep_time
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall     self.force_reraise()
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 356, in _func
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 342, in _do_wait_and_retry_detach
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall     reason=reason)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [req-fab1608e-cffe-40f9-82d0-a4c7a9cebf10 3ebcf1a38bc7b4977b7f8da32faad97bdef843372a670bb2817f8a066f042b9b e10bc17f58d8499a8fab1b05687123e5 - - -] [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Failed to detach volume 4221ccad-0f98-4f78-ad06-92ea4941afc1 from /dev/vdb
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Traceback (most recent call last):
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4767, in _driver_detach_volume
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     encryption=encryption)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1469, in detach_volume
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     wait_for_detach()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 385, in func
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     return evt.wait()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     return hubs.get_hub().switch()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     return self.greenlet.switch()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 136, in _run_loop
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     result = func(*self.args, **self.kw)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 377, in _func
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     return self._sleep_time
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     self.force_reraise()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     six.reraise(self.type_, self.value, self.tb)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 356, in _func
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     result = f(*args, **kwargs)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 342, in _do_wait_and_retry_detach
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]     reason=reason)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]
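As a closing diagnostic note: when the guest OS never acknowledges a virtio hot-unplug, the disk lingers in the live domain definition even though virDomainDetachDeviceFlags() returned success, which is exactly what nova's timeout above looks like from the host side. A small sketch for checking both the live and persistent domain definitions after a failed detach; the qemu:///system URI is an assumption and the domain name is a hypothetical placeholder:

    import xml.etree.ElementTree as ET

    import libvirt

    def disk_targets(dom_xml):
        # Collect the <target dev="..."/> names from a domain XML document.
        root = ET.fromstring(dom_xml)
        return [t.get('dev') for t in root.findall('./devices/disk/target')]

    conn = libvirt.open('qemu:///system')         # assumed URI
    dom = conn.lookupByName('instance-0000001c')  # hypothetical domain name

    live = disk_targets(dom.XMLDesc(0))
    saved = disk_targets(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE))
    conn.close()

    print('live disks:       %s' % live)
    print('persistent disks: %s' % saved)
    # If vdb is still in the live list long after the detach call returned,
    # the guest never completed the hot-unplug -- consistent with the stale
    # guest packages identified as the root cause here.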