*** This bug is a duplicate of bug 1327218 *** https://bugs.launchpad.net/bugs/1327218
** This bug has been marked a duplicate of bug 1327218
   Volume detach failure because of invalid bdm.connection_info

https://bugs.launchpad.net/bugs/1348204

Title:
  test_encrypted_cinder_volumes_cryptsetup times out waiting for volume
  to be available

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-full/168a5dd/console.html#_2014-07-24_01_07_09_115

  2014-07-24 01:07:09.116 | tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,image,volume]
  2014-07-24 01:07:09.116 | ----------------------------------------------------------------------------------------------------------------------------------------
  2014-07-24 01:07:09.116 |
  2014-07-24 01:07:09.116 | Captured traceback:
  2014-07-24 01:07:09.117 | ~~~~~~~~~~~~~~~~~~~
  2014-07-24 01:07:09.117 |     Traceback (most recent call last):
  2014-07-24 01:07:09.117 |       File "tempest/test.py", line 128, in wrapper
  2014-07-24 01:07:09.117 |         return f(self, *func_args, **func_kwargs)
  2014-07-24 01:07:09.117 |       File "tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in test_encrypted_cinder_volumes_cryptsetup
  2014-07-24 01:07:09.117 |         self.attach_detach_volume()
  2014-07-24 01:07:09.117 |       File "tempest/scenario/test_encrypted_cinder_volumes.py", line 49, in attach_detach_volume
  2014-07-24 01:07:09.117 |         self.nova_volume_detach()
  2014-07-24 01:07:09.117 |       File "tempest/scenario/manager.py", line 757, in nova_volume_detach
  2014-07-24 01:07:09.117 |         self._wait_for_volume_status('available')
  2014-07-24 01:07:09.117 |       File "tempest/scenario/manager.py", line 710, in _wait_for_volume_status
  2014-07-24 01:07:09.117 |         self.volume_client.volumes, self.volume.id, status)
  2014-07-24 01:07:09.118 |       File "tempest/scenario/manager.py", line 230, in status_timeout
  2014-07-24 01:07:09.118 |         not_found_exception=not_found_exception)
  2014-07-24 01:07:09.118 |       File "tempest/scenario/manager.py", line 296, in _status_timeout
  2014-07-24 01:07:09.118 |         raise exceptions.TimeoutException(message)
  2014-07-24 01:07:09.118 |     TimeoutException: Request timed out
  2014-07-24 01:07:09.118 |     Details: Timed out waiting for thing 4ef6a14a-3fce-417f-aa13-5aab1789436e to become available

  I've actually been seeing this out of tree in our internal CI as well,
  but thought it was just us or our slow VMs; this is the first time I've
  seen it upstream. From the traceback in the console log, it looks like
  the volume does get to available status, because it doesn't get out of
  that state when tempest is trying to delete the volume on tear down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348204/+subscriptions
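For context, the failure comes from tempest's generic status polling: after the
detach call, the test polls the volume until Cinder reports it as 'available'
and raises TimeoutException when that never happens within the timeout. The
snippet below is a minimal sketch of that wait-for-status pattern, not tempest's
actual _wait_for_volume_status code; the volumes_client object, its get()
method, and the timeout/interval defaults are assumptions for illustration.

    import time


    class TimeoutException(Exception):
        """Raised when the volume never reaches the expected status."""


    def wait_for_volume_status(volumes_client, volume_id, expected_status,
                               timeout=196, interval=1):
        """Poll a volume until it reaches expected_status or time runs out.

        volumes_client is a hypothetical client whose get(volume_id) returns
        an object with a .status attribute; it stands in for whatever volume
        client the test framework uses.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            volume = volumes_client.get(volume_id)
            if volume.status == expected_status:
                return volume
            if volume.status == 'error':
                # Fail fast instead of waiting out the full timeout.
                raise TimeoutException(
                    'Volume %s went to error while waiting for %s'
                    % (volume_id, expected_status))
            time.sleep(interval)
        raise TimeoutException(
            'Timed out waiting for volume %s to become %s'
            % (volume_id, expected_status))

With a polling loop like this, a detach that completes even slightly after the
deadline still shows up as a TimeoutException in the test, which matches the
observation above that the volume may reach 'available' by the time tear down
deletes it.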