Public bug reported:

stack@ubuntu:~$ nova show 2313788b-f5cf-4183-be65-9ac93015c6dc
ERROR (CommandError): No server with a name or ID of 
'2313788b-f5cf-4183-be65-9ac93015c6dc' exists.
stack@ubuntu:~$ cinder list |grep 2313788b-f5cf-4183-be65-9ac93015c6dc
| 7ef3590d-2ecb-4e32-8623-1e26ead8c889 |      in-use     |        -         |      a      |  1   |      -      |   true   |    False    | 2313788b-f5cf-4183-be65-9ac93015c6dc |


The volume is still in the "in-use" state after the instance was destroyed.
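
For reference, the stranded attachment can also be confirmed programmatically. This is only a minimal sketch, assuming python-cinderclient v2 and the usual OS_* environment variables (Keystone v2 credentials); the UUIDs are the ones from the transcript above, and the script is not part of the reproduction itself.

from __future__ import print_function

import os

from cinderclient.v2 import client

VOLUME_ID = '7ef3590d-2ecb-4e32-8623-1e26ead8c889'
DELETED_INSTANCE = '2313788b-f5cf-4183-be65-9ac93015c6dc'

cinder = client.Client(os.environ['OS_USERNAME'],
                       os.environ['OS_PASSWORD'],
                       os.environ['OS_TENANT_NAME'],
                       os.environ['OS_AUTH_URL'])

volume = cinder.volumes.get(VOLUME_ID)
print('status:', volume.status)  # reports 'in-use' even though the server is gone
for attachment in volume.attachments:
    # each attachment record still points at the deleted instance's UUID
    print('attached to:', attachment.get('server_id'),
          '(already deleted)' if attachment.get('server_id') == DELETED_INSTANCE else '')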

screen-n-cpu.log:2015-12-08 05:29:23.448 DEBUG nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Start destroying the instance on the 
hypervisor. _shutdown_instance /opt/stack/nova/nova/compute/manager.py:2281
screen-n-cpu.log:2015-12-08 05:29:23.460 4324 INFO nova.virt.libvirt.driver [-] 
[instance: 2313788b-f5cf-4183-be65-9ac93015c6dc] During wait destroy, instance 
disappeared.
screen-n-cpu.log:2015-12-08 05:29:23.461 DEBUG nova.virt.libvirt.vif 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2015-12-08T04:33:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='instance_1_144956711519',display_name='instance_1_144956711519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),host='ubuntu',hostname='instance-1-144956711519',id=8,image_ref='9121970b-4af5-4538-a659-0d16711efafb',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=None,launched_on='ubuntu',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='ubuntu',numa_topology=<?>,old_flavor=None,os_type=None,
pci_devices=PciDeviceList,pci_requests=<?>,power_state=0,progress=0,project_id='10ad966fa76c4536ab0d85d41a260631',ramdisk_id='',reservation_id='r-wagg3ytl',root_device_name='/dev/vda1',root_gb=1,security_groups=SecurityGroupList,shutdown_terminate=False,system_metadata={image_base_image_ref='9121970b-4af5-4538-a659-0d16711efafb',image_container_format='bare',image_disk_format='qcow2',image_min_disk='1',image_min_ram='0',network_allocated='True'},tags=<?>,task_state='deleting',terminated_at=None,updated_at=2015-12-08T05:21:07Z,user_data=None,user_id='4c0600df133d44feae965ec6ae44a6e6',uuid=2313788b-f5cf-4183-be65-9ac93015c6dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error')
 vif=VIF({'profile': {}, 'ovs_interfaceid': 
u'aca27420-ef84-45a6-9d90-0d37d0069338', 'preserve_on_delete': False, 
'network': Network({'bridge': u'br-int', 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 6, 'type': u'fixed', 'floating_ips': [], 
'address': u'2001:db8::a'})], 'version': 6, 'meta': 
 {}, 'dns': [], 'routes': [], 'cidr': u'2001:db8::/64', 'gateway': IP({'meta': 
{}, 'version': 6, 'type': u'gateway', 'address': u'2001:db8::2'})}), 
Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 
'floating_ips': [], 'address': u'172.24.4.9'})], 'version': 4, 'meta': {}, 
'dns': [], 'routes': [], 'cidr': u'172.24.4.0/24', 'gateway': IP({'meta': {}, 
'version': 4, 'type': u'gateway', 'address': u'172.24.4.1'})})], 'meta': 
{u'injected': False, u'tenant_id': u'10ad966fa76c4536ab0d85d41a260631'}, 'id': 
u'3677c113-d47b-44ff-ab10-d3a59357413d', 'label': u'public'}), 'devname': 
u'tapaca27420-ef', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 
'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': 
u'fa:16:3e:a2:64:6b', 'active': False, 'type': u'ovs', 'id': 
u'aca27420-ef84-45a6-9d90-0d37d0069338', 'qbg_params': None}) unplug 
/opt/stack/nova/nova/virt/libvirt/vif.py:877
screen-n-cpu.log:2015-12-08 05:29:37.648 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] Triggering sync for uuid 
2313788b-f5cf-4183-be65-9ac93015c6dc _sync_power_states 
/opt/stack/nova/nova/compute/manager.py:5997
screen-n-cpu.log:2015-12-08 05:29:51.744 DEBUG oslo_concurrency.processutils 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] Running cmd 
(subprocess): mv 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc_del execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:267
screen-n-cpu.log:2015-12-08 05:29:51.766 DEBUG oslo_concurrency.processutils 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] CMD "mv 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc_del" 
returned: 0 in 0.022s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
screen-n-cpu.log:2015-12-08 05:29:51.768 INFO nova.virt.libvirt.driver 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Deleting instance files 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc_del
screen-n-cpu.log:2015-12-08 05:29:51.768 INFO nova.virt.libvirt.driver 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Deletion of 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc_del complete
screen-n-cpu.log:2015-12-08 05:29:51.873 INFO nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Took 28.42 seconds to destroy the 
instance on the hypervisor.
screen-n-cpu.log:2015-12-08 05:29:51.873 DEBUG nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Deallocating network for instance 
_deallocate_network /opt/stack/nova/nova/compute/manager.py:1800
screen-n-cpu.log:2015-12-08 05:29:51.873 DEBUG nova.network.neutronv2.api 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] deallocate_for_instance() 
deallocate_for_instance /opt/stack/nova/nova/network/neutronv2/api.py:818
screen-n-cpu.log:2015-12-08 05:29:51.874 DEBUG keystoneclient.session 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] REQ: curl -g -i -X GET 
http://10.109.194.149:9696/v2.0/ports.json?device_id=2313788b-f5cf-4183-be65-9ac93015c6dc
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}1d5c941c6c4ae01e07c2006dae10ca38221a1149" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:198
screen-n-cpu.log:RESP BODY: {"ports": [{"status": "DOWN", "binding:host_id": 
"ubuntu", "allowed_address_pairs": [], "extra_dhcp_opts": [], "dns_assignment": 
[{"hostname": "host-172-24-4-9", "ip_address": "172.24.4.9", "fqdn": 
"host-172-24-4-9.openstacklocal."}, {"hostname": "host-2001-db8--a", 
"ip_address": "2001:db8::a", "fqdn": "host-2001-db8--a.openstacklocal."}], 
"device_owner": "compute:None", "port_security_enabled": true, 
"binding:profile": {}, "fixed_ips": [{"subnet_id": 
"74f04ea3-7208-4244-bb3f-0bedcf2c36d4", "ip_address": "172.24.4.9"}, 
{"subnet_id": "a1ee33fb-4ae8-451b-b4e5-bb048d979d8b", "ip_address": 
"2001:db8::a"}], "id": "aca27420-ef84-45a6-9d90-0d37d0069338", 
"security_groups": ["6d539fd2-b3ac-49a1-ae39-9913c2c93086"], "device_id": 
"2313788b-f5cf-4183-be65-9ac93015c6dc", "name": "", "admin_state_up": true, 
"network_id": "3677c113-d47b-44ff-ab10-d3a59357413d", "dns_name": "", 
"binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, 
"binding:vnic_type": "n
 ormal", "binding:vif_type": "ovs", "tenant_id": 
"10ad966fa76c4536ab0d85d41a260631", "mac_address": "fa:16:3e:a2:64:6b"}]}
screen-n-cpu.log:2015-12-08 05:29:52.281 DEBUG nova.network.base_api 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Updating instance_info_cache with 
network_info: [] update_instance_cache_with_nw_info 
/opt/stack/nova/nova/network/base_api.py:43
screen-n-cpu.log:2015-12-08 05:29:52.296 INFO nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Took 0.42 seconds to deallocate network 
for instance.
screen-n-cpu.log:2015-12-08 05:29:54.338 DEBUG nova.compute.manager 
[req-431c6ff2-364c-402f-90a3-29331877a3a9 nova service] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Received event 
network-vif-deleted-aca27420-ef84-45a6-9d90-0d37d0069338 
external_instance_event /opt/stack/nova/nova/compute/manager.py:6485
screen-n-cpu.log:2015-12-08 05:30:12.688 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Skipping network cache update for 
instance because it is being deleted. _heal_instance_info_cache 
/opt/stack/nova/nova/compute/manager.py:5569
screen-n-cpu.log:2015-12-08 05:31:12.617 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Skipping network cache update for 
instance because it is being deleted. _heal_instance_info_cache 
/opt/stack/nova/nova/compute/manager.py:5538
screen-n-cpu.log:2015-12-08 05:32:12.619 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Skipping network cache update for 
instance because it is being deleted. _heal_instance_info_cache 
/opt/stack/nova/nova/compute/manager.py:5538
screen-n-cpu.log:2015-12-08 05:33:14.615 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Skipping network cache update for 
instance because it is being deleted. _heal_instance_info_cache 
/opt/stack/nova/nova/compute/manager.py:5538
screen-n-cpu.log:2015-12-08 05:34:16.613 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Skipping network cache update for 
instance because it is being deleted. _heal_instance_info_cache 
/opt/stack/nova/nova/compute/manager.py:5538
screen-n-cpu.log:2015-12-08 05:34:53.051 WARNING nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Ignoring Unknown cinder exception: The 
server has either erred or is incapable of performing the requested operation. 
(HTTP 500) (Request-ID: req-ad946a2f-ccfa-40c5-a259-3aed8d0ebf9f)
screen-n-cpu.log:2015-12-08 05:34:53.052 INFO nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Took 300.75 seconds to detach 1 volumes 
for instance.
screen-n-cpu.log:2015-12-08 05:34:53.071 DEBUG nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] terminating bdm 
BlockDeviceMapping(boot_index=0,connection_info='{"driver_volume_type": 
"iscsi", "serial": "7ef3590d-2ecb-4e32-8623-1e26ead8c889", "data": 
{"target_luns": [239, 239, 239], "target_iqns": 
["iqn.1992-04.com.emc:cx.apm00141056037.a9", 
"iqn.1992-04.com.emc:cx.apm00141056037.b8", 
"iqn.1992-04.com.emc:cx.apm00141056037.b9"], "target_discovered": true, 
"encrypted": false, "qos_specs": null, "target_iqn": 
"iqn.1992-04.com.emc:cx.apm00141056037.a9", "target_portal": 
"10.109.196.250:3260", "volume_id": "7ef3590d-2ecb-4e32-8623-1e26ead8c889", 
"target_lun": 239, "access_mode": "rw", "target_portals": 
["10.109.196.250:3260", "10.109.196.251:3260", 
"10.109.196.252:3260"]}}',created_at=2015-12-08T04:33:44Z,delete_on_termination=False,deleted=False,deleted_at=None,destination_type='volume',device_name='/dev/vda1',d
 
evice_type='disk',disk_bus='virtio',guest_format=None,id=11,image_id=None,instance=<?>,instance_uuid=2313788b-f5cf-4183-be65-9ac93015c6dc,no_device=False,snapshot_id=None,source_type='volume',updated_at=2015-12-08T04:34:19Z,volume_id='7ef3590d-2ecb-4e32-8623-1e26ead8c889',volume_size=1)
 _cleanup_volumes /opt/stack/nova/nova/compute/manager.py:2341
screen-n-cpu.log:2015-12-08 05:34:53.162 DEBUG nova.objects.instance 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] Lazy-loading 
`numa_topology' on Instance uuid 2313788b-f5cf-4183-be65-9ac93015c6dc 
obj_load_attr /opt/stack/nova/nova/objects/instance.py:843
screen-n-cpu.log:2015-12-08 05:34:53.419 DEBUG oslo_concurrency.lockutils 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] Lock 
"2313788b-f5cf-4183-be65-9ac93015c6dc" released by 
"nova.compute.manager.do_terminate_instance" :: held 329.975s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
screen-n-cpu.log:2015-12-08 05:34:53.420 4324 DEBUG oslo_concurrency.lockutils 
[-] Lock "2313788b-f5cf-4183-be65-9ac93015c6dc" acquired by 
"nova.compute.manager.query_driver_power_state_and_sync" :: waited 315.768s 
inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
screen-n-cpu.log:2015-12-08 05:34:53.420 4324 INFO nova.compute.manager [-] 
[instance: 2313788b-f5cf-4183-be65-9ac93015c6dc] During sync_power_state the 
instance has a pending task (deleting). Skip.
screen-n-cpu.log:2015-12-08 05:34:53.421 4324 DEBUG oslo_concurrency.lockutils 
[-] Lock "2313788b-f5cf-4183-be65-9ac93015c6dc" released by 
"nova.compute.manager.query_driver_power_state_and_sync" :: held 0.001s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
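
Reading the log: the volume detach hit a Cinder HTTP 500 (05:34:53.051), nova only logged "Ignoring Unknown cinder exception" and went on to terminate the BDM, so nothing ever told Cinder to move the volume out of "in-use". Below is a minimal, self-contained sketch of that exception-swallowing pattern; it illustrates what the log suggests rather than the actual nova code, and the names (shutdown_instance, detach_volume, CinderServerError) are placeholders.

from __future__ import print_function

import logging

LOG = logging.getLogger(__name__)


class CinderServerError(Exception):
    """Placeholder for a cinderclient ClientException carrying an HTTP 500."""


def detach_volume(bdm):
    # Placeholder for the per-volume detach; in this scenario Cinder answers 500.
    raise CinderServerError('The server has either erred or is incapable of '
                            'performing the requested operation. (HTTP 500)')


def shutdown_instance(bdms):
    for bdm in bdms:
        try:
            detach_volume(bdm)
        except CinderServerError as exc:
            # The deletion path logs and presses on; nothing retries the detach
            # later, so the Cinder attachment is never cleaned up and the volume
            # stays "in-use".
            LOG.warning('Ignoring unknown cinder exception: %s', exc)
    # ...instance deletion continues and the BDM rows are terminated anyway...


if __name__ == '__main__':
    logging.basicConfig(level=logging.WARNING)
    shutdown_instance([{'volume_id': '7ef3590d-2ecb-4e32-8623-1e26ead8c889'}])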

** Affects: nova
     Importance: Undecided
         Status: New

https://bugs.launchpad.net/bugs/1524187

Title:
  Volume is not automatically detached after destroying an instance

Status in OpenStack Compute (nova):
  New
