[Yahoo-eng-team] [Bug 1730892] Re: Nova Image Resize Generating Errors

2017-11-08 Thread Xuanzhou Perry Dong
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730892

Title:
  Nova Image Resize Generating Errors

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  When the flavor disk size is larger than the image size, Nova will try
  to increase the image disk size to match the flavor disk size. In the
  process, it will call resize2fs to resize the image disk file system,
  even for raw-format images, but this generates an error because
  resize2fs should be run on a partition block device rather than on the
  whole block device (which includes the boot sector, partition table,
  etc.).
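
  The sketch below (not Nova's code; the ext_label helper name is a
  hypothetical stand-in) illustrates why probing an ext filesystem label
  on a whole-disk raw image fails: e2label expects a filesystem
  superblock where it probes, but a whole-disk image starts with the
  boot sector and partition table, so the probe errors out just as in
  the log excerpt further down.

  # Minimal sketch, assuming only that e2label is installed; not Nova's code.
  import subprocess
  import sys


  def ext_label(path):
      """Return the ext2/3/4 label of *path*, or None if e2label fails."""
      try:
          out = subprocess.run(
              ["e2label", path],
              capture_output=True, text=True, check=True)
          return out.stdout.strip()
      except (OSError, subprocess.CalledProcessError) as exc:
          # On a whole-disk image this is the "Bad magic number in
          # super-block" failure seen in the bug report.
          print("e2label failed for %s: %s" % (path, exc), file=sys.stderr)
          return None


  if __name__ == "__main__":
      # Usage: python probe_label.py /path/to/disk-or-partition-image
      print(ext_label(sys.argv[1]))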

  Steps to Reproduce
  ==

  1. Set the following configuration for nova-compute:
  use_cow_images = False
  force_raw_images = True

  2. nova boot --image cirros-0.3.5-x86_64-disk --nic net-
  id=6f0df6a5-8848-427b-8222-7b69d5602fe4 --flavor m1.tiny test_vm

  The following error logs are generated:

  Nov 08 14:42:51 devstack01 nova-compute[10609]: DEBUG nova.virt.disk.api [None req-771fa44d-46ce-4486-9ed7-7a89ddb735ed demo admin] Unable to determine label for image <LocalFileImage:{'path': '/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk', 'format': 'raw'}> with error Unexpected error while running command.
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Command: e2label /opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Exit code: 1
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stdout: u''
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stderr: u"e2label: Bad magic number in super-block while trying to open /opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk\nCouldn't find valid filesystem superblock.\n". Cannot resize. {{(pid=10609) is_image_extendable /opt/stack/nova/nova/virt/disk/api.py:254}}

  Expected Result
  ===
  The incorrect command should not be executed, and no error logs should be generated.

  Actual Result
  =
  Error logs are generated (see the log excerpt above).

  Environment
  ===
  1. OpenStack Nova
  stack@devstack01:/opt/stack/nova$ git log -1
  commit 232458ae4e83e8b218397e42435baa9f1d025b68
  Merge: 650c9f3 9d400c3
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Tue Oct 10 06:27:52 2017 +

  Merge "rp: Move RP._get|set_aggregates() to module scope"

  2. Hypervisor
  Libvirt + QEMU
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep libvirt
  ii  libvirt-bin             3.6.0-1ubuntu5~cloud0                amd64  programs for the libvirt library
  ii  libvirt-clients         3.6.0-1ubuntu5~cloud0                amd64  Programs for the libvirt library
  ii  libvirt-daemon          3.6.0-1ubuntu5~cloud0                amd64  Virtualization daemon
  ii  libvirt-daemon-system   3.6.0-1ubuntu5~cloud0                amd64  Libvirt daemon configuration files
  ii  libvirt-dev:amd64       3.6.0-1ubuntu5~cloud0                amd64  development files for the libvirt library
  ii  libvirt0:amd64          3.6.0-1ubuntu5~cloud0                amd64  library for interfacing with different virtualization systems
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep qemu
  ii  ipxe-qemu               1.0.0+git-20150424.a25a16d-1ubuntu1  all    PXE boot firmware - ROM images for qemu
  ii  qemu-block-extra:amd64  1:2.10+dfsg-0ubuntu3~cloud0          amd64  extra block backend modules for qemu-system and qemu-utils
  ii  qemu-kvm                1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU Full virtualization
  ii  qemu-slof               20151103+dfsg-1ubuntu1               all    Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system             1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries
  ii  qemu-system-arm         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (arm)
  ii  qemu-system-common      1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries (common files)
  ii  qemu-system-mips        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (mips)
  ii  qemu-system-misc        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x

[Yahoo-eng-team] [Bug 1730892] [NEW] Nova Image Resize Generating Errors

2017-11-07 Thread Xuanzhou Perry Dong
1:2.10+dfsg-0ubuntu3~cloud0  
  amd64QEMU utilities
stack@devstack01:/opt/stack/nova$ 

3. Networking type
Neutron with Openvswitch

** Affects: nova
 Importance: Low
     Assignee: Xuanzhou Perry Dong (oss-xzdong)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730892

Title:
  Nova Image Resize Generating Errors

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When the flavor disk size is larger than the image size, Nova will try
  to increase the image disk size to match the flavor disk size. In the
  process, it will call resize2fs to resize the image disk file system,
  even for raw-format images, but this generates an error because
  resize2fs should be run on a partition block device rather than on the
  whole block device (which includes the boot sector, partition table,
  etc.).

  Steps to Reproduce
  ==

  1. Set the following configuration for nova-compute:
  use_cow_images = False
  force_raw_images = True

  2. nova boot --image cirros-0.3.5-x86_64-disk --nic net-
  id=6f0df6a5-8848-427b-8222-7b69d5602fe4 --flavor m1.tiny test_vm

  The following error logs are generated:

  Nov 08 14:42:51 devstack01 nova-compute[10609]: DEBUG nova.virt.disk.api [None req-771fa44d-46ce-4486-9ed7-7a89ddb735ed demo admin] Unable to determine label for image <LocalFileImage:{'path': '/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk', 'format': 'raw'}> with error Unexpected error while running command.
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Command: e2label /opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Exit code: 1
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stdout: u''
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stderr: u"e2label: Bad magic number in super-block while trying to open /opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk\nCouldn't find valid filesystem superblock.\n". Cannot resize. {{(pid=10609) is_image_extendable /opt/stack/nova/nova/virt/disk/api.py:254}}

  Expected Result
  ===
  The incorrect command should not be executed, and no error logs should be generated.

  Actual Result
  =
  Error logs are generated (see the log excerpt above).

  Environment
  ===
  1. OpenStack Nova
  stack@devstack01:/opt/stack/nova$ git log -1
  commit 232458ae4e83e8b218397e42435baa9f1d025b68
  Merge: 650c9f3 9d400c3
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Tue Oct 10 06:27:52 2017 +

  Merge "rp: Move RP._get|set_aggregates() to module scope"

  2. Hypervisor
  Libvirt + QEMU
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep libvirt
  ii  libvirt-bin             3.6.0-1ubuntu5~cloud0                amd64  programs for the libvirt library
  ii  libvirt-clients         3.6.0-1ubuntu5~cloud0                amd64  Programs for the libvirt library
  ii  libvirt-daemon          3.6.0-1ubuntu5~cloud0                amd64  Virtualization daemon
  ii  libvirt-daemon-system   3.6.0-1ubuntu5~cloud0                amd64  Libvirt daemon configuration files
  ii  libvirt-dev:amd64       3.6.0-1ubuntu5~cloud0                amd64  development files for the libvirt library
  ii  libvirt0:amd64          3.6.0-1ubuntu5~cloud0                amd64  library for interfacing with different virtualization systems
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep qemu
  ii  ipxe-qemu               1.0.0+git-20150424.a25a16d-1ubuntu1  all    PXE boot firmware - ROM images for qemu
  ii  qemu-block-extra:amd64  1:2.10+dfsg-0ubuntu3~cloud0          amd64  extra block backend modules for qemu-system and qemu-utils
  ii  qemu-kvm                1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU Full virtualization
  ii  qemu-slof               20151103+dfsg-1ubuntu1               all    Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system             1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries
  ii  qemu-system-arm         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (arm)
  ii  qemu-system-common      1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries (common files)
  ii  qemu-system-mips        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (mips)
  ii  qemu-system-misc        1:2.10+dfsg-0ubuntu1~cloud0          amd64

[Yahoo-eng-team] [Bug 1714247] Re: Cleaning up deleted instances leaks resources

2017-10-11 Thread Xuanzhou Perry Dong
Thanks for the response. I have stopped and started the nova-compute
service. The restart of the nova-compute service is shown in the log (I
am not sure why the stopping of the nova-compute service is not shown;
probably I should use "raw").

stack@devstack01:~/devstack$ systemctl start devstack@n-cpu.service 


==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to start 'devstack@n-cpu.service'.
Authenticating as: stack,,, (stack)
Password: 
==== AUTHENTICATION COMPLETE ====

BR/Perry

** Changed in: nova
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714247

Title:
  Cleaning up deleted instances leaks resources

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  When the '_cleanup_running_deleted_instances' nova-compute manager
  periodic task cleans up an instance that still exists on the host
  although it has been deleted from the DB, the corresponding network
  info is not properly retrieved. For this reason, VIF ports will not be
  cleaned up.

  In this situation there may also be stale volume connections. Those
  will be leaked as well, since os-brick attempts to flush these
  inaccessible devices, which will fail. As per a recent os-brick
  change, a 'force' flag must be set in order to ignore flush errors.
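
  As a conceptual sketch of the behaviour described above (hypothetical
  helper names, not the real os-brick API): flushing an already
  inaccessible device raises an error, and unless the caller passes a
  force flag the exception propagates and the stale connection is never
  torn down.

  # Conceptual sketch only; names are illustrative, not os-brick's API.
  class FlushError(Exception):
      pass


  def flush_device(device):
      # Stand-in for the block-device flush that fails on stale devices.
      raise FlushError("device %s is not accessible" % device)


  def disconnect_volume(device, force=False):
      try:
          flush_device(device)
      except FlushError:
          if not force:
              # Without force=True the error propagates, cleanup of the
              # stale connection never happens, and the attachment leaks.
              raise
          # With force=True the flush error is ignored and teardown
          # continues, which is what the deleted-instance cleanup needs.
      print("connection for %s torn down" % device)


  if __name__ == "__main__":
      disconnect_volume("/dev/sdb", force=True)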

  Log: http://paste.openstack.org/raw/620048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714247] Re: Cleaning up deleted instances leaks resources

2017-10-11 Thread Xuanzhou Perry Dong
Tested in the latest master branch using devstack:

1. vif is unplugged

See logs in: http://paste.openstack.org/show/623286/

2. no stale iscsi session

See logs in: http://paste.openstack.org/show/623288/

Hi, Lucian,

Could you check the logs to see whether you are doing anything differently?

BR/Perry



** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714247

Title:
  Cleaning up deleted instances leaks resources

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When the '_cleanup_running_deleted_instances' nova-compute manager
  periodic task cleans up an instance that still exists on the host
  although it has been deleted from the DB, the corresponding network
  info is not properly retrieved. For this reason, VIF ports will not be
  cleaned up.

  In this situation there may also be stale volume connections. Those
  will be leaked as well, since os-brick attempts to flush these
  inaccessible devices, which will fail. As per a recent os-brick
  change, a 'force' flag must be set in order to ignore flush errors.

  Log: http://paste.openstack.org/raw/620048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599688] [NEW] host.py assertion error during NOVA handling of HUP signal

2016-07-06 Thread Xuanzhou Perry Dong
Public bug reported:

Description
===
During handling of the HUP signal in Nova, the following exception is generated:

2016-07-07 01:36:18.012 DEBUG nova.virt.libvirt.host [-] Starting green dispatch thread from (pid=30178) _init_events /opt/stack/nova/nova/virt/libvirt/host.py:341
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 115, in wait
    listener.cb(fileno)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/utils.py", line 1053, in context_wrapper
    return func(*args, **kwargs)
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 131, in _dispatch_thread
    self._dispatch_events()
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 236, in _dispatch_events
    assert _c
AssertionError
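
The following simplified sketch (not Nova's actual host.py code; names
are illustrative) shows the self-pipe dispatch pattern behind the
failing assertion: a writer signals pending libvirt events by writing
one byte to a pipe, and the dispatcher reads one byte back per wakeup.
If the read returns an empty string (for instance when the pipe is torn
down or replaced while an old dispatcher greenthread is still polling
it, which can happen around SIGHUP re-initialization), the "assert _c"
fails as in the traceback above.

# Simplified sketch of the self-pipe wakeup pattern; not Nova's exact code.
import os


def make_event_pipe():
    rpipe, wpipe = os.pipe()
    return os.fdopen(rpipe, "rb", 0), os.fdopen(wpipe, "wb", 0)


def notify(event_notify_send):
    # Writer side: signal that a libvirt event is queued.
    event_notify_send.write(b"!")
    event_notify_send.flush()


def dispatch_once(event_notify_recv):
    _c = event_notify_recv.read(1)
    assert _c  # an empty read here is what raises AssertionError
    # ... dispatch queued libvirt events ...


if __name__ == "__main__":
    recv, send = make_event_pipe()
    notify(send)
    dispatch_once(recv)   # fine: one byte was written
    send.close()
    dispatch_once(recv)   # read returns b"" -> AssertionError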


Steps to reproduce
==
1. Start a devstack with latest master branch.

2. Devstack doesn't start nova-compute as a daemon, so kill the
nova-compute process started by devstack and replace it with "nohup
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf &".

3. Send a HUP signal to nova-compute process.

Expected result
===
nova-compute is expected to reload the configuration file without
generating an exception.

Actual result
=
An exception is generated.

Environment
===
1. Nova version:

vagrant@vagrant-ubuntu-trusty-64:/opt/stack/nova/nova$ git log -1
commit 2d5460d085895a577734547660a8bcfc53b04de2
Merge: 51fdeaf 40ea165
Author: Jenkins <jenk...@review.openstack.org>
Date:   Wed Jun 22 06:18:23 2016 +

Merge "Publish proxy APIs deprecation in api ref doc"


Logs & Configs
======
As above.

** Affects: nova
 Importance: Medium
 Assignee: Xuanzhou Perry Dong (oss-xzdong)
     Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Xuanzhou Perry Dong (oss-xzdong)

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599688

Title:
  host.py assertion error during NOVA handling of HUP signal

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  During handling of the HUP signal in Nova, the following exception is generated:

  2016-07-07 01:36:18.012 DEBUG nova.virt.libvirt.host [-] Starting green dispatch thread from (pid=30178) _init_events /opt/stack/nova/nova/virt/libvirt/host.py:341
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 115, in wait
      listener.cb(fileno)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
      result = function(*args, **kwargs)
    File "/opt/stack/nova/nova/utils.py", line 1053, in context_wrapper
      return func(*args, **kwargs)
    File "/opt/stack/nova/nova/virt/libvirt/host.py", line 131, in _dispatch_thread
      self._dispatch_events()
    File "/opt/stack/nova/nova/virt/libvirt/host.py", line 236, in _dispatch_events
      assert _c
  AssertionError

  
  Steps to reproduce
  ==
  1. Start a devstack with latest master branch.

  2. Devstack doesn't start nova-compute as a daemon, so kill the
  nova-compute process started by devstack and replace it with "nohup
  /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf &".

  3. Send a HUP signal to nova-compute process.

  Expected result
  ===
  nova-compute is expected to reload the configuration file without
  generating an exception.

  Actual result
  =
  An exception is generated.

  Environment
  ===
  1. Nova version:

  vagrant@vagrant-ubuntu-trusty-64:/opt/stack/nova/nova$ git log -1
  commit 2d5460d085895a577734547660a8bcfc53b04de2
  Merge: 51fdeaf 40ea165
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Wed Jun 22 06:18:23 2016 +

  Merge "Publish proxy APIs deprecation in api ref doc"

  
  Logs & Configs
  ==
  As above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526734] Re: Restart of nova-compute service fails

2016-07-06 Thread Xuanzhou Perry Dong
Already fixed by: https://review.openstack.org/284287

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526734

Title:
  Restart of nova-compute service fails

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Incomplete

Bug description:
  After sending a HUP signal to the nova-compute process, we can observe
  a trace in the logs:

  2015-11-30 10:35:26.509 INFO oslo_service.service [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Caught SIGHUP, exiting
  2015-11-30 10:35:31.894 DEBUG oslo_concurrency.lockutils [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Acquired semaphore "singleton_lock" from (pid=24742) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
  2015-11-30 10:35:31.900 DEBUG oslo_concurrency.lockutils [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Releasing semaphore "singleton_lock" from (pid=24742) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
  2015-11-30 10:35:31.903 ERROR nova.service [req-ecb7f866-b041-4abb-9037-164443b8387f None None] Service error occurred during cleanup_host
  2015-11-30 10:35:31.903 TRACE nova.service Traceback (most recent call last):
  2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/service.py", line 312, in stop
  2015-11-30 10:35:31.903 TRACE nova.service     self.manager.cleanup_host()
  2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/compute/manager.py", line 1323, in cleanup_host
  2015-11-30 10:35:31.903 TRACE nova.service     self.instance_events.cancel_all_events()
  2015-11-30 10:35:31.903 TRACE nova.service   File "/opt/stack/nova/nova/compute/manager.py", line 578, in cancel_all_events
  2015-11-30 10:35:31.903 TRACE nova.service     for instance_uuid, events in our_events.items():
  2015-11-30 10:35:31.903 TRACE nova.service AttributeError: 'NoneType' object has no attribute 'items'
  2015-11-30 10:35:31.903 TRACE nova.service
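
  A minimal sketch of the defensive pattern for this failure mode
  follows (illustrative only; the actual change is the review linked
  above): cancel_all_events() swaps the events dict out and iterates it,
  so if cleanup runs a second time the dict is already None and
  iterating it raises the AttributeError seen in the trace. Guarding for
  None makes the second cleanup a no-op.

  # Illustrative sketch of the None-guard pattern; not Nova's exact code.
  class InstanceEvents(object):
      def __init__(self):
          self._events = {}

      def cancel_all_events(self):
          if self._events is None:
              # Already cleaned up (e.g. a second SIGHUP/stop); nothing to do.
              return
          our_events, self._events = self._events, None
          for instance_uuid, events in our_events.items():
              print("cancelling events for %s: %s" % (instance_uuid, events))


  if __name__ == "__main__":
      ev = InstanceEvents()
      ev.cancel_all_events()
      ev.cancel_all_events()  # previously raised AttributeError on NoneType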

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348509] Re: the volume may leave over when we delete instance whose task_state is block_device_mapping

2016-07-04 Thread Xuanzhou Perry Dong
This bug can't be reproduced on the latest master branch. It was
probably fixed by the resource tracker lock for the instance action. I
propose to close this bug.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348509

Title:
  the volume may leave over when  we delete instance whose task_state is
  block_device_mapping

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  here, two scenes may cause that a volume leaves over   when  we delete
  instance whose task_state is   block_device_mapping .The first scene
  is that using the boot volume created by image  creates instance; The
  other scene is that using image create instance  with a volume created
  through a image.

  Through analyzing, we find that the volume id is not update to
  block_device_mapping table in DB until a volume created by  an image
  through setting parameters in Blocking Device Mapping v2 is attached
  to an instance completely.If we delete the instance before the volume
  id is not update to the block_device_mapping table, the problem
  mentioned above will occur
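
  The following conceptual sketch (illustrative names only, not Nova's
  code) shows the race: the volume ID is only written back to the
  block_device_mapping record after the attach completes, so a delete
  that runs before that point finds no volume ID to clean up and the
  volume is orphaned.

  # Conceptual sketch of the race; all names here are illustrative.
  bdm = {"volume_id": None}   # stands in for the block_device_mapping row
  created_volumes = []        # volumes that exist in Cinder


  def create_volume_from_image():
      volume_id = "vol-123"
      created_volumes.append(volume_id)
      return volume_id


  def finish_attach(volume_id):
      # Only at this point is the volume ID written to the BDM row.
      bdm["volume_id"] = volume_id


  def delete_instance():
      # Deletion only knows about volumes recorded in the BDM row.
      if bdm["volume_id"] in created_volumes:
          created_volumes.remove(bdm["volume_id"])


  if __name__ == "__main__":
      vol = create_volume_from_image()
      delete_instance()    # runs while task_state is still block_device_mapping
      finish_attach(vol)   # too late: deletion already missed the volume
      print("leaked volumes:", created_volumes)   # -> ['vol-123']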

  Two examples to reproduce the problem on the latest Icehouse:
  1. the first scenario
  (1) root@devstack:~# nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  (2) root@devstack:~# nova boot --flavor m1.tiny --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vda,size=1,shutdown=removed,bootindex=0 --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 tralon_test
  root@devstack:~# nova list
  +--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
  | ID                                   | Name        | Status | Task State           | Power State | Networks          |
  +--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
  | 57cbb39d-c93f-44eb-afda-9ce00110950d | tralon_test | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.20 |
  +--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
  (3) root@devstack:~# nova delete tralon_test
  root@devstack:~# nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  (4) root@devstack:~# cinder list
  +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
  | 3e5579a9-5aac-42b6-9885-441e861f6cc0 | available | None | 1    | None        | false    |                                      |
  | a4121322-529b-4223-ac26-0f569dc7821e | available |      | 1    | None        | true     |                                      |
  | a7ad846b-8638-40c1-be42-f2816638a917 | in-use    |      | 1    | None        | true     | 57cbb39d-c93f-44eb-afda-9ce00110950d |
  +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
  We can see that the instance 57cbb39d-c93f-44eb-afda-9ce00110950d was deleted while the volume still exists with the "in-use" status.

  2. the second scenario
  (1) root@devstack:~# nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  (2) root@devstack:~# nova boot --flavor m1.tiny --image 61ebee75-5883-49a3-bf85-ad6f6c29fc1b --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vdb,size=1,shutdown=removed tralon_image_instance
  root@devstack:~# nova list
  +--------------------------------------+------------------------+--------+----------------------+-------------+----------+
  | ID                                   | Name                   | Status | Task State           | Power State | Networks |
  +--------------------------------------+------------------------+--------+----------------------+-------------+----------+
  |