Public bug reported: we are able to destroy an instance while taking a snapshot.
The status of the new image depends on whether it had already been created and uploaded to /var/lib/glance/images by the time the instance was destroyed. If we allow the instance to be destroyed while a snapshot is being taken, we risk corrupting the new snapshot, or the snapshot not being created at all. To destroy an instance while a snapshot is in progress, we should therefore require a --force flag, so that the admin user knowingly destroys the instance.

[root@puma31 ~(keystone_admin)]# nova list
+--------------------------------------+------+--------+----------------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State           | Power State | Networks                 |
+--------------------------------------+------+--------+----------------------+-------------+--------------------------+
| e00ae899-e285-4f09-8cda-2c2680799bba | from | ACTIVE | image_pending_upload | Running     | novanetwork=192.168.32.2 |
+--------------------------------------+------+--------+----------------------+-------------+--------------------------+

[root@puma31 ~(keystone_admin)]# nova delete e00ae899-e285-4f09-8cda-2c2680799bba

[root@puma31 ~(keystone_admin)]# nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| e00ae899-e285-4f09-8cda-2c2680799bba | from | ACTIVE | deleting   | Running     | novanetwork=192.168.32.2 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+

[root@puma31 ~(keystone_admin)]# nova image-create e00ae899-e285-4f09-8cda-2c2680799bba destroy_test --poll
Server snapshotting... 50% complete
Server snapshotting... 50% complete
Server snapshotting... 100% complete
Finished
ERROR: Instance could not be found (HTTP 404) (Request-ID: req-b6b7b066-0da8-441a-8788-b6969d7b1527)

[root@puma31 ~(keystone_admin)]# glance image-list
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size       | Status |
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| 6aa2362c-a1bb-490a-aeeb-3786ad7b9312 | destroy_test | qcow2       | bare             | 3629645824 | active |
| 73f92385-3080-4a4e-a100-76de38a3a569 | new_snap     | qcow2       | bare             | 3628728320 | active |
| deddabea-475f-4c2f-88e3-0c76612e529c | poll-test1   | qcow2       | bare             | 3629383680 | active |
| df06e227-0d6a-4e2c-90c1-13cd32721360 | rhel         | qcow2       | bare             | 3628990464 | active |
| 6175a441-8cb2-4d35-9b7d-241d51eaa270 | rhel1        | qcow2       | bare             | 3629383680 | active |
+--------------------------------------+--------------+-------------+------------------+------------+--------+

** Affects: nova
   Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/1312796
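As a minimal sketch of the proposed behaviour (not actual Nova code: delete_instance, InstanceSnapshotting, and the force parameter are hypothetical names; only the task-state values match what nova list shows above), the delete path could refuse the request while a snapshot is mid-flight unless --force is given:

    # Hypothetical sketch only; not the real Nova delete path.
    SNAPSHOT_TASK_STATES = ('image_snapshot', 'image_pending_upload')

    class InstanceSnapshotting(Exception):
        """Delete would interrupt a snapshot that is still being uploaded."""

    def delete_instance(instance, force=False):
        """Delete 'instance', refusing while a snapshot is in progress.

        'instance' is any object with 'uuid' and 'task_state' attributes;
        'force' mirrors the --force flag proposed in this report.
        """
        if instance.task_state in SNAPSHOT_TASK_STATES and not force:
            raise InstanceSnapshotting(
                'Instance %s is in task_state %r; wait for the snapshot '
                'to finish or pass --force to delete anyway.'
                % (instance.uuid, instance.task_state))
        # ... normal delete path continues here ...

With a guard like this, the "nova delete" above would have been rejected while the task state was still image_pending_upload, and only an explicit "nova delete --force <uuid>" (a hypothetical flag) would reproduce the current behaviour.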