Public bug reported:
There seems to be a timing issue between nova and cinder where the volume
update (state and attach status) happens, but something fails in between and
the states don't revert.
Sometimes I have a server that looks like it has an attachment, but cinder
lists the volume as available and not attached. See below.
```
$ openstack server show myserver -c volumes_attached
+------------------+--------------------------------------------------------------------------+
| Field            | Value                                                                    |
+------------------+--------------------------------------------------------------------------+
| volumes_attached | delete_on_termination='False', id='9e43a5ad-7fa7-4b18-9d98-1992096e6b60' |
|                  | delete_on_termination='False', id='9e39e53c-9c88-4a41-b447-8008b19b51b2' |
+------------------+--------------------------------------------------------------------------+

$ openstack volume show 9e39e53c-9c88-4a41-b447-8008b19b51b2
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2021-04-16T12:22:40.000000           |
| encrypted           | False                                |
| id                  | 9e39e53c-9c88-4a41-b447-8008b19b51b2 |
| multiattach         | False                                |
| name                | myvolume                             |
| replication_status  | None                                 |
| size                | 187                                  |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | available                            |
| type                | __DEFAULT__                          |
| updated_at          | 2025-05-30T14:14:27.000000           |
+---------------------+--------------------------------------+
```
In this case the volume can't be detached from the server because of this
state (available), and it can't be attached to another server, or to this
server again, because it is already attached according to nova.
This happens from time to time, but more often when there are network glitches.
So it seems like a timing issue: when something fails, the states are not
reset or rolled back correctly, and nova and cinder end up out of sync.
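To make the drift easier to spot, here is a minimal sketch using openstacksdk
(the cloud name "mycloud" is a placeholder for a clouds.yaml entry, and
"myserver" is the server from the output above): it walks the attachments nova
reports for the server and flags any volume that cinder says is available with
no attachment records.
```
import openstack

# Sketch only: "mycloud" is an assumed clouds.yaml entry, "myserver" the
# server shown above.
conn = openstack.connect(cloud="mycloud")

server = conn.compute.find_server("myserver", ignore_missing=False)

# Volumes that nova believes are attached to the server.
for attachment in conn.compute.volume_attachments(server):
    volume = conn.block_storage.get_volume(attachment.volume_id)
    # The out-of-sync case: nova still tracks the attachment, but cinder
    # reports the volume as available with no attachment records.
    if volume.status == "available" and not volume.attachments:
        print(f"out of sync: {volume.id} is '{volume.status}' in cinder "
              f"but listed as attached to {server.id} by nova")
```
This only detects the mismatch; the actual fix would have to be on the
nova/cinder side so that a failed attach/detach rolls both states back
consistently.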
# versions
| RegionOne | compute       | 2.1 | CURRENT | 2.1 | 2.95 |
| RegionOne | block-storage | 3.0 | CURRENT | 3.0 | 3.70 |
** Affects: nova
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/2116931
Title:
  Nova and cinder out of sync

Status in OpenStack Compute (nova):
  New