The original reporter said they could no longer reproduce the bug.

** Changed in: nova
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482171

Title:
  cinder ceph volume cannot attach to instance in Kilo

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Kilo release on CentOS 7.

  When nova attaches a Ceph-based volume to an existing instance, it
  does not notify cinder(?) about the attachment, so I cannot detach
  and/or delete the volume later. The volume is actually attached to
  the instance.

  ## Cinder volume and nova instance

  $ cinder list
  +--------------------------------------+-----------+-------------+------+--------------+----------+-------------+-------------+
  |                  ID                  |   Status  |     Name    | Size | Volume Type  | Bootable | Multiattach | Attached to |
  +--------------------------------------+-----------+-------------+------+--------------+----------+-------------+-------------+
  | 736ead1d-2c59-4415-a0a3-8a1e8379872d | available | test volume |  10  | ceph_cluster |  false   |    False    |             |
  +--------------------------------------+-----------+-------------+------+--------------+----------+-------------+-------------+

  $ nova list
  +--------------------------------------+-------+--------+------------+-------------+---------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks            |
  +--------------------------------------+-------+--------+------------+-------------+---------------------+
  | 890cce5e-8e5c-4871-ba80-6fd5a4045c2f | ubu 1 | ACTIVE | -          | Running     | daddy-net=10.0.0.19 |
  +--------------------------------------+-------+--------+------------+-------------+---------------------+
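
  ## for a per-volume cross-check, cinder show prints the attachments
  ## list for the same volume (a sketch using the ID from the list
  ## above):
  $ cinder show 736ead1d-2c59-4415-a0a3-8a1e8379872d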


  ## attach volume to instance
  $ nova volume-attach 890cce5e-8e5c-4871-ba80-6fd5a4045c2f 736ead1d-2c59-4415-a0a3-8a1e8379872d
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | 736ead1d-2c59-4415-a0a3-8a1e8379872d |
  | serverId | 890cce5e-8e5c-4871-ba80-6fd5a4045c2f |
  | volumeId | 736ead1d-2c59-4415-a0a3-8a1e8379872d |
  +----------+--------------------------------------+
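
  ## the nova and libvirt side can be cross-checked as well:
  ## volume-attachments lists what nova has recorded for the server,
  ## and virsh shows what libvirt actually attached (instance-00000001
  ## is a placeholder; nova show prints the real domain name as
  ## OS-EXT-SRV-ATTR:instance_name):
  $ nova volume-attachments 890cce5e-8e5c-4871-ba80-6fd5a4045c2f
  $ virsh domblklist instance-00000001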


  ## cinder (and nova) do not know anything about the attachment (the 'Attached to' field is empty):

  $ cinder list
  +--------------------------------------+-----------+-------------+------+--------------+----------+-------------+-------------+
  |                  ID                  |   Status  |     Name    | Size | Volume Type  | Bootable | Multiattach | Attached to |
  +--------------------------------------+-----------+-------------+------+--------------+----------+-------------+-------------+
  | 736ead1d-2c59-4415-a0a3-8a1e8379872d | available | test volume |  10  | ceph_cluster |  false   |    False    |             |
  +--------------------------------------+-----------+-------------+------+--------------+----------+-------------+-------------+
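
  ## when the API output is in doubt, cinder's database can be read
  ## directly (a sketch; assumes the database is named cinder and uses
  ## the volume_attachment table of the Kilo schema):
  $ mysql cinder -e "SELECT status, attach_status FROM volumes WHERE id='736ead1d-2c59-4415-a0a3-8a1e8379872d';"
  $ mysql cinder -e "SELECT * FROM volume_attachment WHERE volume_id='736ead1d-2c59-4415-a0a3-8a1e8379872d';"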


  ## while the instance can use the new volume:

  root@ubu-1:~# mkfs.ext4 /dev/vdb 
  mke2fs 1.42.12 (29-Aug-2014)
  Creating filesystem with 2621440 4k blocks and 655360 inodes
  Filesystem UUID: 450b9169-087e-4dba-aac6-4a23593a5a97
  Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

  Allocating group tables: done                            
  Writing inode tables: done                            
  Creating journal (32768 blocks): done
  Writing superblocks and filesystem accounting information: done 

  root@ubu-1:~# mount /dev/vdb /mnt/
  root@ubu-1:~# df -h
  Filesystem                          Size  Used Avail Use% Mounted on
  udev                                997M     0  997M   0% /dev
  tmpfs                               201M  4.6M  196M   3% /run
  /dev/disk/by-label/cloudimg-rootfs   20G  858M   19G   5% /
  tmpfs                              1001M     0 1001M   0% /dev/shm
  tmpfs                               5.0M     0  5.0M   0% /run/lock
  tmpfs                              1001M     0 1001M   0% /sys/fs/cgroup
  tmpfs                               201M     0  201M   0% /run/user/1000
  /dev/vdb                            9.8G   23M  9.2G   1% /mnt
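
  ## the ceph side confirms the data path worked: the rbd driver names
  ## images volume-<volume id> inside the configured pool (the pool
  ## name "volumes" below is an assumption; check rbd_pool in
  ## cinder.conf):
  $ rbd -p volumes ls | grep 736ead1d
  $ rbd -p volumes info volume-736ead1d-2c59-4415-a0a3-8a1e8379872d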


  
  ## finally, I cannot delete the volume; cinder gets stuck in the 'deleting' state and does not delete it from the ceph pool
  $ cinder list
  +--------------------------------------+----------+-------------+------+--------------+----------+-------------+-------------+
  |                  ID                  |  Status  |     Name    | Size | Volume Type  | Bootable | Multiattach | Attached to |
  +--------------------------------------+----------+-------------+------+--------------+----------+-------------+-------------+
  | 736ead1d-2c59-4415-a0a3-8a1e8379872d | deleting | test volume |  10  | ceph_cluster |  false   |    False    |             |
  +--------------------------------------+----------+-------------+------+--------------+----------+-------------+-------------+

  
  The logs do not show any errors.
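
  ## a possible admin-side recovery (a sketch; cinder reset-state only
  ## rewrites the volume row in the cinder database and does not touch
  ## the hypervisor, so make sure the guest no longer uses /dev/vdb):
  $ cinder reset-state --state available 736ead1d-2c59-4415-a0a3-8a1e8379872d
  $ cinder delete 736ead1d-2c59-4415-a0a3-8a1e8379872d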

  OS: CentOS 7

  cinder and nova versions:

  python-cinder-2015.1.0-3.el7.noarch
  openstack-cinder-2015.1.0-3.el7.noarch
  python-cinderclient-1.1.1-1.el7.noarch

  python-novaclient-2.23.0-1.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-cert-2015.1.0-3.el7.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch

  Expected result: able to attach a Ceph-based cinder volume to
  instance(s), and then detach and delete it.

  Actual result: nova volume-attach attaches the volume to the
  instance, but nova and cinder are not aware of the attachment, so I
  cannot detach or delete the volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482171/+subscriptions
