[Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread Saverio Proto
Hello there, I am using cinder with rbd, and most volumes are created from glance images, also on rbd. Thanks to Ceph's features, these volumes are CoW clones, and only blocks that differ from the parent image are actually written. Today I am debugging why, in my production system, deleting cinder vo…
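For context, the wipe-on-delete behaviour this thread discusses is controlled by the volume_clear option in cinder.conf. A minimal sketch of the relevant settings (section placement assumes a single backend; the option names are the standard Cinder ones from this era):

```ini
[DEFAULT]
# How to wipe volume data on deletion: zero, shred, or none.
# Only honoured by drivers that actually implement clearing.
volume_clear = zero
# Limit wiping to the first N MiB of the volume (0 = wipe it all).
volume_clear_size = 0
```
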

Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread Chris Friesen
On 11/04/2015 08:46 AM, Saverio Proto wrote: Hello there, I am using cinder with rbd, and most volumes are created from glance images on rbd as well. Because of ceph features, these volumes are CoW and only blocks different from the original parent image are really written. Today I am debugging…

Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread David Wahlstrom
Looking at the code in master (and ignoring tests), the only drivers I see referencing volume_clear are the LVM and block-device drivers:

$ git grep -l volume_clear
driver.py
drivers/block_device.py
drivers/lvm.py
utils.py

So other drivers (NetApp, SMB, Gluster, and of course Ceph/RBD) simply ig…
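The reason zeroing makes little sense for RBD is worth making concrete. The sketch below is a toy model (not Ceph code) of a copy-on-write clone: zero-filling a clone before deletion does not reclaim anything, it *allocates* a local copy of every block that was previously a free reference to the parent image.

```python
# Toy CoW model (hypothetical, not Ceph's implementation) showing why
# zero-filling a cloned volume before deleting it is counterproductive.

class CowClone:
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared, read-only parent image
        self.local = {}               # blocks actually written to the clone

    def write(self, index, data):
        self.local[index] = data      # CoW: first write allocates locally

    def read(self, index):
        return self.local.get(index, self.parent[index])

    def allocated_blocks(self):
        return len(self.local)        # space the clone really consumes


parent = [b"image-data"] * 1000       # 1000-block parent image
clone = CowClone(parent)
print(clone.allocated_blocks())       # 0: a fresh clone stores nothing

# "Clearing" the volume by writing zeros to every block:
for i in range(1000):
    clone.write(i, b"\x00" * 10)

print(clone.allocated_blocks())       # 1000: now fully allocated
```

Deleting the RBD image releases all of its blocks regardless, so the zero pass only burns I/O and temporarily inflates usage; and as the grep above shows, the RBD driver never consults volume_clear in the first place.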

Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-05 Thread Serguei Bezverkhi (sbezverk)
From: David Wahlstrom [mailto:david.wahlst...@gmail.com] Sent: Wednesday, November 04, 2015 12:53 PM To: OpenStack Operators Subject: Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ? …