[Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread Saverio Proto
Hello there,

I am using cinder with rbd, and most volumes are created from glance
images on rbd as well.
Because of Ceph's features, these volumes are CoW clones, and only the
blocks that differ from the original parent image are actually written.

Today I am debugging why deleting cinder volumes in my production system
has become very slow. It looks like the problem only happens at scale;
I can't reproduce it on my small test cluster.

I read through the cinder.conf reference and found this default value:
=>   volume_clear=zero

Is this parameter evaluated when cinder works with rbd?

This means that every time we delete a volume, we first write all blocks
to zero with a "dd"-like operation and only then really delete it. This
default is designed with the LVM backend in mind: we don't want the next
user to get a raw block device that is dirty and to potentially read data
out of it.
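
To be clear, on the LVM backend that clearing step boils down to roughly
the following (a sketch only; the device path is just an example, and the
real code goes through cinder's volume utility helpers rather than a bare
dd call):

$ dd if=/dev/zero of=/dev/cinder-volumes/volume-<uuid> bs=1M oflag=direct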

But what happens when we are using Ceph rbd as the cinder backend, and
our volumes are CoW clones of Glance images most of the time, so we only
write to Ceph the blocks that differ from the original image? I hope
cinder is not writing zeros to all the rbd objects before actually
deleting the ceph volumes.

Does anybody have any advice on the volume_clear setting to use with rbd?
Or even better, how can I make sure that the volume_clear setting is not
evaluated at all when using the rbd backend?

thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread Chris Friesen

On 11/04/2015 08:46 AM, Saverio Proto wrote:

Hello there,

I am using cinder with rbd, and most volumes are created from glance
images on rbd as well.
Because of Ceph's features, these volumes are CoW clones, and only the
blocks that differ from the original parent image are actually written.

Today I am debugging why deleting cinder volumes in my production system
has become very slow. It looks like the problem only happens at scale;
I can't reproduce it on my small test cluster.

I read through the cinder.conf reference and found this default value:
=>   volume_clear=zero

Is this parameter evaluated when cinder works with rbd?


I don't think that's actually used with rbd, since as you say Ceph uses CoW 
internally.


I believe it's also ignored if you use LVM with thin provisioning.

Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-04 Thread David Wahlstrom
Looking at the code in master (and ignoring tests), the only drivers I see
referencing volume_clear are the LVM and block device drivers:

$ git grep -l volume_clear
driver.py
drivers/block_device.py
drivers/lvm.py
utils.py

So other drivers (netapp, smb, gluster, and of course Ceph/RBD) simply
ignore this option (or more accurately, don't take any action).
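
If you want to double-check the RBD driver specifically, a grep like the
one below (path relative to cinder/volume, same as the listing above)
should come back empty:

$ git grep -n volume_clear -- drivers/rbd.py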


On Wed, Nov 4, 2015 at 8:52 AM, Chris Friesen 
wrote:

> On 11/04/2015 08:46 AM, Saverio Proto wrote:
>
>> Hello there,
>>
>> I am using cinder with rbd, and most volumes are created from glance
>> images on rbd as well.
>> Because of Ceph's features, these volumes are CoW clones, and only the
>> blocks that differ from the original parent image are actually written.
>>
>> Today I am debugging why deleting cinder volumes in my production system
>> has become very slow. It looks like the problem only happens at scale;
>> I can't reproduce it on my small test cluster.
>>
>> I read through the cinder.conf reference and found this default value:
>> =>   volume_clear=zero
>>
>> Is this parameter evaluated when cinder works with rbd?
>>
>
> I don't think that's actually used with rbd, since as you say Ceph uses
> CoW internally.
>
> I believe it's also ignored if you use LVM with thin provisioning.
>
> Chris
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
David W.
Unix, because every barista in Seattle has an MCSE.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] cinder volume_clear=zero makes sense with rbd ?

2015-11-05 Thread Serguei Bezverkhi (sbezverk)
Hello,

With Kilo I noticed that, with the LVM backend, these two parameters are
only honored if they are put in the backend-specific LVM section, like this:

[lvm-local]
iscsi_helper=lioadm
volume_group=cinder-volumes-local
iscsi_ip_address=X.X.X.X
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
volume_backend_name=lvm-local
...
volume_clear=zero
volume_clear_size=300

Otherwise they are ignored, and deleting a volume was taking a lot of time.
I do not use rbd, but I suspect it should be configured the same way there
as well.
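
For comparison, a per-backend rbd section would look something like the
sketch below (the section and pool names are only examples); note that,
per David's grep, the RBD driver never references volume_clear, so there
is nothing to set for it there:

[rbd-ceph]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
volume_backend_name=rbd-ceph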

Thank you

Serguei



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators