Hi Caius,

This has existed in the rbd cinder driver since volume-to-image was added:

https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L823

Cinder falls back to doing the full copy if glance doesn't report the
location, or if the image is not in raw format.

If glance doesn't have show_image_direct_url = True, or cinder doesn't
have glance_api_version = 2, cinder won't be able to do the clone. See

http://ceph.com/docs/master/rbd/rbd-openstack/#configure-openstack-to-use-ceph

for more details.
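For reference, a minimal sketch of the two settings mentioned above (file
paths and section placement follow the usual defaults; adjust for your
deployment):

```ini
# glance-api.conf -- let glance expose the direct rbd location of images
[DEFAULT]
show_image_direct_url = True

# cinder.conf -- use the v2 glance API, which carries image locations
[DEFAULT]
glance_api_version = 2
```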

Josh


On 07/29/2015 07:36 AM, Caius Howcroft wrote:
Hi,

We (bloomberg) are preparing to roll out kilo into production and one
thing is causing a lot of grief. I wonder if anyone else has
encountered it.

We run BCPC (https://github.com/bloomberg/chef-bcpc), which is Ceph
backed. When we boot an instance from volume, cinder's create volume
from image function (
https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/drivers/rbd.py#L850)
ends up pulling the entire image through the glance API, so lots of
tenants doing this puts quite a bit of load on our API nodes.

We were confused about why it did this, since it's far more efficient to
go directly via rbd clone. We created a patch and tested it, and it seems
to work just fine (and is an order of magnitude faster):
https://github.com/bloomberg/chef-bcpc/pull/742
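For anyone unfamiliar with the mechanism, the copy-on-write path boils
down to a snapshot, protect, and clone on the rbd side. A rough sketch
with the standard rbd CLI (the pool and object names here are
illustrative placeholders, not what cinder actually uses):

```
# snapshot the glance image once and protect it so clones can depend on it
rbd snap create images/<image-id>@snap
rbd snap protect images/<image-id>@snap

# create the volume as a copy-on-write clone -- no data is copied up front
rbd clone images/<image-id>@snap volumes/volume-<volume-id>
```

No image bytes traverse the glance API at all; only metadata changes, which
is why the clone path is so much faster than the full copy.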

So, the question is: what are other Ceph-backed installations doing?


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators