I'm not sure what's normal, but I'm on OpenStack Juno with Ceph 0.94.5 using
separate pools for nova, glance, and cinder.  It takes about 16 seconds to
start an instance (EL7 minimal image).
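
For what it's worth, the two things that have to be in place for the fast COW
path are glance exposing its RBD image locations and nova writing instance
disks straight to RBD.  A rough sketch of the relevant options for a Juno-era
setup follows; the pool names, cephx user, and exact config sections are just
examples from my layout, so adjust for yours:

  # glance-api.conf -- let clients see the rbd location so they can clone it
  show_image_direct_url = True
  default_store = rbd
  rbd_store_pool = images

  # nova.conf, [libvirt] section -- put instance disks directly on rbd
  images_type = rbd
  images_rbd_pool = vms
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>

The glance image also needs to be raw format; a qcow2 image gets downloaded
and converted instead of cloned, and that alone can easily eat a minute.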

Everything is on 10GbE and I'm using cache tiering, which I'm sure speeds
things up.  I can personally verify that COW is working: I recently killed my
images pool through a combination of a bug and user error, and had to recreate
the base image and re-associate each VM disk with the parent ID of the new
base image before I could get all my VMs working again.
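
If you want to check the same thing on your cluster, the parent link is
visible straight from the rbd CLI.  Pool names below are just my layout, the
<...> bits are placeholders, and "snap" is the snapshot name glance normally
creates on an image:

  # nova's rbd backend usually names an instance disk <instance uuid>_disk
  rbd info vms/<instance-uuid>_disk
  #   a COW clone shows a line like:  parent: images/<glance-image-id>@snap

  # list every clone hanging off a glance image
  rbd children images/<glance-image-id>@snap

If the instance disk has no parent line at all, nova did a full copy rather
than a clone, and that is most likely where the 90 seconds are going.  And
cloning works fine across pools -- glance pool parent, nova pool clone is
exactly the setup described above.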

On Mon, Feb 8, 2016 at 6:10 AM, Vickey Singh <vickey.singh22...@gmail.com>
wrote:

> Hello Community
>
> I need some guidance on how I can reduce OpenStack instance boot time using
> Ceph.
>
> We are using Ceph storage with OpenStack (cinder, glance and nova). All
> OpenStack images and instances are stored on Ceph in separate pools, the
> glance and nova pools respectively.
>
> I assume that Ceph by default uses COW RBD clones, so for example if an
> instance is launched from a glance image (which is stored on Ceph), Ceph
> should take a COW snapshot of the glance image and map it as the RBD disk
> for the instance. And this whole process should be very quick.
>
> In our case, the instance launch is taking 90 seconds. Is this normal? (I
> know this really depends on one's infra, but still.)
>
> Is there any way I can utilize Ceph's power and launch instances even
> faster?
>
> - From the Ceph point of view, does COW work cross-pool, i.e. image in the
> glance pool ---> (COW) ---> instance disk in the nova pool?
> - Will a single pool for glance and nova, instead of separate pools, help
> here?
> - Is there any tunable parameter on the Ceph or OpenStack side that should
> be set?
>
> Regards
> Vickey
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
