Ilya, I have followed the instructions on the Ceph website and it worked perfectly 
well. The only addition is to enable RBD caching in ceph.conf. 
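
Roughly speaking, something along these lines in the [client] section of 
ceph.conf (the sizes below are only illustrative, they happen to match the 
upstream defaults, so tune them for your own workload): 

    [client]
    rbd cache = true
    # keep the cache write-through until the guest issues its first flush,
    # so guests that never flush are not silently running write-back:
    rbd cache writethrough until flush = true
    # optional sizing, the defaults are usually fine:
    # rbd cache size = 33554432        # 32 MB per-client cache
    # rbd cache max dirty = 25165824   # 24 MB of dirty data before writeback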

Andrei 

----- Original Message -----

> From: "ilya musayev" <ilya.mailing.li...@gmail.com>
> To: dev@cloudstack.apache.org
> Sent: Thursday, 19 February, 2015 10:21:22 AM
> Subject: Re: Libvirt & RBD caching

> Logan

> Side note: it would help greatly if you could post your notes/guide on
> setting up Ceph as primary storage with CloudStack.
> There aren't any docs out there.

> Thanks
> ilya
> On 2/18/15 12:00 PM, Logan Barfield wrote:
> > Our current deployment is KVM with Ceph RBD primary storage. We have
> > rbd_cache enabled, and use "cache=none" in Qemu by default.
> >
> > I've been running some tests to try to figure out why our write speeds
> > with FreeBSD are significantly lower than with Linux. I was testing
> > both RBD and local SSD storage, with various cache configurations. Out
> > of all of them, the only one that performed close to our standard
> > Linux images was local SSD, Qemu cache=writeback, FreeBSD gpt journal
> > enabled.
> >
> > I've been reading on various lists the reasons and risks for
> > cache=none vs cache=writeback:
> > - cache=none: Safer for live migration
> > - cache=writeback: Ceph RBD docs claim that this is required for
> > data integrity when using rbd_cache
> >
> > From what I can tell, performance is generally the same with both,
> > except in the case of FreeBSD.
> >
> > What is the current line of thinking on this? Should we be using
> > 'none' or 'writeback' with RBD by default? Is 'writeback' considered
> > safe for live migration?
