On 13 September 2013 17:12, Simon Leinen <simon.lei...@switch.ch> wrote:
>
> [We're not using is *instead* of rbd, we're using it *in addition to*
>  rbd.  For example, our OpenStack (users') cinder volumes are stored in
>  rbd.]

So you probably keep cinder volumes in RBD but boot instances from
images. That is why you need CephFS for /var/lib/nova/instances. I
suggest creating volumes from the images and booting instances from
those volumes (see the sketch below); CephFS is not required then.
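
For example, with the Grizzly-era clients something along these lines
should work (the image/volume IDs, flavor and names are placeholders,
and the exact option spelling depends on your client version):

  # create a bootable RBD-backed volume from a Glance image (10 GB here)
  cinder create --image-id <image-uuid> --display-name boot-vol 10

  # boot the instance from that volume instead of from the image;
  # vda=<volume-uuid>:::0 means device=volume-id:type:size:delete-on-terminate
  nova boot --flavor m1.small \
      --block-device-mapping vda=<volume-uuid>:::0 my-instance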

> What we want to achieve is to have a shared "instance store"
> (i.e. "/var/lib/nova/instances") across all our nova-compute nodes, so
> that we can e.g. live-migrate instances between different hosts.  And we
> want to use Ceph for that.
>
> In Folsom (but also in Grizzly, I think), this isn't straightforward to
> do with RBD.  A feature[1] to make it more straightforward was merged in
> Havana(-3) just two weeks ago.

I don't get it. I am successfully using live migration (in Grizzly; I
haven't tried Folsom) of instances booted from cinder volumes stored as
RBD volumes. What is not straightforward about that? Are you using KVM?
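
Once the instance is backed by an RBD volume, migrating it is a single
command; a minimal sketch (the instance UUID and target host name are
placeholders, and on KVM/libvirt I'd expect live_migration_flag in
nova.conf to include VIR_MIGRATE_LIVE and VIR_MIGRATE_PEER2PEER, so
check that against your release):

  # live-migrate the running instance to another compute node
  nova live-migration <instance-uuid> compute-02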

> Yes, people want shared storage that they can access in a POSIXly way
> from multiple VMs.  CephFS is a relatively easy way to give them that,
> though I don't consider it "production-ready" - mostly because secure
> isolation between different tenants is hard to achieve.

For now, GlusterFS may be a better fit here.
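
For the shared-POSIX use case, tenants would export a GlusterFS volume
and mount it inside their VMs over the tenant network; a rough example,
with hypothetical server and volume names:

  # inside the guest: mount a shared GlusterFS volume at /mnt/shared
  mount -t glusterfs gluster-server:/shared-vol /mnt/shared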

regards
-- 
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 0000440358 REGON: 101504426
