On 06/02/2014 10:54 AM, Erik Logtenberg wrote:
Hi,

In March 2013 Greg wrote an excellent blog post regarding the (then)
current status of MDS/CephFS and the plans for further development.

http://ceph.com/dev-notes/cephfs-mds-status-discussion/

Since then, I understand progress has been slow; when I asked Greg for an
update earlier this year, he confirmed that he didn't want to commit to
any release date yet.
CephFS appears to be a more or less working product and does receive
stability fixes every now and then, but I don't think Inktank would call
it production-ready.

So my question is: I would like to use Ceph as storage for files, either
as a file server or at least as a backend to my file server. What is the
recommended way to do this?

A more or less obvious alternative to CephFS would be to simply create
a huge RBD and have a separate file server (running NFS / Samba /
whatever) use that block device as a backend. Just put a regular FS on
top of the RBD and use it that way.
Clients wouldn't really get any of the performance and resilience
benefits that Ceph could offer, though, because the (single-machine?)
file server becomes the bottleneck.
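
For concreteness, a minimal sketch of what that RBD-backed setup could
look like (the pool/image names, size, mount point and export network
below are just placeholders, not a recommendation):

  # create and map a big RBD image
  rbd create --size 1048576 rbd/fileserver   # 1 TB image in the default "rbd" pool
  rbd map rbd/fileserver                     # appears under /dev/rbd/rbd/fileserver

  # put a regular FS on it and mount it on the file server
  mkfs.xfs /dev/rbd/rbd/fileserver
  mkdir -p /export/ceph
  mount /dev/rbd/rbd/fileserver /export/ceph

  # export it over NFS: add a line like the following to /etc/exports,
  # then reload the export table
  #   /export/ceph  192.168.0.0/24(rw,sync,no_subtree_check)
  exportfs -ra

Failover of the NFS head (e.g. with Pacemaker) can be layered on top,
but a single active file server remains the throughput bottleneck, as
noted above.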

Any advice / best practice would be greatly appreciated, as would any
real-world experience with the current CephFS.

It's kind of a tough call. Your observations regarding the downsides of using NFS with RBD are apt. You could try throwing another distributed storage system on top of RBD and use Ceph for the replication/etc, but that's not really ideal either. CephFS is relatively stable with active/standby MDS configurations, but it may still have bugs and there are no guarantees or official support (yet!).
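
If you do want to experiment with CephFS, a rough sketch of an active/standby setup (the daemon ID, monitor address, user and secret file below are just examples) is to run an additional ceph-mds daemon, which registers itself as a standby, and then mount the filesystem with the kernel client:

  # start a second MDS daemon (here with id "b"); it comes up as a standby
  ceph-mds -i b
  ceph mds stat    # should report something like: 1 up:active, 1 up:standby

  # mount with the kernel client
  mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

If the active MDS dies, the standby takes over after a short failover window.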

Regardless of what you choose, good luck. :)


Kind regards,

Erik.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

