Re: [ceph-users] Recommended way to use Ceph as storage for file server
We have an NFS-to-RBD gateway with a large number of smaller RBDs. In our use case we allow users to request their own RBD containers, which are then served up via NFS to a mixed cluster of clients.

Our gateway is quite beefy, probably more than it needs to be: 2x 8-core CPUs and 96 GB RAM. It was pressed into this service from a pool of homogeneous servers rather than being spec'd out for this role explicitly (it could likely be less beefy). Our RBD nodes are connected via 2x 10 Gb NICs in a transmit-load-balance config. The server has performed well in this role, though that could just be the specs.

An individual RBD behind this NFS gateway won't see the parallel performance advantages that CephFS promises. However, one potential advantage is that a multi-RBD backend can simultaneously handle NFS client requests isolated to different RBDs. One RBD may still get a heavy load, but at least the server as a whole has the potential to spread requests across different devices. I haven't done load comparisons, so this is just a point of interest. It's probably moot if the kernel doesn't do a good job of spreading NFS load across threads, or if there is some other kernel/RBD constriction point.

~jpr

On 06/02/2014 12:35 PM, Dimitri Maziuk wrote:
>> A more or less obvious alternative for CephFS would be to simply
>> create a huge RBD and have a separate file server (running NFS /
>> Samba / whatever) use that block device as backend. Just put a
>> regular FS on top of the RBD and use it that way. Clients wouldn't
>> really have any of the real performance and resilience benefits that
>> Ceph could offer though, because the (single machine?) file server
>> is now the bottleneck.
>
> Performance: assuming all your nodes are fast storage on a quad-10Gb
> pipe. Resilience: your gateway can be an active-passive HA pair; that
> shouldn't be any different from NFS+DRBD setups.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
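[For reference, the per-user RBD-over-NFS setup jpr describes could be sketched roughly as below. This is a hypothetical sketch, not jpr's actual procedure: the pool, image, path, and network names are made up, and the exact rbd/exportfs options would depend on the environment.]

```
# Sketch: provision one RBD per user and export it via NFS.
# Assumes a working Ceph cluster and an NFS server on the gateway;
# "userdisks", "alice", and the addresses are illustrative only.

# 1. Create a 100 GB image for this user.
rbd create userdisks/alice --size 102400

# 2. Map it on the gateway; the kernel assigns a /dev/rbdX device.
rbd map userdisks/alice

# 3. Put an ordinary filesystem on it and mount it.
mkfs.xfs /dev/rbd/userdisks/alice
mkdir -p /exports/alice
mount /dev/rbd/userdisks/alice /exports/alice

# 4. Export it over NFS to the client network.
exportfs -o rw,no_subtree_check 10.0.0.0/24:/exports/alice
```

Note that each image maps to its own block device on the gateway, which is the per-RBD isolation jpr mentions.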
[ceph-users] Recommended way to use Ceph as storage for file server
Hi,

In March 2013 Greg wrote an excellent blog post on the (then) current status of MDS/CephFS and the plans for going forward with development:

http://ceph.com/dev-notes/cephfs-mds-status-discussion/

Since then, I understand progress has been slow, and Greg confirmed that he didn't want to commit to any release date yet when I asked him for an update earlier this year. CephFS appears to be a more or less working product and does receive stability fixes every now and then, but I don't think Inktank would call it production ready.

So my question is: I would like to use Ceph as storage for files, as a fileserver or at least as a backend to my fileserver. What is the recommended way to do this?

A more or less obvious alternative to CephFS would be to simply create a huge RBD and have a separate file server (running NFS / Samba / whatever) use that block device as backend: just put a regular FS on top of the RBD and use it that way. Clients wouldn't really get any of the real performance and resilience benefits that Ceph could offer though, because the (single machine?) file server is now the bottleneck.

Any advice / best practice would be greatly appreciated, as would any real-world experience with current CephFS.

Kind regards,

Erik.
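[As a concrete sketch of Erik's single-huge-RBD alternative, the gateway could map one large image at boot and publish it over plain NFS, so clients never talk to Ceph directly. The fragments below are hypothetical: pool/image names, mount point, and network are invented, and the rbdmap/fstab details vary by distribution.]

```
# /etc/ceph/rbdmap -- map the image at boot (pool/image hypothetical)
rbd/fileserver  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- mount the mapped device once it appears
/dev/rbd/rbd/fileserver  /srv/export  xfs  noauto,_netdev  0 0

# /etc/exports -- ordinary NFS export; nothing Ceph-specific client-side
/srv/export  10.0.0.0/24(rw,no_subtree_check)
```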
Re: [ceph-users] Recommended way to use Ceph as storage for file server
On 06/02/2014 10:54 AM, Erik Logtenberg wrote:
> Hi,
>
> In March 2013 Greg wrote an excellent blog post on the (then) current
> status of MDS/CephFS and the plans for going forward with development:
>
> http://ceph.com/dev-notes/cephfs-mds-status-discussion/
>
> Since then, I understand progress has been slow, and Greg confirmed
> that he didn't want to commit to any release date yet when I asked him
> for an update earlier this year. CephFS appears to be a more or less
> working product and does receive stability fixes every now and then,
> but I don't think Inktank would call it production ready.
>
> So my question is: I would like to use Ceph as storage for files, as a
> fileserver or at least as a backend to my fileserver. What is the
> recommended way to do this?
>
> A more or less obvious alternative to CephFS would be to simply create
> a huge RBD and have a separate file server (running NFS / Samba /
> whatever) use that block device as backend: just put a regular FS on
> top of the RBD and use it that way. Clients wouldn't really get any of
> the real performance and resilience benefits that Ceph could offer
> though, because the (single machine?) file server is now the
> bottleneck.
>
> Any advice / best practice would be greatly appreciated, as would any
> real-world experience with current CephFS.

It's kind of a tough call. Your observations regarding the downsides of using NFS with RBD are apt. You could try throwing another distributed storage system on top of RBD and use Ceph for the replication etc., but that's not really ideal either. CephFS is relatively stable with active/standby MDS configurations, but it may still have bugs and there are no guarantees or official support (yet!).

Regardless of what you choose, good luck. :)

> Kind regards,
>
> Erik.
Re: [ceph-users] Recommended way to use Ceph as storage for file server
On 06/02/2014 11:24 AM, Mark Nelson wrote:
>> A more or less obvious alternative to CephFS would be to simply
>> create a huge RBD and have a separate file server (running NFS /
>> Samba / whatever) use that block device as backend. Just put a
>> regular FS on top of the RBD and use it that way. Clients wouldn't
>> really have any of the real performance and resilience benefits that
>> Ceph could offer though, because the (single machine?) file server
>> is now the bottleneck.

Performance: assuming all your nodes are fast storage on a quad-10Gb pipe. Resilience: your gateway can be an active-passive HA pair; that shouldn't be any different from NFS+DRBD setups.

> It's kind of a tough call. Your observations regarding the downsides
> of using NFS with RBD are apt. You could try throwing another
> distributed storage system on top of RBD and use Ceph for the
> replication etc., but that's not really ideal either. CephFS is
> relatively stable with active/standby MDS configurations, but it may
> still have bugs and there are no guarantees or official support
> (yet!).

If you believe in the ten-year rule of thumb, CephFS will become stable enough for production use sometime between 2017 and 2022, depending on whether you start counting from Sage's thesis defense or from the first official code release. ;)

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
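[For what it's worth, the active-passive gateway mentioned above is typically built with Pacemaker. A rough sketch follows; the resource names, pool/image, VIP, and mount point are all hypothetical, and the exact OCF agents and their parameters should be checked against what your distribution's ceph-resource-agents and resource-agents packages actually ship.]

```
# Hypothetical Pacemaker sketch of an active-passive NFS-over-RBD gateway.
# All names/addresses are illustrative; verify agent parameters locally.

# Map the RBD image on whichever node is currently active.
pcs resource create gw-rbd ocf:ceph:rbd pool=rbd name=fileserver

# Mount the filesystem that lives on the mapped device.
pcs resource create gw-fs ocf:heartbeat:Filesystem \
    device=/dev/rbd/rbd/fileserver directory=/srv/export fstype=xfs

# Floating IP that NFS clients mount from, so failover is transparent.
pcs resource create gw-ip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24

# The NFS server itself.
pcs resource create gw-nfs ocf:heartbeat:nfsserver

# Group the resources so they start in order, together, on one node.
pcs resource group add gw-group gw-rbd gw-fs gw-ip gw-nfs
```

Failover then means Pacemaker unmaps/remounts on the surviving node and moves the VIP, much like the NFS+DRBD setups Dimitri compares it to.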