We have an NFS to RBD gateway with a large number of smaller RBDs. In
our use case we are allowing users to request their own RBD containers
that are then served up via NFS into a mixed cluster of clients. Our
gateway is quite beefy, probably more than it needs to be: 2x8-core
CPUs and 96GB RAM. It was pressed into this service, drawn from a pool
of homogeneous servers rather than being spec'd out for this role
explicitly (it could likely be less beefy). Our RBD nodes are connected
via 2x10Gb NICs in a transmit-load-balance bonding config.
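
For reference, a minimal sketch of how one per-user export is wired up
on a gateway like this (the pool, image, and path names below are
placeholders, not our actual layout):

    # create and map a per-user RBD on the gateway
    rbd create rbdpool/user1 --size 102400      # 100GB image
    rbd map rbdpool/user1                       # shows up as /dev/rbd/rbdpool/user1

    # put a regular filesystem on it and mount it
    mkfs.xfs /dev/rbd/rbdpool/user1
    mkdir -p /exports/user1
    mount /dev/rbd/rbdpool/user1 /exports/user1

    # export it over NFS: add an entry to /etc/exports, e.g.
    #   /exports/user1  192.168.1.0/24(rw,sync,no_subtree_check)
    exportfs -ra

The transmit-load-balance setup on the NICs is just standard Linux
bonding mode 5 (balance-tlb), e.g. BONDING_OPTS="mode=balance-tlb
miimon=100" on the bond interface.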

The server has performed well in this role, though that could just be
the specs. An individual RBD behind this NFS gateway won't see the
parallel performance advantages that CephFS promises; however, one
potential advantage is that a multi-RBD backend can service NFS client
requests isolated to different RBDs simultaneously. One RBD may still
get a heavy load, but at least the server as a whole has the potential
to spread requests across different block devices.
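
As a rough illustration of that spread (device and mount names here are
hypothetical), every export sits on its own rbd device, so load from
different clients lands on separate devices:

    # each NFS export is backed by its own RBD device
    mount | grep /exports
    #   /dev/rbd0 on /exports/user1 type xfs (rw)
    #   /dev/rbd1 on /exports/user2 type xfs (rw)

    # watch per-RBD utilization while different clients are active
    iostat -x 5 rbd0 rbd1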

I haven't done load comparisons, so this is just a point of interest.
It's probably moot if the kernel doesn't do a good job of spreading NFS
load across its server threads, or if there is some other kernel/RBD
choke point.
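
If someone wanted to poke at that, the first knob I'd look at is the
kernel NFS server thread count (standard Linux paths below, nothing
we've tuned specifically on this box):

    # how many nfsd threads are currently running
    cat /proc/fs/nfsd/threads

    # raise the thread count on the fly
    rpc.nfsd 64

    # persistent setting: RPCNFSDCOUNT=64 in /etc/sysconfig/nfs
    # (RHEL/CentOS) or /etc/default/nfs-kernel-server (Debian/Ubuntu)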

~jpr

On 06/02/2014 12:35 PM, Dimitri Maziuk wrote:
>> A more or less obvious alternative for CephFS would be to simply create
>> a huge RBD and have a separate file server (running NFS / Samba /
>> whatever) use that block device as backend. Just put a regular FS on top
>> of the RBD and use it that way.
>> Clients wouldn't really have any of the real performance and resilience
>> benefits that Ceph could offer though, because the (single machine?)
>> file server is now the bottleneck.
> Performance: assuming all your nodes are fast storage on a quad-10Gb
> pipe. Resilience: your gateway can be an active-passive HA pair, that
> shouldn't be any different from NFS+DRBD setups.
>
