Hi Nigel,

Can you use kernel NFS with Ceph? I am a new Ganesha user and I do not understand what this refers to.

My comparison is based on:
- a Dell R740 server: 16 HDDs, PERC H730P, two RAID 6 arrays of 8 HDDs each (striped), NFSv4, 10Gb Ethernet, 64GB RAM;
- 5 HPE ProLiant DL345: 4 HDDs each (so 20 HDDs), Squid 19.2.2, EC 5+3 setup (2 chunks/node), Ganesha, NFSv4, 25Gb Ethernet (only one network), 128GB RAM each; a rough usable-capacity sketch follows this list;
- the client has 25Gb Ethernet too.
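
As a side note, here is a rough Python sketch (for illustration only; per-disk sizes are not given above, so only the data/parity fractions are compared) of how the usable capacity of the two layouts differs:

# Sketch only: compare usable-capacity fractions of the two layouts above.

def usable_fraction(data_units, coding_units):
    """Fraction of raw capacity available for data."""
    return data_units / (data_units + coding_units)

# Old NFS server: two RAID 6 arrays of 8 HDDs each -> 6 data + 2 parity per array.
raid6 = usable_fraction(6, 2)   # 0.75 of 16 HDDs

# Ceph cluster: EC 5+3 -> 5 data + 3 coding chunks per object.
ec53 = usable_fraction(5, 3)    # 0.625 of 20 HDDs

# With 2 chunks per node and m=3, losing an entire node (2 chunks)
# still leaves every object reconstructible from the remaining 6 chunks.
print(f"RAID 6 (2 x 6+2): {raid6:.1%} usable")
print(f"EC 5+3          : {ec53:.1%} usable")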

The test is 4 processes writing, then reading, 4 remote files of 4GB each, 10 times (so 160GB written, then 160GB read); a sketch of this kind of test follows the results below.
- NFS server test: write avg 287MB/s, read avg 802MB/s
- Ganesha test: write avg *225*MB/s, read avg *340*MB/s
- CephFS (kernel) test: write avg 767MB/s, read avg 1079MB/s
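
For reference, something roughly equivalent to that test could look like the Python sketch below; the mount path, block size and use of plain buffered I/O are illustrative assumptions, not the actual tool used:

#!/usr/bin/env python3
# Illustrative sketch only: 4 processes, each writing then reading a 4GB
# remote file 10 times (160GB written, then 160GB read in total).
# MOUNT, BLOCK and the use of plain buffered I/O are assumptions.

import os
import time
from multiprocessing import Process

MOUNT = "/mnt/test"          # hypothetical mount point (NFS, Ganesha or CephFS)
FILE_SIZE = 4 * 1024**3      # 4GB per file
BLOCK = 4 * 1024**2          # 4MB per I/O call (assumed)
ITERATIONS = 10
NPROC = 4

def write_worker(idx):
    buf = os.urandom(BLOCK)
    path = os.path.join(MOUNT, f"bench_{idx}.dat")
    for _ in range(ITERATIONS):
        with open(path, "wb") as f:
            for _ in range(FILE_SIZE // BLOCK):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())   # make sure the data really left the client

def read_worker(idx):
    path = os.path.join(MOUNT, f"bench_{idx}.dat")
    for _ in range(ITERATIONS):
        with open(path, "rb") as f:
            while f.read(BLOCK):
                pass

def run_phase(target):
    procs = [Process(target=target, args=(i,)) for i in range(NPROC)]
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - start

if __name__ == "__main__":
    total_mb = NPROC * ITERATIONS * FILE_SIZE / 1024**2   # 163840 MB per phase
    w = run_phase(write_worker)
    r = run_phase(read_worker)
    print(f"write avg {total_mb / w:.0f} MB/s, read avg {total_mb / r:.0f} MB/s")

Note that re-reading the same files can be served partly from the client page cache unless caches are dropped between phases, so the read figure from such a sketch is an upper bound.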

This is the first step in building a Ceph cluster, so there are only 5 nodes and a few disks in the cluster at this time. We use HDDs because we need high-capacity storage. The data access profile is infrequent access (the codes do not read or write often) but intensive when it happens (tens to hundreds of parallel processes reading or writing synchronously).

Patrick

On 02/10/2025 at 00:02, Nigel Williams wrote:
On Wed, 1 Oct 2025 at 17:35, Patrick Begou <[email protected]> wrote:

...
I saw that the I/O performance is quite bad with Ganesha,
significantly slower than my old standalone NFS server. Moving to kernel
mounts with CephFS after updating the clients will be the best approach
in the coming months.

We recently spun up a new cluster on current hardware (in our case Dell
R760XD2, spinners and a rear bay with E3.S NVMe) using Squid and are seeing
much better Ganesha performance than we have experienced in the past. Good
enough that we expect we can continue with Ganesha and not use kernel NFS.

cheers,
nigel.

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
