Hello,

I'm running metadata-operation benchmarks for CephFS, HDFS, and HopsFS on
Google Cloud. In my current setup, I'm using 32-vCPU machines with 29 GB of
memory, and I have 1 MDS, 1 MON, and 3 OSDs. The MDS and the MON are
co-located on one VM, while each OSD is on a separate VM with one SSD
attached. I'm using the default configuration for the MDS and the OSDs.

I'm running 300 clients on 10 machines (16 vCPUs each). Each client creates
a CephFileSystem using the CephFS Hadoop plugin, then writes empty files
for 30 seconds, followed by reading those empty files for another 30
seconds. The aggregated throughput is around 2,000 file create
operations/sec and 10,000 file read operations/sec. However, the MDS is not
fully utilizing the 32 cores on its machine. Is there any configuration I
should consider to fully utilize the machine?
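
For concreteness, each client thread does roughly the following against the
standard Hadoop FileSystem API (a simplified sketch of my benchmark loop;
the monitor address, path layout, and the fs.ceph.impl class name are
illustrative rather than copied from my actual config):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MetadataBench implements Runnable {
        private static final long PHASE_MS = 30_000;  // 30 s per phase
        private final int clientId;

        MetadataBench(int clientId) { this.clientId = clientId; }

        @Override
        public void run() {
            try {
                Configuration conf = new Configuration();
                // Plugin class name and monitor address are illustrative.
                conf.set("fs.ceph.impl",
                         "org.apache.hadoop.fs.ceph.CephFileSystem");
                FileSystem fs = FileSystem.newInstance(
                        URI.create("ceph://mon-host:6789/"), conf);

                Path dir = new Path("/bench/client-" + clientId);
                fs.mkdirs(dir);

                // Phase 1: create empty files for 30 seconds.
                long created = 0;
                long deadline = System.currentTimeMillis() + PHASE_MS;
                while (System.currentTimeMillis() < deadline) {
                    fs.create(new Path(dir, "f" + created++)).close();
                }

                // Phase 2: open/close the same empty files for 30 seconds.
                long reads = 0;
                deadline = System.currentTimeMillis() + PHASE_MS;
                while (System.currentTimeMillis() < deadline) {
                    fs.open(new Path(dir, "f" + (reads++ % created))).close();
                }
                fs.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Since an empty-file create is just create() followed immediately by
close(), and the reads open zero-byte files, the workload should be
essentially pure metadata traffic to the MDS, with no data hitting the
OSDs.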

Also, I noticed that running more than 20-30 clients (on different threads)
per machine degrades the aggregated read throughput. Is there a limitation
in CephFileSystem or libcephfs on the number of clients created per
machine?
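
In case it matters for this question: each thread creates its own
CephFileSystem instance. A minimal sketch of the two ways I understand
this can be done with the Hadoop API (FileSystem.get() returns one cached,
shared instance per scheme/authority/user, while FileSystem.newInstance()
bypasses that cache; that each uncached instance maps to its own libcephfs
client underneath is an assumption on my part):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ClientFactory {
        // Monitor address is illustrative.
        private static final URI CEPH = URI.create("ceph://mon-host:6789/");

        // Cached: get() hands every thread the same instance for a given
        // (scheme, authority, user), i.e. one shared client per JVM.
        static FileSystem cached(Configuration conf) throws Exception {
            return FileSystem.get(CEPH, conf);
        }

        // Uncached: newInstance() bypasses the cache, so each call returns
        // a fresh FileSystem (and, I assume, a fresh libcephfs client).
        static FileSystem perThread(Configuration conf) throws Exception {
            return FileSystem.newInstance(CEPH, conf);
        }
    }

As I understand it, setting fs.ceph.impl.disable.cache=true in the
Configuration (Hadoop's generic fs.<scheme>.impl.disable.cache convention)
makes FileSystem.get() behave like newInstance() for the ceph scheme.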

Another question: are MDS operations single-threaded, as suggested here:
https://www.slideshare.net/XiaoxiChen3/cephfs-jewel-mds-performance-benchmark
?
Regarding the MDS global lock, is it a single lock per MDS, or one global
distributed lock shared by all MDSs?

Regards,
Mahmoud