Hi

I was doing some testing on an erasure-coded CephFS cluster. The cluster is
running the Giant 0.87.1 release.



Cluster info

15 * 36-drive nodes (journal on the same OSD)

3 * 4-drive SSD cache nodes (Intel DC3500)

3 * MON/MDS

EC 10+3

10G Ethernet for the public and cluster networks
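
For reference, the EC pool and cache tier were set up with the usual
commands, roughly as below (the pool names, PG counts and failure domain
here are illustrative placeholders, not necessarily my exact values):

  # 10+3 erasure-code profile, failure domain = host
  ceph osd erasure-code-profile set ec-10-3 k=10 m=3 ruleset-failure-domain=host

  # cold EC pool and replicated SSD cache pool (PG counts are examples)
  ceph osd pool create ecpool 4096 4096 erasure ec-10-3
  ceph osd pool create cachepool 1024 1024

  # put the cache pool in front of the EC pool in writeback mode
  ceph osd tier add ecpool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay ecpool cachepool
  ceph osd pool set cachepool hit_set_type bloom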



We got approx. 55 MB/s read throughput using the ceph-fuse client when the
data was available in the cache tier (cold storage was empty). When I added
more data, Ceph started flushing data from the cache tier to cold storage.
During the flushing, the cluster read speed dropped to approx. 100 KB/s, yet
I still got 50-55 MB/s write throughput during the flushing from multiple
simultaneous ceph-fuse clients (1G Ethernet). I think there is an issue with
promoting data from cold storage to the cache tier during ceph-fuse reads.
Am I hitting a known issue/bug, or is there a problem with my cluster?
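
For completeness, the cache-tier flush/evict knobs I have been looking at
are roughly the following (the pool name and values are only examples, and
I am not sure these are the right settings to fix the read problem):

  # total cache size Ceph uses to decide when to flush/evict (example: ~1 TB)
  ceph osd pool set cachepool target_max_bytes 1099511627776

  # start flushing dirty objects at 40% of target, evicting clean objects at 80%
  ceph osd pool set cachepool cache_target_dirty_ratio 0.4
  ceph osd pool set cachepool cache_target_full_ratio 0.8

  # hit-set settings that influence when objects get promoted into the cache
  ceph osd pool set cachepool hit_set_count 4
  ceph osd pool set cachepool hit_set_period 3600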



I used big video files (approx. 5 GB to 10 GB) for this testing.



Any help?

Cheers
K.Mohamed Pakkeer