Here you go. Below are the fio job options and the results.

blocksize=4K
size=500MB
directory=[ceph_fs_mount_directory]
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
group_reporting

numjobs   Ceph FS Erasure Coding (k=2, m=1)   Ceph FS 3-Replica
1         577 KB/s                            765 KB/s
2         1.27 MB/s                           793 KB/s
4         2.33 MB/s                           1.36 MB/s
8         4.14 MB/s                           2.36 MB/s
16        6.87 MB/s                           4.40 MB/s
32        11.07 MB/s                          8.17 MB/s
64        13.75 MB/s                          15.84 MB/s
128       10.46 MB/s                          26.82 MB/s
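
For anyone who wants to reproduce these numbers, a sketch of the full invocation, assuming the workload was rw=randwrite (the test under discussion below; the mount path is a placeholder):

fio --name=cephfs-randwrite --rw=randwrite --blocksize=4K --size=500MB \
    --directory=/mnt/cephfs --ioengine=libaio --iodepth=64 --direct=1 \
    --runtime=60 --time_based --group_reporting --numjobs=16

numjobs was varied from 1 to 128 to produce each row of the table.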

On Jun 28, 2018, at 5:01 PM, Yan, Zheng <uker...@gmail.com> wrote:

On Thu, Jun 28, 2018 at 10:30 AM Yu Haiyang <haiya...@moqi.ai> wrote:

Hi Yan,

Thanks for your suggestion.
No, I didn’t run fio on ceph-fuse. I mounted my Ceph FS in kernel mode.
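
For reference, a kernel-mode CephFS mount looks roughly like this (monitor address, credentials, and mount point are placeholders), as opposed to the FUSE client:

mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
ceph-fuse /mnt/cephfs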


What command options did you use for fio?

Regards,
Haiyang

On Jun 27, 2018, at 9:45 PM, Yan, Zheng <uker...@gmail.com> wrote:

On Wed, Jun 27, 2018 at 8:04 PM Yu Haiyang <haiya...@moqi.ai> wrote:

Hi All,

Using fio with the number of jobs ranging from 1 to 128, the random write speed for a 4KB block size has been consistently around 1 MB/s to 2 MB/s.
Random read of the same block size can reach 60 MB/s with 32 jobs.
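
For reference, the read test was along these lines (a sketch assuming the job options listed at the top of the thread; the mount path is a placeholder):

fio --name=cephfs-randread --rw=randread --blocksize=4K --size=500MB \
    --directory=/mnt/cephfs --ioengine=libaio --iodepth=64 --direct=1 \
    --runtime=60 --time_based --group_reporting --numjobs=32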

Did you run fio on ceph-fuse? If I remember right, fio does 1-byte writes, and the overhead of passing each 1-byte write to ceph-fuse is too high.


Our Ceph cluster consists of 4 OSDs, all running on SSDs, connected through a switch with 9.06 Gbits/sec of measured bandwidth.
Any suggestions?
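
For context, a figure like 9.06 Gbits/sec is what a point-to-point network test typically reports, e.g. with iperf3 (host name is a placeholder):

iperf3 -s               # on one OSD host
iperf3 -c osd-host-1    # from another host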

Warmest Regards,
Haiyang