I have a 3-node, 15-OSD Ceph cluster:

* 15 × 7200 RPM SATA disks, 5 per node
* 10G network
* Intel(R) Xeon(R) CPU E5-2620 (6 cores) @ 2.00GHz per node
* 64G RAM per node

I deployed the cluster with ceph-deploy and created a new data pool for
CephFS. Both the data and metadata pools are set to replica size 3. I then
mounted CephFS on one of the three nodes and tested the performance with
fio.
The sequential read performance looks good:

fio -direct=1 -iodepth 1 -thread -rw=read -ioengine=libaio -bs=16K -size=1G -numjobs=16 -group_reporting -name=mytest -runtime 60
  read : io=10630MB, bw=181389KB/s, iops=11336, runt=60012msec
But the sequential write, random read, and random write performance is very poor:

fio -direct=1 -iodepth 1 -thread -rw=write -ioengine=libaio -bs=16K -size=256M -numjobs=16 -group_reporting -name=mytest -runtime 60
  write: io=397280KB, bw=6618.2KB/s, iops=413, runt=60029msec

fio -direct=1 -iodepth 1 -thread -rw=randread -ioengine=libaio -bs=16K -size=256M -numjobs=16 -group_reporting -name=mytest -runtime 60
  read : io=665664KB, bw=11087KB/s, iops=692, runt=60041msec

fio -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=libaio -bs=16K -size=256M -numjobs=16 -group_reporting -name=mytest -runtime 60
  write: io=361056KB, bw=6001.1KB/s, iops=375, runt=60157msec
I am mostly surprised by the sequential write performance compared to the
raw SATA disk performance (a single disk gets 4127 IOPS on the same workload
when mounted with ext4). My CephFS gets only about 1/10 of the raw disk's
performance.
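For what it's worth, here is a back-of-envelope check I did on the numbers above (my own sketch; the only inputs are the aggregate IOPS fio reported). With iodepth=1 each job has exactly one I/O in flight, so per-job IOPS is bounded by the per-operation latency, which suggests the slow cases are latency-bound rather than disk-throughput-bound:

```python
# Convert the aggregate fio IOPS above into per-job IOPS and the
# implied per-operation latency. With iodepth=1, each of the 16 jobs
# waits for one I/O at a time, so per-job IOPS = 1000 / latency_ms.

jobs = 16

results = {            # aggregate IOPS as reported by fio
    "seq read":   11336,
    "seq write":    413,
    "rand read":    692,
    "rand write":   375,
}

for name, iops in results.items():
    per_job = iops / jobs
    latency_ms = 1000.0 / per_job
    print(f"{name:10s}: {per_job:7.1f} IOPS/job, ~{latency_ms:5.1f} ms per op")
```

By this estimate each sequential write is taking roughly 39 ms end to end, versus about 1.4 ms for a sequential read, which would be consistent with every write having to commit synchronously to three replicas before being acknowledged.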
How can I tune my cluster to improve the sequential write, random read, and
random write performance?

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com