> 
> Good afternoon everybody!
> 
> I have the following scenario:
> Pool RBD replication x3
> 5 hosts with 12 SAS spinning disks each

Old hardware?  SAS is mostly dead.

> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> -iodepth=16 -rw=write -filename=./test.img

On what kind of client?  

> If I increase the block size I can easily reach 1.5 GB/s or more.
> 
> But when I use a 4K block size I get a measly 12 megabytes per second,
> which is quite annoying.  I get the same rate with rw=read.

Especially if your client is a VM, check whether you have IOPS throttling.  With
small block sizes you'll hit an IOPS cap long before a bandwidth cap.
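
Note the arithmetic: 12 MB/s at a 4 KiB block size is only about 3,000 IOPS,
the kind of round number a throttle tends to produce.  If the client is a
libvirt/QEMU guest, a quick sketch of how to read back the current per-device
limits -- "myvm" and "vda" below are placeholder names:

# With no limit arguments, blkdeviotune just prints the current settings.
virsh blkdeviotune myvm vda

# Alternatively, look for an <iotune> element in the domain XML:
virsh dumpxml myvm | grep -A 6 '<iotune>'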

> Note: I tested it on another smaller cluster, with 36 SAS disks and got the
> same result.

SAS has a price premium over SATA, and still requires an HBA.  Many chassis 
vendors really want you to buy an anachronistic RoC HBA.

Eschewing SAS and the HBA helps close the price gap and makes SSDs easier to 
justify; the TCO just doesn't favor spinners.

> Maybe the 5 host cluster is not
> saturated by your current fio test. Try running 2 or 4 in parallel.


Agreed that Ceph is a scale-out solution, not DAS, but note the difference 
reported with a larger block size.
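
If you want to run the suggested parallel test from a single command, a sketch
using fio's numjobs and filename_format options (values mirror the original
command; the file names are placeholders):

# 4 parallel writers, one 10G file each.  fio expands $jobnum itself,
# so the jobs don't share a file; group_reporting sums their results.
fio -ioengine=libaio -direct=1 -invalidate=1 -name=partest -bs=4M \
    -size=10G -iodepth=16 -rw=write -numjobs=4 \
    -filename_format='test.$jobnum.img' -group_reporting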

> How is this related to 60 drives?  His test is only on 3 drives at a time, no?

RBD volumes by and large will live on most or all OSDs in the pool.  An image 
is striped across many objects (4 MiB by default), and CRUSH maps each object 
to its own set of three OSDs, so only each individual write is confined to 3 
drives -- the image as a whole is not.
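
You can see the fan-out yourself: "rbd info" prints the image's object size
and its block_name_prefix, and "ceph osd map" shows where any given object
lands.  The pool/image names below are placeholders, and <prefix> comes from
the rbd info output:

# Note the object size (4 MiB by default) and "block_name_prefix".
rbd info rbd/testimg

# Two consecutive data objects; each maps to its own PG and acting set
# of 3 OSDs, so a large image spreads across most of the pool.
ceph osd map rbd <prefix>.0000000000000000
ceph osd map rbd <prefix>.0000000000000001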

> 
> I don't know exactly what to look for or configure to have any improvement.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
