I have a Mimic BlueStore EC RBD pool running 8+2, currently spread
across 4 nodes.

Three of the nodes are running Toshiba disks while one node is running
Seagate disks (same size, spindle speed, enterprise class, etc.). I have
noticed a huge difference in iowait and disk latency between the two
sets of disks, which is also visible in ceph osd perf output during read
and write operations.
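
For anyone who wants to see what I'm looking at, this is roughly how
I've been comparing the two sets of disks (nothing special, just the
stock commands):

    # Watch per-OSD commit/apply latency (ms) while the cluster is
    # under load; the OSDs on the Seagate node show much higher numbers:
    watch -n 1 'ceph osd perf'

    # Map OSD IDs back to hosts, so you know which latencies belong
    # to which node / disk model:
    ceph osd tree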

Speaking to my host (server provider), they benchmarked the two disk
models before approving them for use in this type of server, and they
actually saw higher performance from the Toshiba disks during their
tests.

They did, however, state that their tests were run at larger block
sizes. I imagine that with Ceph using EC 8+2 the block sizes / requests
hitting each disk are quite small?
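
If I understand EC striping correctly (please correct me if not), the
erasure-code stripe unit defaults to 4 KB, so with k=8 the stripe width
is only 32 KB and a full-stripe write hands each data OSD just 4 KB at
a time, far below the block sizes my host benchmarked at. A rough fio
sketch to test a disk directly at that kind of I/O pattern (/dev/sdX is
a placeholder, and this writes raw data, so only run it on a disk that
is out of the cluster):

    # Small, direct, queue-depth-1 synchronous random writes,
    # similar in spirit to what BlueStore + EC send the spinners;
    # run once on a Toshiba disk and once on a Seagate disk:
    fio --name=small-sync-write --filename=/dev/sdX \
        --direct=1 --sync=1 --rw=randwrite --bs=4k \
        --iodepth=1 --runtime=60 --time_based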

Is there anything I can do? Would changing the RBD object size and
stripe unit to something larger than the defaults help? Would that make
the data sent to each disk arrive in larger chunks at once, rather than
lots of smaller blocks?
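
Something like this is what I had in mind (pool and image names are
just placeholders for my setup, and as far as I know an existing image
can't be reshaped, so it would mean creating a new image and migrating
the data):

    # Create an image with 16 MB objects instead of the default 4 MB,
    # with metadata in a replicated pool and data in the EC pool:
    rbd create rbd_meta/test_image --size 100G \
        --data-pool ec_pool \
        --object-size 16M \
        --stripe-unit 4M --stripe-count 4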

If anyone has any other advice, I'm open to trying it.

P.S. I have already disabled the volatile disk cache on all disks; the
cache was causing high write latency across all of them.
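
For reference, this is the kind of thing I did to turn the write cache
off (/dev/sdX is a placeholder; the right tool depends on whether the
disk is SATA or SAS):

    # SATA disks:
    hdparm -W 0 /dev/sdX

    # SAS disks:
    sdparm --clear=WCE /dev/sdX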

Thanks