When running fio RBD benchmarks with Ceph 0.94.7 on Ubuntu 14.04, using 10 SSD OSDs
both with and without journals on separate SSDs, I get an even distribution
of IO across the OSDs and across the journals (when used).
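In case it helps, the runs use an fio RBD job along these lines; the pool,
image name, IO pattern, block size, and runtime below are placeholders for
illustration, only the rbd engine and iodepth=32 reflect the actual tests:

    # rough sketch of the fio invocation (pool/image/bs/rw are placeholders)
    fio --name=rbd_bench \
        --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=fio_test \
        --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=1 \
        --runtime=300 --time_based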

If I drop the number of OSDs down to 8, the IO to the journals is skewed by
40%, meaning one journal is doing 40% more IO than the other. On the same
test, with the same iodepth of 32, performance drops because one journal is
90% busy while the other journal is only 40% busy.
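The 90%/40% busy numbers are per-device utilization (iostat %util or
similar); a command along these lines, with placeholder names for the two
journal SSDs, is enough to see the imbalance:

    # extended per-device stats for the two journal SSDs, every 5 seconds
    # /dev/sdx and /dev/sdy are placeholders for the actual journal devices
    iostat -xm /dev/sdx /dev/sdy 5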

I did another set of tests putting the journals on a single PCIe flash
card. Using 10 SSD OSDs, Ceph ops are very consistent. If I drop the number
of OSDs down to 8, Ceph ops vary from 28,000 down to 10,000 and
performance drops.
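For reference, the cluster's own client op/s reporting is enough to see the
swing between roughly 28k and 10k; either of these works:

    # stream cluster status; the pgmap/client io lines show current op/s
    ceph -w
    # or poll the summary every couple of seconds
    watch -n 2 ceph -s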

The SSDs and the PCIe card operate without any errors, and the appropriate
device tuning has been applied. Ceph health is OK, with no errors at the
system or Ceph level.
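For completeness, the kind of checks I mean are along these lines (a rough
sketch, not the exact commands):

    # cluster-level health
    ceph health detail
    # per-OSD filestore commit/apply latencies, useful for spotting a lagging journal
    ceph osd perf
    # kernel-level device errors
    dmesg | grep -i error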

Any ideas what could be causing this behavior?
Thanks

Rick Stehno
Sr. Database and Ceph Performance Architect  @ Seagate
