I recently set up a test cluster of Ceph Octopus on a particular set of 
hybrid OSD nodes.
It ran at a certain baseline I/O level, judging by "fio".
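To give a sense of the kind of fio workload I mean, something roughly along 
these lines (the rbd engine and the pool/image names here are only 
placeholders):

  # 4k random-write test against an RBD image
  fio --ioengine=rbd --clientname=admin --pool=testpool --rbdname=testimg \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=60 --time_based --name=bench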

In the last month or so, I deployed an evaluation cluster of Ceph Pacific 
on the same hardware.
It is *drastically* slower. In some use cases it drops below 100 IOPS.

Could anyone suggest a reason for this? (and ideally, how to retune?)

Did the required minimum effective size of the WAL on SSD grow between 
releases, for example?
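
The BlueStore settings I'd compare between the two clusters are along the 
lines of the following (osd.0 is just an example OSD; the daemon command has 
to run on the node hosting that OSD):

  # cluster-wide effective defaults
  ceph config get osd bluestore_block_wal_size
  ceph config get osd bluestore_block_db_size
  # per-OSD effective values
  ceph daemon osd.0 config show | grep -E 'bluestore_block_(wal|db)_size'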


Target use is an iSCSI storage pool.




--
Philip Brown | Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310 | Fax 714.918.1325 
pbr...@medata.com | www.medata.com
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io