Hello,

I've just upgraded a cluster from Pacific to Quincy, and all my OSDs now report the low value osd_mclock_max_capacity_iops_hdd = 315.000000.

The manual does not explain how to benchmark an OSD with fio or ceph bench, or which options to use. Could someone share suitable ceph bench or fio options for determining osd_mclock_max_capacity_iops_hdd for each OSD?
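For reference, here is the kind of fio job file I had in mind for measuring raw 4k random-write IOPS. All of the parameters (and the device path) are my own guesses, not taken from the manual, so corrections are welcome:

```ini
; hypothetical job file (osd-iops.fio); every value below is a guess
[osd-iops]
; WARNING: destructive if pointed at a raw device holding data
filename=/dev/sdX
rw=randwrite
bs=4k
direct=1
ioengine=libaio
iodepth=16
numjobs=1
runtime=60
time_based=1
```

I am especially unsure whether iodepth=16 is representative of what the mclock scheduler expects, since ceph bench appears to issue I/O differently.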

I ran the following bench several times on the same OSD (class hdd) and got very different results each time:
ceph tell ${osd} cache drop
ceph tell ${osd} bench 12288000 4096 4194304 100

Example:
osd.21 (hdd): osd_mclock_max_capacity_iops_hdd = 315.000000
    bench 1 : 3006.2271379745534
    bench 2 : 819.503206458996
    bench 3 : 946.5406320134085
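If averaging several runs is the right approach, I could script it like the sketch below. The choice of osd.21, the run count, and the use of jq to extract the "iops" field from the bench JSON output are my assumptions, not something I found in the documentation:

```shell
#!/bin/sh
# Sketch (not yet validated on a live cluster): run the OSD bench several
# times, average the reported IOPS, and store the result in the mclock option.

bench_avg() {
    # Average the numbers on stdin (one per line), rounded to an integer.
    awk '{ total += $1; n++ } END { printf "%.0f\n", total / n }'
}

calibrate_osd() {
    # Usage: calibrate_osd osd.21 [runs]  -- both arguments are examples.
    osd=$1
    runs=${2:-3}
    avg=$(for i in $(seq "$runs"); do
        ceph tell "$osd" cache drop >/dev/null
        ceph tell "$osd" bench 12288000 4096 4194304 100 | jq -r '.iops'
    done | bench_avg)
    echo "setting osd_mclock_max_capacity_iops_hdd=$avg for $osd"
    ceph config set "$osd" osd_mclock_max_capacity_iops_hdd "$avg"
}
```

For example, `calibrate_osd osd.21 5` would average five runs. But given the spread in my three results above, I am not convinced a simple average is meaningful.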

How can I obtain reliable values for the osd_mclock_max_capacity_iops_[hdd|ssd] options?

Thank you for your help,

Rafael


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io