Are you a victim of bluefs_buffered_io=false: 
https://www.mail-archive.com/ceph-users@ceph.io/msg05550.html ?
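
A quick way to check what your OSDs are actually running with, and to flip the setting at runtime if needed (a rough sketch only; osd.0 is just an example id, and depending on the release the OSDs may need a restart for the change to take effect):

  # show the value a specific OSD is currently running with
  ceph config show osd.0 bluefs_buffered_io

  # enable it for all OSDs via the monitors
  ceph config set osd bluefs_buffered_io true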

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Kamil Szczygieł <ka...@szczygiel.io>
Sent: 27 October 2020 21:39:22
To: ceph-users@ceph.io
Subject: [ceph-users] Very high read IO during backfilling

Hi,

We're running Octopus with 3 control plane nodes (12 cores, 64 GB memory
each) running mon, mds and mgr, and 4 data nodes (12 cores, 256 GB
memory, 13x10TB HDDs each). We've increased the number of PGs in our pool,
which resulted in all OSDs going crazy and constantly reading around
900 MB/s on average (based on iotop).

This has resulted in slow ops and very low recovery speed. Any tips on how to
handle this kind of situation? We have osd_recovery_sleep_hdd set to 0.2,
osd_recovery_max_active set to 5 and osd_max_backfills set to 4. Some OSDs are
constantly reporting slow ops, and iowait on the machines sits at 70-80%.
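
For reference, these settings can be adjusted at runtime roughly as follows (a sketch only; the numbers are the current values mentioned above, and lowering osd_max_backfills / osd_recovery_max_active is one way to reduce the backfill read load at the cost of slower recovery):

  # apply backfill/recovery throttles to all OSDs via the monitors
  ceph config set osd osd_recovery_sleep_hdd 0.2
  ceph config set osd osd_recovery_max_active 5
  ceph config set osd osd_max_backfills 4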
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
