Hello Iain,

> Does anyone have any ideas of what could be the issue here or anywhere we
> can check what is going on??
You could be hitting the slow backfill/recovery issue with the
mclock_scheduler.
Could you please provide the output of the following commands?
1. ceph versions
2. ceph config get
Hi Sridhar,
Thanks for the response. I have added the output you requested below; I have
attached the output from the last command in a file, as it was rather long. We
did try setting high_recovery_ops, but it didn't seem to have any visible effect.
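For anyone following along, the high_recovery_ops mClock profile is normally applied cluster-wide with a command along these lines (a sketch; osd.0 below is just an example daemon ID):

```shell
# Switch all OSDs to the high_recovery_ops mClock profile,
# which prioritizes recovery/backfill traffic over client I/O.
ceph config set osd osd_mclock_profile high_recovery_ops

# Confirm the effective profile on one OSD (osd.0 is an example ID).
ceph config show osd.0 osd_mclock_profile
```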
root@gb4-li-cephgw-001 ~ # ceph versions
{
"
To help complete the recovery, you can temporarily try disabling scrub and
deep-scrub operations by running:

ceph osd set noscrub
ceph osd set nodeep-scrub

This should help speed up the recovery process. Once the recovery is done, you
can unset the above scrub flags and revert the mClock profile.
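Once recovery completes, the steps above can be reversed; a sketch, assuming the profile was changed with `ceph config set` rather than per-daemon overrides:

```shell
# Re-enable regular and deep scrubbing.
ceph osd unset noscrub
ceph osd unset nodeep-scrub

# Drop the override so OSDs fall back to the default mClock profile.
ceph config rm osd osd_mclock_profile
```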
Hi,

Please take a look at the following thread:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PWHG6QJ6N2TJEYD2U4AXJAJ23CRPJG4E/#7ZMBM23GXYFIGY52ZWJDY5NUSYSDSYL6

In short, the value for "osd_mclock_cost_per_byte_usec_hdd" isn't correct. With the release of 17.2.7 this option will be
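To see what your OSDs are currently using for that option, something like the following should work on a running cluster (osd.0 is just an example daemon ID):

```shell
# Show the effective value of the mClock cost option on one OSD.
ceph config show osd.0 osd_mclock_cost_per_byte_usec_hdd
```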