I received a few suggestions and resolved my issue.

Anthony D'Atri suggested mclock (newer than my Nautilus release), adding 
“--osd_recovery_max_single_start 1” (which didn’t seem to take), setting 
“osd_op_queue_cut_off=high” (which I didn’t get around to testing), and 
pgremapper (from GitHub).
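
For anyone following along, here’s roughly how those suggestions translate into 
commands.  Treat this as a sketch rather than gospel: the values are the ones 
suggested to me, injectargs didn’t seem to take for everything, and 
osd_op_queue_cut_off may need an OSD restart to actually apply.

sudo ceph tell osd.\* injectargs '--osd_recovery_max_single_start 1'
sudo ceph config set osd osd_recovery_max_single_start 1   # persists in the mon config DB (Mimic and later)
sudo ceph config set osd osd_op_queue_cut_off high         # may require restarting OSDs to take effect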

Pgremapper did the trick, cancelling the backfill that had been kicked off by an 
unfortunate OSD name-changing sequence.  Big winner; it achieved EXACTLY what I 
needed, which was to undo an unwanted recalculation of placement groups.

Before: 310842802/17308319325 objects misplaced (1.796%)
Ran: pgremapper cancel-backfill --yes
After: 421709/17308356309 objects misplaced (0.002%)

The “before” scenario was causing over 10 GiB/s of backfill traffic.  The 
“after” scenario was a very cool 300-400 MiB/s, entirely within the realm of 
sanity.  That matters because the cluster is temporarily split between two 
datacenters while being physically lifted and shifted over a period of a month.
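
In case it helps the next person, the rough sequence is below (this is my 
reading of the pgremapper README, so verify against the current docs before 
relying on it).  The norebalance flag is my own precaution rather than 
something the tool requires:

sudo ceph osd set norebalance     # precaution: pause rebalancing while the upmaps are written
pgremapper cancel-backfill --yes  # write upmap entries pinning misplaced PGs to their current OSDs
sudo ceph osd unset norebalance   # let the small remaining backfill proceed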

Alex Gorbachev also suggested setting osd_recovery_sleep.  That was probably 
the solution I was looking for in the first place to throttle backfill 
operations, and I’ll be keeping it in my toolbox as well.
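
For the record, a minimal sketch of what that might look like; the 0.5s value 
is just illustrative, and depending on the release the _hdd/_ssd/_hybrid 
variants of the option may be what actually applies:

sudo ceph tell osd.\* injectargs '--osd_recovery_sleep 0.5'   # seconds to sleep between recovery/backfill ops; 0 disables the sleep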

As always, I’m HUGELY appreciative of the community response.  I learned a lot 
in the process, had an outage-inducing scenario rectified very quickly, and got 
back to work.  Thanks so much!  Happy to answer any follow-up questions and 
return the favor when I can.

From: Rice, Christian <cr...@pandora.com>
Date: Wednesday, March 8, 2023 at 3:57 PM
To: ceph-users <ceph-users@ceph.io>
Subject: [EXTERNAL] [ceph-users] Trying to throttle global backfill
I have a large number of misplaced objects, and I already have all the relevant 
OSD settings at “1”:

sudo ceph tell osd.\* injectargs '--osd_max_backfills=1 
--osd_recovery_max_active=1 --osd_recovery_op_priority=1'


How can I slow it down even more?  The cluster is large enough that the 
backfill is impacting other network traffic 😉
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io