I'm appreciative of the community response. I learned a lot
in the process, had an outage-inducing scenario rectified very quickly, and got
back to work. Thanks so much! Happy to answer any followup questions and
return the favor when I can.
From: Rice, Christian
Date: Wednesday, March 8, 2023 at 3:57 PM
I have a large number of misplaced objects, and I have all osd settings to “1”
already:
sudo ceph tell osd.\* injectargs '--osd_max_backfills=1
--osd_recovery_max_active=1 --osd_recovery_op_priority=1'
How can I slow it down even more? The cluster is very large, and the recovery is
impacting other network traffic.
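
Two knobs beyond the flags above that may slow things down further are the per-OSD
recovery sleeps and the cluster-wide nobackfill/norebalance/norecover flags. A rough
sketch (the sleep values are only illustrative, not a recommendation):

# add a small sleep between recovery/backfill ops on each OSD
sudo ceph tell osd.\* injectargs '--osd_recovery_sleep_hdd=0.5 --osd_recovery_sleep_ssd=0.1'

# or pause data movement entirely while investigating; remember to unset afterwards
sudo ceph osd set nobackfill
sudo ceph osd set norebalance
sudo ceph osd set norecover
# later, resume:
sudo ceph osd unset nobackfill
sudo ceph osd unset norebalance
sudo ceph osd unset norecover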
name and starting it with the new name.
> You only need to keep the node's ID in the crushmap!
>
> Regards
> Manuel
>
>
> On Mon, 13 Feb 2023 22:22:35 +
> "Rice, Christian" wrote:
>
>> Can anyone please point me at a doc that explains the most efficient procedure
>> to rename a ceph node WITHOUT causing a massive misplaced objects churn?
Can anyone please point me at a doc that explains the most efficient procedure
to rename a ceph node WITHOUT causing a massive misplaced objects churn?
When my node came up with a new name, it properly joined the cluster and owned
the OSDs, but the original node with no devices remained. I expe
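
Following Manuel's point about keeping the node's ID in the crushmap, a rough sketch
of renaming the host bucket in place (old-hostname/new-hostname are placeholders):

# list host buckets and their CRUSH IDs first
ceph osd tree
# rename the host bucket; its ID and the OSDs under it are kept, so no data should move
ceph osd crush rename-bucket old-hostname new-hostname
# if an empty bucket under the old name is left behind, it can be removed
ceph osd crush rm old-hostname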
We had issues with slow ops on SSD AND NVMe; mostly fixed by raising aio-max-nr
from 64K to 1M, e.g. "fs.aio-max-nr=1048576", if I remember correctly.
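
For reference, a sketch of applying and persisting that limit (the sysctl.d file name
is just an example):

# check the current value, then raise it at runtime
sysctl fs.aio-max-nr
sudo sysctl -w fs.aio-max-nr=1048576
# persist across reboots
echo 'fs.aio-max-nr = 1048576' | sudo tee /etc/sysctl.d/90-ceph-aio.conf
sudo sysctl --system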
On 3/29/22, 2:13 PM, "Alex Closs" wrote:
Hey folks,
We have a 16.2.7 cephadm cluster that's had slow ops and several
(constantly changin