I'm going to assume that ALL of your pools are replicated with size 3, since
you didn't provide that info (`ceph osd dump | grep pool` would show it), and
that all but the *hdd pools are on SSDs. Let me know if that isn't the case.
With that assumption, I make your PG ratio to be ~57, which is way too low.
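For reference, here is a minimal sketch of how that PG-per-OSD ratio is estimated. The pool names, pg_num values, and OSD count below are made-up placeholders, since the actual figures weren't posted; the real inputs come from `ceph osd dump | grep pool` and `ceph osd stat`.

```python
# Hypothetical pool layout -- substitute your own values from
# `ceph osd dump | grep pool` (pg_num, size) and `ceph osd stat` (OSD count).
pools = {
    "rbd":         (512, 3),  # pool -> (pg_num, replica size)
    "cephfs_data": (256, 3),
}
num_osds = 40

# Each PG lands on `size` OSDs, so the per-OSD ratio is
# sum(pg_num * size) over all pools, divided by the OSD count.
pg_ratio = sum(pg * size for pg, size in pools.values()) / num_osds
print(f"PGs per OSD: {pg_ratio:.1f}")  # a common target is roughly 100-200
```

With too few PGs per OSD, data and scrub work are unevenly distributed, which is one reason a handful of OSDs can show disproportionate latency.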
Hello Community
Currently, I operate a Ceph cluster running Octopus (15.2.7),
installed through Ansible. The challenge I'm encountering is that, during
scrubbing, OSD latency spikes to 300-600 ms, resulting in sluggish
performance for all VMs.
Additionally, some OSDs fail during the