On 9/23/25 13:45, Niklas Hambüchen wrote:
I'm using 3x replication, with the DC (building) as the failure domain.
Currently each DC has 1-2 hosts, and each host has ~10 OSDs.

watch -n 5 ceph pg ls remapped

Thanks, I'll try that next time it happens!

Hmm, we have similar (but larger) setups. We don't have misplaced objects after a node reboot, only degraded ones. Note that we have an even number of hosts in each DC. I also would not expect any misplaced objects at all (as long as OSDs do not get marked "out"). So I wonder what your CRUSH rule(s) look like, and whether the DC placement CRUSH rule(s) are applied to all pools. Can you provide your crush rules?
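
(To rule out that the OSDs get marked out during the reboot: the mon down-out interval controls when down OSDs get marked out, and "ceph osd stat" shows how many OSDs are up/in at any moment. Assuming a recent release with the centralized config store, something like

ceph config get mon mon_osd_down_out_interval
watch -n 5 ceph osd stat

during the reboot should tell you; on older releases the setting may live in ceph.conf instead.)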

ceph osd crush rule dump

Example:
<snip>
"op": "chooseleaf_firstn",
"num": 0,
"type": "datacenter"
<snap>
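
To check whether the DC rule is actually applied to all pools, "ceph osd pool ls detail" prints the crush_rule id for every pool, which you can match against the rule ids in the dump above. For a single pool, something like

ceph osd pool ls detail
ceph osd pool get <pool> crush_rule

should do (replace <pool> with the pool name).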

Regards, Stefan