Hi ceph-m...@rikdvk.mailer.me,

you could probably squeeze the OSDs back in, but it does not make sense: after several months down and out, the monitors have most likely trimmed the old OSD maps those OSDs need to catch up (hence the "got 0 bytes" errors), and the cluster has long since re-replicated their data anyway.

Just wipe the disks, with dd for example, and add them back to your cluster as new OSDs.
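For reference, roughly the sequence I would use. This is only a sketch: the OSD id (17), the device (/dev/sdX) and the host name are placeholders, and the last step assumes a cephadm/orchestrator-managed cluster.

   # drop the stale OSD from the CRUSH map, auth keys and OSD map (repeat per OSD id)
   ceph osd purge 17 --yes-i-really-mean-it

   # wipe the old data, either bluntly with dd ...
   dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct

   # ... or let ceph-volume also clean up the LVM metadata
   ceph-volume lvm zap /dev/sdX --destroy

   # then hand the empty device back to the orchestrator
   ceph orch daemon add osd myhost:/dev/sdX

If you are not on cephadm, creating the OSD directly on the node with ceph-volume lvm create --data /dev/sdX does the same job.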

Best,
Malte

On 04.09.23 at 09:39, ceph-m...@rikdvk.mailer.me wrote:
Hello,

I have a ten-node cluster with about 150 OSDs. One node went down a while back, 
several months ago, and the OSDs on that node have been marked down and out ever since.

I am now in a position to return the node to the cluster, with all its OS and 
OSD disks. When I boot up the now-working node, the OSDs do not start.

Essentially, it seems to complain with "fail[ing] to load OSD map for [various 
epoch]s, got 0 bytes".

I'm guessing the OSDs' on-disk maps are so old that they can't get back into the 
cluster?

My questions are whether it's possible or worth it to try to squeeze these OSDs 
back in, or whether I should just replace them. And if I should just replace them, 
what's the best way? Manually remove [1] and recreate? Replace [2]? Purge in the dashboard?

[1] 
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#removing-osds-manual
[2] 
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#replacing-an-osd

Many thanks!

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io