Hello list,
We have a Ceph cluster with two management nodes and six data nodes. Each data
node has 28 HDDs. One disk recently failed in one of the nodes, corresponding
to osd.2. To replace the disk, we took osd.2 out, stopped it, and after a few
days removed it, basically:
ceph osd out osd.2
ceph osd ok-to-stop osd.2
Once it reported OK to stop:
ceph orch daemon stop osd.2
Once stopped:
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2
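(Side note: I believe `ceph osd purge osd.2 --yes-i-really-mean-it` would have
combined those last three steps into one, but we ran them individually.)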
Checking `ceph osd tree`, `ceph orch ps`, and `ceph -s` all confirmed that
the OSD was gone. We then proceeded to physically replace the failed drive on
the node. It was /dev/sdac before, and after replacing it (hot swap) the system
identified it as /dev/sdah.
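The rename itself seems expected, since /dev/sdXX names are not stable across
hot swaps; the persistent identifiers live under /dev/disk/by-id/ and
/dev/disk/by-path/, so a sanity check on the new name is something like:
# ls -l /dev/disk/by-id/ | grep sdah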
I zapped it on the node with `sgdisk --zap-all /dev/sdah` and, back on the
management node, I could see that the disk was now showing up and marked as
available with `ceph orch device ls`.
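In hindsight, one thing I am not sure about: `sgdisk --zap-all` only wipes the
GPT/MBR structures, while the orchestrator's own zap also destroys any LVM
state on the device via ceph-volume. I suppose the equivalent from the
management node would have been something like:
ceph orch device zap node-osd1 /dev/sdah --force
but at that point the disk was showing as available anyway.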
Then I proceeded to add a new OSD on that disk with `ceph orch daemon add osd
node-osd1:/dev/sdah`, which failed with:
Created no osd(s) on host node-osd1; already created?
This was rather strange: the total OSD count was still one short, and no new
OSD was showing up in `ceph osd tree`. What was even odder is that the disk
that had been showing as available in `ceph orch device ls` now shows as *not*
available, and looking at lsblk's output on the node it seems the disk was
populated by Ceph:
# lsblk /dev/sdah
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdah                                                                                                   66:16   0 16.4T  0 disk
└─ceph--157c7c44--b519--4f6d--a54b--ecd466cf81d0-osd--block--10ada574--b99c--466d--8ad6--2c97d17d1f66 253:59   0 16.4T  0 lvm
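My guess is that the failed add got far enough for ceph-volume to prepare an
LVM volume on the disk, but the OSD daemon itself was never created or
registered, leaving the LV above orphaned. Assuming that is what happened, the
plan I am considering is: first check on the node what ceph-volume thinks is
there:
# cephadm ceph-volume lvm list
and then, from the management node, wipe the leftover LVM state and retry the
add:
ceph orch device zap node-osd1 /dev/sdah --force
ceph orch daemon add osd node-osd1:/dev/sdah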
Does that sound sane? Any other hints on how to get the OSD added back on this
disk?
Thank you for any suggestions!
-
Gustavo