Hi!

An OSD failed in our 16.2.15 cluster. I prepared it for removal and ran
`ceph orch daemon rm osd.19 --force`. That didn't work as expected, and
osd.19 is still in the CRUSH map:

-10         122.66965              host ceph02
 19           1.00000                  osd.19     down         0  1.00000

The OSD has been partially cleaned up on the host, but both the block
and block.db LVs still exist.

If I try to remove the OSD again, I get an error:

# ceph orch daemon rm osd.19 --force
Error EINVAL: Unable to find daemon(s) ['osd.19']

How can I clean this OSD up and get rid of it completely, including its
CRUSH map entry? I would appreciate any suggestions or pointers.
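
For context, here is the manual cleanup I'm considering if there is no
orchestrator-native way; I'm not sure these steps are safe after a partial
`ceph orch` removal, so please correct me if any of this is wrong:

```shell
# Sketch only -- assumes the classic non-orchestrator removal commands
# still apply on a cephadm-managed 16.2.15 (Pacific) cluster.

# Remove the OSD from the CRUSH map, delete its auth key, and free the id:
ceph osd crush remove osd.19
ceph auth del osd.19
ceph osd rm osd.19

# On ceph02, wipe the leftover block and block.db LVs
# (the --osd-id value matches the failed OSD above):
ceph-volume lvm zap --destroy --osd-id 19
```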

Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io