Hi ceph users,
I've seen this happen a couple of times and have been meaning to ask the
group about it.

Sometimes I get a failed block device and have to replace it.  My normal
process is (rough commands sketched after the list):
* stop the osd process
* remove the osd from crush map
* rm -rf /var/lib/ceph/osd/<cluster>-<osd>/*
* run mkfs
* start osd process
* add osd to crush map
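
For reference, this is roughly what those steps look like for me; osd id,
weight, and host are placeholders, and the exact mkfs step depends on the
OSD backend, so take this as a sketch rather than a recipe:

    # stop the osd process
    systemctl stop ceph-osd@<id>

    # remove the osd from the crush map
    ceph osd crush remove osd.<id>

    # wipe the data dir and re-run mkfs (filestore-style shown here;
    # with bluestore I'd normally redo this via ceph-volume instead)
    rm -rf /var/lib/ceph/osd/<cluster>-<id>/*
    ceph-osd -i <id> --mkfs --mkkey

    # start it back up and re-add it to the crush map
    systemctl start ceph-osd@<id>
    ceph osd crush add osd.<id> <weight> host=<hostname>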

Sometimes the osd will get "stuck" and doesn't want to flip to up/in.  I
have to actually rm the osd id from ceph itself, recreate the id, and then
do all of the above again (sketched below).
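
The unstick step is something like this (again, the id is a placeholder):

    # remove the stale osd id and its auth key, then allocate the id again
    ceph osd rm osd.<id>
    ceph auth del osd.<id>
    ceph osd create   # or ceph osd new <uuid> on newer releases

After that I repeat the replacement steps above and the osd comes up normally.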

Does anyone know why this happens?  I'm on Reef 18.2, but I've seen this a
lot over the years.

Thanks,
/Chris