I have an OSD that is throwing sense errors - it's at its end of life and 
needs to be replaced.
The server is in the datacentre and I won't get there for a few weeks, so I've 
stopped the service (systemctl stop ceph-osd@208) and let the cluster 
rebalance; all is well.
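
For reference, this is roughly the sequence I used (a sketch from memory, not 
an exact transcript):

    # stop the failing OSD; it gets marked out after the usual down/out
    # interval and the cluster rebalances
    systemctl stop ceph-osd@208

    # watch recovery until the cluster is healthy again
    ceph -s
    ceph osd tree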

My thinking is that if, for some reason, the host that OSD 208 resides on were 
to reboot, that OSD would start and rejoin the cluster.
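
(As far as I can tell the unit is still set to start at boot, e.g.:

    # check whether the OSD unit would start again after a reboot
    systemctl is-enabled ceph-osd@208

so a reboot would bring it back unless I do something about it.)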

So I'd like to prevent this OSD from ever starting again, given that I'm not 
able to physically remove it from the server yet.

I was thinking that deleting its key from the auth list might work, i.e. a 
ceph osd purge 208.
Then when the service tries to start, it'll fail with an auth error.
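
Something like this (my understanding from the docs is that purge combines 
crush remove, auth del and osd rm - treat this as a sketch, I haven't run it 
yet):

    # remove osd.208 from the CRUSH map, the auth database and the OSD map
    ceph osd purge 208 --yes-i-really-mean-it

    # the key should be gone afterwards, so this should return an error
    ceph auth get osd.208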

Any other suggestions?

Cheers,
Cory