I am quite sure that this case is already covered by cephadm. A few months ago, after a major rework of ceph-volume, I tested it. I don't have any links right now, but I had a lab environment with multiple OSDs per node with RocksDB on SSD, and after wiping both the HDD and the DB LV, cephadm automatically redeployed the OSD according to my drive group file.
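For reference, a drive group file of the kind mentioned above is an OSD service spec that cephadm applies declaratively; a minimal sketch might look like this (the service id and the `rotational` filters are illustrative assumptions, not taken from the original setup):

```yaml
# Hypothetical cephadm OSD service spec ("drive group").
# With a spec like this applied, cephadm will reprovision any matching
# devices that become available again, e.g. after a wipe.
service_type: osd
service_id: osd_spec_default        # name is an assumption
placement:
  host_pattern: '*'                 # apply on all hosts
spec:
  data_devices:
    rotational: 1                   # HDDs carry the OSD data
  db_devices:
    rotational: 0                   # SSDs carry the RocksDB DB LVs
```

Applied with `ceph orch apply -i <spec-file>`, such a spec is what lets cephadm redeploy the OSD automatically once the wiped devices show up as available.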

Quoting Stefan Kooman <ste...@bit.nl>:

On 3/19/21 7:47 PM, Philip Brown wrote:

I see.


I don't think it works when 7/8 devices are already configured and the SSD is already mostly sliced.

OK. If it is a test cluster, you might just blow it all away. By doing this you are simulating an SSD failure taking down all of its HDDs with it. It sure isn't pretty. I would say the situation you ended up with is not a corner case by any means. Beyond the suggestion above, I am afraid I would really need to set up a test cluster with cephadm myself to help you further at this point.
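One way to "blow it all away" at the device level is to zap the affected drives so the orchestrator sees them as empty again; a rough sketch, with hostname, OSD id, and device paths as placeholders only:

```
# Remove the dead OSDs and wipe their devices so cephadm can
# reprovision them (ids and paths below are hypothetical):
ceph orch osd rm 7 --zap
ceph orch device zap myhost /dev/sdc --force

# Or, directly on the node, wipe a device including its LVs:
ceph-volume lvm zap --destroy /dev/sdc
```

Note the `--zap` flag on `ceph orch osd rm` may not be available on older releases; on those, zapping the device separately after the removal completes should achieve the same result.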

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


