Good evening,

On 7/21/21 10:44 AM, Lokendra Rathour wrote:
Hello Everyone,

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/operations_guide/index#handling-a-node-failure
Refer to the section "Replacing the node, reinstalling the operating
system, and using the Ceph OSD disks from the failed node."

But somehow these steps are not clear, and we are not able to retrieve the
Ceph OSDs from the old node.

When I reinstalled all my nodes it was really just a matter of running `ceph-volume lvm activate --all`; it picked up every OSD it could identify on the disks and started them again.

That is the BlueStore-specific command, though.
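
For reference, the rough sequence on a reinstalled node was something like the following (just a sketch, assuming the OSD logical volumes on the old disks are intact and ceph-ansible has already put the cluster's ceph.conf and keyrings back in place):

    # show which OSD logical volumes ceph-volume can still detect on the disks
    ceph-volume lvm list

    # activate everything it found: this mounts the OSD tmpfs directories,
    # enables the ceph-osd systemd units and starts the daemons
    ceph-volume lvm activate --all

    # verify that the OSDs have rejoined the cluster
    ceph osd tree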

Ceph Version: Octopus 15.2.7
Ceph-Ansible: 5.0.x
OS: CentOS 8.3

Please don't bomb that many mailing lists.

Best regards
Thore
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
