Yes, the LVs are not removed automatically; you need to free up the VG first. There are a couple of ways to do so, for example remotely:

pacific1:~ # ceph orch device zap pacific4 /dev/vdb --force
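
Afterwards the device should show up as available again; you can check that with something like (using the same pacific4 host from the example above):

pacific1:~ # ceph orch device ls pacific4 --refresh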

or directly on the host with:

pacific1:~ # cephadm ceph-volume lvm zap --destroy /dev/<CEPH_VG>/<DB_LV>
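
If you are not sure which <CEPH_VG>/<DB_LV> to pass, you can list the LVs that ceph-volume knows about on the host first, e.g.:

pacific1:~ # cephadm ceph-volume lvm list
pacific1:~ # lvs -o lv_name,vg_name,lv_size

The ceph-volume managed LVs typically show up with names like ceph-<UUID> for the VG and osd-db-<UUID> for the DB LV, and 'lvm list' shows which OSD they belong to, so that should tell you which DB LV was assigned to the failed OSD.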



Quoting Kai Stian Olstad <ceph+l...@olstad.com>:

On 26.05.2021 08:22, Eugen Block wrote:
Hi,

did you wipe the LV on the SSD that was assigned to the failed HDD? I just did that on a fresh Pacific install successfully; a couple of weeks ago it also worked on an Octopus cluster.

No, I did not wipe the LV.
Not sure what you mean by wipe, so I tried overwriting the LV with /dev/zero, but that did not solve it.
So I guess by wipe you mean deleting the LV with lvremove?


--
Kai Stian Olstad


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
