Side note: I was initially unable to recover manually because I was restarting the wrong ceph-volume service:
root@cephtest:~# systemctl -a | grep ceph-volume
ceph-volume@bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service        loaded activating start Ceph Volume activation: bbfc0235-f8fd-458b-9c3d-21803b72f4bc
ceph-volume@lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service  loaded inactive   dead  Ceph Volume activation: lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc

i.e. there are two units, and it is the lvm* one that needs restarting (I tried restarting the other, which did not work).

** Changed in: charm-ceph-osd
     Assignee: dongdong tao (taodd) => (unassigned)

** Changed in: charm-ceph-osd
       Status: Triaged => Invalid

** Changed in: charm-ceph-osd
   Importance: High => Undecided

** Changed in: ceph (Ubuntu)
   Importance: Undecided => High

** Changed in: ceph (Ubuntu)
     Assignee: (unassigned) => dongdong tao (taodd)

** Also affects: cloud-archive
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1804261

Title:
  Ceph OSD units requires reboot if they boot before vault (and if not
  unsealed with 150s)

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-osd/+bug/1804261/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
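The unit-selection step described in the note above can be sketched as a small shell snippet. This is only illustrative: the two unit names are taken verbatim from the `systemctl -a` output in this comment, and the final `systemctl restart` line is echoed rather than executed, since running it requires a live Ceph host.

```shell
# The two activation units as they appear in `systemctl -a` on the
# affected host (UUID copied from the log above).
units="ceph-volume@bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service
ceph-volume@lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service"

# Select the lvm-* unit -- that is the one that must be restarted;
# restarting the plain-UUID unit does not re-activate the OSD.
unit=$(printf '%s\n' "$units" | grep '@lvm-')

# On a real host this would be: systemctl restart "$unit"
echo "systemctl restart $unit"
```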