Hi, this is somewhat embarrassing, but one of my colleagues fat-fingered an
ansible playbook and managed to wipe out /etc/systemd/system on all of our
Ceph hosts.

The cluster is running Nautilus on Ubuntu 18.04 and was deployed with
ceph-ansible. One of our near-future tasks is to upgrade to the latest Ceph
and move to cephadm, so I'm not looking forward to redoing the entire
cluster with ceph-ansible.

Normally I'd put on the work boots and start reinstalling a broken host
from scratch, but I'm hoping there's a faster way.

Is there any way to regenerate the Ceph-owned contents of
/etc/systemd/system?
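
For what it's worth, my working theory is that the unit templates shipped by
the ceph packages (ceph-mon@.service, ceph-mgr@.service, ceph-osd@.service,
etc.) still live under /lib/systemd/system, so what actually got lost is
mostly the enablement symlinks. If that's right, I'm hoping something along
these lines would recreate them; this is just a rough sketch, and the
mon/mgr instance names are a guess based on our short hostnames:

  # assuming a package-based (non-containerized) install
  systemctl daemon-reload
  systemctl enable --now ceph-mon@$(hostname -s)   # mon name assumed = short hostname
  systemctl enable --now ceph-mgr@$(hostname -s)
  ceph-volume lvm activate --all                   # hopefully re-enables the ceph-osd@ units
  systemctl enable ceph.target

The OSD part is the piece I'm least sure about, so I'd rather hear from
someone who has been through this before I start poking at production hosts.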

-- 
Flemming Frandsen - YAPH - http://osaa.dk - http://dren.dk/