Do you have access to /var/log/ceph/ceph-volume-systemd.log from after
the latest reboot? That should give us some details such as:

"[2019-05-31 20:43:44,334][systemd][WARNING] failed to find db volume,
retries left: 17"

or similar for wal volume.
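A quick way to pull those warnings out of the log would be something
like this (the log path is the one above; the grep pattern is just my
guess at matching both the db and wal messages, shown here against a
sample copy of the log so the snippet is self-contained):

```shell
# Sketch: count the "failed to find db/wal volume" retry warnings.
# Using a temp copy of the log purely for illustration; on a real host
# you would point grep at /var/log/ceph/ceph-volume-systemd.log instead.
log=$(mktemp)
cat > "$log" <<'EOF'
[2019-05-31 20:43:44,334][systemd][WARNING] failed to find db volume, retries left: 17
[2019-05-31 20:43:49,340][systemd][INFO] volume appeared, activating
EOF
grep -cE 'failed to find (db|wal) volume' "$log"
rm -f "$log"
```

If the count is non-zero and the "retries left" values run down to 0,
the unit gave up before the volume appeared.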

If you see that the retries have been exceeded in your case, you can tune
them (the new retry loops use the same environment variables):

http://docs.ceph.com/docs/mimic/ceph-volume/systemd/#failure-and-retries
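One way to apply those would be a systemd drop-in for the
ceph-volume units; a rough sketch, using the CEPH_VOLUME_SYSTEMD_TRIES
and CEPH_VOLUME_SYSTEMD_INTERVAL variables from the docs above (the
drop-in path and the values 60/10 are just examples, not recommendations):

```ini
# /etc/systemd/system/ceph-volume@.service.d/retries.conf  (example path)
[Service]
Environment=CEPH_VOLUME_SYSTEMD_TRIES=60
Environment=CEPH_VOLUME_SYSTEMD_INTERVAL=10
```

followed by a `systemctl daemon-reload` so the override takes effect on
the next activation.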

As for the pvscan issue, I'm not sure whether that is a Ceph issue.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1828617

Title:
  Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1828617/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
