On Thu, Apr 5, 2018 at 6:33 AM, Ansgar Jazdzewski wrote:
> hi folks,
>
> i just figured out that my OSDs did not start because the filesystem
> is not mounted.
Would love to see some ceph-volume logs (both ceph-volume.log and
ceph-volume-systemd.log) because we do
Hi all,
this sounds a lot like my issue and quick solution here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024858.html
It seems http://tracker.ceph.com/issues/23067 is already under review, so maybe
that will be in a future release,
shortening the bash-script and
`systemctl list-dependencies ceph.target`
Do you have ceph-osd.target listed underneath it, with all of your OSDs
under that? My guess is that you just need to enable the units so systemd
manages them: `systemctl enable ceph-osd@${osd}.service`, where $osd is the
OSD number to be enabled. For
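The enable step above can be scripted across all OSD ids on a host. A minimal sketch, assuming the `ceph-osd@<id>` unit-name convention mentioned above; the hardcoded id list is purely illustrative (on a real node it could come from `ceph-volume lvm list`), and the `systemctl` call is echoed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: enable the systemd unit for every OSD id on this host, so systemd
# starts them at boot. osd_ids is a hypothetical, hand-written list here.
osd_ids=(0 1 2)            # assumption: replace with this host's OSD ids
for osd in "${osd_ids[@]}"; do
    unit="ceph-osd@${osd}.service"
    echo "would run: systemctl enable ${unit}"   # drop echo to actually enable
done
```

Afterwards `systemctl list-dependencies ceph.target` should show each ceph-osd@N.service under ceph-osd.target.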
Hey Ansgar,
we have a similar "problem": in our case all servers are wiped on
reboot, as they boot their operating system from the network into
initramfs.
While the OS configuration is done with cdist [0], we consider ceph osds
more dynamic data and just re-initialise all osds on boot using the
hi folks,
i just figured out that my OSDs did not start because the filesystem
is not mounted.
So I wrote a script to hack my way around it:
#! /usr/bin/env bash
DATA=( $(ceph-volume lvm list | grep -e 'osd id\|osd fsid' | awk '{print $3}' | tr '\n' ' ') )
OSDS=$(( ${#DATA[@]} / 2 ))
for
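The message is cut off at the loop, but DATA holds one id/fsid value pair per OSD, so the loop presumably walks those pairs and activates each OSD. A hedged reconstruction of that pairing logic, with the DATA array mocked so it can run standalone and `echo` standing in for the activation command (on a real node DATA would come from the pipeline above, the pair ordering should be checked against your `ceph-volume lvm list` output, and the echoed line would be run directly):

```shell
#!/usr/bin/env bash
# Sketch of the truncated loop: DATA alternates id and fsid values, one pair
# per OSD. Mock values here; the real array is built by the pipeline above.
DATA=( 0 aaaa-bbbb 1 cccc-dddd )   # mock: id fsid id fsid
OSDS=$(( ${#DATA[@]} / 2 ))
for OSD in $(seq 0 $(( OSDS - 1 ))); do
    # drop the echo to actually activate each OSD
    echo "ceph-volume lvm activate ${DATA[$(( OSD * 2 ))]} ${DATA[$(( OSD * 2 + 1 ))]}"
done
```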