Follow-up question:

What is the intended purpose of the ceph-disk@.service unit? I can run

systemctl start ceph-disk@/dev/sdr1

but I can't 'enable' it like the ceph-osd@.service unit, so why is it there?
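(For anyone wondering how the `/dev/sdr1` argument becomes a unit instance: systemd escapes the path into the instance name, which is what `systemd-escape --path` prints. A simplified sketch of that escaping for plain device paths like this one; real systemd escaping additionally encodes special characters as \xNN, which is omitted here:)

```shell
# Simplified illustration of systemd's path escaping for plain device
# paths; the real tool is `systemd-escape --path`. Handles only the
# common case: drop the leading slash, turn remaining slashes into dashes.
escape_path() {
    printf '%s\n' "$1" | sed 's|^/||; s|/|-|g'
}

escape_path /dev/sdr1    # prints: dev-sdr1
# so starting ceph-disk@/dev/sdr1 would correspond to the
# instance ceph-disk@dev-sdr1.service
```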

Best regards
Karsten

2016-04-27 9:33 GMT+02:00 Karsten Heymann <[email protected]>:
> Hi!
>
> the last days, I updated my jessie evaluation cluster to jewel and now
> osds are not started automatically after reboot because they are not
> mounted. This is the output of ceph-disk list after boot:
>
> /dev/sdh :
>  /dev/sdh1 ceph data, prepared, cluster ceph, osd.47, journal /dev/sde1
> /dev/sdi :
>  /dev/sdi1 ceph data, prepared, cluster ceph, osd.48, journal /dev/sde2
> /dev/sdj :
>  /dev/sdj1 ceph data, prepared, cluster ceph, osd.49, journal /dev/sde3
>
> and so on.
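As an aside for anyone scripting around this state: the partitions stuck in "prepared" can be pulled out of `ceph-disk list` output with a one-line filter (an illustrative helper, not part of Ceph):

```shell
# Filter `ceph-disk list` output for partitions still in the "prepared"
# state, i.e. data partitions that were never activated/mounted at boot.
list_prepared() {
    awk '/ceph data, prepared/ { print $1 }'
}

# Example, fed with the output quoted above:
list_prepared <<'EOF'
/dev/sdh :
 /dev/sdh1 ceph data, prepared, cluster ceph, osd.47, journal /dev/sde1
/dev/sdi :
 /dev/sdi1 ceph data, prepared, cluster ceph, osd.48, journal /dev/sde2
EOF
# prints:
# /dev/sdh1
# /dev/sdi1
```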
>
> systemd tried to start the units:
>
> # systemctl | grep osd
> ● ceph-osd@47.service      loaded failed failed    Ceph object storage daemon
> ● ceph-osd@48.service      loaded failed failed    Ceph object storage daemon
> ● ceph-osd@49.service      loaded failed failed    Ceph object storage daemon
>
> # systemctl status ceph-osd@47.service
> ● ceph-osd@47.service - Ceph object storage daemon
>    Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled)
>    Active: failed (Result: start-limit) since Wed 2016-04-27 08:50:07 CEST; 21min ago
>   Process: 3139 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
>   Process: 2682 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=0/SUCCESS)
>  Main PID: 3139 (code=exited, status=1/FAILURE)
>
> Apr 27 08:50:06 ceph-cap1-02 systemd[1]: Unit ceph-osd@47.service entered failed state.
> Apr 27 08:50:07 ceph-cap1-02 systemd[1]: ceph-osd@47.service start request repeated too quickly, refusing to start.
> Apr 27 08:50:07 ceph-cap1-02 systemd[1]: Failed to start Ceph object storage daemon.
> Apr 27 08:50:07 ceph-cap1-02 systemd[1]: Unit ceph-osd@47.service entered failed state.
>
> Which is no surprise, as the osd is not mounted:
>
> # ls -l /var/lib/ceph/osd/ceph-47
> total 0
>
> The weird thing is running the following starts the osd:
>
> # echo add > /sys/class/block/sdr1/uevent
>
> so the udev rules to mount the osds seem to work.
>
> Any ideas on how to debug this?
>
> Best regards
> Karsten
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
