On 03/15/17 08:43, Gunwoo Gim wrote:
>  After a reboot, none of the LVM partitions show up in /dev/mapper
> (nor in /dev/dm-<dm-num> or /proc/partitions), though the whole disks
> do; I have to run 'partprobe' on the hosts every boot to make the
> partitions appear.
Maybe you need this after partprobe:

    udevadm trigger
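
If you want to limit the retrigger instead of hitting every device,
udevadm can filter by subsystem and action, something like:

    udevadm trigger --subsystem-match=block --action=add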

>
>  I've found that the udev rules never trigger, even after I removed
> the DEVTYPE check; I verified this with a udev
> line: RUN+="/bin/echo 'add /dev/$name' >> /root/log.txt"
>  I've also tried chowning all of the /dev/dm-<num> devices to
> ceph:disk, in vain. Do I have to use the udev rules even if the
> /dev/dm-<num> devices are already owned by ceph:ceph?
>
No, I think you just need them owned by ceph:ceph. Test that with
something like:

    sudo -u ceph hexdump -C /dev/dm-${number} | head

(which reads, not writes...so not a full test, but close enough)
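
By the way, the echo test in your rule can be misleading on its own:
udev executes RUN programs directly, without a shell, so the >>
redirection never happens even when the rule does match. Wrapping it in
a shell makes it a usable probe, something like:

    RUN+="/bin/sh -c 'echo add /dev/$name >> /root/log.txt'"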

And also make sure the files in /var/lib/ceph/{osd,mon,...} are owned by
ceph:ceph too. You may have a mix of root and ceph ownership, which is
easy to cause by running things as root when ceph owns some of the
files.
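
A quick way to spot mixed ownership, and to fix it wholesale (assuming
everything under /var/lib/ceph really should belong to ceph):

    find /var/lib/ceph ! -user ceph -ls
    chown -R ceph:ceph /var/lib/ceph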


And FYI, I don't like udev, and did not use ceph-deploy or ceph-disk. I
did it with a very simple init script instead:


> #!/bin/bash
> # Make sure the ceph run directory exists and is owned by ceph.
> mkdir -p /var/run/ceph
> chown ceph:ceph /var/run/ceph
> chgrp -R ceph /var/log/ceph
> # Chown the real journal devices behind the OSD journal symlinks.
> for d in /var/lib/ceph/osd/*/journal; do
>     [ -e "$d" ] || continue  # skip the unexpanded glob if no OSDs yet
>     d=$(readlink -f "$d")
>     chown ceph:ceph "$d"
> done

This works on Ubuntu 14.04 as-is, badly written init script though it
is, but I think CentOS will not accept it without the LSB tags.
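
For CentOS that would mean something like this block at the top of the
script (the Provides name here is my own invention):

    ### BEGIN INIT INFO
    # Provides:          ceph-ownership
    # Required-Start:    $local_fs
    # Required-Stop:
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Fix ownership of the ceph run dir and OSD journals
    ### END INIT INFO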

A side effect of doing it this way is that you have to run the script
again manually when replacing or adding disks, since it is not run on
hot swap the way udev rules are.


>  Thank you very much for reading.
>
> Best Regards,
> Nicholas.
>
> On Wed, Mar 15, 2017 at 1:06 AM Gunwoo Gim <wind8...@gmail.com> wrote:
>
>      Thank you very much, Peter.
>
>      I'm sorry for not clarifying the version number; it's Kraken,
>     11.2.0-1xenial.
>
>      I guess the udev rules in /lib/udev/rules.d/95-ceph-osd.rules
>     are supposed to change them, but the rules' filters don't seem to
>     match the DEVTYPE of the prepared partitions on the LVs I've got
>     on this host.
>
>      Could that be the cause of the trouble? I'd love to know a good
>     way to make this work with logical volumes; should I fix the udev
>     rule?
>
>     ~ # cat /lib/udev/rules.d/95-ceph-osd.rules | head -n 19
>     # OSD_UUID
>     ACTION=="add", SUBSYSTEM=="block", \
>       ENV{DEVTYPE}=="partition", \
>       ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
>       OWNER:="ceph", GROUP:="ceph", MODE:="660", \
>       RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
>     ACTION=="change", SUBSYSTEM=="block", \
>       ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
>       OWNER="ceph", GROUP="ceph", MODE="660"
>
>     # JOURNAL_UUID
>     ACTION=="add", SUBSYSTEM=="block", \
>       ENV{DEVTYPE}=="partition", \
>       ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
>       OWNER:="ceph", GROUP:="ceph", MODE:="660", \
>       RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
>     ACTION=="change", SUBSYSTEM=="block", \
>       ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
>       OWNER="ceph", GROUP="ceph", MODE="660"
>
>
>     ~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep
>     ID_PART_ENTRY_TYPE
>     E: ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106
>
>     ~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
>     E: DEVTYPE=disk
>
>
>     Best Regards,
>     Nicholas.
>
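
One note on the DEVTYPE question in the quote above: as your udevadm
output shows, device-mapper nodes report DEVTYPE=disk even when they
carry a partition GUID, so the ENV{DEVTYPE}=="partition" match in the
"add" rules can never fire for them. If you do want to stay on the udev
route, an untested sketch of a local override (the file name is my own
invention) would drop that match and key on the journal GUID alone:

    # /etc/udev/rules.d/96-ceph-osd-dm.rules -- hypothetical local override
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-*", \
      ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
      OWNER:="ceph", GROUP:="ceph", MODE:="660"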
