Thank you very much, Peter.

 I'm sorry for not clarifying the version number; it's Kraken,
11.2.0-1xenial.

 I guess the udev rules in this file are supposed to change the owner:
/lib/udev/rules.d/95-ceph-osd.rules
 ...but the rules' DEVTYPE filter doesn't seem to match the prepared
partitions on the LVs I've got on this host.

 Could this be the cause of the trouble? I'd love to hear a good way to
make this work with logical volumes; should I fix the udev rule?

~ # cat /lib/udev/rules.d/95-ceph-osd.rules | head -n 19
# OSD_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660", \
  RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
ACTION=="change", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER="ceph", GROUP="ceph", MODE="660"

# JOURNAL_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660", \
  RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
ACTION=="change", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER="ceph", GROUP="ceph", MODE="660"


~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep ID_PART_ENTRY_TYPE
E: ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106

~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
E: DEVTYPE=disk
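
 If loosening that filter is the right fix, I'm thinking of a local
override along these lines; a minimal sketch, assuming that matching
ID_PART_ENTRY_TYPE together with a dm-* kernel name is safe on this host,
and leaving out the ceph-disk trigger RUN line for now (the file name
96-ceph-osd-lvm.rules is my own invention, untested):

~ # cat /etc/udev/rules.d/96-ceph-osd-lvm.rules
# Hypothetical override for LVM-backed OSDs: the same partition-type UUID
# matches as the stock 95-ceph-osd.rules, but without
# ENV{DEVTYPE}=="partition", since the prepared partitions on the LVs
# report DEVTYPE=disk.
# OSD_UUID
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-*", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660"
# JOURNAL_UUID
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-*", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660"

 I'd then reload the rules and dry-run one of the dm nodes to see whether
the new rule matches and sets the owner:

~ # udevadm control --reload
~ # udevadm test /sys/block/dm-11 2>&1 | grep -i -e owner -e mode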


Best Regards,
Nicholas.

On Tue, Mar 14, 2017 at 6:37 PM Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:

> Is this Jewel? Do you have some udev rules or anything that changes the
> owner on the journal device (e.g. /dev/sdx or /dev/nvme0n1p1) to ceph:ceph?
>
>
> On 03/14/17 08:53, Gunwoo Gim wrote:
>
> I'd love some help with this; it'd be much appreciated.
>
> Best Wishes,
> Nicholas.
>
> On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8...@gmail.com> wrote:
>
>  Hello, I'm trying to deploy a Ceph filestore cluster on LVM using the
> ceph-ansible playbook. I've been fixing a couple of code blocks in
> ceph-ansible and ceph-disk/main.py and made some progress, but now I'm
> stuck again: 'ceph-disk activate' fails.
>
>  Please let me just show you the error message and the output of 'ls':
>
> ~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
>
> [...]
>
> ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
> '--mkkey', '-i', u'1', '--monmap',
> '/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data',
> '/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal',
> '/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid',
> u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring',
> '/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup',
> 'ceph'] failed : 2017-03-14 16:01:10.051537 7fdc9a025a40 -1
> filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on
> /var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
> 2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs
> failed with error -13
> 2017-03-14 16:01:10.051624 7fdc9a025a40 -1  ** ERROR: error creating empty
> object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied
>
> ~ # ls -al /var/lib/ceph/tmp
> total 8
> drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
> drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
> -rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
> -rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock
>
>
> ~ # ls -l /dev/mapper/vg-*-lv-*p*
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd1-lv--hdd1p1 ->
> ../dm-12
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd2-lv--hdd2p1 ->
> ../dm-14
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd3-lv--hdd3p1 ->
> ../dm-16
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd4-lv--hdd4p1 ->
> ../dm-18
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd5-lv--hdd5p1 ->
> ../dm-20
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd6-lv--hdd6p1 ->
> ../dm-22
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd7-lv--hdd7p1 ->
> ../dm-24
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd8-lv--hdd8p1 ->
> ../dm-26
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--hdd9-lv--hdd9p1 ->
> ../dm-28
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p1 ->
> ../dm-11
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p2 ->
> ../dm-15
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p3 ->
> ../dm-19
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p4 ->
> ../dm-23
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p5 ->
> ../dm-27
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p1 ->
> ../dm-13
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p2 ->
> ../dm-17
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p3 ->
> ../dm-21
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p4 ->
> ../dm-25
>
> ~ # ls -l /dev/dm-*
> brw-rw---- 1 root disk 252,  0 Mar 14 13:46 /dev/dm-0
> brw-rw---- 1 root disk 252,  1 Mar 14 13:46 /dev/dm-1
> brw-rw---- 1 root disk 252, 10 Mar 14 13:47 /dev/dm-10
> brw-rw---- 1 ceph ceph 252, 11 Mar 14 13:47 /dev/dm-11
> brw-rw---- 1 ceph ceph 252, 12 Mar 14 13:46 /dev/dm-12
> brw-rw---- 1 ceph ceph 252, 13 Mar 14 13:47 /dev/dm-13
> brw-rw---- 1 ceph ceph 252, 14 Mar 14 13:46 /dev/dm-14
> brw-rw---- 1 ceph ceph 252, 15 Mar 14 13:47 /dev/dm-15
> brw-rw---- 1 ceph ceph 252, 16 Mar 14 13:46 /dev/dm-16
> brw-rw---- 1 ceph ceph 252, 17 Mar 14 13:47 /dev/dm-17
> brw-rw---- 1 ceph ceph 252, 18 Mar 14 13:46 /dev/dm-18
> brw-rw---- 1 ceph ceph 252, 19 Mar 14 13:47 /dev/dm-19
> brw-rw---- 1 root disk 252,  2 Mar 14 13:46 /dev/dm-2
> brw-rw---- 1 ceph ceph 252, 20 Mar 14 13:46 /dev/dm-20
> brw-rw---- 1 ceph ceph 252, 21 Mar 14 13:47 /dev/dm-21
> brw-rw---- 1 ceph ceph 252, 22 Mar 14 13:46 /dev/dm-22
> brw-rw---- 1 ceph ceph 252, 23 Mar 14 13:47 /dev/dm-23
> brw-rw---- 1 ceph ceph 252, 24 Mar 14 13:46 /dev/dm-24
> brw-rw---- 1 ceph ceph 252, 25 Mar 14 13:47 /dev/dm-25
> brw-rw---- 1 ceph ceph 252, 26 Mar 14 13:46 /dev/dm-26
> brw-rw---- 1 ceph ceph 252, 27 Mar 14 13:47 /dev/dm-27
> brw-rw---- 1 ceph ceph 252, 28 Mar 14 13:47 /dev/dm-28
> brw-rw---- 1 root disk 252,  3 Mar 14 13:46 /dev/dm-3
> brw-rw---- 1 root disk 252,  4 Mar 14 13:46 /dev/dm-4
> brw-rw---- 1 root disk 252,  5 Mar 14 13:46 /dev/dm-5
> brw-rw---- 1 root disk 252,  6 Mar 14 13:47 /dev/dm-6
> brw-rw---- 1 root disk 252,  7 Mar 14 13:46 /dev/dm-7
> brw-rw---- 1 root disk 252,  8 Mar 14 13:46 /dev/dm-8
> brw-rw---- 1 root disk 252,  9 Mar 14 13:47 /dev/dm-9
>
>
>  Best Regards,
>  Nicholas.
>
> --
> You can find my PGP public key here: https://google.com/+DewrKim/about
>
>
>
>
>
>
> --
>
> --------------------------------------------
> Peter Maloney
> Brockmann Consult
> Max-Planck-Str. 2
> 21502 Geesthacht
> Germany
> Tel: +49 4152 889 300
> Fax: +49 4152 889 333
> E-mail: peter.malo...@brockmann-consult.de
> Internet: http://www.brockmann-consult.de
> --------------------------------------------
>
--
You can find my PGP public key here: https://google.com/+DewrKim/about
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
