I'd really appreciate any help with this.
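
In case it helps narrow things down: if the prepare step already created
the journal symlink on the data partition, a check along these lines
should show which device it resolves to and who owns that device. This is
just a sketch; /mnt/osd1 is a placeholder mount point, not a path from my
setup.

# mount the OSD data LV read-only on a scratch mount point and look at
# the journal symlink that the prepare step should have left behind
mkdir -p /mnt/osd1
mount -o ro /dev/mapper/vg--hdd1-lv--hdd1p1 /mnt/osd1
ls -l /mnt/osd1/journal
# resolve the symlink and check the ownership of its target
ls -l "$(readlink -f /mnt/osd1/journal)"
umount /mnt/osd1

If the target turns out to be owned by root:disk rather than ceph:ceph,
that would line up with the EACCES from mkjournal, since ceph-osd runs
with --setuser ceph --setgroup ceph.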

Best Wishes,
Nicholas.

On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8...@gmail.com> wrote:

>  Hello, I'm trying to deploy a Ceph filestore cluster on LVM using the
> ceph-ansible playbook. I've fixed a couple of code blocks in
> ceph-ansible and ceph-disk/main.py and made some progress, but now I'm
> stuck again: 'ceph-disk activate' fails on the OSD device.
>
>  Here is the error message, followed by the output of 'ls':
>
> ~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
> main_activate: path = /dev/mapper/vg--hdd1-lv--hdd1p1
> get_dm_uuid: get_dm_uuid /dev/mapper/vg--hdd1-lv--hdd1p1 uuid path is
> /sys/dev/block/252:12/dm/uuid
> get_dm_uuid: get_dm_uuid /dev/mapper/vg--hdd1-lv--hdd1p1 uuid is
> part1-LVM-ETn7wXOmnesc9MNpleTYzP29jjOkp19J12ELrQez43LFPfdFc1dItn8EFF299401
>
> command: Running command: /sbin/blkid -p -s TYPE -o value --
> /dev/mapper/vg--hdd1-lv--hdd1p1
> command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> --lookup osd_mount_options_xfs
> mount: Mounting /dev/mapper/vg--hdd1-lv--hdd1p1 on
> /var/lib/ceph/tmp/mnt.cJDc7I with options noatime,largeio,inode64,swalloc
> command_check_call: Running command: /bin/mount -t xfs -o
> noatime,largeio,inode64,swalloc -- /dev/mapper/vg--hdd1-lv--hdd1p1
> /var/lib/ceph/tmp/mnt.cJDc7I
> activate: Cluster uuid is 0bc0ea6d-ed8a-4ef0-9e82-ba6454a7214e
> command: Running command: /usr/bin/ceph-osd --cluster=ceph
> --show-config-value=fsid
> activate: Cluster name is ceph
> activate: OSD uuid is 5097be3f-349e-480d-8b0d-d68c13ae2f72
> activate: OSD id is 1
> activate: Initializing OSD...
> command_check_call: Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon
> getmap -o /var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap
> got monmap epoch 2
> command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph
> --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap
> --osd-data /var/lib/ceph/tmp/mnt.cJDc7I --osd-journal
> /var/lib/ceph/tmp/mnt.cJDc7I/journal --osd-uuid
> 5097be3f-349e-480d-8b0d-d68c13ae2f72 --keyring
> /var/lib/ceph/tmp/mnt.cJDc7I/keyring --setuser ceph --setgroup ceph
> mount_activate: Failed to activate
> unmount: Unmounting /var/lib/ceph/tmp/mnt.cJDc7I
> command_check_call: Running command: /bin/umount --
> /var/lib/ceph/tmp/mnt.cJDc7I
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-disk", line 9, in <module>
>     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5251, in
> run
>     main(sys.argv[1:])
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5202, in
> main
>     args.func(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3553, in
> main_activate
>     reactivate=args.reactivate,
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3310, in
> mount_activate
>     (osd_id, cluster) = activate(path, activate_key_template, init)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3486, in
> activate
>     keyring=keyring,
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2948, in
> mkfs
>     '--setgroup', get_ceph_group(),
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2895, in
> ceph_osd_mkfs
>     raise Error('%s failed : %s' % (str(arguments), error))
> ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
> '--mkkey', '-i', u'1', '--monmap',
> '/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data',
> '/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal',
> '/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid',
> u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring',
> '/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup',
> 'ceph'] failed : 2017-03-14 16:01:10.051537 7fdc9a025a40 -1
> filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on
> /var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
> 2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs
> failed with error -13
> 2017-03-14 16:01:10.051624 7fdc9a025a40 -1  ** ERROR: error creating empty
> object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied
>
> ~ # ls -al /var/lib/ceph/tmp
> total 8
> drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
> drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
> -rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
> -rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock
>
> ~ # ls -l /dev/mapper/vg-*-lv-*p*
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd1-lv--hdd1p1 ->
> ../dm-12
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd2-lv--hdd2p1 ->
> ../dm-14
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd3-lv--hdd3p1 ->
> ../dm-16
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd4-lv--hdd4p1 ->
> ../dm-18
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd5-lv--hdd5p1 ->
> ../dm-20
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd6-lv--hdd6p1 ->
> ../dm-22
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd7-lv--hdd7p1 ->
> ../dm-24
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd8-lv--hdd8p1 ->
> ../dm-26
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--hdd9-lv--hdd9p1 ->
> ../dm-28
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p1 ->
> ../dm-11
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p2 ->
> ../dm-15
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p3 ->
> ../dm-19
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p4 ->
> ../dm-23
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p5 ->
> ../dm-27
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p1 ->
> ../dm-13
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p2 ->
> ../dm-17
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p3 ->
> ../dm-21
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p4 ->
> ../dm-25
>
> ~ # ls -l /dev/dm-*
> brw-rw---- 1 root disk 252,  0 Mar 14 13:46 /dev/dm-0
> brw-rw---- 1 root disk 252,  1 Mar 14 13:46 /dev/dm-1
> brw-rw---- 1 root disk 252, 10 Mar 14 13:47 /dev/dm-10
> brw-rw---- 1 ceph ceph 252, 11 Mar 14 13:47 /dev/dm-11
> brw-rw---- 1 ceph ceph 252, 12 Mar 14 13:46 /dev/dm-12
> brw-rw---- 1 ceph ceph 252, 13 Mar 14 13:47 /dev/dm-13
> brw-rw---- 1 ceph ceph 252, 14 Mar 14 13:46 /dev/dm-14
> brw-rw---- 1 ceph ceph 252, 15 Mar 14 13:47 /dev/dm-15
> brw-rw---- 1 ceph ceph 252, 16 Mar 14 13:46 /dev/dm-16
> brw-rw---- 1 ceph ceph 252, 17 Mar 14 13:47 /dev/dm-17
> brw-rw---- 1 ceph ceph 252, 18 Mar 14 13:46 /dev/dm-18
> brw-rw---- 1 ceph ceph 252, 19 Mar 14 13:47 /dev/dm-19
> brw-rw---- 1 root disk 252,  2 Mar 14 13:46 /dev/dm-2
> brw-rw---- 1 ceph ceph 252, 20 Mar 14 13:46 /dev/dm-20
> brw-rw---- 1 ceph ceph 252, 21 Mar 14 13:47 /dev/dm-21
> brw-rw---- 1 ceph ceph 252, 22 Mar 14 13:46 /dev/dm-22
> brw-rw---- 1 ceph ceph 252, 23 Mar 14 13:47 /dev/dm-23
> brw-rw---- 1 ceph ceph 252, 24 Mar 14 13:46 /dev/dm-24
> brw-rw---- 1 ceph ceph 252, 25 Mar 14 13:47 /dev/dm-25
> brw-rw---- 1 ceph ceph 252, 26 Mar 14 13:46 /dev/dm-26
> brw-rw---- 1 ceph ceph 252, 27 Mar 14 13:47 /dev/dm-27
> brw-rw---- 1 ceph ceph 252, 28 Mar 14 13:47 /dev/dm-28
> brw-rw---- 1 root disk 252,  3 Mar 14 13:46 /dev/dm-3
> brw-rw---- 1 root disk 252,  4 Mar 14 13:46 /dev/dm-4
> brw-rw---- 1 root disk 252,  5 Mar 14 13:46 /dev/dm-5
> brw-rw---- 1 root disk 252,  6 Mar 14 13:47 /dev/dm-6
> brw-rw---- 1 root disk 252,  7 Mar 14 13:46 /dev/dm-7
> brw-rw---- 1 root disk 252,  8 Mar 14 13:46 /dev/dm-8
> brw-rw---- 1 root disk 252,  9 Mar 14 13:47 /dev/dm-9
>
>
>  Best Regards,
>  Nicholas.
> --
> You can find my PGP public key here: https://google.com/+DewrKim/about
>
-- 
You can find my PGP public key here: https://google.com/+DewrKim/about
