Re: [ceph-users] _read_bdev_label failed to open
Running the following after prepare and a reboot "solves" this problem:

[root@osd01 ~]# partx -v -a /dev/mapper/mpatha
partition: none, disk: /dev/mapper/mpatha, lower: 0, upper: 0
/dev/mapper/mpatha: partition table type 'gpt' detected
partx: /dev/mapper/mpatha: adding partition #1 failed: Invalid argument
partx: /dev/mapper/mpatha: adding partition #2 failed: Invalid argument
partx: /dev/mapper/mpatha: error adding partitions 1-2

The disk is then activated and in and up. It seems the PARTUUID was not correctly imported into the kernel. Even though partx reports that partitions 1-2 were not added, they are (this disk has only two partitions).

Should I open a bug?

Kind regards,
Kevin

2018-02-04 19:05 GMT+01:00 Kevin Olbrich:
> I also noticed there are no folders under /var/lib/ceph/osd/ ...

___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
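Before opening a bug, it may help to confirm whether the kernel and udev actually registered the partitions and their PARTUUIDs after the partx run. The sketch below is not from the thread; it lists standard commands (kpartx from multipath-tools, lsblk, udevadm) against the /dev/mapper/mpatha device mentioned above. Commands are echoed as a dry run via a small `run` helper so the sequence is visible without touching real devices; drop the `run` prefix to execute them for real.

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it,
# since these commands need the real multipath device and root.
run() { echo "+ $*"; }

# Re-read the partition table through the device-mapper layer.
run kpartx -av /dev/mapper/mpatha

# List the partitions the kernel currently knows about, with PARTUUIDs.
run lsblk -o NAME,PARTUUID /dev/mapper/mpatha

# Ask udev what it recorded for the first partition.
run udevadm info --query=property --name=/dev/mapper/mpatha1

# If the /dev/disk/by-partuuid/ symlinks are missing, replay and
# wait for udev events.
run udevadm trigger --action=add
run udevadm settle
```

If lsblk shows the partitions but the by-partuuid symlinks are absent, the problem is on the udev side rather than in the partition table itself.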
Re: [ceph-users] _read_bdev_label failed to open
I also noticed there are no folders under /var/lib/ceph/osd/ ...

Mit freundlichen Grüßen / best regards,
Kevin Olbrich.

2018-02-04 19:01 GMT+01:00 Kevin Olbrich:
> Hi!
>
> Currently I try to re-deploy a cluster from filestore to bluestore.
> I zapped all disks (multiple times) but I fail adding a disk array:
[ceph-users] _read_bdev_label failed to open
Hi!

I am currently trying to re-deploy a cluster from filestore to bluestore.
I zapped all disks (multiple times), but I am failing to add a disk array.

Prepare:

> ceph-deploy --overwrite-conf osd prepare --bluestore --block-wal /dev/sdb --block-db /dev/sdb osd01.cloud.example.local:/dev/mapper/mpatha

Activate:

> ceph-deploy --overwrite-conf osd activate osd01.cloud.example.local:/dev/mapper/mpatha1

Error on activate:

> [osd01.cloud.example.local][WARNIN] got monmap epoch 2
> [osd01.cloud.example.local][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 0 --monmap /var/lib/ceph/tmp/mnt.pAfCl4/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.pAfCl4 --osd-uuid d5b6ab85-9437-4cb2-a34d-16a29067ba27 --setuser ceph --setgroup ceph
> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900368 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4/block) _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.pAfCl4/block: (2) No such file or directory
> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900405 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4/block) _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.pAfCl4/block: (2) No such file or directory
> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900462 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4) _setup_block_symlink_or_file failed to open block file: (13) Permission denied
> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900480 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4) mkfs failed, (13) Permission denied
> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900485 7f00d6359d00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (13) Permission denied
> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900662 7f00d6359d00 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.pAfCl4: (13) Permission denied
> [osd01.cloud.example.local][WARNIN] mount_activate: Failed to activate
> [osd01.cloud.example.local][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.pAfCl4

The same problem occurs on 2x 14 disks; I was unable to get this cluster up.

Any ideas?

Kind regards,
Kevin
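One detail worth noting in the log: ceph-osd is invoked with --setuser ceph --setgroup ceph, so the (13) Permission denied on the block file can simply mean the partition device nodes are not owned by the ceph user. This is a guess, not a confirmed diagnosis from the thread; the sketch below shows the ownership check and fix, echoed as a dry run (the paths are the ones from the report above).

```shell
#!/bin/sh
# Dry-run helper: print the commands, since they require the real
# device nodes and root privileges.
run() { echo "+ $*"; }

# Check who owns the OSD data and block partitions.
run ls -l /dev/mapper/mpatha1 /dev/mapper/mpatha2

# Hand the partitions to the ceph user/group that ceph-osd drops to.
run chown ceph:ceph /dev/mapper/mpatha1 /dev/mapper/mpatha2

# Then retry activation.
run ceph-deploy --overwrite-conf osd activate osd01.cloud.example.local:/dev/mapper/mpatha1
```

A chown done by hand does not survive a reboot; if this turns out to be the cause, a udev rule assigning ceph:ceph to the OSD partitions would be the persistent fix.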
[ceph-users] Luminous/Ubuntu 16.04 kernel recommendation ?
Hello,

What is the best kernel for Luminous on Ubuntu 16.04? Is linux-image-virtual-lts-xenial still the best one, or will linux-virtual-hwe-16.04 offer some improvement?

Thanks,

--
Yoann Moulin
EPFL IC-IT
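For anyone comparing the two options, the check and the switch look roughly like this. The package name is the standard Ubuntu 16.04 HWE metapackage mentioned in the question; the install step is echoed as a dry run since it needs root.

```shell
#!/bin/sh
# Dry-run helper for the commands that modify the system.
run() { echo "+ $*"; }

# Which kernel is running now? The 16.04 GA/lts-xenial stack is 4.4;
# the HWE stack tracks newer kernels.
uname -r

# Switch to the HWE virtual-kernel stack (dry run).
run apt-get install --install-recommends linux-virtual-hwe-16.04

# After a reboot, confirm the new kernel is active.
run uname -r
```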
[ceph-users] permitted cluster operations during i/o
Hello! One more newbie question :)

I have a minimal working Ceph cluster: 3 nodes, 6 OSDs. I understand that I can scale the cluster up by adding one or more OSDs, or even nodes, without stopping client I/O or losing consistency. What if I decide to add a cache pool backed by SSDs, or to move a journal to an SSD? Should I stop client I/O before performing operations of this kind? Is there a rule of thumb to determine which operations are permitted from an "online" client's point of view?
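As one illustration of how such maintenance is commonly done online: for work that takes a single OSD down briefly (such as a journal move), a typical pattern is to set the noout flag so the cluster does not start rebalancing while the OSD is offline. This is a generic sketch, not advice specific to the cluster above; it assumes a healthy cluster, a working admin keyring, and systemd-managed OSDs. Commands are echoed as a dry run.

```shell
#!/bin/sh
# Dry-run helper: print the commands, since they need a live cluster.
run() { echo "+ $*"; }

run ceph osd set noout         # suppress rebalancing during maintenance
run systemctl stop ceph-osd@0  # take one OSD down
# ... perform the journal/SSD work on osd.0 here ...
run systemctl start ceph-osd@0 # bring it back
run ceph osd unset noout       # allow normal recovery again
run ceph -s                    # wait for HEALTH_OK before the next OSD
```

Working one OSD at a time and waiting for health to return keeps client I/O flowing throughout, since the remaining replicas serve reads and writes.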