Hello,

+1 I am facing the same problem on Ubuntu after upgrading to Pacific:

2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block: (1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or directory
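
In case it helps with comparing notes, the first thing I checked was who actually owns the block symlink and the device it resolves to (just a quick sketch; the OSD id and paths are from my node, adjust to yours):

# where does the block symlink point, and who owns the link itself?
ls -l /var/lib/ceph/osd/ceph-29/block
# same, but following the symlink to the underlying dm device
ls -lL /var/lib/ceph/osd/ceph-29/block
# try reading the start of the device as the ceph user; EPERM here reproduces the open failure
sudo -u ceph head -c 4096 /var/lib/ceph/osd/ceph-29/block > /dev/null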

On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <khoshbakh...@gmail.com>
wrote:

> It is worth mentioning that when I issue the following command manually,
> the Ceph OSD starts and joins the cluster:
> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
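>
> Since the same binary comes up fine when launched by hand from a root
> shell but fails under systemd, the difference is presumably in the unit's
> environment rather than the on-disk permissions. One way to compare (just
> a sketch, not a confirmed root cause) is to dump the packaged unit with
> its sandboxing directives and pull the full failure log:
>
> # show the unit file as systemd sees it, including any drop-ins
> systemctl cat ceph-osd@2
> # complete log for the failed starts in this boot
> journalctl -b -u ceph-osd@2 --no-pager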
>
>
>
> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <khoshbakh...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > After upgrading my Ceph cluster from 15.2.10 to 16.2.0 (a manual
> > upgrade using the precompiled packages), the OSDs went down with the
> > following messages:
> >
> > root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> > --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> > Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> > Running command: /usr/bin/ln -snf /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352 /var/lib/ceph/osd/ceph-2/block
> > Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
> > Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> > Running command: /usr/bin/systemctl enable ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> > Running command: /usr/bin/systemctl start ceph-osd@2
> > --> ceph-volume lvm activate successful for osd ID: 2
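> >
> > Activation reports success, so the ownership it claims to set can be
> > verified right afterwards (a sketch; device names are taken from the
> > output above):
> >
> > # the symlink itself (chown -h above) and the dm device it resolves to
> > ls -l /var/lib/ceph/osd/ceph-2/block
> > ls -l /dev/dm-1
> > # everything under the tmpfs OSD dir should be owned by ceph:ceph
> > ls -ln /var/lib/ceph/osd/ceph-2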
> >
> > Content of /var/log/ceph/ceph-osd.2.log
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045 (ceph:ceph)
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process ceph-osd, pid 5484
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty --pid-file
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1 bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or directory
> >
> >
> > root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
> > ● ceph-osd@2.service - Ceph object storage daemon osd.2
> >      Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: enabled)
> >      Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06 +0430; 50s ago
> >     Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
> >     Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
> >    Main PID: 5484 (code=exited, status=1/FAILURE)
> >
> > Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart job, restart counter is at 3.
> > Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
> > Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request repeated too quickly.
> > Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with result 'exit-code'.
> > Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage daemon osd.2.
> > root@osd03:/var/lib/ceph/osd/ceph-2#
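> >
> > Note that once systemd logs "Start request repeated too quickly" it
> > refuses further start attempts until the failure counter is cleared, so
> > after fixing the underlying permission problem the unit has to be reset
> > before retrying:
> >
> > systemctl reset-failed ceph-osd@2
> > systemctl start ceph-osd@2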
> >
> > root@osd03:~# lsblk
> > NAME                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > fd0                                     2:0    1    4K  0 disk
> > loop0                                   7:0    0 55.5M  1 loop /snap/core18/1988
> > loop1                                   7:1    0 69.9M  1 loop /snap/lxd/19188
> > loop2                                   7:2    0 55.5M  1 loop /snap/core18/1997
> > loop3                                   7:3    0 70.4M  1 loop /snap/lxd/19647
> > loop4                                   7:4    0 32.3M  1 loop /snap/snapd/11402
> > loop5                                   7:5    0 32.3M  1 loop /snap/snapd/11107
> > sda                                     8:0    0   80G  0 disk
> > ├─sda1                                  8:1    0    1M  0 part
> > ├─sda2                                  8:2    0    1G  0 part /boot
> > └─sda3                                  8:3    0   79G  0 part
> >   └─ubuntu--vg-ubuntu--lv             253:0    0 69.5G  0 lvm  /
> > sdb                                     8:16   0   16G  0 disk
> > └─sdb1                                  8:17   0   16G  0 part
> >   └─ceph--9d37674b--a269--4239--aa9e--66a3c74df76c-osd--block--2d3ffc61--e430--4b89--bcd4--105b2df26352
> >                                       253:1    0   16G  0 lvm
> > root@osd03:~#
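> >
> > To cross-check the LV-to-OSD mapping that lsblk shows, the LVM tags that
> > ceph-volume stamps on the volume can be listed directly (a sketch):
> >
> > # ceph-volume's own view of discovered OSD volumes
> > ceph-volume lvm list
> > # raw LVM tags carrying the osd id and fsid
> > lvs -o lv_name,lv_tags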
> >
> > root@osd03:/var/lib/ceph/osd/ceph-2# mount | grep -i ceph
> > tmpfs on /var/lib/ceph/osd/ceph-2 type tmpfs (rw,relatime)
> > root@osd03:/var/lib/ceph/osd/ceph-2#
> >
> > Any help is much appreciated.
> > --
> >
> > Regards
> >  Behzad Khoshbakhti
> >  Computer Network Engineer (CCIE #58887)
> >
> >
>
> --
>
> Regards
>  Behzad Khoshbakhti
>  Computer Network Engineer (CCIE #58887)
>  +989128610474
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
