Thanks for reporting this issue. It has been fixed by
https://github.com/ceph/ceph/pull/40845 and will be released in the
next pacific point release.
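Until that point release is out, the workaround discussed downthread (disabling
the ProtectClock hardening in the ceph-osd unit) can be applied as a systemd
drop-in instead of editing the packaged unit file, so a package upgrade will
not overwrite it. A minimal sketch, assuming a root shell; the drop-in file
name is arbitrary:

```shell
# Create a drop-in that overrides the packaged unit's ProtectClock=true
unit_dir=/etc/systemd/system/ceph-osd@.service.d
mkdir -p "$unit_dir"
printf '[Service]\nProtectClock=false\n' > "$unit_dir/protectclock.conf"

# Reload unit files and restart the affected OSD (substitute your OSD id)
systemctl daemon-reload
systemctl restart ceph-osd@2
```

Once the fixed packages land, the drop-in can simply be deleted and
`systemctl daemon-reload` run again.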

Neha

On Mon, Apr 19, 2021 at 8:19 AM Behzad Khoshbakhti
<khoshbakh...@gmail.com> wrote:
>
> Thanks. By commenting out the ProtectClock directive, the issue is resolved.
> Thanks for the support.
>
> On Sun, Apr 18, 2021 at 9:28 AM Lomayani S. Laizer <lomlai...@gmail.com>
> wrote:
>
> > Hello,
> >
> > Commenting out ProtectClock=true in /lib/systemd/system/ceph-osd@.service
> > should fix the issue
> >
> >
> >
> > On Thu, Apr 8, 2021 at 9:49 AM Behzad Khoshbakhti <khoshbakh...@gmail.com>
> > wrote:
> >
> >> I believe there is some problem in the systemd unit, as the OSD starts
> >> successfully when run manually with the ceph-osd command.
> >>
> >> On Thu, Apr 8, 2021, 10:32 AM Enrico Kern <enrico.k...@stackxperts.com>
> >> wrote:
> >>
> >> > I agree. But why does the process start manually without systemd, which
> >> > obviously has nothing to do with uid/gid 167? It is also not really a
> >> > fix to make all users change uid/gids...
> >> >
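A quick way to confirm that it is the unit's sandboxing rather than file
ownership is to look at the hardening directives the packaged unit sets;
ProtectClock=true installs a device access policy as a side effect, which can
deny the daemon access to block devices. A sketch, assuming the unit path
quoted later in this thread:

```shell
# List sandbox/hardening directives in the packaged ceph-osd unit
grep -nE 'Protect|Private|Restrict|Lock' /lib/systemd/system/ceph-osd@.service
```

If ProtectClock=true shows up, that matches the behavior here: the same
ceph-osd command line succeeds when run outside the unit.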
> >> > On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel <m...@mwg.dp.ua> wrote:
> >> >
> >> > > Could there be a smoother migration? On my Ubuntu I have the same
> >> > > behavior, and my ceph uid/gid are also 64045.
> >> > > I started with Luminous in 2018, when it was not containerized, and
> >> > > still continue updating it with apt.
> >> > > Since when have we had this hardcoded value of 167?
> >> > >
> >> > > Andrew Walker-Brown wrote:
> >> > > > UID and GID should both be 167, I believe.
> >> > > >
> >> > > > Make a note of the current values and change them to 167 using
> >> > > > usermod and groupmod.
> >> > > >
> >> > > > I had just this issue. It's partly to do with how permissions are
> >> > > > used within the containers, I think.
> >> > > >
> >> > > > I changed the values to 167 in passwd and everything worked again.
> >> > > > Symptoms for me were OSDs not starting and permission/file-not-found
> >> > > > errors.
> >> > > >
> >> > > > Sent from my iPhone
> >> > > >
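The remap described above could be scripted roughly as follows. This is a
sketch, not from the thread: it assumes the old ids are 64045, that all ceph
daemons on the host are stopped first, and that /var/lib/ceph and
/var/log/ceph are the trees still carrying the old ownership:

```shell
# Remap the ceph account from the old ids to the packaged uid/gid 167
old_uid=64045; old_gid=64045
systemctl stop ceph.target          # stop all ceph daemons on this host
usermod  -u 167 ceph
groupmod -g 167 ceph
# usermod/groupmod do not touch files outside the home directory,
# so re-own anything still carrying the old ids
find /var/lib/ceph /var/log/ceph -uid "$old_uid" -exec chown -h 167 {} +
find /var/lib/ceph /var/log/ceph -gid "$old_gid" -exec chgrp -h 167 {} +
systemctl start ceph.target
```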
> >> > > > On 4 Apr 2021, at 13:43, Lomayani S. Laizer <lomlai...@gmail.com> wrote:
> >> > > >
> >> > > > 
> >> > > > Hello,
> >> > > > Permissions are correct. uid/gid is 64045/64045
> >> > > >
> >> > > > ls -alh
> >> > > > total 32K
> >> > > > drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
> >> > > > drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
> >> > > > lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
> >> > > > -rw------- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
> >> > > > -rw------- 1 ceph ceph   37 Apr  4 14:11 fsid
> >> > > > -rw------- 1 ceph ceph   56 Apr  4 14:11 keyring
> >> > > > -rw------- 1 ceph ceph    6 Apr  4 14:11 ready
> >> > > > -rw------- 1 ceph ceph    3 Apr  4 14:11 require_osd_release
> >> > > > -rw------- 1 ceph ceph   10 Apr  4 14:11 type
> >> > > > -rw------- 1 ceph ceph    3 Apr  4 14:11 whoami
> >> > > >
> >> > > > On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <andrew_jbr...@hotmail.com> wrote:
> >> > > > Are the file permissions correct, and UID/GID in passwd both 167?
> >> > > >
> >> > > > Sent from my iPhone
> >> > > >
> >> > > > On 4 Apr 2021, at 12:29, Lomayani S. Laizer <lomlai...@gmail.com> wrote:
> >> > > >
> >> > > > Hello,
> >> > > >
> >> > > > +1 I am facing the same problem on Ubuntu after the upgrade to Pacific.
> >> > > >
> >> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block: (1) Operation not permitted
> >> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or directory
> >> > > >
> >> > > > On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <khoshbakh...@gmail.com> wrote:
> >> > > >
> >> > > >> It is worth mentioning that when I issue the following command, the
> >> > > >> Ceph OSD starts and joins the cluster:
> >> > > >> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <khoshbakh...@gmail.com> wrote:
> >> > > >>
> >> > > >>> Hi all,
> >> > > >>>
> >> > > >>> While upgrading my Ceph cluster from 15.2.10 to 16.2.0 manually
> >> > > >>> using the precompiled packages, the OSDs were down with the
> >> > > >>> following messages:
> >> > > >>>
> >> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> >> > > >>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> > > >>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >> > > >>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> >> > > >>> Running command: /usr/bin/ln -snf /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352 /var/lib/ceph/osd/ceph-2/block
> >> > > >>> Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
> >> > > >>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> >> > > >>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >> > > >>> Running command: /usr/bin/systemctl enable
> >> > > >>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> > > >>> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> >> > > >>> Running command: /usr/bin/systemctl start ceph-osd@2
> >> > > >>> --> ceph-volume lvm activate successful for osd ID: 2
> >> > > >>>
> >> > > >>> Content of /var/log/ceph/ceph-osd.2.log
> >> > > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045 (ceph:ceph)
> >> > > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process ceph-osd, pid 5484
> >> > > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty --pid-file
> >> > > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1 bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted
> >> > > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or directory
> >> > > >>>
> >> > > >>>
> >> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
> >> > > >>> ● ceph-osd@2.service - Ceph object storage daemon osd.2
> >> > > >>>      Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: enabled)
> >> > > >>>      Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06 +0430; 50s ago
> >> > > >>>     Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
> >> > > >>>     Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
> >> > > >>>    Main PID: 5484 (code=exited, status=1/FAILURE)
> >> > > >>>
> >> > > >>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart job, restart counter is at 3.
> >> > > >>> Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
> >> > > >>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request repeated too quickly.
> >> > > >>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with result 'exit-code'.
> >> > > >>> Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage daemon osd.2.
> >> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2#
> >> > > >>>
> >> > > >>> root@osd03:~# lsblk
> >> > > >>> NAME                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> >> > > >>> fd0                                     2:0    1    4K  0 disk
> >> > > >>> loop0                                   7:0    0 55.5M  1 loop /snap/core18/1988
> >> > > >>> loop1                                   7:1    0 69.9M  1 loop /snap/lxd/19188
> >> > > >>> loop2                                   7:2    0 55.5M  1 loop /snap/core18/1997
> >> > > >>> loop3                                   7:3    0 70.4M  1 loop /snap/lxd/19647
> >> > > >>> loop4                                   7:4    0 32.3M  1 loop /snap/snapd/11402
> >> > > >>> loop5                                   7:5    0 32.3M  1 loop /snap/snapd/11107
> >> > > >>> sda                                     8:0    0   80G  0 disk
> >> > > >>> ├─sda1                                  8:1    0    1M  0 part
> >> > > >>> ├─sda2                                  8:2    0    1G  0 part /boot
> >> > > >>> └─sda3                                  8:3    0   79G  0 part
> >> > > >>>   └─ubuntu--vg-ubuntu--lv             253:0    0 69.5G  0 lvm  /
> >> > > >>> sdb                                     8:16   0   16G  0 disk
> >> > > >>> └─sdb1                                  8:17   0   16G  0 part
> >> > > >>>   └─ceph--9d37674b--a269--4239--aa9e--66a3c74df76c-osd--block--2d3ffc61--e430--4b89--bcd4--105b2df26352
> >> > > >>>                                       253:1    0   16G  0 lvm
> >> > > >>> root@osd03:~#
> >> > > >>>
> >> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2# mount | grep -i ceph
> >> > > >>> tmpfs on /var/lib/ceph/osd/ceph-2 type tmpfs (rw,relatime)
> >> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2#
> >> > > >>>
> >> > > >>> Any help is much appreciated.
> >> > > >>> --
> >> > > >>>
> >> > > >>> Regards
> >> > > >>> Behzad Khoshbakhti
> >> > > >>> Computer Network Engineer (CCIE #58887)
> >> > > >>>
> >> > > >>>
> >> > > >>
> >> > > >> --
> >> > > >>
> >> > > >> Regards
> >> > > >> Behzad Khoshbakhti
> >> > > >> Computer Network Engineer (CCIE #58887)
> >> > > >> +989128610474
> >> > > >> _______________________________________________
> >> > > >> ceph-users mailing list -- ceph-users@ceph.io
> >> > > >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >> > > >>
> >> > > >
> >> > >
> >> > >
> >> >
> >> >
> >> > --
> >> > UG (haftungsbeschränkt)
> >> >    Enrico Kern, Geschäftsführer
> >> >    Südstraße 26
> >> >    01877 Demitz-Thumitz
> >> >
> >> >    https://www.stackxperts.com
> >> >    enrico.k...@stackxperts.com
> >> >     +49 (0) 152 26814501
> >> >     skype: flyersa
> >> >
> >> >     Geschäftsführer Enrico Kern
> >> >     Amtsgericht Dresden | HRB 38717
> >> >
> >>
> >
>
> --
>
> Regards
>  Behzad Khoshbakhti
>  Computer Network Engineer (CCIE #58887)
>  +989128610474
