[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-05-07 Thread Neha Ojha
Thanks for reporting this issue. It has been fixed by
https://github.com/ceph/ceph/pull/40845 and will be released in the
next pacific point release.

Neha
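
Until that point release is available, a quick way to confirm whether a host
still ships the affected unit is to look for the ProtectClock directive in the
packaged service file. A minimal check, assuming the stock package path; on
recent systemd the effective value can also be read back with systemctl show:

grep -n ProtectClock /lib/systemd/system/ceph-osd@.service
# an uncommented "ProtectClock=true" means the unit still carries the restriction
systemctl show ceph-osd@2 --property=ProtectClock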

On Mon, Apr 19, 2021 at 8:19 AM Behzad Khoshbakhti
 wrote:
>
> Thanks. By commenting out the ProtectClock directive, the issue is resolved.
> Thanks for the support.
>
> On Sun, Apr 18, 2021 at 9:28 AM Lomayani S. Laizer 
> wrote:
>
> > Hello,
> >
> > Comment out ProtectClock=true in /lib/systemd/system/ceph-osd@.service;
> > that should fix the issue
> >
> >
> >
> > On Thu, Apr 8, 2021 at 9:49 AM Behzad Khoshbakhti 
> > wrote:
> >
> >> I believe there is some of problem in the systemd as the ceph starts
> >> successfully by running manually using the ceph-osd command.
> >>
> >> On Thu, Apr 8, 2021, 10:32 AM Enrico Kern 
> >> wrote:
> >>
> >> > I agree. But why does the process start manual without systemd which
> >> > obviously has nothing to do with uid/gid 167 ? It is also not really a
> >> fix
> >> > to let all users change uid/gids...
> >> >
> >> > On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel  wrote:
> >> >
> >> > > Could there be more smooth migration? On my Ubuntu I have the same
> >> > > behavior and my ceph uid/gud are also 64045.
> >> > > I started with Luminous in 2018 when it was not containerized, and
> >> still
> >> > > continue updating it with apt.
> >> > > Since when we have got this hardcoded value of 167 ???
> >> > >
> >> > > Andrew Walker-Brown wrote:
> >> > > > UID and guid should both be 167 I believe.
> >> > > >
> >> > > > Make a note of the current values and change them to 167 using
> >> usermod
> >> > > and groupmod.
> >> > > >
> >> > > > I had just this issue. It’s partly to do with how perms are used
> >> within
> >> > > the containers I think.
> >> > > >
> >> > > > I changed the values to 167 in passwd everything worked again.
> >> Symptoms
> >> > > for me were OSDs not starting and permissions/file not found errors.
> >> > > >
> >> > > > Sent from my iPhone
> >> > > >
> >> > > > On 4 Apr 2021, at 13:43, Lomayani S. Laizer 
> >> > wrote:
> >> > > >
> >> > > > 
> >> > > > Hello,
> >> > > > Permissions are correct. guid/uid is 64045/64045
> >> > > >
> >> > > > ls -alh
> >> > > > total 32K
> >> > > > drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
> >> > > > drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
> >> > > > lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
> >> > > > -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
> >> > > > -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
> >> > > > -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
> >> > > > -rw--- 1 ceph ceph6 Apr  4 14:11 ready
> >> > > > -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
> >> > > > -rw--- 1 ceph ceph   10 Apr  4 14:11 type
> >> > > > -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
> >> > > >
> >> > > > On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
> >> > > andrew_jbr...@hotmail.com> wrote:
> >> > > > Are the file permissions correct and UID/guid in passwd  both 167?
> >> > > >
> >> > > > Sent from my iPhone
> >> > > >
> >> > > > On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
> >> > > >
> >> > > > Hello,
> >> > > >
> >> > > > +1 Am facing the same problem in ubuntu after upgrade to pacific
> >> > > >
> >> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1
> >> > bluestore(/var/lib/ceph/osd/
> >> > > > ceph-29/block) _read_bdev_label failed to open
> >> > > /var/lib/ceph/osd/ceph-29/block:
> >> > > > (1) Operation not permitted
> >> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR:
> >> unable
> >> > > to
> >> > > > open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file
> >> or
> >> > > > directoryESC[0m
> >> > > >
> >> > > > On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <
> >> > > khoshbakh...@gmail.com>
> >> > > > wrote:
> >> > > >
> >> > > >> It worth mentioning as I issue the following command, the Ceph OSD
> >> > > starts
> >> > > >> and joins the cluster:
> >> > > >> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph
> >> --setgroup
> >> > > ceph
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
> >> > > khoshbakh...@gmail.com>
> >> > > >> wrote:
> >> > > >>
> >> > > >>> Hi all,
> >> > > >>>
> >> > > >>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during
> >> the
> >> > > >>> manual upgrade using the precompiled packages, the OSDs was down
> >> with
> >> > > the
> >> > > >>> following messages:
> >> > > >>>
> >> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate
> >> --all
> >> > > >>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> > > >>> Running command: /usr/bin/chown -R ceph:ceph
> >> /var/lib/ceph/osd/ceph-2
> >> > > >>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> >> > > >> prime-osd-dir
> >> > > >>> --dev
> >> > > >>>

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-19 Thread Behzad Khoshbakhti
Thanks. By commenting out the ProtectClock directive, the issue is resolved.
Thanks for the support.

On Sun, Apr 18, 2021 at 9:28 AM Lomayani S. Laizer 
wrote:

> Hello,
>
> Comment out ProtectClock=true in /lib/systemd/system/ceph-osd@.service;
> that should fix the issue
>
>
>
> On Thu, Apr 8, 2021 at 9:49 AM Behzad Khoshbakhti 
> wrote:
>
>> I believe there is some of problem in the systemd as the ceph starts
>> successfully by running manually using the ceph-osd command.
>>
>> On Thu, Apr 8, 2021, 10:32 AM Enrico Kern 
>> wrote:
>>
>> > I agree. But why does the process start manual without systemd which
>> > obviously has nothing to do with uid/gid 167 ? It is also not really a
>> fix
>> > to let all users change uid/gids...
>> >
>> > On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel  wrote:
>> >
>> > > Could there be more smooth migration? On my Ubuntu I have the same
>> > > behavior and my ceph uid/gud are also 64045.
>> > > I started with Luminous in 2018 when it was not containerized, and
>> still
>> > > continue updating it with apt.
>> > > Since when we have got this hardcoded value of 167 ???
>> > >
>> > > Andrew Walker-Brown wrote:
>> > > > UID and guid should both be 167 I believe.
>> > > >
>> > > > Make a note of the current values and change them to 167 using
>> usermod
>> > > and groupmod.
>> > > >
>> > > > I had just this issue. It’s partly to do with how perms are used
>> within
>> > > the containers I think.
>> > > >
>> > > > I changed the values to 167 in passwd everything worked again.
>> Symptoms
>> > > for me were OSDs not starting and permissions/file not found errors.
>> > > >
>> > > > Sent from my iPhone
>> > > >
>> > > > On 4 Apr 2021, at 13:43, Lomayani S. Laizer 
>> > wrote:
>> > > >
>> > > > 
>> > > > Hello,
>> > > > Permissions are correct. guid/uid is 64045/64045
>> > > >
>> > > > ls -alh
>> > > > total 32K
>> > > > drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
>> > > > drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
>> > > > lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
>> > > > -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
>> > > > -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
>> > > > -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
>> > > > -rw--- 1 ceph ceph6 Apr  4 14:11 ready
>> > > > -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
>> > > > -rw--- 1 ceph ceph   10 Apr  4 14:11 type
>> > > > -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
>> > > >
>> > > > On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
>> > > andrew_jbr...@hotmail.com> wrote:
>> > > > Are the file permissions correct and UID/guid in passwd  both 167?
>> > > >
>> > > > Sent from my iPhone
>> > > >
>> > > > On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
>> > > >
>> > > > Hello,
>> > > >
>> > > > +1 Am facing the same problem in ubuntu after upgrade to pacific
>> > > >
>> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1
>> > bluestore(/var/lib/ceph/osd/
>> > > > ceph-29/block) _read_bdev_label failed to open
>> > > /var/lib/ceph/osd/ceph-29/block:
>> > > > (1) Operation not permitted
>> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR:
>> unable
>> > > to
>> > > > open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file
>> or
>> > > > directoryESC[0m
>> > > >
>> > > > On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <
>> > > khoshbakh...@gmail.com>
>> > > > wrote:
>> > > >
>> > > >> It worth mentioning as I issue the following command, the Ceph OSD
>> > > starts
>> > > >> and joins the cluster:
>> > > >> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph
>> --setgroup
>> > > ceph
>> > > >>
>> > > >>
>> > > >>
>> > > >> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
>> > > khoshbakh...@gmail.com>
>> > > >> wrote:
>> > > >>
>> > > >>> Hi all,
>> > > >>>
>> > > >>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during
>> the
>> > > >>> manual upgrade using the precompiled packages, the OSDs was down
>> with
>> > > the
>> > > >>> following messages:
>> > > >>>
>> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate
>> --all
>> > > >>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>> > > >>> Running command: /usr/bin/chown -R ceph:ceph
>> /var/lib/ceph/osd/ceph-2
>> > > >>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
>> > > >> prime-osd-dir
>> > > >>> --dev
>> > > >>>
>> > > >>
>> > >
>> >
>> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> > > >>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
>> > > >>> Running command: /usr/bin/ln -snf
>> > > >>>
>> > > >>
>> > >
>> >
>> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> > > >>> /var/lib/ceph/osd/ceph-2/block
>> > > >>> Running command: /usr/bin/chown -h ceph:ceph
>> > > >> 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-17 Thread Lomayani S. Laizer
Hello,

Comment out ProtectClock=true in /lib/systemd/system/ceph-osd@.service; that
should fix the issue.
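
If you would rather not edit the packaged unit file directly (a later package
upgrade may overwrite it), the same effect can be achieved with a drop-in
override. A minimal sketch, assuming the stock ceph-osd@.service unit and an
example OSD id of 2:

mkdir -p /etc/systemd/system/ceph-osd@.service.d
cat > /etc/systemd/system/ceph-osd@.service.d/override.conf <<'EOF'
[Service]
# ProtectClock=true implicitly restricts device access under systemd,
# which is what breaks bluestore's open of the OSD block device here
ProtectClock=false
EOF
systemctl daemon-reload
systemctl restart ceph-osd@2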



On Thu, Apr 8, 2021 at 9:49 AM Behzad Khoshbakhti 
wrote:

> I believe there is some of problem in the systemd as the ceph starts
> successfully by running manually using the ceph-osd command.
>
> On Thu, Apr 8, 2021, 10:32 AM Enrico Kern 
> wrote:
>
> > I agree. But why does the process start manual without systemd which
> > obviously has nothing to do with uid/gid 167 ? It is also not really a
> fix
> > to let all users change uid/gids...
> >
> > On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel  wrote:
> >
> > > Could there be more smooth migration? On my Ubuntu I have the same
> > > behavior and my ceph uid/gud are also 64045.
> > > I started with Luminous in 2018 when it was not containerized, and
> still
> > > continue updating it with apt.
> > > Since when we have got this hardcoded value of 167 ???
> > >
> > > Andrew Walker-Brown wrote:
> > > > UID and guid should both be 167 I believe.
> > > >
> > > > Make a note of the current values and change them to 167 using
> usermod
> > > and groupmod.
> > > >
> > > > I had just this issue. It’s partly to do with how perms are used
> within
> > > the containers I think.
> > > >
> > > > I changed the values to 167 in passwd everything worked again.
> Symptoms
> > > for me were OSDs not starting and permissions/file not found errors.
> > > >
> > > > Sent from my iPhone
> > > >
> > > > On 4 Apr 2021, at 13:43, Lomayani S. Laizer 
> > wrote:
> > > >
> > > > 
> > > > Hello,
> > > > Permissions are correct. guid/uid is 64045/64045
> > > >
> > > > ls -alh
> > > > total 32K
> > > > drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
> > > > drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
> > > > lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
> > > > -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
> > > > -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
> > > > -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
> > > > -rw--- 1 ceph ceph6 Apr  4 14:11 ready
> > > > -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
> > > > -rw--- 1 ceph ceph   10 Apr  4 14:11 type
> > > > -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
> > > >
> > > > On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
> > > andrew_jbr...@hotmail.com> wrote:
> > > > Are the file permissions correct and UID/guid in passwd  both 167?
> > > >
> > > > Sent from my iPhone
> > > >
> > > > On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
> > > >
> > > > Hello,
> > > >
> > > > +1 Am facing the same problem in ubuntu after upgrade to pacific
> > > >
> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1
> > bluestore(/var/lib/ceph/osd/
> > > > ceph-29/block) _read_bdev_label failed to open
> > > /var/lib/ceph/osd/ceph-29/block:
> > > > (1) Operation not permitted
> > > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR:
> unable
> > > to
> > > > open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
> > > > directoryESC[0m
> > > >
> > > > On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <
> > > khoshbakh...@gmail.com>
> > > > wrote:
> > > >
> > > >> It worth mentioning as I issue the following command, the Ceph OSD
> > > starts
> > > >> and joins the cluster:
> > > >> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup
> > > ceph
> > > >>
> > > >>
> > > >>
> > > >> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
> > > khoshbakh...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> Hi all,
> > > >>>
> > > >>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during
> the
> > > >>> manual upgrade using the precompiled packages, the OSDs was down
> with
> > > the
> > > >>> following messages:
> > > >>>
> > > >>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate
> --all
> > > >>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> > > >>> Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-2
> > > >>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> > > >> prime-osd-dir
> > > >>> --dev
> > > >>>
> > > >>
> > >
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > > >>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> > > >>> Running command: /usr/bin/ln -snf
> > > >>>
> > > >>
> > >
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > > >>> /var/lib/ceph/osd/ceph-2/block
> > > >>> Running command: /usr/bin/chown -h ceph:ceph
> > > >> /var/lib/ceph/osd/ceph-2/block
> > > >>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> > > >>> Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-2
> > > >>> Running command: /usr/bin/systemctl enable
> > > >>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > > >>> Running command: 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-08 Thread Behzad Khoshbakhti
I believe there is some problem in the systemd unit, as the OSD starts
successfully when run manually with the ceph-osd command.

On Thu, Apr 8, 2021, 10:32 AM Enrico Kern 
wrote:

> I agree. But why does the process start manual without systemd which
> obviously has nothing to do with uid/gid 167 ? It is also not really a fix
> to let all users change uid/gids...
>
> On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel  wrote:
>
> > Could there be more smooth migration? On my Ubuntu I have the same
> > behavior and my ceph uid/gud are also 64045.
> > I started with Luminous in 2018 when it was not containerized, and still
> > continue updating it with apt.
> > Since when we have got this hardcoded value of 167 ???
> >
> > Andrew Walker-Brown wrote:
> > > UID and guid should both be 167 I believe.
> > >
> > > Make a note of the current values and change them to 167 using usermod
> > and groupmod.
> > >
> > > I had just this issue. It’s partly to do with how perms are used within
> > the containers I think.
> > >
> > > I changed the values to 167 in passwd everything worked again. Symptoms
> > for me were OSDs not starting and permissions/file not found errors.
> > >
> > > Sent from my iPhone
> > >
> > > On 4 Apr 2021, at 13:43, Lomayani S. Laizer 
> wrote:
> > >
> > > 
> > > Hello,
> > > Permissions are correct. guid/uid is 64045/64045
> > >
> > > ls -alh
> > > total 32K
> > > drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
> > > drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
> > > lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
> > > -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
> > > -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
> > > -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
> > > -rw--- 1 ceph ceph6 Apr  4 14:11 ready
> > > -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
> > > -rw--- 1 ceph ceph   10 Apr  4 14:11 type
> > > -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
> > >
> > > On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
> > andrew_jbr...@hotmail.com> wrote:
> > > Are the file permissions correct and UID/guid in passwd  both 167?
> > >
> > > Sent from my iPhone
> > >
> > > On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
> > >
> > > Hello,
> > >
> > > +1 Am facing the same problem in ubuntu after upgrade to pacific
> > >
> > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1
> bluestore(/var/lib/ceph/osd/
> > > ceph-29/block) _read_bdev_label failed to open
> > /var/lib/ceph/osd/ceph-29/block:
> > > (1) Operation not permitted
> > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable
> > to
> > > open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
> > > directoryESC[0m
> > >
> > > On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <
> > khoshbakh...@gmail.com>
> > > wrote:
> > >
> > >> It worth mentioning as I issue the following command, the Ceph OSD
> > starts
> > >> and joins the cluster:
> > >> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup
> > ceph
> > >>
> > >>
> > >>
> > >> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
> > khoshbakh...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hi all,
> > >>>
> > >>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
> > >>> manual upgrade using the precompiled packages, the OSDs was down with
> > the
> > >>> following messages:
> > >>>
> > >>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> > >>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> > >>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> > >>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> > >> prime-osd-dir
> > >>> --dev
> > >>>
> > >>
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > >>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> > >>> Running command: /usr/bin/ln -snf
> > >>>
> > >>
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > >>> /var/lib/ceph/osd/ceph-2/block
> > >>> Running command: /usr/bin/chown -h ceph:ceph
> > >> /var/lib/ceph/osd/ceph-2/block
> > >>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> > >>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> > >>> Running command: /usr/bin/systemctl enable
> > >>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > >>> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> > >>> Running command: /usr/bin/systemctl start ceph-osd@2
> > >>> --> ceph-volume lvm activate successful for osd ID: 2
> > >>>
> > >>> Content of /var/log/ceph/ceph-osd.2.log
> > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to
> 64045:64045
> > >>> (ceph:ceph)
> > >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
> > >>> 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-08 Thread Enrico Kern
I agree. But why does the process start manually without systemd, which
obviously has nothing to do with uid/gid 167? It is also not really a fix
to make all users change their uid/gids...

On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel  wrote:

> Could there be more smooth migration? On my Ubuntu I have the same
> behavior and my ceph uid/gud are also 64045.
> I started with Luminous in 2018 when it was not containerized, and still
> continue updating it with apt.
> Since when we have got this hardcoded value of 167 ???
>
> Andrew Walker-Brown wrote:
> > UID and guid should both be 167 I believe.
> >
> > Make a note of the current values and change them to 167 using usermod
> and groupmod.
> >
> > I had just this issue. It’s partly to do with how perms are used within
> the containers I think.
> >
> > I changed the values to 167 in passwd everything worked again. Symptoms
> for me were OSDs not starting and permissions/file not found errors.
> >
> > Sent from my iPhone
> >
> > On 4 Apr 2021, at 13:43, Lomayani S. Laizer  wrote:
> >
> > 
> > Hello,
> > Permissions are correct. guid/uid is 64045/64045
> >
> > ls -alh
> > total 32K
> > drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
> > drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
> > lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
> > -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
> > -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
> > -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
> > -rw--- 1 ceph ceph6 Apr  4 14:11 ready
> > -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
> > -rw--- 1 ceph ceph   10 Apr  4 14:11 type
> > -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
> >
> > On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
> andrew_jbr...@hotmail.com> wrote:
> > Are the file permissions correct and UID/guid in passwd  both 167?
> >
> > Sent from my iPhone
> >
> > On 4 Apr 2021, at 12:29, Lomayani S. Laizer  lomlai...@gmail.com>> wrote:
> >
> > Hello,
> >
> > +1 Am facing the same problem in ubuntu after upgrade to pacific
> >
> > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
> > ceph-29/block) _read_bdev_label failed to open
> /var/lib/ceph/osd/ceph-29/block:
> > (1) Operation not permitted
> > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable
> to
> > open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
> > directoryESC[0m
> >
> > On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <
> khoshbakh...@gmail.com>
> > wrote:
> >
> >> It worth mentioning as I issue the following command, the Ceph OSD
> starts
> >> and joins the cluster:
> >> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup
> ceph
> >>
> >>
> >>
> >> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
> khoshbakh...@gmail.com>
> >> wrote:
> >>
> >>> Hi all,
> >>>
> >>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
> >>> manual upgrade using the precompiled packages, the OSDs was down with
> the
> >>> following messages:
> >>>
> >>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> >>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> >>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> >> prime-osd-dir
> >>> --dev
> >>>
> >>
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> >>> Running command: /usr/bin/ln -snf
> >>>
> >>
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >>> /var/lib/ceph/osd/ceph-2/block
> >>> Running command: /usr/bin/chown -h ceph:ceph
> >> /var/lib/ceph/osd/ceph-2/block
> >>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> >>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >>> Running command: /usr/bin/systemctl enable
> >>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >>> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> >>> Running command: /usr/bin/systemctl start ceph-osd@2
> >>> --> ceph-volume lvm activate successful for osd ID: 2
> >>>
> >>> Content of /var/log/ceph/ceph-osd.2.log
> >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
> >>> (ceph:ceph)
> >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
> >>> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
> >>> ceph-osd, pid 5484
> >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore
> empty
> >>> --pid-file
> >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
> >>> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to
> open
> >>> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
> >>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-07 Thread Wladimir Mutel

Could there be a smoother migration? On my Ubuntu I have the same behavior, and
my ceph uid/gid is also 64045.
I started with Luminous in 2018, when it was not containerized, and I still
update it with apt.
Since when have we had this hardcoded value of 167?

Andrew Walker-Brown wrote:

UID and guid should both be 167 I believe.

Make a note of the current values and change them to 167 using usermod and 
groupmod.

I had just this issue. It’s partly to do with how perms are used within the 
containers I think.

I changed the values to 167 in passwd everything worked again. Symptoms for me 
were OSDs not starting and permissions/file not found errors.

Sent from my iPhone

On 4 Apr 2021, at 13:43, Lomayani S. Laizer  wrote:


Hello,
Permissions are correct. guid/uid is 64045/64045

ls -alh
total 32K
drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
-rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
-rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
-rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
-rw--- 1 ceph ceph6 Apr  4 14:11 ready
-rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
-rw--- 1 ceph ceph   10 Apr  4 14:11 type
-rw--- 1 ceph ceph3 Apr  4 14:11 whoami

On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown 
mailto:andrew_jbr...@hotmail.com>> wrote:
Are the file permissions correct and UID/guid in passwd  both 167?

Sent from my iPhone

On 4 Apr 2021, at 12:29, Lomayani S. Laizer 
mailto:lomlai...@gmail.com>> wrote:

Hello,

+1 Am facing the same problem in ubuntu after upgrade to pacific

2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable to
open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
directoryESC[0m

On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti 
mailto:khoshbakh...@gmail.com>>
wrote:


It worth mentioning as I issue the following command, the Ceph OSD starts
and joins the cluster:
/usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph



On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti 
mailto:khoshbakh...@gmail.com>>
wrote:


Hi all,

As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
manual upgrade using the precompiled packages, the OSDs was down with the
following messages:

root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
--> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph

prime-osd-dir

--dev


/dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352

--path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /usr/bin/ln -snf


/dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352

/var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -h ceph:ceph

/var/lib/ceph/osd/ceph-2/block

Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/systemctl enable
ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
Running command: /usr/bin/systemctl start ceph-osd@2
--> ceph-volume lvm activate successful for osd ID: 2

Content of /var/log/ceph/ceph-osd.2.log
2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
(ceph:ceph)
2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
(0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
ceph-osd, pid 5484
2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
--pid-file
2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open
/var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or

directory*



root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
● ceph-osd@2.service - Ceph object storage daemon osd.2
 Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
vendor preset: enabled)
 Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
+0430; 50s ago
Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
--cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}

--id

2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
   Main PID: 5484 (code=exited, status=1/FAILURE)

Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
job, 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-05 Thread Behzad Khoshbakhti
It is running as the ceph user, not root.
Following is the startup configuration, which is also available at
https://paste.ubuntu.com/p/2kV8KhrRfV/.
[Unit]
Description=Ceph object storage daemon osd.%i
PartOf=ceph-osd.target
After=network-online.target local-fs.target time-sync.target
Before=remote-fs-pre.target ceph-osd.target
Wants=network-online.target local-fs.target time-sync.target
remote-fs-pre.target ceph-osd.target

[Service]
Environment=CLUSTER=ceph
EnvironmentFile=-/etc/default/ceph
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph
--setgroup ceph
ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER}
--id %i
LimitNOFILE=1048576
LimitNPROC=1048576
LockPersonality=true
MemoryDenyWriteExecute=true
# Need NewPrivileges via `sudo smartctl`
NoNewPrivileges=false
PrivateTmp=true
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
# flushing filestore requires access to /proc/sys/vm/drop_caches
ProtectKernelTunables=false
ProtectSystem=full
Restart=on-failure
RestartSec=10
RestrictSUIDSGID=true
StartLimitBurst=3
StartLimitInterval=30min
TasksMax=infinity

[Install]
WantedBy=ceph-osd.target

When I issue the following command, the ceph-osd process starts successfully.
However, it fails when launched from systemctl.
root@osd03:~# /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph
--setgroup ceph
2021-04-05T11:24:08.823+0430 7f91772c5f00 -1 osd.2 496 log_to_monitors
{default=true}
2021-04-05T11:24:09.943+0430 7f916f7b9700 -1 osd.2 496 set_numa_affinity
unable to identify public interface 'ens160' numa node: (0) Success
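
That symptom (the same binary works from an interactive shell but gets EPERM
only under systemd) points at the unit's sandboxing rather than at file
ownership. On systemd >= 245, ProtectClock=true implicitly adds a DeviceAllow
entry, which switches the service to a closed device policy and blocks access
to the OSD's block device. One way to inspect what the unit actually applies
(property names as reported by systemctl show; osd id 2 is just this host's
example):

systemctl show ceph-osd@2 --property=ProtectClock --property=DevicePolicy --property=DeviceAllow
# DevicePolicy=closed with only char-rtc allowed would explain the
# "Operation not permitted" on /var/lib/ceph/osd/ceph-2/block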


On Mon, Apr 5, 2021, 10:51 AM Behzad Khoshbakhti 
wrote:

> running as ceph user
>
> On Mon, Apr 5, 2021, 10:49 AM Anthony D'Atri 
> wrote:
>
>> Running as root, or as ceph?
>>
>> > On Apr 4, 2021, at 3:51 AM, Behzad Khoshbakhti 
>> wrote:
>> >
>> > It worth mentioning as I issue the following command, the Ceph OSD
>> starts
>> > and joins the cluster:
>> > /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup
>> ceph
>> >
>> >
>> >
>> > On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
>> khoshbakh...@gmail.com>
>> > wrote:
>> >
>> >> Hi all,
>> >>
>> >> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
>> >> manual upgrade using the precompiled packages, the OSDs was down with
>> the
>> >> following messages:
>> >>
>> >> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
>> >> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> >> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
>> prime-osd-dir
>> >> --dev
>> >>
>> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
>> >> Running command: /usr/bin/ln -snf
>> >>
>> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> /var/lib/ceph/osd/ceph-2/block
>> >> Running command: /usr/bin/chown -h ceph:ceph
>> /var/lib/ceph/osd/ceph-2/block
>> >> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
>> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> >> Running command: /usr/bin/systemctl enable
>> >> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
>> >> Running command: /usr/bin/systemctl start ceph-osd@2
>> >> --> ceph-volume lvm activate successful for osd ID: 2
>> >>
>> >> Content of /var/log/ceph/ceph-osd.2.log
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
>> >> (ceph:ceph)
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
>> >> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
>> >> ceph-osd, pid 5484
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore
>> empty
>> >> --pid-file
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
>> >> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to
>> open
>> >> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
>> >> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
>> directory*
>> >>
>> >>
>> >> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
>> >> ● ceph-osd@2.service - Ceph object storage daemon osd.2
>> >> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
>> >> vendor preset: enabled)
>> >> Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
>> >> +0430; 50s ago
>> >>Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
>> >> --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
>> >>Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}
>> --id

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Behzad Khoshbakhti
Following is the error:

Apr  4 19:09:39 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
job, restart counter is at 3.
Apr  4 19:09:39 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
Apr  4 19:09:39 osd03 systemd[1]: Starting Ceph object storage daemon
osd.2...
Apr  4 19:09:39 osd03 systemd[1]: Started Ceph object storage daemon osd.2.
Apr  4 19:09:39 osd03 ceph-osd[3272]: 2021-04-04T19:09:39.668+0430
7fb5d7819f00 -1 bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label
failed to open /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted
Apr  4 19:09:39 osd03 ceph-osd[3272]: 2021-04-04T19:09:39.668+0430
7fb5d7819f00 -1 #033[0;31m ** ERROR: unable to open OSD superblock on
/var/lib/ceph/osd/ceph-2: (2) No such file or directory#033[0m
Apr  4 19:09:39 osd03 systemd[1]: ceph-osd@2.service: Main process exited,
code=exited, status=1/FAILURE
Apr  4 19:09:39 osd03 systemd[1]: ceph-osd@2.service: Failed with result
'exit-code'.


And this listing shows that the files exist:
root@osd03:/var/lib/ceph/osd/ceph-2# ls -l
total 24
lrwxrwxrwx 1 ceph ceph 93 Apr  4 19:09 block ->
/dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
-rw------- 1 ceph ceph 37 Apr  4 19:09 ceph_fsid
-rw------- 1 ceph ceph 37 Apr  4 19:09 fsid
-rw------- 1 ceph ceph 55 Apr  4 19:09 keyring
-rw------- 1 ceph ceph  6 Apr  4 19:09 ready
-rw------- 1 ceph ceph 10 Apr  4 19:09 type
-rw------- 1 ceph ceph  2 Apr  4 19:09 whoami
root@osd03:/var/lib/ceph/osd/ceph-2#
root@osd03:/var/lib/ceph/osd/ceph-2#
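
One way to separate a plain ownership problem from systemd sandboxing is to
read a few bytes of the block device from an interactive shell as the ceph
user. A small read-only test, assuming the paths from this host; if this
succeeds while the unit still logs EPERM, the restriction comes from the unit,
not the filesystem:

sudo -u ceph dd if=/var/lib/ceph/osd/ceph-2/block of=/dev/null bs=4 count=1
# success here, combined with the systemd failure above, points at the
# service sandboxing (e.g. the ProtectClock directive discussed elsewhere
# in this thread) rather than at file or device ownership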


On Sun, Apr 4, 2021 at 7:06 PM Andrew Walker-Brown <
andrew_jbr...@hotmail.com> wrote:

> And after a reboot what errors are you getting?
>
> Sent from my iPhone
>
> On 4 Apr 2021, at 15:33, Behzad Khoshbakhti 
> wrote:
>
> 
> I have changed the uid and gid to 167, but still no progress.
> cat /etc/group | grep -i ceph
> ceph:x:167:
> root@osd03:~# cat /etc/passwd | grep -i ceph
> ceph:x:167:167:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
>
> On Sun, Apr 4, 2021 at 6:47 PM Andrew Walker-Brown <
> andrew_jbr...@hotmail.com> wrote:
>
>> UID and guid should both be 167 I believe.
>>
>> Make a note of the current values and change them to 167 using usermod
>> and groupmod.
>>
>> I had just this issue. It’s partly to do with how perms are used within
>> the containers I think.
>>
>> I changed the values to 167 in passwd everything worked again. Symptoms
>> for me were OSDs not starting and permissions/file not found errors.
>>
>> Sent from my iPhone
>>
>> On 4 Apr 2021, at 13:43, Lomayani S. Laizer  wrote:
>>
>> 
>> Hello,
>> Permissions are correct. guid/uid is 64045/64045
>>
>> ls -alh
>> total 32K
>> drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
>> drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
>> lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
>> -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
>> -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
>> -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
>> -rw--- 1 ceph ceph6 Apr  4 14:11 ready
>> -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
>> -rw--- 1 ceph ceph   10 Apr  4 14:11 type
>> -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
>>
>> On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
>> andrew_jbr...@hotmail.com> wrote:
>>
>>> Are the file permissions correct and UID/guid in passwd  both 167?
>>>
>>> Sent from my iPhone
>>>
>>> On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
>>>
>>> Hello,
>>>
>>> +1 Am facing the same problem in ubuntu after upgrade to pacific
>>>
>>> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
>>> ceph-29/block) _read_bdev_label failed to open
>>> /var/lib/ceph/osd/ceph-29/block:
>>> (1) Operation not permitted
>>> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable
>>> to
>>> open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
>>> directoryESC[0m
>>>
>>> On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti <
>>> khoshbakh...@gmail.com>
>>> wrote:
>>>
>>> > It worth mentioning as I issue the following command, the Ceph OSD
>>> starts
>>> > and joins the cluster:
>>> > /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup
>>> ceph
>>> >
>>> >
>>> >
>>> > On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
>>> khoshbakh...@gmail.com>
>>> > wrote:
>>> >
>>> >> Hi all,
>>> >>
>>> >> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
>>> >> manual upgrade using the precompiled packages, the OSDs was down with
>>> the
>>> >> following messages:
>>> >>
>>> >> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
>>> >> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>>> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>>> >> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
>>> > prime-osd-dir
>>> >> --dev
>>> >>
>>> >
>>> 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Andrew Walker-Brown
And after a reboot what errors are you getting?

Sent from my iPhone

On 4 Apr 2021, at 15:33, Behzad Khoshbakhti  wrote:


I have changed the uid and gid to 167, but still no progress.
cat /etc/group | grep -i ceph
ceph:x:167:
root@osd03:~# cat /etc/passwd | grep -i ceph
ceph:x:167:167:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin

On Sun, Apr 4, 2021 at 6:47 PM Andrew Walker-Brown 
mailto:andrew_jbr...@hotmail.com>> wrote:
UID and guid should both be 167 I believe.

Make a note of the current values and change them to 167 using usermod and 
groupmod.

I had just this issue. It’s partly to do with how perms are used within the 
containers I think.

I changed the values to 167 in passwd everything worked again. Symptoms for me 
were OSDs not starting and permissions/file not found errors.

Sent from my iPhone

On 4 Apr 2021, at 13:43, Lomayani S. Laizer 
mailto:lomlai...@gmail.com>> wrote:


Hello,
Permissions are correct. guid/uid is 64045/64045

ls -alh
total 32K
drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
-rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
-rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
-rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
-rw--- 1 ceph ceph6 Apr  4 14:11 ready
-rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
-rw--- 1 ceph ceph   10 Apr  4 14:11 type
-rw--- 1 ceph ceph3 Apr  4 14:11 whoami

On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown 
mailto:andrew_jbr...@hotmail.com>> wrote:
Are the file permissions correct and UID/guid in passwd  both 167?

Sent from my iPhone

On 4 Apr 2021, at 12:29, Lomayani S. Laizer 
mailto:lomlai...@gmail.com>> wrote:

Hello,

+1 Am facing the same problem in ubuntu after upgrade to pacific

2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable to
open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
directoryESC[0m

On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti 
mailto:khoshbakh...@gmail.com>>
wrote:

> It worth mentioning as I issue the following command, the Ceph OSD starts
> and joins the cluster:
> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
>
>
>
> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti 
> mailto:khoshbakh...@gmail.com>>
> wrote:
>
>> Hi all,
>>
>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
>> manual upgrade using the precompiled packages, the OSDs was down with the
>> following messages:
>>
>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> prime-osd-dir
>> --dev
>>
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
>> Running command: /usr/bin/ln -snf
>>
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> /var/lib/ceph/osd/ceph-2/block
>> Running command: /usr/bin/chown -h ceph:ceph
> /var/lib/ceph/osd/ceph-2/block
>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> Running command: /usr/bin/systemctl enable
>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
>> Running command: /usr/bin/systemctl start ceph-osd@2
>> --> ceph-volume lvm activate successful for osd ID: 2
>>
>> Content of /var/log/ceph/ceph-osd.2.log
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
>> (ceph:ceph)
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
>> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
>> ceph-osd, pid 5484
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
>> --pid-file
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
>> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open
>> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
>> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
> directory*
>>
>>
>> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
>> ● ceph-osd@2.service - Ceph object storage daemon osd.2
>> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
>> vendor preset: enabled)
>> Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
>> +0430; 50s ago
>>Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
>> --cluster ${CLUSTER} 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Behzad Khoshbakhti
I have changed the uid and gid to 167, but still no progress.
cat /etc/group | grep -i ceph
ceph:x:167:
root@osd03:~# cat /etc/passwd | grep -i ceph
ceph:x:167:167:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
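
Note that editing /etc/passwd and /etc/group only remaps the names: files and
device nodes created while ceph was 64045:64045 are still owned by those
numeric ids. A rough way to find and re-own the leftovers (the paths are the
usual Debian/Ubuntu ones and are an assumption here):

find /var/lib/ceph /var/log/ceph -uid 64045 -o -gid 64045
chown -R -h ceph:ceph /var/lib/ceph /var/log/ceph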

On Sun, Apr 4, 2021 at 6:47 PM Andrew Walker-Brown <
andrew_jbr...@hotmail.com> wrote:

> UID and guid should both be 167 I believe.
>
> Make a note of the current values and change them to 167 using usermod and
> groupmod.
>
> I had just this issue. It’s partly to do with how perms are used within
> the containers I think.
>
> I changed the values to 167 in passwd everything worked again. Symptoms
> for me were OSDs not starting and permissions/file not found errors.
>
> Sent from my iPhone
>
> On 4 Apr 2021, at 13:43, Lomayani S. Laizer  wrote:
>
> 
> Hello,
> Permissions are correct. guid/uid is 64045/64045
>
> ls -alh
> total 32K
> drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
> drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
> lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
> -rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
> -rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
> -rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
> -rw--- 1 ceph ceph6 Apr  4 14:11 ready
> -rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
> -rw--- 1 ceph ceph   10 Apr  4 14:11 type
> -rw--- 1 ceph ceph3 Apr  4 14:11 whoami
>
> On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
> andrew_jbr...@hotmail.com> wrote:
>
>> Are the file permissions correct and UID/guid in passwd  both 167?
>>
>> Sent from my iPhone
>>
>> On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
>>
>> Hello,
>>
>> +1 Am facing the same problem in ubuntu after upgrade to pacific
>>
>> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
>> ceph-29/block) _read_bdev_label failed to open
>> /var/lib/ceph/osd/ceph-29/block:
>> (1) Operation not permitted
>> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable to
>> open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
>> directoryESC[0m
>>
>> On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti > >
>> wrote:
>>
>> > It worth mentioning as I issue the following command, the Ceph OSD
>> starts
>> > and joins the cluster:
>> > /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup
>> ceph
>> >
>> >
>> >
>> > On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
>> khoshbakh...@gmail.com>
>> > wrote:
>> >
>> >> Hi all,
>> >>
>> >> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
>> >> manual upgrade using the precompiled packages, the OSDs was down with
>> the
>> >> following messages:
>> >>
>> >> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
>> >> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> >> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
>> > prime-osd-dir
>> >> --dev
>> >>
>> >
>> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
>> >> Running command: /usr/bin/ln -snf
>> >>
>> >
>> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> /var/lib/ceph/osd/ceph-2/block
>> >> Running command: /usr/bin/chown -h ceph:ceph
>> > /var/lib/ceph/osd/ceph-2/block
>> >> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
>> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> >> Running command: /usr/bin/systemctl enable
>> >> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> >> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
>> >> Running command: /usr/bin/systemctl start ceph-osd@2
>> >> --> ceph-volume lvm activate successful for osd ID: 2
>> >>
>> >> Content of /var/log/ceph/ceph-osd.2.log
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
>> >> (ceph:ceph)
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
>> >> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
>> >> ceph-osd, pid 5484
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore
>> empty
>> >> --pid-file
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
>> >> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to
>> open
>> >> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
>> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
>> >> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
>> > directory*
>> >>
>> >>
>> >> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
>> >> â— ceph-osd@2.service - Ceph object storage daemon osd.2
>> >> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
>> >> vendor preset: enabled)
>> >> Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
>> >> +0430; 50s ago
>> >>

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Andrew Walker-Brown
UID and GID should both be 167, I believe.

Make a note of the current values and change them to 167 using usermod and
groupmod.

I had just this issue. It’s partly to do with how permissions are used within
the containers, I think.

I changed the values to 167 in passwd and everything worked again. Symptoms for
me were OSDs not starting and permissions/file-not-found errors.
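
For anyone attempting that change, a rough sketch of the sequence (the old ids
64045:64045 and the Debian/Ubuntu paths are assumptions from this thread; stop
the Ceph daemons first, since usermod refuses to change the uid of an account
with running processes):

systemctl stop ceph-osd.target
usermod -u 167 ceph
groupmod -g 167 ceph
# files created under the old numeric ids keep them, so re-own the data
chown -R -h ceph:ceph /var/lib/ceph /var/log/ceph /etc/ceph
systemctl start ceph-osd.target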

Sent from my iPhone

On 4 Apr 2021, at 13:43, Lomayani S. Laizer  wrote:


Hello,
Permissions are correct. guid/uid is 64045/64045

ls -alh
total 32K
drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
-rw--- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
-rw--- 1 ceph ceph   37 Apr  4 14:11 fsid
-rw--- 1 ceph ceph   56 Apr  4 14:11 keyring
-rw--- 1 ceph ceph6 Apr  4 14:11 ready
-rw--- 1 ceph ceph3 Apr  4 14:11 require_osd_release
-rw--- 1 ceph ceph   10 Apr  4 14:11 type
-rw--- 1 ceph ceph3 Apr  4 14:11 whoami

On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown 
mailto:andrew_jbr...@hotmail.com>> wrote:
Are the file permissions correct and UID/guid in passwd  both 167?

Sent from my iPhone

On 4 Apr 2021, at 12:29, Lomayani S. Laizer 
mailto:lomlai...@gmail.com>> wrote:

Hello,

+1 Am facing the same problem in ubuntu after upgrade to pacific

2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable to
open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
directoryESC[0m

On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti 
mailto:khoshbakh...@gmail.com>>
wrote:

> It worth mentioning as I issue the following command, the Ceph OSD starts
> and joins the cluster:
> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
>
>
>
> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti 
> mailto:khoshbakh...@gmail.com>>
> wrote:
>
>> Hi all,
>>
>> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
>> manual upgrade using the precompiled packages, the OSDs was down with the
>> following messages:
>>
>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> prime-osd-dir
>> --dev
>>
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
>> Running command: /usr/bin/ln -snf
>>
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> /var/lib/ceph/osd/ceph-2/block
>> Running command: /usr/bin/chown -h ceph:ceph
> /var/lib/ceph/osd/ceph-2/block
>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> Running command: /usr/bin/systemctl enable
>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
>> Running command: /usr/bin/systemctl start ceph-osd@2
>> --> ceph-volume lvm activate successful for osd ID: 2
>>
>> Content of /var/log/ceph/ceph-osd.2.log
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
>> (ceph:ceph)
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
>> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
>> ceph-osd, pid 5484
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
>> --pid-file
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
>> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open
>> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
>> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
> directory*
>>
>>
>> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
>> ● ceph-osd@2.service - Ceph object storage daemon osd.2
>> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
>> vendor preset: enabled)
>> Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
>> +0430; 50s ago
>>Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
>> --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
>>Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}
> --id
>> 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
>>   Main PID: 5484 (code=exited, status=1/FAILURE)
>>
>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
>> job, restart counter is at 3.
>> Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon
> osd.2.
>> Apr 04 14:55:06 osd03 systemd[1]: 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Lomayani S. Laizer
Hello,
Permissions are correct. uid/gid is 64045/64045.

ls -alh
total 32K
drwxrwxrwt 2 ceph ceph  200 Apr  4 14:11 .
drwxr-xr-x 8 ceph ceph 4.0K Sep 18  2018 ..
lrwxrwxrwx 1 ceph ceph   93 Apr  4 14:11 block -> /dev/...
-rw------- 1 ceph ceph   37 Apr  4 14:11 ceph_fsid
-rw------- 1 ceph ceph   37 Apr  4 14:11 fsid
-rw------- 1 ceph ceph   56 Apr  4 14:11 keyring
-rw------- 1 ceph ceph    6 Apr  4 14:11 ready
-rw------- 1 ceph ceph    3 Apr  4 14:11 require_osd_release
-rw------- 1 ceph ceph   10 Apr  4 14:11 type
-rw------- 1 ceph ceph    3 Apr  4 14:11 whoami
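
The directory listing looks sane, but the block entry is only a symlink: the
ownership and mode of the device node it resolves to (typically a /dev/dm-*
node behind the LV) matter as well, and device-node ownership is reset by udev
on reboot. A quick check, assuming the paths from this thread:

ls -lL /var/lib/ceph/osd/ceph-29/block   # -L follows the symlink to the device node
ls -l /dev/dm-*                          # the backing device-mapper nodes should be ceph:ceph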

On Sun, Apr 4, 2021 at 3:07 PM Andrew Walker-Brown <
andrew_jbr...@hotmail.com> wrote:

> Are the file permissions correct and UID/guid in passwd  both 167?
>
> Sent from my iPhone
>
> On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:
>
> Hello,
>
> +1 Am facing the same problem in ubuntu after upgrade to pacific
>
> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
> ceph-29/block) _read_bdev_label failed to open
> /var/lib/ceph/osd/ceph-29/block:
> (1) Operation not permitted
> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ESC[0;31m ** ERROR: unable to
> open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
> directoryESC[0m
>
> On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti 
> wrote:
>
> > It worth mentioning as I issue the following command, the Ceph OSD starts
> > and joins the cluster:
> > /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
> >
> >
> >
> > On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
> khoshbakh...@gmail.com>
> > wrote:
> >
> >> Hi all,
> >>
> >> As I have upgrade my Ceph cluster from 15.2.10 to 16.2.0, during the
> >> manual upgrade using the precompiled packages, the OSDs was down with
> the
> >> following messages:
> >>
> >> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> >> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> > prime-osd-dir
> >> --dev
> >>
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> >> Running command: /usr/bin/ln -snf
> >>
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> /var/lib/ceph/osd/ceph-2/block
> >> Running command: /usr/bin/chown -h ceph:ceph
> > /var/lib/ceph/osd/ceph-2/block
> >> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >> Running command: /usr/bin/systemctl enable
> >> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> >> Running command: /usr/bin/systemctl start ceph-osd@2
> >> --> ceph-volume lvm activate successful for osd ID: 2
> >>
> >> Content of /var/log/ceph/ceph-osd.2.log
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
> >> (ceph:ceph)
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
> >> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
> >> ceph-osd, pid 5484
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
> >> --pid-file
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
> >> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to
> open
> >> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
> >> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
> > directory*
> >>
> >>
> >> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
> >> ● ceph-osd@2.service - Ceph object storage daemon osd.2
> >> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
> >> vendor preset: enabled)
> >> Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
> >> +0430; 50s ago
> >>Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
> >> --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
> >>Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}
> > --id
> >> 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
> >>   Main PID: 5484 (code=exited, status=1/FAILURE)
> >>
> >> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
> >> job, restart counter is at 3.
> >> Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon
> > osd.2.
> >> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request
> >> repeated too quickly.
> >> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with
> result
> >> 'exit-code'.
> >> Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage
> >> daemon osd.2.
> >> root@osd03:/var/lib/ceph/osd/ceph-2#
> >>
> >> root@osd03:~# lsblk
> >> NAME 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Andrew Walker-Brown
Are the file permissions correct, and are the UID/GID in passwd both 167?
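
If they are not, a rough sketch of aligning them to 167 (untested here; it
needs the OSDs stopped and the data re-owned afterwards, so treat it as an
outline rather than a recipe) would be something like:

systemctl stop ceph-osd.target
usermod -u 167 ceph
groupmod -g 167 ceph
chown -R ceph:ceph /var/lib/ceph /var/log/ceph
ceph-volume lvm activate --all

The last step re-chowns the block device nodes and restarts the OSDs, the same
way the activation output shown elsewhere in this thread does.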

Sent from my iPhone

On 4 Apr 2021, at 12:29, Lomayani S. Laizer  wrote:

Hello,

+1 I am facing the same problem on Ubuntu after upgrading to Pacific

2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ** ERROR: unable to
open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
directory

On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti 
wrote:

> It is worth mentioning that when I issue the following command, the Ceph OSD
> starts and joins the cluster:
> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
> 
> 
> 
> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti 
> wrote:
> 
>> Hi all,
>> 
>> As I upgraded my Ceph cluster from 15.2.10 to 16.2.0 (a manual upgrade
>> using the precompiled packages), the OSDs went down with the
>> following messages:
>> 
>> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
>> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> prime-osd-dir
>> --dev
>> 
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
>> Running command: /usr/bin/ln -snf
>> 
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> /var/lib/ceph/osd/ceph-2/block
>> Running command: /usr/bin/chown -h ceph:ceph
> /var/lib/ceph/osd/ceph-2/block
>> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
>> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
>> Running command: /usr/bin/systemctl enable
>> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
>> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
>> Running command: /usr/bin/systemctl start ceph-osd@2
>> --> ceph-volume lvm activate successful for osd ID: 2
>> 
>> Content of /var/log/ceph/ceph-osd.2.log
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
>> (ceph:ceph)
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
>> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
>> ceph-osd, pid 5484
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
>> --pid-file
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
>> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open
>> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
>> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
>> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
> directory*
>> 
>> 
>> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
>> ● ceph-osd@2.service - Ceph object storage daemon osd.2
>> Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
>> vendor preset: enabled)
>> Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
>> +0430; 50s ago
>>Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
>> --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
>>Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}
> --id
>> 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
>>   Main PID: 5484 (code=exited, status=1/FAILURE)
>> 
>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
>> job, restart counter is at 3.
>> Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon
> osd.2.
>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request
>> repeated too quickly.
>> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with result
>> 'exit-code'.
>> Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage
>> daemon osd.2.
>> root@osd03:/var/lib/ceph/osd/ceph-2#
>> 
>> root@osd03:~# lsblk
>> NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
>> fd0 2:014K  0 disk
>> loop0   7:00 55.5M  1 loop
>> /snap/core18/1988
>> loop1   7:10 69.9M  1 loop
>> /snap/lxd/19188
>> loop2   7:20 55.5M  1 loop
>> /snap/core18/1997
>> loop3   7:30 70.4M  1 loop
>> /snap/lxd/19647
>> loop4   7:40 32.3M  1 loop
>> /snap/snapd/11402
>> loop5   7:50 32.3M  1 loop
>> /snap/snapd/11107
>> sda 8:00   80G  0 disk
>> ├─sda1  8:101M  0 part
>> ├─sda2  8:201G  0 part /boot
>> └─sda3 

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Lomayani S. Laizer
Hello,

+1 I am facing the same problem on Ubuntu after upgrading to Pacific

2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ** ERROR: unable to
open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
directory
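
For what it is worth, the second line appears to follow from the first: the
label read fails with EPERM, so the OSD never gets as far as a superblock. A
quick way to test whether the ceph user can read the device at all outside of
systemd (adjust the OSD id and path to match your host):

ls -l "$(readlink -f /var/lib/ceph/osd/ceph-29/block)"
sudo -u ceph head -c 4096 /var/lib/ceph/osd/ceph-29/block > /dev/null

If that read works from a shell but the unit still fails, the restriction is
coming from how the service is launched rather than from file ownership.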

On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti 
wrote:

> It is worth mentioning that when I issue the following command, the Ceph OSD
> starts and joins the cluster:
> /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
>
>
>
> On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti 
> wrote:
>
> > Hi all,
> >
> > As I upgraded my Ceph cluster from 15.2.10 to 16.2.0 (a manual upgrade
> > using the precompiled packages), the OSDs went down with the
> > following messages:
> >
> > root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> > --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> > Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> prime-osd-dir
> > --dev
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> > Running command: /usr/bin/ln -snf
> >
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > /var/lib/ceph/osd/ceph-2/block
> > Running command: /usr/bin/chown -h ceph:ceph
> /var/lib/ceph/osd/ceph-2/block
> > Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> > Running command: /usr/bin/systemctl enable
> > ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> > Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> > Running command: /usr/bin/systemctl start ceph-osd@2
> > --> ceph-volume lvm activate successful for osd ID: 2
> >
> > Content of /var/log/ceph/ceph-osd.2.log
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
> > (ceph:ceph)
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
> > (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
> > ceph-osd, pid 5484
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
> > --pid-file
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
> > bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open
> > /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
> > 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
> > OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or
> directory*
> >
> >
> > root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
> > ● ceph-osd@2.service - Ceph object storage daemon osd.2
> >  Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
> > vendor preset: enabled)
> >  Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
> > +0430; 50s ago
> > Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
> > --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
> > Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}
> --id
> > 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
> >Main PID: 5484 (code=exited, status=1/FAILURE)
> >
> > Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
> > job, restart counter is at 3.
> > Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon
> osd.2.
> > Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request
> > repeated too quickly.
> > Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with result
> > 'exit-code'.
> > Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage
> > daemon osd.2.
> > root@osd03:/var/lib/ceph/osd/ceph-2#
> >
> > root@osd03:~# lsblk
> > NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > fd0 2:014K  0 disk
> > loop0   7:00 55.5M  1 loop
> > /snap/core18/1988
> > loop1   7:10 69.9M  1 loop
> > /snap/lxd/19188
> > loop2   7:20 55.5M  1 loop
> > /snap/core18/1997
> > loop3   7:30 70.4M  1 loop
> > /snap/lxd/19647
> > loop4   7:40 32.3M  1 loop
> > /snap/snapd/11402
> > loop5   7:50 32.3M  1 loop
> > /snap/snapd/11107
> > sda 8:00   80G  0 disk
> > ├─sda1  8:101M  0 part
> > ├─sda2  8:201G  0 part /boot
> > └─sda3  8:30   79G  0 part
> >   

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-04 Thread Behzad Khoshbakhti
It is worth mentioning that when I issue the following command, the Ceph OSD
starts and joins the cluster:
/usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
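
Since that is the same binary, user, and group the unit itself uses, the
difference has to be in how systemd launches the process. One way to compare
the two environments with stock systemd tooling (nothing Ceph-specific
assumed):

systemctl cat ceph-osd@2
systemd-analyze security ceph-osd@2.service

The first prints the full unit file, including any sandboxing or hardening
directives; the second (available on newer systemd releases) lists which
restrictions are actually applied to the unit. Once the underlying cause is
addressed, systemctl reset-failed ceph-osd@2 clears the "Start request
repeated too quickly" state so the unit can be started normally again.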



On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti 
wrote:

> Hi all,
>
> As I upgraded my Ceph cluster from 15.2.10 to 16.2.0 (a manual upgrade
> using the precompiled packages), the OSDs went down with the
> following messages:
>
> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir
> --dev
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> Running command: /usr/bin/ln -snf
> /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352
> /var/lib/ceph/osd/ceph-2/block
> Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> Running command: /usr/bin/systemctl enable
> ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> Running command: /usr/bin/systemctl start ceph-osd@2
> --> ceph-volume lvm activate successful for osd ID: 2
>
> Content of /var/log/ceph/ceph-osd.2.log
> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 set uid:gid to 64045:64045
> (ceph:ceph)
> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 ceph version 16.2.0
> (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process
> ceph-osd, pid 5484
> 2021-04-04T14:54:56.625+0430 7f4afbac0f00  0 pidfile_write: ignore empty
> --pid-file
> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1*
> bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open
> /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted*
> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1  *** ERROR: unable to open
> OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or directory*
>
>
> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
> ● ceph-osd@2.service - Ceph object storage daemon osd.2
>  Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled;
> vendor preset: enabled)
>  Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06
> +0430; 50s ago
> Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh
> --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
> Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id
> 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
>Main PID: 5484 (code=exited, status=1/FAILURE)
>
> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
> job, restart counter is at 3.
> Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request
> repeated too quickly.
> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with result
> 'exit-code'.
> Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage
> daemon osd.2.
> root@osd03:/var/lib/ceph/osd/ceph-2#
>
> root@osd03:~# lsblk
> NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> fd0 2:014K  0 disk
> loop0   7:00 55.5M  1 loop
> /snap/core18/1988
> loop1   7:10 69.9M  1 loop
> /snap/lxd/19188
> loop2   7:20 55.5M  1 loop
> /snap/core18/1997
> loop3   7:30 70.4M  1 loop
> /snap/lxd/19647
> loop4   7:40 32.3M  1 loop
> /snap/snapd/11402
> loop5   7:50 32.3M  1 loop
> /snap/snapd/11107
> sda 8:00   80G  0 disk
> ├─sda1  8:101M  0 part
> ├─sda2  8:201G  0 part /boot
> └─sda3  8:30   79G  0 part
>   └─ubuntu--vg-ubuntu--lv 253:00 69.5G  0 lvm  /
> sdb 8:16   0   16G  0 disk
> └─sdb1  8:17   0   16G  0 part
>
> └─ceph--9d37674b--a269--4239--aa9e--66a3c74df76c-osd--block--2d3ffc61--e430--4
>  b89--bcd4--105b2df26352
>   253:10   16G  0 lvm
> root@osd03:~#
>
> root@osd03:/var/lib/ceph/osd/ceph-2# mount | grep -i ceph
> tmpfs on /var/lib/ceph/osd/ceph-2 type tmpfs (rw,relatime)
> root@osd03:/var/lib/ceph/osd/ceph-2#
>
> any help is much appreciated
> --
>
> Regards
>  Behzad Khoshbakhti
>  Computer Network Engineer (CCIE #58887)
>
>

-- 

Regards
 Behzad