I have created a keyring for osd.3, but the pod is still not booting up.

As outlined:
https://access.redhat.com/solutions/3524771

ceph auth export osd.2 -o osd.2.export
cp osd.2.export osd.3.export
ceph auth import -i osd.3.export
imported keyring
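
For reference, the exported file keeps the original entity name, so I assume
the copy has to be edited to name osd.3 before the import. Roughly what I
expect osd.3.export to contain (key value elided, caps as commonly set for an
OSD):

[osd.3]
        key = <key value>
        caps mgr = "allow profile osd"
        caps mon = "allow profile osd"
        caps osd = "allow *"

The same key would then also need to end up in
/var/lib/ceph/osd/ceph-3/keyring inside the pod.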


Any suggestions?

Thanks!

On Tue, Sep 21, 2021 at 8:34 AM Abdelillah Asraoui <aasra...@gmail.com>
wrote:

> Hi,
>
> One of the OSDs in the cluster went down. Is there a workaround to bring
> this OSD back up?
>
>
> Logs from the Ceph OSD pod show the following:
>
> kubectl -n rook-ceph logs rook-ceph-osd-3-6497bdc65b-pn7mg
>
> debug 2021-09-20T14:32:46.388+0000 7f930fe9cf00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (13) Permission denied
> debug 2021-09-20T14:32:46.389+0000 7f930fe9cf00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (13) Permission denied
> debug 2021-09-20T14:32:46.389+0000 7f930fe9cf00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-3/keyring: (13) Permission denied
> debug 2021-09-20T14:32:46.389+0000 7f930fe9cf00 -1 monclient: keyring not found
> failed to fetch mon config (--no-mon-config to skip)
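
Side note on the error above: it is "(13) Permission denied" rather than a
missing file, so the ownership/mode of the on-disk keyring may be worth
checking as well. A rough sketch, assuming the OSD runs as the ceph user
(uid/gid 167 in the Ceph container images):

ls -l /var/lib/ceph/osd/ceph-3/keyring
chown 167:167 /var/lib/ceph/osd/ceph-3/keyring   # ceph:ceph inside the container
chmod 600 /var/lib/ceph/osd/ceph-3/keyring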
>
> kubectl -n rook-ceph describe pod rook-ceph-osd-3-64
>
> Events:
>   Type     Reason   Age                      From     Message
>   ----     ------   ----                     ----     -------
>   Normal   Pulled   50m (x749 over 2d16h)    kubelet  Container image "ceph/ceph:v15.2.13" already present on machine
>   Warning  BackOff  19s (x18433 over 2d16h)  kubelet  Back-off restarting failed container
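
In case it helps, the events only show the back-off itself; the log of the
previous (crashed) container attempt can presumably be pulled directly with:

kubectl -n rook-ceph logs rook-ceph-osd-3-6497bdc65b-pn7mg --previous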
>
> ceph health detail | more
>
> HEALTH_WARN noout flag(s) set; 1 osds down; 1 host (1 osds) down; Degraded data redundancy: 180969/542907 objects degraded (33.333%), 225 pgs degraded, 225 pgs undersized
> [WRN] OSDMAP_FLAGS: noout flag(s) set
> [WRN] OSD_DOWN: 1 osds down
>     osd.3 (root=default,host=ab-test) is down
> [WRN] OSD_HOST_DOWN: 1 host (1 osds) down
>     host ab-test-mstr-1-cwan-net (root=default) (1 osds) is down
> [WRN] PG_DEGRADED: Degraded data redundancy: 180969/542907 objects degraded (33.333%), 225 pgs degraded, 225 pgs undersized
>     pg 3.4d is active+undersized+degraded, acting [2,0]
>     pg 3.4e is stuck undersized for 3d, current state active+undersized+degraded, last acting [0,2]
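
For completeness: once osd.3 is back up and in, I assume the noout flag has to
be cleared so the degraded PGs can recover, e.g.:

ceph osd unset noout
ceph -s          # watch the degraded object count drop
ceph osd tree    # confirm osd.3 is up and in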
>
>
> Thanks!
>