Thanks.

# ceph version
ceph version 15.2.7 (88e41c6c49beb18add4fdb6b4326ca466d931db8) octopus (stable)



On Thu, Nov 18, 2021 at 3:28 PM Stefan Kooman <ste...@bit.nl> wrote:

> On 11/18/21 13:20, David Tinker wrote:
> > I just grepped all the OSD pod logs for error and warn and nothing comes
> up:
> >
> > # k logs -n rook-ceph rook-ceph-osd-10-659549cd48-nfqgk  | grep -i warn
> > etc
> >
> > I am assuming that would bring back something if any of them were
> unhappy.
>
> Your issue looks similar to another thread last week (thread pg
> inactive+remapped).
>
> What Ceph version are you running?
>
> I don't know if enabling debugging on osd.7 would reveal something.
>
> Maybe recovery can be triggered by moving the primary to another OSD with
> pg upmap. Check your failure domain to see which OSDs would be suitable.
>
> Gr. Stefan
>
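For anyone following along, the two suggestions in the quoted reply map onto commands roughly like the following. This is a sketch only: the PG id 2.1f and OSD ids 7 and 3 are placeholders for the actual stuck PG and a suitable target OSD in the same failure domain; in a Rook cluster these would typically be run from the rook-ceph-tools pod.

```shell
# Raise debug logging on the suspect OSD (remember to set it back afterwards)
ceph tell osd.7 config set debug_osd 20
ceph tell osd.7 config set debug_ms 1

# Inspect the stuck PG's up/acting sets and peering state
ceph pg 2.1f query

# pg-upmap requires all clients to speak Luminous or newer
ceph osd set-require-min-compat-client luminous

# Remap the PG from osd.7 to osd.3 to force re-peering and, with luck, recovery
ceph osd pg-upmap-items 2.1f 7 3

# To remove the explicit mapping again later:
ceph osd rm-pg-upmap-items 2.1f
```

Whether moving the primary actually unsticks the PG depends on why it is stuck in the first place, so check `ceph pg 2.1f query` output before and after.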
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io