Hi,
I am stuck with a Ceph cluster showing multiple PG errors after several OSDs
stopped. Starting the OSDs manually again didn't help: the OSD service soon
stops again. There is no issue with the HDDs for sure, but for some reason
the OSDs keep stopping.

I am running ceph version 15.2.5 in Podman containers.
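
Am I looking in the right places for why the daemons exit? These are the
commands I know of; osd.3 and <fsid> are placeholders, and I am assuming a
cephadm-managed deployment since the daemons run under Podman:

    # crash reports the cluster has recorded for failed daemons
    ceph crash ls
    ceph crash info <crash-id>

    # log of a single OSD daemon, run on the host it lives on
    cephadm logs --name osd.3
    # or directly via systemd (the unit naming assumes cephadm)
    journalctl -u ceph-<fsid>@osd.3.service --since "1 hour ago"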

How do I recover from these PG failures?
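
My plan was to first map the unhealthy PGs back to specific OSDs with the
standard status commands below (nothing cluster-specific assumed), and then
work through each PG state, but I am not sure of the right order:

    ceph health detail        # names each problem PG and the OSDs it maps to
    ceph osd tree             # shows which OSDs are down or out
    ceph pg dump_stuck stale  # lists the PGs stuck stale
    ceph pg <pgid> query      # peering detail for one PG, e.g. why it is down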

Can someone help me recover this, or suggest where to look further? The pgs
section of the cluster status currently shows:

    pgs:     0.360% pgs not active
             124186/5082364 objects degraded (2.443%)
             29899/5082364 objects misplaced (0.588%)
             670 active+clean
             69  active+undersized+remapped
             26  active+undersized+degraded+remapped+backfill_wait
             16  active+undersized+remapped+backfill_wait
             15  active+undersized+degraded+remapped
             13  active+clean+remapped
             9   active+recovery_wait+degraded
             4   active+remapped+backfill_wait
             3   stale+down
             3   active+undersized+remapped+inconsistent
             2   active+recovery_wait+degraded+remapped
             1   active+recovering+degraded+remapped
             1   active+clean+remapped+inconsistent
             1   active+recovering+degraded
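
In particular, for the 3 stale+down PGs and the 4 PGs flagged inconsistent,
is something along these lines the right direction once the OSDs stay up?
(2.1a is a made-up example PG id; I have not run these yet.)

    # for a stale+down PG: check which OSDs it is waiting on
    ceph pg 2.1a query

    # for an inconsistent PG: inspect the scrub errors, then repair
    rados list-inconsistent-obj 2.1a --format=json-pretty
    ceph pg repair 2.1a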
