Hi Alexander,

I'd be suspicious that something is up with pool 25. Which pool is that? ('ceph osd pool ls detail') Knowing the pool and the CRUSH rule it's using is a good place to start. Then that can be compared against your CRUSH map (e.g. 'ceph osd tree') to see why Ceph is struggling to map that PG to a valid up set.
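Concretely, the steps above boil down to a few read-only Ceph CLI commands. This is just a sketch: the pool id 25 is inferred from the PG id 25.0 in your report, and which rule to look at depends on what the pool listing shows.

```shell
# Identify the pool behind PG 25.0 and note its crush_rule
ceph osd pool ls detail | grep 'pool 25'

# Dump the CRUSH rules to see the placement constraints that rule imposes
ceph osd crush rule dump

# Show the CRUSH hierarchy, weights, and up/in state of every OSD
ceph osd tree

# Ask Ceph directly about the stuck PG (peering state, blocked_by, etc.)
ceph pg 25.0 query
```

If the rule requires a failure domain (e.g. host or rack) that the current tree can no longer satisfy, that would explain the empty up set for 25.0.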
Josh

On Tue, Oct 25, 2022 at 6:45 AM Alexander Fiedler <alexander.fied...@imos.net> wrote:
>
> Hello,
>
> we run a ceph cluster with the following error, which came up suddenly without
> any maintenance/changes:
>
> HEALTH_WARN Reduced data availability: 1 pg stale; Degraded data redundancy:
> 1 pg undersized
>
> The PG in question is PG 25
>
> Output of ceph pg dump_stuck stale:
>
> PG_STAT  STATE                             UP  UP_PRIMARY  ACTING   ACTING_PRIMARY
> 25.0     stale+active+undersized+remapped  []  -1          [66,64]  66
>
> Both acting OSDs and the mons+managers were rebooted. All OSDs in the cluster
> are up.
>
> Do you have any idea why 1 PG is stuck?
>
> Best regards
>
> Alexander Fiedler
>
>
> --
> imos Gesellschaft fuer Internet-Marketing und Online-Services mbH
> Alfons-Feifel-Str. 9 // D-73037 Goeppingen // Stauferpark Ost
> Tel: 07161 93339- // Fax: 07161 93339-99 // Internet: www.imos.net
>
> Registered in the commercial register of the local court of Ulm, HRB 532571
> Represented by the managing directors Alfred and Rolf Wallender
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io