Hi Frank.
Check your cluster for inactive/incomplete placement groups. I saw similar
behavior on Octopus when some PGs were stuck in an incomplete, inactive, or peering state.
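
If it helps, these are the kinds of checks I mean (a minimal sketch, assuming a
reasonably recent Ceph CLI; adjust for your release):

    # overall health detail, shows which PGs/OSDs are flagged
    ceph health detail

    # PGs stuck in problem states
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph pg dump_stuck stale

If everything reports active+clean, the stuck slow-ops counter for osd.580 may
just be stale state rather than a real PG problem.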

________________________________
From: Frank Schilder <fr...@dtu.dk>
Sent: Monday, May 3, 2021 3:42:48 AM
To: ceph-users@ceph.io <ceph-users@ceph.io>
Subject: [ceph-users] OSD slow ops warning not clearing after OSD down

Dear cephers,

I have a strange problem. An OSD went down and recovery finished. For some 
reason, I have a slow ops warning for the failed OSD stuck in the system:

    health: HEALTH_WARN
            430 slow ops, oldest one blocked for 36 sec, osd.580 has slow ops

The OSD is auto-out:

    | 580 | ceph-22 |    0  |    0  |    0   |     0   |    0   |     0   | autoout,exists |

It is probably a warning dating back to just before the failure. How can I clear
it?

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io