[ceph-users] Re: OSD_TOO_MANY_REPAIRS on random OSDs causing clients to hang

2023-04-26 Thread Thomas Hukkelberg
… Apr. 2023 at 13:55, Robert Sander wrote:
> On 26.04.23 13:24, Thomas Hukkelberg wrote:
>> [WRN] OSD_TOO_MANY_REPAIRS: Too many repaired reads on 1 OSDs
>>     osd.34 had 9936 reads repaired
>
> Are there any messages in the kernel log that indicate this device …
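
A minimal sketch of how that question could be answered for osd.34; the device path is a placeholder and ceph device ls-by-daemon is assumed to be available on this release:

  # Map osd.34 to its underlying drive (run from an admin node)
  ceph device ls-by-daemon osd.34

  # On the OSD host: look for I/O or medium errors in the kernel log
  dmesg -T | grep -iE 'medium error|i/o error|blk_update_request'

  # Check SMART health for the suspect drive (replace /dev/sdX)
  smartctl -a /dev/sdX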

[ceph-users] OSD_TOO_MANY_REPAIRS on random OSDs causing clients to hang

2023-04-26 Thread Thomas Hukkelberg
… +repair, active+clean+repair every few seconds? Any ideas on how to gracefully battle this problem? Thanks!
--thomas
Thomas Hukkelberg
tho...@hovedkvarteret.no
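
For reference, a hedged sketch of one way to drain a suspect OSD and clear the warning afterwards; the clear_shards_repaired command only exists on recent releases, so treat that step as an assumption:

  # Let the cluster move data off the suspect OSD
  ceph osd out 34

  # Wait until its PGs are active+clean elsewhere
  ceph -s
  ceph osd safe-to-destroy 34

  # After the disk is replaced, reset the repaired-reads counter so the
  # OSD_TOO_MANY_REPAIRS warning clears (recent releases only)
  ceph tell osd.34 clear_shards_repaired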

[ceph-users] Multiple cephfs MDS crashes with same assert_condition: state == LOCK_XLOCK || state == LOCK_XLOCKDONE

2021-08-09 Thread Thomas Hukkelberg
Hi, Today we suddenly experienced multiple MDS crashes during the day with an error we have not seen before. We run Octopus 15.2.13 with 4 ranks, 4 standby-replay MDSes and 1 passive standby. Any input on how to troubleshoot or resolve this would be most welcome. --- root@hk-cephnode-54:~# …
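
A short, hedged sketch of how the crash details could be collected before opening a tracker issue (the crash ID is a placeholder taken from the list output):

  # List crashes the cluster has recorded but not yet archived
  ceph crash ls-new

  # Show the backtrace and metadata for one crash
  ceph crash info <crash-id>

  # Confirm all MDS daemons run the same version and check the rank layout
  ceph versions
  ceph fs status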

[ceph-users] Preferred order of operations when changing crush map and pool rules

2021-03-30 Thread Thomas Hukkelberg
…pus. Should we upgrade to Octopus before the CRUSH changes? Any thoughts or insight on how to achieve this with minimal data movement and risk of cluster downtime would be welcome!
--thomas
--
Thomas Hukkelberg
tho...@hovedkvarteret.no
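
One common way to estimate data movement before committing CRUSH changes is to test the edited map offline; a hedged sketch, where the rule id and replica count are examples only:

  # Export and decompile the current CRUSH map
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt

  # Edit crush.txt, then recompile and test the mappings offline
  crushtool -c crush.txt -o crush.new
  crushtool -i crush.new --test --show-statistics --rule 1 --num-rep 3

  # Optionally pause rebalancing while injecting the new map
  ceph osd set norebalance
  ceph osd setcrushmap -i crush.new
  # ...verify PG states and peering, then let data move:
  ceph osd unset norebalance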

[ceph-users] CephFS error: currently failed to rdlock, waiting. clients crashing and evicted

2020-11-17 Thread Thomas Hukkelberg
… [WRN] client.3869410498 isn't responding to mclientcaps(revoke), ino 0x10001202b55 pending pAsLsXsFs issued pAsLsXsFsx, sent 64.045677 seconds ago
--thomas
--
Thomas Hukkelberg
tho...@hovedkvarteret.no / +47 971 81 192
supp...@hovedkvarteret.no / +47 966 44 999
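
A hedged sketch of how the client holding the caps might be identified; the MDS name and rank are placeholders, and dump_blocked_ops is run through the admin socket on the MDS host:

  # Which clients does the MDS consider unresponsive?
  ceph health detail

  # Map client.3869410498 to a hostname/mount
  ceph tell mds.0 session ls

  # On the active MDS host: see which operations are stuck behind the lock
  ceph daemon mds.<name> dump_blocked_ops

  # Last resort if the client never releases its caps
  ceph tell mds.0 client evict id=3869410498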