[ceph-users] Re: Ceph 17.2.7 to 18.2.0 issues

2023-12-11 Thread pclark6063
Thanks for this, I've replied above, but sadly a client eviction and remount didn't help.
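For readers following along, the eviction-and-remount step discussed in this thread would look roughly like the sketch below. This is an illustration only: the MDS daemon name, the client session id, the monitor address, and the mount point are placeholders, not values taken from the thread, and it assumes a kernel CephFS client.

  # list client sessions on one MDS daemon to find the session id
  ceph tell mds.cephfs.storage.example client ls
  # evict a specific client by its session id
  ceph tell mds.cephfs.storage.example client evict id=4305
  # on the client host: unmount, then remount the filesystem
  umount /mnt/cephfs
  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin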

[ceph-users] Re: Ceph 17.2.7 to 18.2.0 issues

2023-12-11 Thread pclark6063
Hi, thank you very much for the reply. So I evicted all my clients and still no luck. Checking for blocked ops returns 0 from each MDS service. Each MDS service is serving a different pool suffering the same issue. If I write any recent files, I can both stat and pull those, so I have zero issues
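Checking for blocked ops on each MDS, as described above, can be scripted along these lines (a minimal sketch; the daemon names are hypothetical stand-ins for whatever ceph fs status reports):

  # query each MDS daemon for operations stuck in flight
  for mds in cephfs_a.host1.aaaaaa cephfs_b.host2.bbbbbb; do
      ceph tell mds.$mds dump_blocked_ops
  done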

[ceph-users] Re: Ceph 17.2.7 to 18.2.0 issues

2023-12-06 Thread Venky Shankar
On Thu, Dec 7, 2023 at 12:49 PM Eugen Block wrote: > Hi, did you unmount your clients after the cluster poweroff? If this is the case, then a remount would kick things back into working order. > You could also enable debug logs in mds to see more information. Are there any blocked requests? You can
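Enabling MDS debug logs, as suggested here, would typically be done with ceph config (a sketch; the verbosity levels shown are common choices for debugging, not values given in the thread):

  # raise MDS and messenger log verbosity cluster-wide
  ceph config set mds debug_mds 20
  ceph config set mds debug_ms 1
  # revert to the defaults afterwards, since level 20 is very verbose
  ceph config set mds debug_mds 1/5
  ceph config set mds debug_ms 0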

[ceph-users] Re: Ceph 17.2.7 to 18.2.0 issues

2023-12-06 Thread Eugen Block
Hi, did you unmount your clients after the cluster poweroff? You could also enable debug logs in mds to see more information. Are there any blocked requests? You can query the mds daemon via cephadm shell or with an admin keyring like this: # ceph tell mds.cephfs.storage.lgmyqv
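The command above is cut off in the archive. Based on the surrounding discussion it presumably continues with an ops query; a sketch of what that likely looks like, with the remainder of the command being an assumption rather than the original text:

  # run inside a cephadm shell, or on any host with an admin keyring
  cephadm shell -- ceph tell mds.cephfs.storage.lgmyqv ops
  # or list only operations that are currently blocked
  cephadm shell -- ceph tell mds.cephfs.storage.lgmyqv dump_blocked_ops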