[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issues for the Ceph filesystem

2024-05-17 Thread Akash Warkhade
@Kotresh Hiremath Ravishankar Can you please help on the above?

On Fri, 17 May, 2024, 12:26 pm Akash Warkhade wrote:
> Hi Kotresh,
>
> Thanks for the reply.
> 1) There are no custom configs defined
> 2) Subtree pinning is not enabled
> 3) There were no warnings related t…
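For reference, subtree pinning (item 2 above) is normally enabled per directory from a CephFS client mount via an extended attribute. A minimal sketch, assuming a client mount at /mnt/cephfs (the paths and rank numbers are illustrative, not taken from this thread):

  # Pin one subtree to MDS rank 0 and another to rank 1,
  # spreading metadata load across the two active daemons
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/scratch
  # A value of -1 removes the pin again
  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/scratch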

[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issues for the Ceph filesystem

2024-05-17 Thread Akash Warkhade
> …re any warnings w.r.t. RADOS slowness?
> 4. Please share the MDS perf dump to check for latencies and other stuff.
>    $ ceph tell mds.<name> perf dump
>
> Thanks and Regards,
> Kotresh H R
>
> On Fri, May 17, 2024 at 11:01 AM Akash Warkhade wrote:
>> Hi,
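For reference, the perf dump Kotresh asks for can be captured and narrowed down like this; mds.myfs-a is a placeholder for a Rook-style MDS daemon name, and the jq filter is just one way to pull out the journal counters most relevant to trimming:

  # Full counter dump for one MDS daemon
  ceph tell mds.myfs-a perf dump
  # Only the journal (mds_log) section, which tracks segments and trim activity
  ceph tell mds.myfs-a perf dump | jq '.mds_log'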

[ceph-users] MDS behind on trimming every 4-5 weeks causing issues for the Ceph filesystem

2024-05-16 Thread Akash Warkhade
Hi,

We are using rook-ceph with operator 1.10.8 and Ceph 17.2.5. We are using a Ceph filesystem with 4 MDS daemons, i.e. 2 active & 2 standby. Every 3-4 weeks the filesystem has an issue, i.e. in ceph status we can see the warnings below:

2 MDSs report slow requests
2 MDSs behind on trimming
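For reference, the "behind on trimming" warning fires when the MDS journal holds more segments than mds_log_max_segments allows and trimming is not keeping up. A commonly discussed mitigation, sketched below, is to inspect and raise that ceiling; this is not a confirmed fix for this cluster, and mds.myfs-a is a placeholder daemon name:

  # Current setting on a running daemon
  ceph tell mds.myfs-a config get mds_log_max_segments
  # Raise it for all MDS daemons and watch whether the warning clears
  ceph config set mds mds_log_max_segments 256
  ceph health detail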

[ceph-users] Ambiguous MDS behind on trimming and slow ops issue on Ceph 17.2.5 with Rook 1.10.8 operator

2024-02-14 Thread Akash Warkhade
Guys,

We were facing a CephFS volume mount issue, and ceph status was showing:

MDS slow requests
MDS behind on trimming

Restarting the MDS pods resolved it, but we wanted to know the root cause. The issue started about 2 hours after one of the active MDS daemons crashed. So does that an active MDS…
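For reference, the usual starting point for root-causing a crashed MDS like the one described above is Ceph's crash module, which records a backtrace for each daemon crash; the id argument below is a placeholder:

  # List recorded crashes, newest last
  ceph crash ls
  # Show the backtrace and metadata for one crash
  ceph crash info <crash_id>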

[ceph-users] Consistent OSD crashes on Ceph 17.2.5 causing OSDs to go up and down

2023-12-27 Thread Akash Warkhade
We are running rook-ceph deployed as an operator in Kubernetes, with Rook version 1.10.8 and Ceph 17.2.5. It is working fine, but we are seeing a frequent OSD daemon crash every 3-4 days, after which the daemon restarts without any problem. We are also seeing flapping OSDs, i.e. OSDs going up and down. Recently a daemon crash happened for 2…
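For reference, a typical first pass at diagnosing crashing and flapping OSDs in a Rook deployment looks like the following; the pod name is a placeholder and the namespace assumes a default rook-ceph install:

  # Which OSDs are down, and where they live in the CRUSH tree
  ceph osd tree
  # Backtraces recorded for the crashed daemons
  ceph crash ls
  # Logs from the previous (crashed) container of one OSD pod
  kubectl -n rook-ceph logs rook-ceph-osd-2-<hash> --previous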