@Kotresh Hiremath Ravishankar
Can you please help with the above?
On Fri, 17 May 2024, 12:26 pm Akash Warkhade wrote:
> Hi Kotresh,
>
>
> Thanks for the reply.
> 1) There are no custom configs defined.
> 2) Subtree pinning is not enabled.
> 3) There were no warnings related to rados slowness.
> 4. Please share the mds perf dump to check for latencies and other stuff.
> $ ceph tell mds.<mds-id> perf dump
>
> Thanks and Regards,
> Kotresh H R
>
> On Fri, May 17, 2024 at 11:01 AM Akash Warkhade wrote:
>
>> Hi,
>>
>
Hi,
We are using rook-ceph with operator 1.10.8 and Ceph 17.2.5.
We are using a Ceph filesystem with 4 MDS daemons, i.e. 2 active & 2
standby. Every 3-4 weeks the filesystem has an issue, i.e. in ceph status
we can see the warnings below:
2 MDS report slow requests
2 MDS behind on trimming
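To put numbers on the "behind on trimming" warning, the MDS perf counters can be inspected. Below is a minimal sketch of reading the journal flush latency (`mds_log.jlat`) out of a saved dump; the JSON here is made-up sample data using the usual avgcount/sum perf-counter layout, and in practice the dump would come from `ceph tell mds.<mds-id> perf dump`:

```shell
# Hedged sketch: the JSON below is sample data, not real cluster output.
# In practice: ceph tell mds.<mds-id> perf dump > perf_dump.json
cat > perf_dump.json <<'EOF'
{"mds_log": {"jlat": {"avgcount": 4, "sum": 2.0, "avgtime": 0.5}}}
EOF

# avgcount = number of journal flushes, sum = total seconds spent flushing
python3 - <<'EOF'
import json
with open("perf_dump.json") as f:
    dump = json.load(f)
jlat = dump["mds_log"]["jlat"]
print("avg journal flush latency: %.3fs" % (jlat["sum"] / jlat["avgcount"]))
EOF
```

A steadily growing latency here (compared against a dump taken when the cluster is healthy) would point at slow journal writes to the metadata pool rather than at the MDS itself.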
Guys,
We were facing a CephFS volume mount issue, and ceph status was showing:
MDS slow requests
MDS behind on trimming
After restarting the MDS pods it was resolved, but we wanted to know the
root cause. It started about 2 hours after one of the active MDS daemons
crashed. So does that mean the active MDS crash caused it?
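For root-causing after the fact, a few standard Ceph CLI checks are worth capturing before restarting the MDS pods. This is a runbook sketch, not reproducible output; `<fs-name>` and `<crash-id>` are placeholders for your deployment:

```shell
# Post-incident checks (run against the live cluster).
ceph health detail                 # expands the MDS warnings into per-daemon detail
ceph fs status <fs-name>           # which ranks are active, and their request rates
ceph crash ls                      # recent daemon crashes with timestamps
ceph crash info <crash-id>         # backtrace of the MDS crash that preceded the warnings
ceph config get mds mds_log_max_segments   # journal trim threshold
```

Correlating the crash timestamp from `ceph crash ls` with the first appearance of the trimming warning is usually the quickest way to confirm or rule out the crashed MDS as the trigger.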
We are running rook-ceph deployed as an operator in Kubernetes, with rook
version 1.10.8 and Ceph 17.2.5.
It is working fine, but we are seeing an OSD daemon crash every 3-4 days,
after which it restarts without any problem; we are also seeing flapping
OSDs, i.e. OSDs going up and down.
Recently a daemon crash happened for 2
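For the OSD crashes and flapping, a similar triage sketch applies (again a runbook against a live cluster, with `<crash-id>` and `<osd-pod>` as placeholders):

```shell
# OSD crash / flapping triage.
ceph crash ls                      # which OSDs crashed, and when
ceph crash info <crash-id>         # backtrace of a specific crash
ceph osd tree down                 # which OSDs are currently marked down
ceph osd dump | grep flags         # check for noup/nodown flags masking the flapping
kubectl -n rook-ceph logs <osd-pod> --previous   # logs from the crashed container
```

The `--previous` flag on `kubectl logs` is the key one in a rook deployment: once the pod restarts cleanly, the crashing container's logs are otherwise gone.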