Thank you, Frank.

My focus is actually on performance tuning.
After your mail, I started investigating the client side.

The kernel tunings seem to be working well now; I haven't seen the warning
again since applying them.

Now I will continue with performance tuning.
I decided to distribute subvolumes across multiple pools instead of using
multi-active MDS. With this approach I will have multiple MDS daemons and
one CephFS client per pool per host, roughly as sketched below.
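Roughly what I have in mind (the filesystem, pool and group names below are
just placeholders, not my real setup):

  # one data pool per workload, with a subvolume group bound to it
  ceph osd pool create cephfs_data_poolA
  ceph fs add_data_pool myfs cephfs_data_poolA
  ceph fs subvolumegroup create myfs groupA --pool_layout cephfs_data_poolA

  ceph osd pool create cephfs_data_poolB
  ceph fs add_data_pool myfs cephfs_data_poolB
  ceph fs subvolumegroup create myfs groupB --pool_layout cephfs_data_poolB

  # each host then runs one kernel client mount per group/pool
  mount -t ceph <mon-addr>:/volumes/groupA /mnt/poolA -o name=clientA,secretfile=/etc/ceph/clientA.secret
  mount -t ceph <mon-addr>:/volumes/groupB /mnt/poolB -o name=clientB,secretfile=/etc/ceph/clientB.secret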

To hide the subvolume UUIDs, I'm using bind mounts (mount --bind) on the
clients, and I wonder whether this can cause performance issues on the
CephFS clients? A sketch of what I mean follows.
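Something like this (paths and names are just an example):

  # resolve the real subvolume path, which contains the uuid
  ceph fs subvolume getpath myfs mysubvol --group_name groupA
  #   -> /volumes/groupA/mysubvol/<uuid>

  # the filesystem is mounted once under /mnt/poolA, then the
  # uuid path is bind-mounted to a friendly name
  mount --bind /mnt/poolA/volumes/groupA/mysubvol/<uuid> /srv/mysubvol

As far as I understand, a bind mount is just a second view of the same
dentries/inodes inside the kernel, so I would not expect extra CephFS
traffic from it, but I would like to confirm that.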

Best regards.



Frank Schilder <fr...@dtu.dk> wrote on Sat, 27 Jan 2024 at 12:34:

> Hi Özkan,
>
> > ... The client is actually at idle mode and there is no reason to fail
> at all. ...
>
> if you re-read my message, you will notice that I wrote that it's not
> the client failing; it's a false positive error flag that is not
> cleared for idle clients.
>
> You seem to encounter exactly this situation and a simple
>
> echo 3 > /proc/sys/vm/drop_caches
>
> would probably have cleared the warning. There is nothing wrong with your
> client; it's an issue with the client-MDS communication protocol that is
> probably still under review. You will encounter these warnings every now
> and then until it's fixed.
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14