On Thu, Jan 24, 2019 at 6:21 PM Andras Pataki
<apat...@flatironinstitute.org> wrote:
>
> Hi Ilya,
>
> Thanks for the clarification - very helpful.
> I've lowered osd_map_message_max to 10, and this resolves the issue
> of the kernel being unhappy about large messages when the OSDMap
> changes.  One comment here, though: you mentioned that Luminous uses
> 40 as the default, which is indeed the case.  The documentation for
> Luminous (and master), however, says that the default is 100.

Looks like that page hasn't been kept up to date.  I'll fix that
section.
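
For anyone following along, the setting can be pinned in ceph.conf and,
on Luminous-era clusters, changed at runtime with injectargs.  A minimal
sketch (adjust the daemon targets to your deployment):

    # ceph.conf, [global] section -- persists across restarts
    osd map message max = 10

    # apply at runtime without restarting daemons
    ceph tell mon.* injectargs '--osd_map_message_max 10'
    ceph tell osd.* injectargs '--osd_map_message_max 10'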

>
> One other follow-up question on the kernel client about something I've
> been seeing while testing.  Does the kernel client clean up when the MDS
> asks it to due to cache pressure?  On one machine I ran a workload that
> touches a lot of files, so the kernel client accumulated over 4 million
> caps.  Many hours after all the activity finished (i.e. many hours after
> anything last accessed ceph on that node), the kernel client still held
> millions of caps, and the MDS periodically complains about clients not
> responding to cache pressure.  How is this supposed to be handled?
> Obviously asking the kernel to drop caches via /proc/sys/vm/drop_caches
> does a very thorough cleanup, but something in between would be better.

The kernel client sitting on way too many caps for way too long is
a long-standing issue.  Adding Zheng, who has recently been doing some
work to facilitate cap releases and to put a limit on the overall cap
count.
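
In the meantime, a middle ground between doing nothing and a full
drop_caches is to drop only dentries and inodes (value 2), since caps
are tied to inodes; the page cache is left alone.  A sketch, assuming
debugfs is mounted on the client and you have admin-socket access on
the MDS host (mds.<name> is a placeholder):

    # on the client: current cap counts (via debugfs)
    cat /sys/kernel/debug/ceph/*/caps

    # release caps by reclaiming dentries and inodes only
    echo 2 > /proc/sys/vm/drop_caches

    # on the MDS host: per-session cap counts
    ceph daemon mds.<name> session ls | grep num_caps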

Thanks,

                Ilya
