Thanks.
The heap dump indicated that most of the space was occupied by the caches
(filterCache and documentCache in my case).
I followed your suggestion of removing the maxRAMMB limit on filterCache
and documentCache and decreasing the number of entries allowed.
It did have a significant impact on the used heap size, so I guess I have
to find the sweet spot between hit ratio and cache size.
Still, the Old Generation does not seem to shrink significantly even when I
force a full GC (using jvisualvm).
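
For reference, the caches are now configured roughly along these lines in
solrconfig.xml (the entry counts below are placeholders, not my actual
values; the point is just that the RAM limit is gone and the size attribute
does the capping):

<!-- illustrative entry counts only; note: no RAM-based limit set -->
<filterCache class="solr.FastLRUCache"
             size="256"
             initialSize="256"
             autowarmCount="0"/>

<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>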

Any other suggestions are welcome!
Thanks

Reinaldo

On Fri, Jun 26, 2020 at 5:05 AM Zisis T. <zist...@runbox.com> wrote:

> I have faced similar issues and the culprit was the filterCache when using
> maxRAMMB. More specifically, on a sharded Solr cluster with lots of faceting
> during search (which makes use of the filterCache in a distributed setting),
> I noticed that the maxRAMMB value was not respected. I had a value of 300MB
> set, but at some point I witnessed a cache instance of a couple of GB in a
> heap dump. What I found was that because the keys of the Map (BooleanQuery
> or something, if I recall correctly) did not implement the Accountable
> interface, they were NOT taken into account when calculating the cache's
> size. But all that was on a 7.5 cluster using FastLRUCache.
>
> There's also https://issues.apache.org/jira/browse/SOLR-12743 about a
> memory leak in the caches, which does not seem to have been fixed yet,
> although the trigger points of this leak are not clear. I've witnessed this
> as well on a 7.5 cluster, with multiple (>10) filterCache objects for a
> single core, each holding from a few MB to GBs.
>
> Try to get a heap dump from your cluster; the truth is almost always hidden
> there.
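>
> (If it helps, one common way to grab one, assuming the JDK tools are on the
> box and <pid> is your Solr process id:
>
> jmap -dump:live,format=b,file=/tmp/solr-heap.hprof <pid>
>
> Keep in mind the "live" option triggers a full GC before the dump is taken.)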
>
> One workaround which seems to alleviate the problem is to check your running
> Solr cluster, see how many cache entries actually give you a good hit ratio,
> and then get rid of the maxRAMMB attribute. Play only with the size.
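>
> For example (host and core name below are placeholders for your own setup),
> the hit ratio and current size of each cache are exposed through the admin
> mbeans handler:
>
> curl 'http://localhost:8983/solr/<core>/admin/mbeans?cat=CACHE&stats=true&wt=json'
>
> The same numbers are also visible on the Plugins / Stats page of the Admin UI.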
