Greetings,

I've recently moved some of our Solr (3.6.1) instances to JDK 7u7
with the G1 GC, experimenting with max pause targets in the 20 to
100ms range. By and large it has been working well; or rather, with
very little tuning it behaves much better than my haphazard attempts
at tuning CMS ever did.
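
For reference, the relevant JVM options look roughly like the
following (the exact heap size and pause target vary per instance,
and the rest of the command line is omitted):

    java -Xms14g -Xmx14g -XX:+UseG1GC -XX:MaxGCPauseMillis=50 ...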

I have two instances in particular, one with a 14G heap and one with
a 60G heap. I'm attempting to squeeze out additional performance by
increasing Solr's cache sizes (the hit ratio still climbs, and the
eviction count still drops, each time I raise the max size). I'm
guessing this is the cause of some recent situations where the 14G
instance in particular, after 12-24 hours under hundreds of queries
per minute, climbs to 80%-90% of the heap and then spirals into
major-GC, long-pause territory.

I am wondering:
1) if anybody has experience tuning the G1 GC, especially for use with
Solr (what are decent max-pause times to use?)
2) how to better tune Solr's cache sizes - e.g. how to even tell the
actual amount of memory each cache is using (the stats show the
number of entries, not the number of bytes)
3) if there are any guidelines on when increasing a cache's size
(even if it does continue to improve the hit ratio) runs into
diminishing returns or even starts to hurt - e.g. if the
documentCache currently has a maxSize of 65536, has seen 4409275
evictions, and has a hit ratio of 0.74, should the max be increased
further? If so, how much larger should its max size be made, and how
much RAM needs to be added to the heap to accommodate it? (A sketch
of the relevant solrconfig.xml entry is below.)
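
For context, the documentCache in question is configured along these
lines in solrconfig.xml (the initialSize and autowarmCount values
here are just representative, not necessarily my exact settings):

    <documentCache class="solr.LRUCache"
                   size="65536"
                   initialSize="16384"
                   autowarmCount="0"/>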

I should mention that these Solr instances are read-only (so the
caches are probably more valuable than in other scenarios; we only
invalidate the searcher every 20-24 hours or so), and their indexes
(6G and 70G for the 14G- and 60G-heap instances, respectively) live
on IODrives, so I'm not as concerned about leaving RAM for Linux to
cache the index files (I'd much rather cache the post-transformed
values).

Thanks as always,
     Aaron
