On 4/13/2017 11:51 AM, Chetas Joshi wrote:
> Thanks for the insights into the memory requirements. Looks like the
> cursor approach is going to require a lot of memory for millions of
> documents. If I run a query that returns only 500K documents while still
> keeping 100K docs per page, I don't see long GC pauses. So it is not
> really the number of rows per page but the overall number of docs. Maybe
> I can reduce the document cache and the field cache. What do you think?

Lucene handles the field cache automatically, and as far as I am aware
it is not configurable in any way.  Enabling docValues on the fields you
actually use will reduce the amount of memory the field cache requires.
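For reference, docValues is turned on per field in the schema.  A
minimal sketch (the field and type names here are illustrative
placeholders, not from your index):

```xml
<!-- schema.xml sketch: docValues="true" stores values in a
     column-oriented on-disk structure, so sorting and faceting read
     from disk instead of uninverting the field into heap memory.
     Field name and type below are placeholders. -->
<field name="example_field" type="string"
       indexed="true" stored="true" docValues="true"/>
```

Note that after adding docValues to an existing field, documents must
be reindexed for the change to take effect.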

The filterCache is typically going to be much larger than any of the
other configurable caches, because each entry is a bitset with one bit
per document in the index.  On a 200 million document index, each
filterCache entry will be 25 million bytes (200,000,000 / 8).  The
filterCache should not be configured with a large size -- typical
example configs have a size of 512, and 512 entries at 25 million bytes
each would use nearly 13 gigabytes of heap.  The other caches typically
have much smaller entries and therefore can usually be configured with
fairly large sizes.
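As a sketch of how that sizing plays out in solrconfig.xml (the numbers
here are illustrative, not a recommendation for any particular index):

```xml
<!-- solrconfig.xml sketch: a deliberately small filterCache.
     Each entry is a bitset of one bit per document:
       200,000,000 docs / 8 = 25,000,000 bytes (~25 MB) per entry
       size=512 -> ~12.8 GB of heap; size=64 -> ~1.6 GB.
     Attribute values below are examples only. -->
<filterCache class="solr.FastLRUCache"
             size="64"
             initialSize="64"
             autowarmCount="16"/>
```

Shrinking autowarmCount along with size also keeps commits fast, since
autowarming re-executes the cached filters against the new searcher.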

Thanks,
Shawn
