Very nice article - thank you!  Is there a similar article available when the index is on HDFS?  Sorry to hijack!  I'm very interested in how we can improve cache/general performance when running with HDFS.

-Joe


On 9/18/2017 11:35 AM, Erick Erickson wrote:
<filterCache class="solr.FastLRUCache" size="20000" initialSize="4096"
autowarmCount="512"/>

This is suspicious too. Each entry can be up to about
maxDoc/8 bytes plus the string size of the fq clause,
and you can have up to 20,000 of them. An autowarmCount of 512 is
almost never a good thing.
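To put rough numbers on that worst case, here is an illustrative sketch (the 10M-doc index size and average fq length are assumptions, not figures from this thread):

```python
# Rough worst-case estimate of Solr filterCache memory use.
# Each entry can hold a bitset of maxDoc bits (maxDoc/8 bytes)
# plus the fq clause string itself.

def filter_cache_bytes(max_doc, num_entries, avg_fq_chars=50):
    """Worst-case bytes if every entry holds a full bitset."""
    bitset_bytes = max_doc // 8
    # Java strings are UTF-16, ~2 bytes per char (object overhead ignored)
    fq_bytes = avg_fq_chars * 2
    return num_entries * (bitset_bytes + fq_bytes)

# Hypothetical example: 10M docs, cache filled to its size=20000 limit
total = filter_cache_bytes(10_000_000, 20_000)
print(f"{total / 2**30:.1f} GiB")  # prints "23.3 GiB"
```

Even with a smaller index, a full 20,000-entry cache of bitsets adds up quickly, which is why shrinking size and autowarmCount is usually the first lever to pull.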

Walter's comments about your memory are spot on of course, see:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

Best,
Erick

On Mon, Sep 18, 2017 at 7:59 AM, Walter Underwood <wun...@wunderwood.org> wrote:
A 29G heap on a 30G machine is still a bad config. That leaves no space for the OS,
file buffers, or any other processes.

Try with 8G.
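The arithmetic behind that suggestion can be sketched as follows (the 2 GB OS/other-process overhead is an assumed figure for illustration):

```python
# Sketch of the heap-vs-page-cache tradeoff on a 30 GB machine.
# Leaving most RAM to the OS lets Lucene's memory-mapped index
# files stay in the page cache.

machine_gb = 30
heap_gb = 8          # e.g. start Solr with -Xms8g -Xmx8g
os_overhead_gb = 2   # assumed OS / other-process footprint

page_cache_gb = machine_gb - heap_gb - os_overhead_gb
print(f"{page_cache_gb} GB left for the OS page cache")  # prints "20 GB ..."
```

With a 29G heap the same arithmetic leaves the page cache essentially empty, so every index read goes to disk.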

Also, give us some information about the number of docs, size of the indexes, 
and the kinds of search features you are using.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


On Sep 18, 2017, at 7:55 AM, shamik <sham...@gmail.com> wrote:

Apologies, 290gb was a typo on my end; it should read 29gb instead. I started
with my 5.5 configuration of limiting the RAM to 15gb, but nodes started going
down once memory reached the 15gb ceiling. I tried bumping it up to 29gb since
memory seemed to stabilize at 22gb after running for a few hours; of course,
it didn't help eventually. I did try the G1 collector. Though garbage
collection was happening more efficiently compared to CMS, it still brought the
nodes down after a while.

The part I'm trying to understand is whether the memory footprint is higher
in 6.6 and whether I need an instance with more RAM (>30gb in my case). I
haven't added any post-5.5 features, which should rule out the possibility of a
memory leak.




