Jack

First off, thanks for applying your mind to our performance problem.

On 2014/06/02, 1:34 PM, Jack Krupansky wrote:
> Do you have enough system memory to fit the entire index in OS system memory so that the OS can fully cache it instead of thrashing with I/O? Do you see a lot of I/O or are the queries compute-bound?
Nice idea. The index is 200GB, and the machine currently has 128GB RAM. We are using SSDs, but disappointingly, installing them didn't reduce search times to acceptable levels. I'll have to check on your last question regarding I/O; I assume the queries are I/O-bound, but will double-check.

Currently, we are using

fsDirectory = new NRTCachingDirectory(fsDir, 5.0, 60.0);

Are you proposing we increase maxCachedMB or use the RAMDirectory? With the latter, we would still need to persist the index data to disk, as it is undergoing constant updates.
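For concreteness, here is a sketch of what bumping maxCachedMB might look like on our side (the path is a placeholder and the 512/2048 values are guesses, not tuned numbers; Lucene 4.x API assumed):

```java
import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NRTCachingDirectory;

public class NrtCacheSketch {
    public static void main(String[] args) throws Exception {
        Directory fsDir = FSDirectory.open(new File("/path/to/index")); // placeholder path
        // NRTCachingDirectory(delegate, maxMergeSizeMB, maxCachedMB):
        // newly flushed or merged segments smaller than maxMergeSizeMB are
        // kept in RAM, up to maxCachedMB total, before being written
        // through to the delegate directory.
        Directory dir = new NRTCachingDirectory(fsDir, 512.0, 2048.0);
        System.out.println(dir);
        dir.close();
    }
}
```

Note this only caches small, recently written segments; it doesn't hold the whole 200GB index in memory.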

> You said you have a 128GB machine, so that sounds small for your index. Have you tried a 256GB machine?
Nope... I didn't think it would make much of a difference. I suppose that if we could store the entire index in RAM, it would help. How does one do this with Lucene while still persisting the data?
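One common approach (assuming Lucene 4.x, and that the box has enough free RAM beyond the JVM heap) is to keep the index on disk and let the OS page cache do the in-memory work, e.g. via MMapDirectory rather than RAMDirectory. A sketch:

```java
import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MMapDirectory;

public class MMapSketch {
    public static void main(String[] args) throws Exception {
        // The index files stay on disk (so updates persist), while the OS
        // memory-maps them and, given enough free RAM, serves reads from
        // the page cache instead of the device.
        Directory dir = new MMapDirectory(new File("/path/to/index")); // placeholder path
        // ... open IndexWriter / DirectoryReader against `dir` as usual ...
        dir.close();
    }
}
```

The design point is to keep the JVM heap modest so the OS can use the remaining RAM as cache, rather than trying to hold the index on the heap.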

> How frequent are your commits for updates while doing queries?
We are constantly adding around ten to fifteen documents per second.
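At that rate, if each add is followed by a hard commit, the commits alone could hurt. A sketch of the usual near-real-time pattern with SearcherManager, where searchers are reopened from the writer without committing (names are illustrative; Lucene 4.x assumed):

```java
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;

public class NrtSearchSketch {
    // `writer` is the application's existing IndexWriter (assumption).
    static SearcherManager open(IndexWriter writer) throws Exception {
        // true = apply deletes on reopen; slightly costlier but usually wanted.
        return new SearcherManager(writer, true, new SearcherFactory());
    }

    static void runOneQuery(SearcherManager mgr) throws Exception {
        mgr.maybeRefresh();              // cheap NRT reopen, no commit needed
        IndexSearcher s = mgr.acquire(); // per-query acquire/release
        try {
            // ... run the query against s ...
        } finally {
            mgr.release(s);
        }
        // writer.commit() can then run on a slower background schedule,
        // purely for durability.
    }
}
```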

Thanks again

Jamie


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
