Hi!

I have a few other questions regarding a Solr4 performance issue we're facing.

We're committing data to Solr4 every ~30 seconds (up to 20K rows). We use 
commit=false in the update URL and have only the hard commit setting in the 
Solr4 config:

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
  <maxDocs>100000</maxDocs>
  <openSearcher>true</openSearcher>
</autoCommit>
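
For comparison, a common alternative setup (a sketch only, assuming the stock solrconfig.xml layout; the 60-second soft commit interval is illustrative, not a recommendation) keeps hard commits for durability without opening a searcher, and lets a separate soft commit control when new documents become visible:

<!-- Hard commit: flushes to disk for durability. openSearcher=false
     avoids paying the searcher-reopen and cache-warming cost on
     every hard commit. -->
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
  <maxDocs>100000</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: makes recently added documents visible without a
     full flush to disk. Interval here (60s) is just an example. -->
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:60000}</maxTime>
</autoSoftCommit>

Whether that trade-off fits depends on how quickly newly indexed documents need to be searchable.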


Since we're not using soft commit at all (commit=false), the caches will not 
be reloaded on every commit and recently added documents will not be visible, 
correct?

What we see is that queries which usually take a few milliseconds occasionally 
take ~40 seconds. Can high I/O during a hard commit cause queries to slow down?

For some shards we see physical memory 98% full. We have a 60 GB machine 
(30 GB JVM heap, 28 GB free RAM, ~35 GB of index). We're ruling out high 
physical memory usage as the cause of the slow queries. We're in the process 
of reducing the JVM heap size anyway.

We have never run an optimize until now. Optimizing in QA didn't yield a 
performance gain.

Thanks much for all help.

-----Original Message-----
From: Shawn Heisey [mailto:s...@elyograg.org] 
Sent: Tuesday, February 18, 2014 4:55 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance

On 2/18/2014 2:14 PM, Joshi, Shital wrote:
> Thanks much for all suggestions. We're looking into reducing allocated heap 
> size of Solr4 JVM.
>
> We're using NRTCachingDirectoryFactory. Does it use MMapDirectory internally? 
> Can someone please confirm?

In Solr, NRTCachingDirectory does indeed use MMapDirectory as its 
default delegate.  That's probably also the case with Lucene -- these 
are Lucene classes, after all.

MMapDirectory is almost always the most efficient way to handle on-disk 
indexes.
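
For anyone verifying this in their own setup, the directory factory is declared in solrconfig.xml; a minimal sketch (the class name shown is the stock Solr default, and the property-substitution form is the usual convention):

<!-- NRTCachingDirectoryFactory wraps a delegate directory
     (MMapDirectory by default on 64-bit JVMs) and caches small,
     recently flushed segments in RAM for near-real-time search. -->
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>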

Thanks,
Shawn
