Thanks for your answer. 

We confirmed that it is not a GC issue. 

The auto-warming queries look good too, and queries before and after the long-
running query come back quickly. The only thing that stands out is that the 
shard on which the query takes a long time has a couple million more documents 
than the other shards. 
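
For reference, one way to compare per-shard document counts is to query each shard's core directly with distrib=false, which keeps the request from fanning out to the rest of the cluster. (The hostname, port, and core name below are placeholders, not taken from this thread.)

```shell
# Ask a single shard for its own document count only (distrib=false).
# Repeat against each shard's host and compare numFound in the responses.
curl "http://shard1-host:8983/solr/mycore/select?q=*:*&rows=0&distrib=false&wt=json"
```

The numFound value in each response is that shard's local document count.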

-----Original Message-----
From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com] 
Sent: Thursday, February 20, 2014 5:26 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr4 performance

Hi,

As for your first question, setting openSearcher to true means you will see
the new docs after every hard commit. Soft and hard commits only become
isolated from one another with that set to false.
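
In other words, if you want hard commits to handle durability only and soft commits to control visibility, the solrconfig.xml would look roughly like this (a sketch only; the interval values are placeholders, not recommendations):

```xml
<!-- Hard commit: flush segments to disk, but do NOT open a new searcher -->
<autoCommit>
  <maxTime>600000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: makes recently added documents visible to searches,
     without the I/O cost of a hard commit -->
<autoSoftCommit>
  <maxTime>30000</maxTime>
</autoSoftCommit>
```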

Your second problem might be explained by your large heap and garbage
collection. Walking a heap that large can take an appreciable amount of
time. You might consider turning on the JVM options for logging GC and
seeing if you can correlate your slow responses to times when your JVM is
garbage collecting.
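
For example, on a Java 7/8 HotSpot JVM the GC logging options would look something like this (the log path and start command are placeholders; adjust for your own deployment):

```shell
# Add these flags to the Solr start command; they write timestamped
# GC events to a log file you can correlate with slow-query times.
java -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -Xloggc:/var/log/solr/gc.log \
     -jar start.jar
```

You can then line up long pauses in gc.log against the timestamps of your slow responses.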

Hope that helps,
On Feb 20, 2014 4:52 PM, "Joshi, Shital" <shital.jo...@gs.com> wrote:

> Hi!
>
> I have a few other questions regarding the Solr4 performance issue we're
> facing.
>
> We're committing data to Solr4 every ~30 seconds (up to 20K rows). We use
> commit=false in the update URL. We have only the hard commit setting in the
> Solr4 config.
>
> <autoCommit>
>        <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
>        <maxDocs>100000</maxDocs>
>        <openSearcher>true</openSearcher>
>      </autoCommit>
>
>
> Since we're not using Soft commit at all (commit=false), the caches will
> not get reloaded for every commit and recently added documents will not be
> visible, correct?
>
> What we see is that queries which usually take a few milliseconds take ~40
> seconds once in a while. Can high IO during a hard commit cause queries to
> slow down?
>
> For some shards we see 98% full physical memory. We have a 60GB machine
> (30GB JVM, 28GB free RAM, ~35GB of index). We're ruling out high physical
> memory usage as the cause of slow queries. We're in the process of reducing
> the JVM size anyway.
>
> We have never run an optimize until now. Optimizing in QA didn't yield a
> performance gain.
>
> Thanks much for all help.
>
> -----Original Message-----
> From: Shawn Heisey [mailto:s...@elyograg.org]
> Sent: Tuesday, February 18, 2014 4:55 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr4 performance
>
> On 2/18/2014 2:14 PM, Joshi, Shital wrote:
> > Thanks much for all suggestions. We're looking into reducing allocated
> heap size of Solr4 JVM.
> >
> > We're using NRTCachingDirectoryFactory. Does it use MMapDirectory
> internally? Can someone please confirm?
>
> In Solr, NRTCachingDirectory does indeed use MMapDirectory as its
> default delegate.  That's probably also the case with Lucene -- these
> are Lucene classes, after all.
>
> MMapDirectory is almost always the most efficient way to handle on-disk
> indexes.
>
> Thanks,
> Shawn
>
>
