Hi Otis,

We saw some improvement when increasing the size of the caches. Since then, we 
have followed Shawn's advice on the filterCache and given some additional RAM 
to the JVM in order to reduce GC pressure. Performance is very good right now, 
and while we are still experiencing some instability, it is not at the same 
level as before. With our current settings the number of evictions is actually 
very low, so we might be able to shrink some caches to free up additional 
memory for the JVM to use.
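For reference, the knobs we have been adjusting live in the <query> section of 
solrconfig.xml; a minimal sketch is below. The size and autowarmCount values 
are illustrative placeholders, not the settings we actually deployed:

```xml
<!-- Illustrative Solr 4.x cache settings. The sizes and autowarmCount
     values below are example numbers only; keep autowarmCount on the
     filterCache small, since filters can be slow to re-execute. -->
<query>
  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="32"/>

  <queryResultCache class="solr.LRUCache"
                    size="512"
                    initialSize="512"
                    autowarmCount="0"/>

  <documentCache class="solr.LRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>
</query>
```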

As for the queries, it is a set of 5 million queries taken from our logs, so 
they vary a lot. All I can say is that every query involves grouping/field 
collapsing and/or a radius search around a point. Our largest customer uses a 
set of 8-10 filters that are translated into fq parameters. The collection 
contains around 13 million documents distributed across 5 shards with 2 
replicas. The second collection has the same configuration and is used for 
indexing, or as a fail-over index in case the first one fails.
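As a rough sanity check on our sizing, Shawn's rule of thumb from the quoted 
reply below (each filter entry costs about maxDoc / 8 bytes per core) can be 
applied to the numbers above. The document counts come from this thread; the 
cache size of 512 is an assumed example value, not our actual setting:

```python
# Back-of-the-envelope filterCache heap estimate, per shard core.
# Rule of thumb (from Shawn's reply below): one bit per document per entry.
TOTAL_DOCS = 13_000_000   # documents in the collection (from this thread)
SHARDS = 5                # shards; replicas hold the same data per core
CACHE_SIZE = 512          # hypothetical filterCache size, for illustration

docs_per_core = TOTAL_DOCS // SHARDS          # ~2.6M docs per shard core
bytes_per_entry = docs_per_core / 8           # one bit per document
cache_bytes = bytes_per_entry * CACHE_SIZE    # worst case: cache is full

print(f"per entry: {bytes_per_entry / 1024:.0f} KiB")            # 317 KiB
print(f"full cache per core: {cache_bytes / (1024**2):.0f} MiB")  # 159 MiB
```

So even a completely full 512-entry filterCache would cost on the order of 
150-160 MiB of heap per core under these assumptions, which helps put the 
8GB heap in perspective.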

We'll keep making adjustments today, but we are pretty close to having 
something that performs well while remaining stable.

Thanks all for your help.



> -----Original Message-----
> From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
> Sent: June-03-14 12:17 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Strange behaviour when tuning the caches
> 
> Hi Jean-Sebastien,
> 
> One thing you didn't mention is whether, as you are increasing (I assume)
> cache sizes, you actually see performance improve?  If not, then maybe there
> is no value in increasing cache sizes.
> 
> I assume you changed only one cache at a time? Were you able to get any
> one of them to the point where there were no evictions without things
> breaking?
> 
> What are your queries like, can you share a few examples?
> 
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics Solr &
> Elasticsearch Support * http://sematext.com/
> 
> 
> On Mon, Jun 2, 2014 at 11:09 AM, Jean-Sebastien Vachon < jean-
> sebastien.vac...@wantedanalytics.com> wrote:
> 
> > Thanks for your quick response.
> >
> > Our JVM is configured with a heap of 8GB, so we are pretty close to
> > the "optimal" configuration you are mentioning. The only other
> > programs running are Zookeeper (which has its own storage device) and a
> > proprietary API (with a heap of 1GB) that we run on top of Solr to serve
> > our customers' requests.
> >
> > I will look into the filterCache to see if we can better use it.
> >
> > Thanks for your help
> >
> > > -----Original Message-----
> > > From: Shawn Heisey [mailto:s...@elyograg.org]
> > > Sent: June-02-14 10:48 AM
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: Strange behaviour when tuning the caches
> > >
> > > On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
> > > > We have yet to determine where the exact breaking point is.
> > > >
> > > > The two patterns we are seeing are:
> > > >
> > > > - less cache (around 20-30% hit ratio): poor performance but
> > > > overall good stability
> > >
> > > When caches are too small, a low hit ratio is expected.  Increasing
> > > them is a good idea, but only increase them a little bit at a time.
> > > The filterCache in particular should not be increased dramatically,
> > > especially the autowarmCount value.  Filters can take a very long time
> > > to execute, so a high autowarmCount can result in commits taking
> > > forever.
> > >
> > > Each filter entry can take up a lot of heap memory -- in terms of
> > > bytes, it is the number of documents in the core divided by 8.  This
> > > means that if the core has 10 million documents, each filter entry
> > > (for JUST that core) will take over a megabyte of RAM.
> > >
> > > > - more cache (over 90% hit ratio): improved performance but
> > > > almost no stability. In that case, we start seeing messages such
> > > > as "No shards hosting shard X" or "cancelElection did not find
> > > > election node to remove"
> > >
> > > This would not be a direct result of increasing the cache size,
> > > unless perhaps you've increased them so they are *REALLY* big and
> > > you're running out of RAM for the heap or OS disk cache.
> > >
> > > > Anyone, has any advice on what could cause this? I am beginning to
> > > > suspect the JVM version, is there any minimal requirements
> > > > regarding the JVM?
> > >
> > > Oracle Java 7 is recommended for all releases, and required for Solr
> > > 4.8.  You just need to stay away from 7u40, 7u45, and 7u51 because of
> > > bugs in Java itself.  Right now, the latest release is recommended,
> > > which is 7u60.  The 7u21 release that you are running should be
> > > perfectly fine.
> > >
> > > With six 9.4GB cores per node, you'll achieve the best performance
> > > if you have about 60GB of RAM left over for the OS disk cache to use
> > > -- the size of your index data on disk.  You did mention that you
> > > have 92GB of RAM per node, but you have not said how big your Java
> > > heap is, or whether there is other software on the machine that may
> > > be eating up RAM for its heap or data.
> > >
> > > http://wiki.apache.org/solr/SolrPerformanceProblems
> > >
> > > Thanks,
> > > Shawn
> > >
> >
> 
