Re: Strange behaviour when tuning the caches

2014-06-04 Thread Joel Bernstein
The CollapsingQParserPlugin can be resource intensive, so you'll want to be
careful about how it's used, particularly with autowarming in the
queryResultCache. If you autowarm lots of queries while using the
CollapsingQParserPlugin, you'll be running lots of CPU and memory intensive
queries after opening a new searcher.
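
For reference, a minimal queryResultCache sketch for solrconfig.xml with a
deliberately small autowarmCount (the size and autowarmCount values here are
illustrative placeholders, not recommendations):

  <!-- a low autowarmCount limits how many (potentially expensive) collapse
       queries get replayed each time a new searcher opens -->
  <queryResultCache class="solr.LRUCache"
                    size="512"
                    initialSize="512"
                    autowarmCount="16"/>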

Also, you'll want to understand the memory profile of the
CollapsingQParserPlugin on your index. It uses more memory as the number of
unique values in the collapse field grows, regardless of how many of those
values actually appear in the search results.

So, be aware of the cardinality of the collapse field, and use
nullPolicy=expand if you have nulls in the collapse field. This null policy
is designed to lessen the memory impact when there are nulls in the collapse
field.
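
As an illustration, a collapse filter query with that null policy looks like
this (the field name group_id is just a placeholder):

  fq={!collapse field=group_id nullPolicy=expand}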

Also, it's a good idea to have one static warming query that exercises the
CollapsingQParserPlugin, as it can take time to warm. Autowarming the query
result cache might cover this in your case.
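
Here is a sketch of such a static warming query in solrconfig.xml, again using
the placeholder group_id field:

  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <!-- rows=0: we only want to exercise the collapse machinery -->
        <str name="q">*:*</str>
        <str name="fq">{!collapse field=group_id nullPolicy=expand}</str>
        <str name="rows">0</str>
      </lst>
    </arr>
  </listener>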

In general the CollapsingQParserPlugin should be faster than grouping when
you have a high number of distinct groups in the result set. But the
tradeoff is that it's more memory intensive than grouping when there is
a low number of distinct groups in the result set. Both the
CollapsingQParserPlugin and grouping (with ngroups) have a high memory
footprint when there is a large number of distinct groups in the result
set. If you're not using ngroups, grouping will always outperform the
CollapsingQParserPlugin.
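
For comparison, the two approaches are typically invoked like this (group_id is
again a placeholder field name):

  Grouping:  q=...&group=true&group.field=group_id&group.ngroups=true
  Collapse:  q=...&fq={!collapse field=group_id nullPolicy=expand}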

Joel Bernstein
Search Engineer at Heliosearch


On Tue, Jun 3, 2014 at 12:38 PM, Jean-Sebastien Vachon 
jean-sebastien.vac...@wantedanalytics.com wrote:

 Yes we are already using it.

  -Original Message-
  From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
  Sent: June-03-14 11:41 AM
  To: solr-user@lucene.apache.org
  Subject: Re: Strange behaviour when tuning the caches
 
  Hi,
 
  Have you seen https://wiki.apache.org/solr/CollapsingQParserPlugin ?
  May
  help with the field collapsing queries.
 
  Otis
  --
  Performance Monitoring * Log Analytics * Search Analytics
  Solr & Elasticsearch Support * http://sematext.com/
 
 
  On Tue, Jun 3, 2014 at 8:41 AM, Jean-Sebastien Vachon  jean-
  sebastien.vac...@wantedanalytics.com wrote:
 
   Hi Otis,
  
   We saw some improvement when increasing the size of the caches. Since
   then, we followed Shawn's advice on the filterCache and gave some
   additional RAM to the JVM in order to reduce GC. The performance is
   very good right now but we are still experiencing some instability but
   not at the same level as before.
   With our current settings the number of evictions is actually very low
   so we might be able to reduce some caches to free up some additional
   memory for the JVM to use.
  
   As for the queries, it is a set of 5 million queries taken from our
   logs so they vary a lot. All I can say is that all queries involve
   either grouping/field collapsing and/or radius search around a point.
   Our largest customer is using a set of 8-10 filters that are
   translated as fq parameters. The collection contains around 13 million
   documents distributed on 5 shards with 2 replicas. The second
   collection has the same configuration and is used for indexing or as a
   fail-over index in case the first one fails.
  
   We'll keep making adjustments today but we are pretty close to having
   something that performs well while remaining stable.
  
   Thanks all for your help.
  
  
  
-Original Message-
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
Sent: June-03-14 12:17 AM
To: solr-user@lucene.apache.org
Subject: Re: Strange behaviour when tuning the caches
   
Hi Jean-Sebastien,
   
One thing you didn't mention is whether, as you are increasing (I
assume) cache sizes, you actually see performance improve. If not, then
maybe there is no value in increasing cache sizes.
   
I assume you changed only one cache at a time? Were you able to get
any one of them to the point where there were no evictions without
things breaking?
   
What are your queries like, can you share a few examples?
   
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
   
   
On Mon, Jun 2, 2014 at 11:09 AM, Jean-Sebastien Vachon  jean-
sebastien.vac...@wantedanalytics.com wrote:
   
 Thanks for your quick response.

 Our JVM is configured with a heap of 8GB. So we are pretty close to
 the optimal configuration you are mentioning. The only other programs
 running are Zookeeper (which has its own storage device) and a
 proprietary API (with a heap of 1GB) we have on top of Solr to serve
 our customers' requests.

 I will look into the filterCache to see if we can better use it.

 Thanks for your help

  -Original Message-
  From: Shawn Heisey [mailto:s

RE: Strange behaviour when tuning the caches

2014-06-03 Thread Jean-Sebastien Vachon
Hi Otis,

We saw some improvement when increasing the size of the caches. Since then, we
followed Shawn's advice on the filterCache and gave some additional RAM to the
JVM in order to reduce GC. The performance is very good right now, but we are
still experiencing some instability, though not at the same level as before.
With our current settings the number of evictions is actually very low so we 
might be able to reduce some caches to free up some additional memory for the 
JVM to use.

As for the queries, it is a set of 5 million queries taken from our logs so 
they vary a lot. All I can say is that all queries involve either 
grouping/field collapsing and/or radius search around a point. Our largest 
customer is using a set of 8-10 filters that are translated into fq parameters.
The collection contains around 13 million documents distributed across 5 shards
with 2 replicas. The second collection has the same configuration and is used
for indexing or as a fail-over index in case the first one fails.
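
To give a concrete picture of that query shape (the field names, terms and
coordinates below are made up for illustration), a typical request looks
roughly like:

  q=*:*
  &fq={!geofilt sfield=location pt=45.50,-73.57 d=50}
  &fq=category:engineering
  &fq={!collapse field=group_id nullPolicy=expand}
  &rows=20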

We'll keep making adjustments today, but we are pretty close to having something
that performs well while remaining stable.

Thanks all for your help.



 -Original Message-
 From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
 Sent: June-03-14 12:17 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Strange behaviour when tuning the caches
 
 Hi Jean-Sebastien,
 
 One thing you didn't mention is whether, as you are increasing (I assume)
 cache sizes, you actually see performance improve. If not, then maybe there
 is no value in increasing cache sizes.
 
 I assume you changed only one cache at a time? Were you able to get any
 one of them to the point where there were no evictions without things
 breaking?
 
 What are your queries like, can you share a few examples?
 
 Otis
 --
 Performance Monitoring * Log Analytics * Search Analytics
 Solr & Elasticsearch Support * http://sematext.com/
 
 
 On Mon, Jun 2, 2014 at 11:09 AM, Jean-Sebastien Vachon  jean-
 sebastien.vac...@wantedanalytics.com wrote:
 
  Thanks for your quick response.
 
  Our JVM is configured with a heap of 8GB. So we are pretty close to
  the optimal configuration you are mentioning. The only other
  programs running are Zookeeper (which has its own storage device) and a
  proprietary API (with a heap of 1GB) we have on top of Solr to serve our
  customers' requests.
 
  I will look into the filterCache to see if we can better use it.
 
  Thanks for your help
 
   -Original Message-
   From: Shawn Heisey [mailto:s...@elyograg.org]
   Sent: June-02-14 10:48 AM
   To: solr-user@lucene.apache.org
   Subject: Re: Strange behaviour when tuning the caches
  
   On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
We have yet to determine where the exact breaking point is.
   
The two patterns we are seeing are:
   
-  less cache (around 20-30% hit/ratio), poor performance but
overall good stability
  
   When caches are too small, a low hit ratio is expected.  Increasing
   them
  is a
   good idea, but only increase them a little bit at a time.  The
  filterCache in
   particular should not be increased dramatically, especially the
   autowarmCount value.  Filters can take a very long time to execute,
   so a
  high
   autowarmCount can result in commits taking forever.
  
   Each filter entry can take up a lot of heap memory -- in terms of
   bytes,
  it is
   the number of documents in the core divided by 8.  This means that
   if the core has 10 million documents, each filter entry (for JUST
   that
   core) will take over a megabyte of RAM.
  
-  more cache (over 90% hit/ratio), improved performance but
almost no stability. In that case, we start seeing messages such
as No shards hosting shard X or cancelElection did not find
election node to remove
  
   This would not be a direct result of increasing the cache size,
   unless
  perhaps
   you've increased them so they are *REALLY* big and you're running
   out of RAM for the heap or OS disk cache.
  
Anyone, has any advice on what could cause this? I am beginning to
suspect the JVM version, is there any minimal requirements
regarding the JVM?
  
   Oracle Java 7 is recommended for all releases, and required for Solr
  4.8.  You
   just need to stay away from 7u40, 7u45, and 7u51 because of bugs in
   Java itself.  Right now, the latest release is recommended, which is 7u60.
   The
   7u21 release that you are running should be perfectly fine.
  
   With six 9.4GB cores per node, you'll achieve the best performance
   if you have about 60GB of RAM left over for the OS disk cache to use
   -- the
  size of
   your index data on disk.  You did mention that you have 92GB of RAM
   per node, but you have not said how big your Java heap is, or
   whether there
  is
   other software on the machine that may be eating up RAM for its heap
   or data.
  
   http://wiki.apache.org/solr/SolrPerformanceProblems
  
   Thanks,
   Shawn

Re: Strange behaviour when tuning the caches

2014-06-03 Thread Otis Gospodnetic
Hi,

Have you seen https://wiki.apache.org/solr/CollapsingQParserPlugin ?  May
help with the field collapsing queries.

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Tue, Jun 3, 2014 at 8:41 AM, Jean-Sebastien Vachon 
jean-sebastien.vac...@wantedanalytics.com wrote:

 Hi Otis,

 We saw some improvement when increasing the size of the caches. Since
 then, we followed Shawn's advice on the filterCache and gave some additional
 RAM to the JVM in order to reduce GC. The performance is very good right
 now but we are still experiencing some instability but not at the same
 level as before.
 With our current settings the number of evictions is actually very low so
 we might be able to reduce some caches to free up some additional memory
 for the JVM to use.

 As for the queries, it is a set of 5 million queries taken from our logs
 so they vary a lot. All I can say is that all queries involve either
 grouping/field collapsing and/or radius search around a point. Our largest
 customer is using a set of 8-10 filters that are translated as fq
 parameters. The collection contains around 13 million documents distributed
 on 5 shards with 2 replicas. The second collection has the same
 configuration and is used for indexing or as a fail-over index in case the
 first one fails.

 We'll keep making adjustments today but we are pretty close to having
 something that performs well while remaining stable.

 Thanks all for your help.



  -Original Message-
  From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
  Sent: June-03-14 12:17 AM
  To: solr-user@lucene.apache.org
  Subject: Re: Strange behaviour when tuning the caches
 
  Hi Jean-Sebastien,
 
  One thing you didn't mention is whether, as you are increasing (I assume)
  cache sizes, you actually see performance improve. If not, then maybe there
  is no value in increasing cache sizes.
 
  I assume you changed only one cache at a time? Were you able to get any
  one of them to the point where there were no evictions without things
  breaking?
 
  What are your queries like, can you share a few examples?
 
  Otis
  --
  Performance Monitoring * Log Analytics * Search Analytics
  Solr & Elasticsearch Support * http://sematext.com/
 
 
  On Mon, Jun 2, 2014 at 11:09 AM, Jean-Sebastien Vachon  jean-
  sebastien.vac...@wantedanalytics.com wrote:
 
   Thanks for your quick response.
  
   Our JVM is configured with a heap of 8GB. So we are pretty close to
   the optimal configuration you are mentioning. The only other
   programs running are Zookeeper (which has its own storage device) and a
   proprietary API (with a heap of 1GB) we have on top of Solr to serve
   our customers' requests.
  
   I will look into the filterCache to see if we can better use it.
  
   Thanks for your help
  
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: June-02-14 10:48 AM
To: solr-user@lucene.apache.org
Subject: Re: Strange behaviour when tuning the caches
   
On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
 We have yet to determine where the exact breaking point is.

 The two patterns we are seeing are:

 -  less cache (around 20-30% hit/ratio), poor performance
 but
 overall good stability
   
When caches are too small, a low hit ratio is expected.  Increasing
them
   is a
good idea, but only increase them a little bit at a time.  The
   filterCache in
particular should not be increased dramatically, especially the
autowarmCount value.  Filters can take a very long time to execute,
so a
   high
autowarmCount can result in commits taking forever.
   
Each filter entry can take up a lot of heap memory -- in terms of
bytes,
   it is
the number of documents in the core divided by 8.  This means that
if the core has 10 million documents, each filter entry (for JUST
that
core) will take over a megabyte of RAM.
   
 -  more cache (over 90% hit/ratio), improved performance
 but
 almost no stability. In that case, we start seeing messages such
 as No shards hosting shard X or cancelElection did not find
 election node to remove
   
This would not be a direct result of increasing the cache size,
unless
   perhaps
you've increased them so they are *REALLY* big and you're running
out of RAM for the heap or OS disk cache.
   
 Anyone, has any advice on what could cause this? I am beginning to
 suspect the JVM version, is there any minimal requirements
 regarding the JVM?
   
Oracle Java 7 is recommended for all releases, and required for Solr
   4.8.  You
just need to stay away from 7u40, 7u45, and 7u51 because of bugs in
Java itself.  Right now, the latest release is recommended, which is
 7u60.
The
7u21 release that you are running should be perfectly fine.
   
With six 9.4GB cores per node, you'll achieve

RE: Strange behaviour when tuning the caches

2014-06-03 Thread Jean-Sebastien Vachon
Yes, we are already using it.

 -Original Message-
 From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
 Sent: June-03-14 11:41 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Strange behaviour when tuning the caches
 
 Hi,
 
 Have you seen https://wiki.apache.org/solr/CollapsingQParserPlugin ?  May
 help with the field collapsing queries.
 
 Otis
 --
 Performance Monitoring * Log Analytics * Search Analytics
 Solr & Elasticsearch Support * http://sematext.com/
 
 
 On Tue, Jun 3, 2014 at 8:41 AM, Jean-Sebastien Vachon  jean-
 sebastien.vac...@wantedanalytics.com wrote:
 
  Hi Otis,
 
  We saw some improvement when increasing the size of the caches. Since
  then, we followed Shawn's advice on the filterCache and gave some
  additional RAM to the JVM in order to reduce GC. The performance is
  very good right now but we are still experiencing some instability but
  not at the same level as before.
  With our current settings the number of evictions is actually very low
  so we might be able to reduce some caches to free up some additional
  memory for the JVM to use.
 
  As for the queries, it is a set of 5 million queries taken from our
  logs so they vary a lot. All I can say is that all queries involve
  either grouping/field collapsing and/or radius search around a point.
  Our largest customer is using a set of 8-10 filters that are
  translated as fq parameters. The collection contains around 13 million
  documents distributed on 5 shards with 2 replicas. The second
  collection has the same configuration and is used for indexing or as a
  fail-over index in case the first one fails.
 
  We'll keep making adjustments today but we are pretty close to having
  something that performs well while remaining stable.
 
  Thanks all for your help.
 
 
 
   -Original Message-
   From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
   Sent: June-03-14 12:17 AM
   To: solr-user@lucene.apache.org
   Subject: Re: Strange behaviour when tuning the caches
  
   Hi Jean-Sebastien,
  
   One thing you didn't mention is whether, as you are increasing (I
   assume) cache sizes, you actually see performance improve. If not,
   then maybe there is no value in increasing cache sizes.
  
   I assume you changed only one cache at a time? Were you able to get
   any one of them to the point where there were no evictions without
   things breaking?
  
   What are your queries like, can you share a few examples?
  
   Otis
   --
   Performance Monitoring * Log Analytics * Search Analytics
   Solr & Elasticsearch Support * http://sematext.com/
  
  
   On Mon, Jun 2, 2014 at 11:09 AM, Jean-Sebastien Vachon  jean-
   sebastien.vac...@wantedanalytics.com wrote:
  
Thanks for your quick response.
   
Our JVM is configured with a heap of 8GB. So we are pretty close to
the optimal configuration you are mentioning. The only other
programs running are Zookeeper (which has its own storage device)
and a proprietary API (with a heap of 1GB) we have on top of Solr
to serve our customers' requests.
   
I will look into the filterCache to see if we can better use it.
   
Thanks for your help
   
 -Original Message-
 From: Shawn Heisey [mailto:s...@elyograg.org]
 Sent: June-02-14 10:48 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Strange behaviour when tuning the caches

 On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
  We have yet to determine where the exact breaking point is.
 
  The two patterns we are seeing are:
 
  -  less cache (around 20-30% hit/ratio), poor performance
  but
  overall good stability

 When caches are too small, a low hit ratio is expected.
 Increasing them
is a
 good idea, but only increase them a little bit at a time.  The
filterCache in
 particular should not be increased dramatically, especially the
 autowarmCount value.  Filters can take a very long time to
 execute, so a
high
 autowarmCount can result in commits taking forever.

 Each filter entry can take up a lot of heap memory -- in terms
 of bytes,
it is
 the number of documents in the core divided by 8.  This means
 that if the core has 10 million documents, each filter entry
 (for JUST that
 core) will take over a megabyte of RAM.

  -  more cache (over 90% hit/ratio), improved performance
  but
  almost no stability. In that case, we start seeing messages
  such as No shards hosting shard X or cancelElection did not
  find election node to remove

 This would not be a direct result of increasing the cache size,
 unless
perhaps
 you've increased them so they are *REALLY* big and you're
 running out of RAM for the heap or OS disk cache.

  Anyone, has any advice on what could cause this? I am
  beginning to suspect the JVM version, is there any minimal
  requirements regarding the JVM

Re: Strange behaviour when tuning the caches

2014-06-02 Thread Shawn Heisey
On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
 We have yet to determine where the exact breaking point is.
 
 The two patterns we are seeing are:
 
 -  less cache (around 20-30% hit/ratio), poor performance but
 overall good stability

When caches are too small, a low hit ratio is expected.  Increasing them
is a good idea, but only increase them a little bit at a time.  The
filterCache in particular should not be increased dramatically,
especially the autowarmCount value.  Filters can take a very long time
to execute, so a high autowarmCount can result in commits taking forever.

Each filter entry can take up a lot of heap memory -- in terms of bytes,
it is the number of documents in the core divided by 8.  This means that
if the core has 10 million documents, each filter entry (for JUST that
core) will take over a megabyte of RAM.
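
Spelled out, the arithmetic behind that example is:

  10,000,000 docs / 8 bits per byte = 1,250,000 bytes, roughly 1.2 MB per entry

As an illustrative (not prescriptive) starting point, a modest filterCache
definition in solrconfig.xml might look like this; the numbers are placeholders
to tune against your own hit ratio and commit times:

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="32"/>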

 -  more cache (over 90% hit/ratio), improved performance but
 almost no stability. In that case, we start seeing messages such as
 No shards hosting shard X or cancelElection did not find election
 node to remove

This would not be a direct result of increasing the cache size, unless
perhaps you've increased them so they are *REALLY* big and you're
running out of RAM for the heap or OS disk cache.

 Anyone, has any advice on what could cause this? I am beginning to
 suspect the JVM version, is there any minimal requirements regarding
 the JVM?

Oracle Java 7 is recommended for all releases, and required for Solr
4.8.  You just need to stay away from 7u40, 7u45, and 7u51 because of
bugs in Java itself.  Right now, the latest release is recommended,
which is 7u60.  The 7u21 release that you are running should be
perfectly fine.

With six 9.4GB cores per node, you'll achieve the best performance if
you have about 60GB of RAM left over for the OS disk cache to use -- the
size of your index data on disk.  You did mention that you have 92GB of
RAM per node, but you have not said how big your Java heap is, or
whether there is other software on the machine that may be eating up RAM
for its heap or data.
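
Using the figures mentioned elsewhere in this thread (an 8GB Solr heap and a
1GB heap for the proprietary API), a rough per-node memory budget would be:

  index data per node:     6 cores x 9.4 GB ~= 56 GB
  total RAM per node:      92 GB
  Solr heap + API heap:    8 GB + 1 GB      =  9 GB
  left for OS disk cache:  92 GB - 9 GB     ~= 83 GB  (comfortably above ~56 GB)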

http://wiki.apache.org/solr/SolrPerformanceProblems

Thanks,
Shawn


RE: Strange behaviour when tuning the caches

2014-06-02 Thread Jean-Sebastien Vachon
Thanks for your quick response.

Our JVM is configured with a heap of 8GB. So we are pretty close to the
optimal configuration you are mentioning. The only other programs running are
Zookeeper (which has its own storage device) and a proprietary API (with a heap
of 1GB) we have on top of Solr to serve our customers' requests.

I will look into the filterCache to see if we can better use it.

Thanks for your help

 -Original Message-
 From: Shawn Heisey [mailto:s...@elyograg.org]
 Sent: June-02-14 10:48 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Strange behaviour when tuning the caches
 
 On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
  We have yet to determine where the exact breaking point is.
 
  The two patterns we are seeing are:
 
  -  less cache (around 20-30% hit/ratio), poor performance but
  overall good stability
 
 When caches are too small, a low hit ratio is expected.  Increasing them is a
 good idea, but only increase them a little bit at a time.  The filterCache in
 particular should not be increased dramatically, especially the
 autowarmCount value.  Filters can take a very long time to execute, so a high
 autowarmCount can result in commits taking forever.
 
 Each filter entry can take up a lot of heap memory -- in terms of bytes, it is
 the number of documents in the core divided by 8.  This means that if the
 core has 10 million documents, each filter entry (for JUST that
 core) will take over a megabyte of RAM.
 
  -  more cache (over 90% hit/ratio), improved performance but
  almost no stability. In that case, we start seeing messages such as
  No shards hosting shard X or cancelElection did not find election
  node to remove
 
 This would not be a direct result of increasing the cache size, unless perhaps
 you've increased them so they are *REALLY* big and you're running out of
 RAM for the heap or OS disk cache.
 
  Anyone, has any advice on what could cause this? I am beginning to
  suspect the JVM version, is there any minimal requirements regarding
  the JVM?
 
 Oracle Java 7 is recommended for all releases, and required for Solr 4.8.  You
 just need to stay away from 7u40, 7u45, and 7u51 because of bugs in Java
 itself.  Right now, the latest release is recommended, which is 7u60.  The
 7u21 release that you are running should be perfectly fine.
 
 With six 9.4GB cores per node, you'll achieve the best performance if you
 have about 60GB of RAM left over for the OS disk cache to use -- the size of
 your index data on disk.  You did mention that you have 92GB of RAM per
 node, but you have not said how big your Java heap is, or whether there is
 other software on the machine that may be eating up RAM for its heap or
 data.
 
 http://wiki.apache.org/solr/SolrPerformanceProblems
 
 Thanks,
 Shawn
 


Re: Strange behaviour when tuning the caches

2014-06-02 Thread Otis Gospodnetic
Hi Jean-Sebastien,

One thing you didn't mention is whether, as you are increasing (I assume)
cache sizes, you actually see performance improve. If not, then maybe there
is no value in increasing cache sizes.

I assume you changed only one cache at a time? Were you able to get any one
of them to the point where there were no evictions without things breaking?

What are your queries like, can you share a few examples?

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Mon, Jun 2, 2014 at 11:09 AM, Jean-Sebastien Vachon 
jean-sebastien.vac...@wantedanalytics.com wrote:

 Thanks for your quick response.

 Our JVM is configured with a heap of 8GB. So we are pretty close to the
 optimal configuration you are mentioning. The only other programs running
 are Zookeeper (which has its own storage device) and a proprietary API (with
 a heap of 1GB) we have on top of Solr to serve our customers' requests.

 I will look into the filterCache to see if we can better use it.

 Thanks for your help

  -Original Message-
  From: Shawn Heisey [mailto:s...@elyograg.org]
  Sent: June-02-14 10:48 AM
  To: solr-user@lucene.apache.org
  Subject: Re: Strange behaviour when tuning the caches
 
  On 6/2/2014 8:24 AM, Jean-Sebastien Vachon wrote:
   We have yet to determine where the exact breaking point is.
  
   The two patterns we are seeing are:
  
   -  less cache (around 20-30% hit/ratio), poor performance but
   overall good stability
 
  When caches are too small, a low hit ratio is expected.  Increasing them
 is a
  good idea, but only increase them a little bit at a time.  The
 filterCache in
  particular should not be increased dramatically, especially the
  autowarmCount value.  Filters can take a very long time to execute, so a
 high
  autowarmCount can result in commits taking forever.
 
  Each filter entry can take up a lot of heap memory -- in terms of bytes,
 it is
  the number of documents in the core divided by 8.  This means that if the
  core has 10 million documents, each filter entry (for JUST that
  core) will take over a megabyte of RAM.
 
   -  more cache (over 90% hit/ratio), improved performance but
   almost no stability. In that case, we start seeing messages such as
   No shards hosting shard X or cancelElection did not find election
   node to remove
 
  This would not be a direct result of increasing the cache size, unless
 perhaps
  you've increased them so they are *REALLY* big and you're running out of
  RAM for the heap or OS disk cache.
 
   Anyone, has any advice on what could cause this? I am beginning to
   suspect the JVM version, is there any minimal requirements regarding
   the JVM?
 
  Oracle Java 7 is recommended for all releases, and required for Solr
 4.8.  You
  just need to stay away from 7u40, 7u45, and 7u51 because of bugs in Java
  itself.  Right now, the latest release is recommended, which is 7u60.
  The
  7u21 release that you are running should be perfectly fine.
 
  With six 9.4GB cores per node, you'll achieve the best performance if you
  have about 60GB of RAM left over for the OS disk cache to use -- the
 size of
  your index data on disk.  You did mention that you have 92GB of RAM per
  node, but you have not said how big your Java heap is, or whether there
 is
  other software on the machine that may be eating up RAM for its heap or
  data.
 
  http://wiki.apache.org/solr/SolrPerformanceProblems
 
  Thanks,
  Shawn
 