Re: Master-Slave setup using SolrCloud

2014-10-04 Thread Sachin Kale
Apparently, there is a bug in Solr 4.10.0 that was causing the
NullPointerExceptions: SOLR-6501
https://issues.apache.org/jira/browse/SOLR-6501
We have updated our production Solr to 4.10.1.


On Thu, Oct 2, 2014 at 8:13 PM, Sachin Kale sachinpk...@gmail.com wrote:

 If I look into the logs, many times I get only the following line, without
 any stack trace:

 ERROR - 2014-10-02 19:35:25.516; org.apache.solr.common.SolrException;
 java.lang.NullPointerException

 These exceptions do not come continuously; they appear once every 10-15
 minutes. But once they start, there are 800-1000 such exceptions one after
 another. Could this be related to cache warmup?

 I can provide the following information regarding the setup:
 We are now using Solr 4.10.0.
 Memory allocated to each Solr instance is 7 GB. I guess that is more than
 sufficient for a 1 GB index, right?
 Indexes are stored on a normal, local filesystem.
 I am using three caches:
 Query cache: size 4096, autoWarmCount 2048
 Filter cache: size 8192, autoWarmCount 4096
 Document cache: size 4096

 I am experimenting with commit maxTime for both soft and hard commits,
 after referring to the following:

 http://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

 Hence, I have set the following:

 <autoCommit>
   <maxTime>${solr.autoCommit.maxTime:6}</maxTime>
   <openSearcher>false</openSearcher>
 </autoCommit>

 <autoSoftCommit>
   <maxTime>${solr.autoSoftCommit.maxTime:90}</maxTime>
 </autoSoftCommit>

 Also, we are getting the following warning many times:

 java.lang.NumberFormatException: For input string: 5193.0

 Earlier we were on Solr 4.4.0, and when we upgraded to 4.10.0, we pointed
 it at the same index we had been using with 4.4.0.

 On Thu, Oct 2, 2014 at 7:11 PM, Shawn Heisey apa...@elyograg.org wrote:

 On 10/2/2014 6:58 AM, Sachin Kale wrote:
  We are trying to move our traditional master-slave Solr configuration to
  SolrCloud. As our index size is very small (around 1 GB), we have only
  one shard.
  So basically, we have the same master-slave configuration, with one
  leader and 6 replicas.
  We are experimenting with maxTime for both autoCommit and autoSoftCommit.
  Currently, autoCommit maxTime is 15 minutes and autoSoftCommit is 1
  minute (let me know if these values do not make sense).
 
  Caches are set such that warmup time is at most 20 seconds.
 
  We have continuous indexing requests, mostly for updating existing
  documents. A few requests are for deleting/adding documents.

  The problem we are facing is that we are getting very frequent
  NullPointerExceptions.
  We get 200-300 such exceptions within a period of 30 seconds, and for
  the next few minutes it works fine.
 
  Stack trace of the NullPointerException:

  ERROR - 2014-10-02 18:09:38.464; org.apache.solr.common.SolrException;
  null:java.lang.NullPointerException
  at org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
  at org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
  at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)

  I am not sure what would be causing it. My guess is that we are getting
  these exceptions whenever it is trying to replay the tlog. Is anything
  wrong in my configuration?

 Your automatic commit settings are fine.  If you had tried to use a very
 small maxTime like 1000 (1 second), I would tell you that it's probably
 too short.

 The tlogs only get replayed when a core is first started or reloaded.
 These appear to be errors during queries, having nothing at all to do
 with indexing.

 I can't be sure with the available information (no Solr version,
 incomplete stacktrace, no info about what request caused and received
 the error), but if I had to guess, I'd say you probably changed your
 schema so that certain fields are now required that weren't required
 before, and didn't reindex, so those fields are not present on every
 document.  Or it might be that you added a uniqueKey and didn't reindex,
 and that field is not present on every document.

 http://wiki.apache.org/solr/HowToReindex

 Thanks,
 Shawn





Master-Slave setup using SolrCloud

2014-10-02 Thread Sachin Kale
Hello,

We are trying to move our traditional master-slave Solr configuration to
SolrCloud. As our index size is very small (around 1 GB), we have only one
shard.
So basically, we have the same master-slave configuration, with one leader
and 6 replicas.
We are experimenting with maxTime for both autoCommit and autoSoftCommit.
Currently, autoCommit maxTime is 15 minutes and autoSoftCommit is 1 minute
(let me know if these values do not make sense).
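
In solrconfig.xml terms, that works out to roughly the following (a sketch
only, not our actual config: the maxTime values are written directly in
milliseconds inside the updateHandler section, whereas in practice they may
be supplied via system properties):

<autoCommit>
  <!-- hard commit every 15 minutes; flushes to disk without opening a new searcher -->
  <maxTime>900000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <!-- soft commit every 1 minute; makes recent updates visible to searches -->
  <maxTime>60000</maxTime>
</autoSoftCommit>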

Caches are set such that warmup time is at most 20 seconds.

We have continuous indexing requests, mostly for updating existing
documents. A few requests are for deleting/adding documents.

The problem we are facing is that we are getting very frequent
NullPointerExceptions.
We get 200-300 such exceptions within a period of 30 seconds, and for the
next few minutes it works fine.

Stack trace of the NullPointerException:

ERROR - 2014-10-02 18:09:38.464; org.apache.solr.common.SolrException;
null:java.lang.NullPointerException
at org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
at org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)

I am not sure what would be causing it. My guess is that we are getting
these exceptions whenever it is trying to replay the tlog. Is anything
wrong in my configuration?


-Sachin-


Re: Master-Slave setup using SolrCloud

2014-10-02 Thread Shawn Heisey
On 10/2/2014 6:58 AM, Sachin Kale wrote:
 We are trying to move our traditional master-slave Solr configuration to
 SolrCloud. As our index size is very small (around 1 GB), we have only one
 shard.
 So basically, we have the same master-slave configuration, with one leader
 and 6 replicas.
 We are experimenting with maxTime for both autoCommit and autoSoftCommit.
 Currently, autoCommit maxTime is 15 minutes and autoSoftCommit is 1 minute
 (let me know if these values do not make sense).
 
 Caches are set such that warmup time is at most 20 seconds.
 
 We have continuous indexing requests, mostly for updating existing
 documents. A few requests are for deleting/adding documents.

 The problem we are facing is that we are getting very frequent
 NullPointerExceptions.
 We get 200-300 such exceptions within a period of 30 seconds, and for the
 next few minutes it works fine.
 
 Stack trace of the NullPointerException:

 ERROR - 2014-10-02 18:09:38.464; org.apache.solr.common.SolrException;
 null:java.lang.NullPointerException
 at org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
 at org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
 at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)

 I am not sure what would be causing it. My guess is that we are getting
 these exceptions whenever it is trying to replay the tlog. Is anything
 wrong in my configuration?

Your automatic commit settings are fine.  If you had tried to use a very
small maxTime like 1000 (1 second), I would tell you that it's probably
too short.

The tlogs only get replayed when a core is first started or reloaded.
These appear to be errors during queries, having nothing at all to do
with indexing.

I can't be sure with the available information (no Solr version,
incomplete stacktrace, no info about what request caused and received
the error), but if I had to guess, I'd say you probably changed your
schema so that certain fields are now required that weren't required
before, and didn't reindex, so those fields are not present on every
document.  Or it might be that you added a uniqueKey and didn't reindex,
and that field is not present on every document.

http://wiki.apache.org/solr/HowToReindex
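
As an illustration only, the kind of schema.xml change I mean would look
something like this (the field names are hypothetical and assume the stock
"string" fieldType; nothing here is taken from your actual schema):

<!-- A field newly marked required="true": documents indexed before this
     change will not contain it -->
<field name="category" type="string" indexed="true" stored="true" required="true"/>

<!-- A uniqueKey declaration added after the original index was built:
     documents indexed earlier may lack the "id" field entirely -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>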

Thanks,
Shawn



Re: Master-Slave setup using SolrCloud

2014-10-02 Thread Sachin Kale
If I look into the logs, many times I get only the following line, without
any stack trace:

ERROR - 2014-10-02 19:35:25.516; org.apache.solr.common.SolrException;
java.lang.NullPointerException

These exceptions do not come continuously; they appear once every 10-15
minutes. But once they start, there are 800-1000 such exceptions one after
another. Could this be related to cache warmup?

I can provide the following information regarding the setup:
We are now using Solr 4.10.0.
Memory allocated to each Solr instance is 7 GB. I guess that is more than
sufficient for a 1 GB index, right?
Indexes are stored on a normal, local filesystem.
I am using three caches (a sketch of the corresponding solrconfig.xml
entries follows the list):
Query cache: size 4096, autoWarmCount 2048
Filter cache: size 8192, autoWarmCount 4096
Document cache: size 4096
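
Expressed as solrconfig.xml cache definitions, that is roughly the
following (a sketch only: I am assuming "Query cache" refers to the
queryResultCache, and the solr.LRUCache class and initialSize values are
assumptions rather than copies of our actual config):

<!-- caches used to autowarm a new searcher after each commit -->
<queryResultCache class="solr.LRUCache"
                  size="4096"
                  initialSize="4096"
                  autowarmCount="2048"/>

<filterCache class="solr.LRUCache"
             size="8192"
             initialSize="8192"
             autowarmCount="4096"/>

<!-- document cache is not autowarmed -->
<documentCache class="solr.LRUCache"
               size="4096"
               initialSize="4096"/>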

I am experimenting with commit maxTime for both soft and hard commits,
after referring to the following:
http://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Hence, I have set the following:

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:6}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:90}</maxTime>
</autoSoftCommit>

Also, we are getting the following warning many times:

java.lang.NumberFormatException: For input string: 5193.0

Earlier we were on Solr 4.4.0, and when we upgraded to 4.10.0, we pointed
it at the same index we had been using with 4.4.0.

On Thu, Oct 2, 2014 at 7:11 PM, Shawn Heisey apa...@elyograg.org wrote:

 On 10/2/2014 6:58 AM, Sachin Kale wrote:
  We are trying to move our traditional master-slave Solr configuration to
  SolrCloud. As our index size is very small (around 1 GB), we have only
  one shard.
  So basically, we have the same master-slave configuration, with one
  leader and 6 replicas.
  We are experimenting with maxTime for both autoCommit and autoSoftCommit.
  Currently, autoCommit maxTime is 15 minutes and autoSoftCommit is 1
  minute (let me know if these values do not make sense).
 
  Caches are set such that warmup time is at most 20 seconds.
 
  We have continuous indexing requests, mostly for updating existing
  documents. A few requests are for deleting/adding documents.

  The problem we are facing is that we are getting very frequent
  NullPointerExceptions.
  We get 200-300 such exceptions within a period of 30 seconds, and for
  the next few minutes it works fine.
 
  Stack trace of the NullPointerException:

  ERROR - 2014-10-02 18:09:38.464; org.apache.solr.common.SolrException;
  null:java.lang.NullPointerException
  at org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
  at org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
  at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)

  I am not sure what would be causing it. My guess is that we are getting
  these exceptions whenever it is trying to replay the tlog. Is anything
  wrong in my configuration?

 Your automatic commit settings are fine.  If you had tried to use a very
 small maxTime like 1000 (1 second), I would tell you that it's probably
 too short.

 The tlogs only get replayed when a core is first started or reloaded.
 These appear to be errors during queries, having nothing at all to do
 with indexing.

 I can't be sure with the available information (no Solr version,
 incomplete stacktrace, no info about what request caused and received
 the error), but if I had to guess, I'd say you probably changed your
 schema so that certain fields are now required that weren't required
 before, and didn't reindex, so those fields are not present on every
 document.  Or it might be that you added a uniqueKey and didn't reindex,
 and that field is not present on every document.

 http://wiki.apache.org/solr/HowToReindex

 Thanks,
 Shawn