Hello,

I'm trying to troubleshoot a problem that occurred on a few Solr slave Tomcat instances and wanted to run it by the list to see if I'm on the right track.

The setup involves one master replicating to three slaves (I don't know what the replication interval is at this time). These instances have been running fine for a while (from what I understand) but ran into problems just today during peak site usage.

The following two exceptions were observed (partially stripped stack traces):

WARNING: [] Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try again later.

Feb 1, 2010 10:00:31 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try again later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:941)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:368)
        at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:77)

Feb 1, 2010 10:29:36 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.lucene.index.SegmentReader.termDocs(SegmentReader.java:734)
        at org.apache.lucene.index.MultiSegmentReader$MultiTermDocs.termDocs(MultiSegmentReader.java:612)
        at org.apache.lucene.index.MultiSegmentReader$MultiTermDocs.termDocs(MultiSegmentReader.java:605)
        at org.apache.lucene.index.MultiSegmentReader$MultiTermDocs.read(MultiSegmentReader.java:570)
        at org.apache.lucene.search.TermScorer.next(TermScorer.java:106)
        at org.apache.lucene.search.DisjunctionSumScorer.initScorerDocQueue(DisjunctionSumScorer.java:105)
        at org.apache.lucene.search.DisjunctionSumScorer.next(DisjunctionSumScorer.java:144)
        at org.apache.lucene.search.BooleanScorer2.next(BooleanScorer2.java:352)
        at org.apache.lucene.search.DisjunctionSumScorer.initScorerDocQueue(DisjunctionSumScorer.java:105)
        at org.apache.lucene.search.DisjunctionSumScorer.next(DisjunctionSumScorer.java:144)
        at org.apache.lucene.search.BooleanScorer2.next(BooleanScorer2.java:352)
        at org.apache.lucene.search.ConjunctionScorer.init(ConjunctionScorer.java:80)
        at org.apache.lucene.search.ConjunctionScorer.next(ConjunctionScorer.java:48)
        at org.apache.lucene.search.BooleanScorer2.score(BooleanScorer2.java:319)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:137)
        at org.apache.lucene.search.Searcher.search(Searcher.java:126)
        at org.apache.lucene.search.Searcher.search(Searcher.java:105)
        at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:920)
        at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:838)
        at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:269)

Here's the config for the caches:

filterCache: size="15000" initialSize="5000" autowarmCount="5000"
queryResultCache: size="15000" initialSize="5000" autowarmCount="15000"
documentCache: size="15000" initialSize="5000"
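
For reference, that maps to entries in solrconfig.xml along these lines (the class attributes here are an assumption; solr.LRUCache is the stock implementation):

```xml
<filterCache class="solr.LRUCache" size="15000" initialSize="5000" autowarmCount="5000"/>
<queryResultCache class="solr.LRUCache" size="15000" initialSize="5000" autowarmCount="15000"/>
<documentCache class="solr.LRUCache" size="15000" initialSize="5000"/>
```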

From what I understand, the first exception indicates that multiple replications are being processed at the same time. Is that correct, or could it be something else?
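
If overlapping commits are the cause, the limit in question is set in solrconfig.xml (2 is the default the error message is reporting):

```xml
<!-- Maximum number of searchers that may be warming concurrently.
     Raising this masks the symptom rather than fixing overlapping
     commits or long warm-up times. -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```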

Does the second exception indicate that Solr is having problems handling the query load (possibly due to a commit happening at the same time)?

Does anyone have any insight that might help here?  I sort of suspect
that the autowarm counts are too large but I may be off there.  I can
provide more details (as I get them) about this if needed.
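
For what it's worth, here's a back-of-envelope on what those autowarm counts could imply per commit. Only the counts come from the config above; the per-item costs are purely illustrative assumptions:

```python
# Rough upper bound on autowarming work each time a commit/replication
# opens a new searcher. Counts are from the posted cache config; the
# per-item millisecond costs are made-up assumptions for illustration.
filter_autowarm = 5000    # filterCache entries regenerated
query_autowarm = 15000    # queryResultCache queries re-executed

ms_per_filter = 5         # assumed cost per filter rebuild
ms_per_query = 10         # assumed cost per re-executed query

warm_time_ms = filter_autowarm * ms_per_filter + query_autowarm * ms_per_query
print("~%.0f seconds of warming per new searcher" % (warm_time_ms / 1000.0))
```

If replications land faster than warming finishes, warming searchers pile up until the maxWarmingSearchers limit trips, which would tie the two exceptions together.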

Thanks,
Laurent Vauthrin
