Hi team,
        Currently I am using Solr 4.10 in Tomcat. I have a one-shard SolrCloud
with 3 replicas, and I set the heap size to 15GB for each node. Because we have
a large data volume and a high query rate, we frequently run into long full GC
pauses. We investigated and found that much of the heap was being used by
Solr's field cache. To work around this, we began rebooting the Tomcat
instances one by one on a schedule. We don't kill any process; we run
"catalina.sh stop" to shut Tomcat down gracefully.
        To keep messages from piling up, we receive messages from users all the
time and send an update request to Solr as soon as each new message arrives.
This means Solr may receive update requests during shutdown, and I think that
is why we get the CorruptIndexException. Ever since we started these scheduled
reboots, we have been hitting CorruptIndexException consistently. The stack
trace is below:
2017-09-14 04:25:49,241 ERROR[commitScheduler-15-thread-1][R31609](CommitTracker) - auto commit error...:org.apache.solr.common.SolrException: Error opening new searcher
        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:607)
        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.index.CorruptIndexException: liveDocs.count()=33574 info.docCount=34156 info.getDelCount()=584 (filename=_1uvck_k.del)
        at org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:96)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:116)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:144)
        at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
        at org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3271)
        at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3262)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:421)
        at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:279)
        at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:251)
        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1476)
        ... 10 more


        From the exception message, the live doc count recorded in _1uvck_k.del
(33574) does not match info.docCount - info.getDelCount() = 34156 - 584 = 33572,
which is why Lucene reports the segment as corrupt. As we shut Solr down
gracefully, I think Solr should be robust enough to handle this case. Please
give me some advice on why this happens and what we can do to avoid it. P.S.
Below is the relevant part of our solrconfig.xml:

<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>true</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
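
For comparison, the pattern commonly recommended in the Solr documentation (as
I understand it; we have not tried this variant yet) is to keep the hard
autoCommit for durability only and let the soft commit control visibility:

<autoCommit>
  <maxTime>60000</maxTime>
  <!-- openSearcher=false: the hard commit flushes segments to disk but does
       not open a new searcher; the autoSoftCommit below handles visibility -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>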

Regards,
Geng, Wei



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
