Re: Java heap space exception in 4.2.1
Sorry, that was a typo: 8,000,000,000 should have been 800,000,000.

2013/5/27 Jam Luo :
> I have the same problem. On 4.1 a Solr instance could hold 8,000,000,000
> docs, but on 4.2.1 an instance only holds 400,000,000 docs before it OOMs
> on a facet query. The facet field is tokenized on whitespace.
>
> May 27, 2013 11:12:55 AM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java
> heap space
> [quoted stack trace trimmed; the full trace is in the original message below]
Re: Java heap space exception in 4.2.1
I have the same problem. On 4.1 a Solr instance could hold 8,000,000,000 docs, but on 4.2.1 an instance only holds 400,000,000 docs before it OOMs on a facet query. The facet field is tokenized on whitespace.

May 27, 2013 11:12:55 AM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:653)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:366)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1338)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
        at org.eclipse.jetty.server.Server.handle(Server.java:350)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
        at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
        at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:954)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:851)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
        at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:603)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:538)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.DocTermOrds.uninvert(DocTermOrds.java:448)
        at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:179)
        at org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:664)
        at org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:426)
        at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:517)
        at org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:252)
        at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1825)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1338)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle
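[Editor's note, not part of the thread: the `Caused by` frames above show the OOM happening while building an UnInvertedField for the tokenized facet field. One common workaround is `facet.method=enum`, which enumerates terms against the filterCache instead of uninverting the field. A solrconfig.xml sketch; the handler name and placement are illustrative, so verify against your version:]

```xml
<!-- Sketch: default the facet method to term enumeration so facet
     requests on this handler skip field-cache uninversion. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="facet.method">enum</str>
  </lst>
</requestHandler>
```

The same parameter can also be passed per request (`&facet.method=enum`). Whether it helps depends on term cardinality, since enum does one filter intersection per distinct term.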
Re: How to add shard in 4.2-snapshot
I build indexes with EmbeddedSolrServer, then move them to the online system. The online system does not add new documents, so the hash-code range is not important; I only need to add shards. How do I customise that? Thanks.

2013/3/10 adfel70 :
> Mark, what's the current estimation for an official 4.2 release?
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/How-to-add-shard-in-4-2-snapshot-tp4045716p4046099.html
> Sent from the Solr - User mailing list archive at Nabble.com.
Re: java.lang.OutOfMemoryError and shard can't work
OK, I will try upgrading Oracle Java and the latest Solr. Thanks.

2012/12/19 Shawn Heisey :
> On 12/18/2012 8:18 PM, Jam Luo wrote:
> > I deployed a solr-4.0-beta cluster: 4 shards, 2 peers per shard. A peer
> > caught an exception:
> > Dec 18, 2012 7:56:31 PM org.apache.solr.common.SolrException log
> > SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError:
> > unable to create new native thread
>
> Let me start off this reply by saying that I have *no* concrete proof
> about this; what I'm saying could be completely wrong.
>
> Are you by chance running Java 1.7.0_09? I was having *lots* of problems
> on my Solr server with multiple programs hitting OOM errors when trying
> to create threads. Some of those programs are things that can normally
> run for weeks at a time with extremely low heap requirements and a very
> high max heap. Here's an example stacktrace with the company name redacted:
>
> Thread c threw an exception
> java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:691)
>         at com.REDACTED.idxbuild.solr.Chain.deleteByQuery(Chain.java:827)
>         at com.REDACTED.idxbuild.solr.Chain.doDelete(Chain.java:1031)
>         at com.REDACTED.idxbuild.solr.Chain.updateIndex(Chain.java:1923)
>         at com.REDACTED.idxbuild.solr.Chain.run(Chain.java:2081)
>
> These problems went away when I upgraded Oracle Java from 1.7.0_09 to
> 1.7.0_10.
>
> Thanks,
> Shawn
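[Editor's note, not part of the thread: besides the JVM bug Shawn suspects, "unable to create new native thread" usually means the process ran out of native memory or OS limits for thread stacks; each Java thread reserves roughly the -Xss stack size outside the heap. A toy estimate; both numbers below are hypothetical, not taken from this thread:]

```python
# Toy estimate: how many threads fit in a fixed native-memory budget,
# given that each Java thread reserves about -Xss of native memory.
native_budget_mb = 2048   # hypothetical memory left outside the heap
stack_kb = 512            # hypothetical -Xss setting
max_threads = native_budget_mb * 1024 // stack_kb
print(max_threads)
```

Lowering -Xss (or raising `ulimit -u` / available native memory) raises this ceiling.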
Re: SolrCloud - unable to get leader props after ZK timeout
Yes, I have the same problem.

2012/10/5 Kyryl Bilokurov :
> Hi,
>
> I have a functional/performance test SolrCloud cluster (using Solr
> 4.0-BETA) with the following setup: 4 servers, each server hosts 1/4th of
> the collection (no replicas, so there are only leaders for each shard).
> The current ZK client timeout is set to 15 seconds. From time to time I
> see that Solr's ZK client connection gets timed out:
>
> ==
> INFO: Client session timed out, have not heard from server in 19105ms for
> sessionid 0x3388fcec9490677, closing socket connection and attempting
> reconnect
> ==
>
> The reconnect is triggered, but afterwards the shard enters a bad state,
> as it cannot get the leader props for an extended period of time:
>
> ==
> INFO: Updating cluster state from ZooKeeper...
> Oct 3, 2012 4:07:20 AM org.apache.solr.common.cloud.ZkStateReader$2 process
> INFO: A cluster state change has occurred - updating...
> Oct 3, 2012 4:07:50 AM org.apache.solr.common.SolrException log
> SEVERE: There was a problem finding the leader in
> zk:java.lang.RuntimeException: Could not get leader props
>         at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:640)
>         at org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1031)
>         at org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:233)
>         at org.apache.solr.cloud.ZkController.access$300(ZkController.java:77)
>         at org.apache.solr.cloud.ZkController$1.command(ZkController.java:180)
>         at org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:101)
>         at org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:47)
>         at org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:85)
>         at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:526)
>         at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
>
> ...the same message & stacktrace repeats every ~30 seconds, until it
> changes to...
>
> Oct 3, 2012 4:20:09 AM org.apache.solr.common.SolrException log
> SEVERE: :org.apache.solr.common.SolrException: There was a problem finding
> the leader in zk
>         at org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1041)
>         at org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:233)
>         at org.apache.solr.cloud.ZkController.access$300(ZkController.java:77)
>         at org.apache.solr.cloud.ZkController$1.command(ZkController.java:180)
>         at org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:101)
>         at org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:47)
>         at org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:85)
>         at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:526)
>         at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
>
> Oct 3, 2012 4:20:09 AM org.apache.solr.cloud.ZkController createEphemeralLiveNode
> INFO: Register node as live in ZooKeeper:/live_nodes/host.domain:18100_solr
> Oct 3, 2012 4:20:09 AM org.apache.solr.common.cloud.SolrZkClient makePath
> INFO: makePath: /live_nodes/host.domain:18100_solr
> ...
> ...at this point the cluster seems to be OK for some time.
> ==
>
> This looks a bit similar to SOLR-3274, as it is also triggered by an
> expired ZK connection and results in "No servers hosting shard" search
> errors.
>
> For now I have increased the timeout to 30 seconds, as suggested in
> SOLR-3274, to lower the probability of ZK timeouts, but shouldn't the
> cluster heal faster than in 15 minutes? As there is only one server
> hosting each shard, it could become a leader instantly.
>
> Thanks,
> Kyryl
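[Editor's note, not part of the thread: in 4.x-era deployments the ZK client timeout Kyryl raised is typically configured on the `<cores>` element in solr.xml. A sketch from memory; verify the attribute against your Solr version:]

```xml
<!-- solr.xml sketch: raise the ZooKeeper session timeout to 30 s,
     overridable via the zkClientTimeout system property. -->
<cores adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:30000}">
  ...
</cores>
```

Note the effective session timeout is also capped by the ZooKeeper server's own tickTime-based maximum, so raising only the client side may not take effect.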
Re: Recovery problem in solrcloud
There are 400 million documents in the shard; a document is less than 1 KB, and the data file _**.fdt is 149 GB. Does recovery need a large amount of memory during the download, or after it finishes? I found some log lines before the OOM, below:

Aug 06, 2012 9:43:04 AM org.apache.solr.core.SolrCore execute
INFO: [blog] webapp=/solr path=/select params={sort=createdAt+desc&distrib=false&collection=today,blog&hl.fl=content&wt=javabin&hl=false&rows=10&version=2&f.content.hl.fragsize=0&fl=id&shard.url=index35:8983/solr/blog/&NOW=1344217556702&start=0&q=((("somewordsA"+%26%26+"somewordsB"+%26%26+"somewordsC")+%26%26+platform:abc)+||+id:"/")+%26%26+(createdAt:[2012-07-30T01:43:28.462Z+TO+2012-08-06T01:43:28.462Z])&_system=business&isShard=true&fsv=true&f.title.hl.fragsize=0} hits=0 status=0 QTime=95
Aug 06, 2012 9:43:05 AM org.apache.solr.core.SolrDeletionPolicy onInit
INFO: SolrDeletionPolicy.onInit: commits:num=1
        commit{dir=/home/ant/jetty/solr/data/index.20120801114027,segFN=segments_aui,generation=14058,filenames=[_cdnu_nrm.cfs, _cdnu_0.frq, segments_aui, _cdnu.fdt, _cdnu_nrm.cfe, _cdnu_0.tim, _cdnu.fdx, _cdnu.fnm, _cdnu_0.prx, _cdnu_0.tip, _cdnu.per]
Aug 06, 2012 9:43:05 AM org.apache.solr.core.SolrDeletionPolicy updateCommits
INFO: newest commit = 14058
Aug 06, 2012 9:43:05 AM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit{flags=0,version=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}
Aug 06, 2012 9:43:05 AM org.apache.solr.search.SolrIndexSearcher <init>
INFO: Opening Searcher@13578a09 main
Aug 06, 2012 9:43:05 AM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener sending requests to Searcher@13578a09 main{StandardDirectoryReader(segments_aui:1269420 _cdnu(4.0):C457041702)}
Aug 06, 2012 9:43:05 AM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener done.
Aug 06, 2012 9:43:05 AM org.apache.solr.core.SolrCore registerSearcher
INFO: [blog] Registered new searcher Searcher@13578a09 main{StandardDirectoryReader(segments_aui:1269420 _cdnu(4.0):C457041702)}
Aug 06, 2012 9:43:05 AM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
Aug 06, 2012 9:43:05 AM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: [blog] webapp=/solr path=/update params={waitSearcher=true&commit_end_point=true&wt=javabin&commit=true&version=2} {commit=} 0 1439
Aug 06, 2012 9:43:05 AM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit{flags=0,version=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}
Aug 06, 2012 9:43:05 AM org.apache.solr.search.SolrIndexSearcher <init>
INFO: Opening Searcher@1a630c4d main
Aug 06, 2012 9:43:05 AM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener sending requests to Searcher@1a630c4d main{StandardDirectoryReader(segments_aui:1269420 _cdnu(4.0):C457041702)}
Aug 06, 2012 9:43:05 AM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener done.
Aug 06, 2012 9:43:05 AM org.apache.solr.core.SolrCore registerSearcher
INFO: [blog] Registered new searcher Searcher@1a630c4d main{StandardDirectoryReader(segments_aui:1269420 _cdnu(4.0):C457041702)}
Aug 06, 2012 9:43:05 AM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
Aug 06, 2012 9:43:07 AM org.apache.solr.core.SolrCore execute
INFO: [blog] webapp=/solr path=/select params={sort=createdAt+desc&distrib=false&collection=today,blog&hl.fl=content&wt=javabin&hl=false&rows=10&version=2&f.content.hl.fragsize=0&fl=id&shard.url=index35:8983/solr/blog/&NOW=1344217558778&start=0&_system=business&q=(((somewordsD)+%26%26+platform:(abc))+||+id:"/")+%26%26+(createdAt:[2012-07-30T01:43:30.537Z+TO+2012-08-06T01:43:30.537Z])&isShard=true&fsv=true&f.title.hl.fragsize=0} hits=0 status=0 QTime=490

Except for these, all other log lines in those few minutes are "path=/select **", and there were no add-document requests in the cluster at that time. Is that related to the OOM? This is live traffic, so I can't test it frequently. Tonight I added the -XX:+HeapDumpOnOutOfMemoryError option; if the problem appears again I will get the heap dump, but I am not sure I can analyse it and get a result, so I may ask for your help. Thanks.

2012/8/8 Yonik Seeley :
> The stack trace looks normal - it's just a multi-term query instantiating
> a bitset. The memory is being taken up somewhere else.
> How many documents are in your index?
> Can you get a heap dump or use some other memory profiler to see
> what's taking up the space?
>
> > if I stop queries for more than ten minutes, the solr instance will
> > start normally.
>
> Maybe queries are piling up in threads before the server is ready to han
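[Editor's note, not part of the thread: the figures quoted above can be sanity-checked against each other; the average stored-field size implied by the 149 GB .fdt file and 400 million documents is consistent with the "less than 1 KB per document" claim:]

```python
# Average stored-field bytes per document, from the numbers in the message.
fdt_bytes = 149 * 1024**3     # the 149 GB .fdt stored-fields file
docs = 400_000_000            # documents in the shard
bytes_per_doc = fdt_bytes // docs
print(bytes_per_doc)          # roughly 0.4 KB, under the stated 1 KB
```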
Re: Recovery problem in solrcloud
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
        at org.eclipse.jetty.server.Server.handle(Server.java:351)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
        at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
        at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)

This error usually appears at startup. No data is being written to the index, but there are a lot of query requests. If I stop queries for more than ten minutes, the Solr instance starts normally. My index data in the Solr data directory is 200 GB+, RAM is 16 GB, and the JVM properties are -Xmx10g -Xss256k -Xmn512m -XX:+UseCompressedOops. The OOM and the peer startup failure may be uncorrelated, but these two things often happen on the same Solr instance at the same time. I can provide the full log file if you want. Thanks.

2012/8/7 Mark Miller :
> Still no idea on the OOM - please send the stacktrace if you can.
>
> As for doing a replication recovery when it should not be necessary, Yonik
> just committed a fix for that a bit ago.
>
> On Aug 7, 2012, at 9:41 AM, Mark Miller wrote:
>
> > On Aug 7, 2012, at 5:49 AM, Jam Luo wrote:
> >
> >> Hi
> >> I have big index data files, more than 200 GB; there are two Solr
> >> instances in a shard. The leader starts up and is OK, but the peer
> >> always OOMs when it starts up.
> >
> > Can you share the OOM msg and stacktrace please?
> >
> >> The peer always downloads index files from the leader because of the
> >> recoveringAfterStartup property in RecoveryStrategy; total time taken
> >> for the download: 2350 secs. If the peer's data is empty, that is OK,
> >> but the leader and the peer have the same generation number, so why
> >> does the peer do recovery?
> >
> > We are looking into this.
> >
> >> thanks
> >> cooljam
> >
> > - Mark Miller
> > lucidimagination.com
>
> - Mark Miller
> lucidimagination.com
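[Editor's note, not part of the thread: a back-of-envelope check on the sizing above. With a 10 GB heap on a 16 GB box, only a small fraction of a 200 GB index can sit in the OS page cache, which is one plausible reason heavy query load at startup is so expensive; this is a sketch for context, not a diagnosis of the OOM:]

```python
# Fraction of the index the OS page cache can hold once the heap is
# subtracted from RAM (ignores the OS and other processes).
index_gb, ram_gb, heap_gb = 200, 16, 10
cache_pct = 100 * (ram_gb - heap_gb) / index_gb
print(cache_pct)   # percent of the index that fits in the page cache
```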
An endless loop in new SolrCloud, probably
Hi
I deployed a Solr cluster; the code version is "NightlyBuilds apache-solr-4.0-2012-03-19_09-25-37". The cluster has 4 nodes named "A", "B", "C", "D", with num_shards=2; A and C are in shard1, B and D are in shard2, and A and B are the leaders of their shards. It had run for 2 days and added 20M docs, and all of them were OK. After that, C hit an exception, "org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed", but Jetty on C was not down, and node C still existed in ZooKeeper under "/live_nodes". At that point A tried to ask C to recover, but C could not respond, so A got an exception. The log is:

INFO: try and ask http://node23:8983/solr to recover
04, 2012 8:02:36 org.apache.solr.update.processor.DistributedUpdateProcessor doFinish
INFO: try and ask http://node23:8983/solr to recover
04, 2012 8:02:36 org.apache.solr.update.processor.DistributedUpdateProcessor doFinish
INFO: Could not tell a replica to recover
org.apache.solr.client.solrj.SolrServerException: http://node23:8983/solr
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:496)
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:251)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:347)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:816)
        at org.apache.solr.update.processor.LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:176)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1549)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:441)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:262)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:499)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
        at org.eclipse.jetty.server.Server.handle(Server.java:351)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
        at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
        at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:954)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:952)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
        at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method