Hi

I have been asked to look at a Solr instance that we patched with our own
home-grown patch so that a single instance can handle 1000 cores.

The instance doesn't perform well. Within 12 hours of startup I can see
garbage collection taking a lot of time, and query and update requests start
timing out (see below).

[Full GC [PSYoungGen: 673152K->98800K(933888K)]
         [PSOldGen: 2389375K->2389375K(2389376K)] 3062527K->2488176K(3323264K)
         [PSPermGen: 23681K->23681K(23744K)], 4.0807080 secs]
         [Times: user=4.08 sys=0.00, real=4.08 secs]
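
If I read this right, the old generation is completely full
(2389375K->2389375K out of 2389376K) and each Full GC reclaims almost
nothing, so the JVM spends its time collecting instead of serving requests.
Unless someone spots a real mistake on our side, the next thing we plan to
try is a bigger heap plus the CMS collector, along these lines (the sizes are
guesses for our box, and "-jar start.jar" is just the stock Jetty example
launcher, not our actual deployment):

    java -Xms4g -Xmx4g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -jar start.jar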

org.apache.solr.client.solrj.SolrServerException: java.net.SocketTimeoutException: Read timed out
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:472)
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:243)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
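
As a stopgap on the client side we can raise the SolrJ timeouts so that
updates at least survive the long pauses. A minimal sketch (the URL and
values are made up, and this obviously only hides the GC problem rather than
fixing it):

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class ClientSetup {
        public static void main(String[] args) throws Exception {
            // placeholder URL for one of our cores
            CommonsHttpSolrServer server =
                    new CommonsHttpSolrServer("http://localhost:8983/solr/core0");
            server.setConnectionTimeout(5000); // 5s to open the connection
            server.setSoTimeout(60000);        // 60s read timeout, longer than our worst GC pause
        }
    }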


I used YourKit to look for a possible memory leak, but didn't find one.

The objects using the most memory appear to be org.apache.lucene.index.Term
and org.apache.lucene.index.TermInfo.
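
That would be consistent with the term dictionaries of 1000 open cores: each
open IndexReader keeps a sampled term index in RAM, and that cost is
multiplied by the number of cores. If I understand the docs correctly, this
memory can be traded for term-lookup speed via the index reader factory in
solrconfig.xml, something like the following (the divisor value is just an
example; please correct me if this isn't the right knob):

    <indexReaderFactory name="IndexReaderFactory"
                        class="org.apache.solr.core.StandardIndexReaderFactory">
      <!-- load only every 4th entry of the term index (.tii) into RAM;
           roughly quarters Term/TermInfo memory at some lookup cost -->
      <int name="setTermIndexDivisor">4</int>
    </indexReaderFactory>

Has anyone tried this with a large number of cores?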

The total size of the index data directories is 46G; a typical large core
holds 100,000 documents and is about 103M on disk.

There is a steady stream of search requests, and indexing is going on at the
same time.

I am posting to the mailing list hoping to hear that we are doing something
completely wrong, because it doesn't seem to me that we are pushing the
limits. I would appreciate any tips on where to look to troubleshoot and
solve this.

Thank you for your help!

matt
