Hi Rohit,

Do you use caching? How big is your index on disk? What does the stack trace contain?
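If caching is on, the facet-related caches in solrconfig.xml are the first thing I would check, since with heavy faceting they can take a large share of the heap. The relevant entries look roughly like this (the cache classes are standard Solr ones, but the sizes below are only illustrative, not a recommendation):

    <!-- used for fq clauses and facet.method=enum faceting -->
    <filterCache class="solr.FastLRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>

    <!-- used for faceting on multi-valued fields -->
    <fieldValueCache class="solr.FastLRUCache"
                     size="64"
                     autowarmCount="0"/>

Shrinking the sizes and the autowarmCount values is often the quickest way to relieve memory pressure while you look for the real cause.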
The OOM problems that we have seen so far were related to the physical size of the index and the use of caching. I don't think we have ever found the exact cause of these problems, but sharding has helped to keep each index relatively small and the OOM errors have gone away.

You can also attach jconsole to your Solr process via JMX and monitor memory / CPU usage in a graphical interface (see the P.S. at the bottom for an example of the options involved). I have also run the garbage collector manually through jconsole sometimes and it helped.

Regards,
Dmitry

On Wed, Sep 14, 2011 at 9:10 AM, Rohit <ro...@in-rev.com> wrote:
> Thanks Jaeger.
>
> Actually I am storing streaming Twitter data in the core, so the indexing
> rate is about 12 tweets (docs)/second. The same Solr instance contains 3
> other cores, but those cores are not very heavy. The Twitter core has now
> become very large (77,516,851 docs) and it is taking a long time to query
> (mostly facet queries based on date and string fields).
>
> After some time, about 18-20 hours, Solr goes out of memory, and the thread
> dump doesn't show anything. How can I improve this besides adding more RAM
> to the system?
>
> Regards,
> Rohit
> Mobile: +91-9901768202
> About Me: http://about.me/rohitg
>
> -----Original Message-----
> From: Jaeger, Jay - DOT [mailto:jay.jae...@dot.wi.gov]
> Sent: 13 September 2011 21:06
> To: solr-user@lucene.apache.org
> Subject: RE: Out of memory
>
> numDocs is not the number of documents in memory. It is the number of
> documents currently in the index (which is kept on disk). The same goes for
> maxDocs, except that it is a count of all of the documents that have ever
> been in the index since it was created or optimized (including deleted
> documents).
>
> Your subject indicates that something is giving you some kind of out of
> memory error. We might be better able to help you if you provide more
> information about your exact problem.
>
> JRJ
>
> -----Original Message-----
> From: Rohit [mailto:ro...@in-rev.com]
> Sent: Tuesday, September 13, 2011 2:29 PM
> To: solr-user@lucene.apache.org
> Subject: Out of memory
>
> I have Solr running on a machine with 18 GB of RAM and 4 Solr cores. One of
> the cores is very big, containing 77,516,851 docs; the searcher stats are
> given below:
>
> searcherName : Searcher@5a578998 main
> caching : true
> numDocs : 77516851
> maxDoc : 77518729
> lockFactory=org.apache.lucene.store.NativeFSLockFactory@5a9c5842
> indexVersion : 1308817281798
> openedAt : Tue Sep 13 18:59:52 GMT 2011
> registeredAt : Tue Sep 13 19:00:55 GMT 2011
> warmupTime : 63139
>
> - Is there a way to reduce the number of docs loaded into memory for this
> core?
>
> - At any given time I don't need data older than the past 15 days, unless
> someone queries for it explicitly. How can this be achieved?
>
> - Would it be better to go for Solr replication or distribution if there is
> little other option left?
>
> Regards,
> Rohit
> Mobile: +91-9901768202
> About Me: http://about.me/rohitg
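P.S. To attach jconsole, you usually only need the standard JMX options on the JVM that runs your servlet container, plus <jmx /> in solrconfig.xml so that Solr registers its MBeans. A minimal, unauthenticated setup for a trusted network looks roughly like this (the port 18983 is just an example; pick anything free):

    # container JVM options (example port; no auth/SSL, so trusted networks only)
    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=18983
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.authenticate=false

Then point jconsole at host:18983. If you run jconsole on the same machine, you can skip the remote options and just pick the Solr JVM from the local process list.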