Thanks Dmitry for the offer to help. I am using some caching in only one of
the cores now; earlier I was using it on the other cores too, but I have
commented it out because of frequent OOM errors. I also do some warming up in
one of the cores. I have shared the links to my config files for all 4 cores:

http://haklus.com/crssConfig.xml
http://haklus.com/rssConfig.xml
http://haklus.com/twitterConfig.xml
http://haklus.com/facebookConfig.xml


Thanks again
Rohit


-----Original Message-----
From: Dmitry Kan [mailto:dmitry....@gmail.com] 
Sent: 14 September 2011 10:23
To: solr-user@lucene.apache.org
Subject: Re: Out of memory

Hi,

OK, 64GB fits into one shard quite nicely in our setup, but I have never used
a multicore setup. In total you have 79.9 GB. We try to have 70-100GB per
shard with caching on. Do you warm up your index on startup? Also, there is a
setting for pre-populating (autowarming) the cache.
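
For example, autowarming and a warming query live in solrconfig.xml, roughly
like this (the cache sizes and the query here are illustrative, not a
recommendation):

  <!-- autowarmCount copies the top N entries from the old cache into the new searcher's cache -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>

  <!-- runs the listed queries against each new searcher before it serves requests -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst><str name="q">*:*</str><str name="rows">10</str></lst>
    </arr>
  </listener>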

It would also help if you could show some parts of your solrconfig file.
Which Solr version are you using?

Regards,
Dmitry

On Wed, Sep 14, 2011 at 11:38 AM, Rohit <ro...@in-rev.com> wrote:

> Hi Dmitry,
>
> To answer your questions,
>
> -Do you use caching?
> I do use caching, but I will disable it and give it a go.
>
> -How big is your index on disk?
> These are the sizes of the data folders for each core:
> Core1 : 64GB
> Core2 : 6.1GB
> Core3 : 7.9GB
> Core4 : 1.9GB
>
> I will try attaching jconsole to my Solr as suggested to get a better
> picture.
>
> Regards,
> Rohit
>
>
> -----Original Message-----
> From: Dmitry Kan [mailto:dmitry....@gmail.com]
> Sent: 14 September 2011 08:15
> To: solr-user@lucene.apache.org
> Subject: Re: Out of memory
>
> Hi Rohit,
>
> Do you use caching?
> How big is your index on disk?
> What does the stack trace contain?
>
> The OOM problems that we have seen so far were related to the physical
> index size and the use of caching. I don't think we ever found the exact
> cause of these problems, but sharding has helped to keep each index
> relatively small, and the OOMs have gone away.
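>
> A minimal sketch of what the sharded setup looks like on the query side,
> assuming hypothetical hosts and core names (set as request defaults so
> clients don't have to pass the shards parameter themselves):
>
>   <requestHandler name="/distrib" class="solr.SearchHandler">
>     <lst name="defaults">
>       <!-- host and core names below are placeholders -->
>       <str name="shards">host1:8983/solr/twitter,host2:8983/solr/twitter</str>
>     </lst>
>   </requestHandler>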
>
> You can also attach jconsole to your Solr via JMX and monitor the memory /
> CPU usage in a graphical interface. I have also run the garbage collector
> manually through jconsole at times, and it was of help.
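>
> If JMX isn't on yet, enabling it in solrconfig.xml is a one-liner (this
> registers Solr's statistics MBeans; for remote access the JVM additionally
> needs the standard -Dcom.sun.management.jmxremote options, which are JDK
> flags, not Solr ones):
>
>   <jmx />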
>
> Regards,
> Dmitry
>
> On Wed, Sep 14, 2011 at 9:10 AM, Rohit <ro...@in-rev.com> wrote:
>
> > Thanks Jaeger.
> >
> > Actually I am storing streaming Twitter data in this core, so the
> > indexing rate is about 12 tweets (docs)/second. The same Solr instance
> > contains 3 other cores, but those cores are not very heavy. The Twitter
> > core has become very large (77,516,851 docs) and it's taking a long time
> > to query (mostly facet queries based on date and string fields).
> >
> > After about 18-20 hours Solr runs out of memory, and the thread dump
> > doesn't show anything. How can I improve this besides adding more RAM to
> > the system?
> >
> >
> >
> > Regards,
> > Rohit
> > Mobile: +91-9901768202
> > About Me: http://about.me/rohitg
> >
> > -----Original Message-----
> > From: Jaeger, Jay - DOT [mailto:jay.jae...@dot.wi.gov]
> > Sent: 13 September 2011 21:06
> > To: solr-user@lucene.apache.org
> > Subject: RE: Out of memory
> >
> > numDocs is not the number of documents in memory.  It is the number of
> > documents currently in the index (which is kept on disk).  Same goes for
> > maxDocs, except that it is a count of all of the documents that have ever
> > been in the index since it was created or optimized (including deleted
> > documents).
> >
> > Your subject indicates that something is giving you some kind of out of
> > memory error. We might be better able to help you if you provide more
> > information about your exact problem.
> >
> > JRJ
> >
> >
> > -----Original Message-----
> > From: Rohit [mailto:ro...@in-rev.com]
> > Sent: Tuesday, September 13, 2011 2:29 PM
> > To: solr-user@lucene.apache.org
> > Subject: Out of memory
> >
> > I have Solr running on a machine with 18GB RAM, with 4 cores. One of the
> > cores is very big, containing 77,516,851 docs; the stats for its searcher
> > are given below:
> >
> >
> >
> > searcherName : Searcher@5a578998 main
> > caching : true
> > numDocs : 77516851
> > maxDoc : 77518729
> > lockFactory=org.apache.lucene.store.NativeFSLockFactory@5a9c5842
> > indexVersion : 1308817281798
> > openedAt : Tue Sep 13 18:59:52 GMT 2011
> > registeredAt : Tue Sep 13 19:00:55 GMT 2011
> > warmupTime : 63139
> >
> >
> >
> > - Is there a way to reduce the number of docs loaded into memory for
> > this core?
> >
> > - At any given time I don't need data from more than the past 15 days,
> > unless someone queries for it explicitly. How can this be achieved? (See
> > the sketch after this list.)
> >
> > - Would it be better to go for Solr replication or distribution if there
> > is little other option left?
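> >
> > For the 15-day case, would something like a default filter query in
> > solrconfig.xml work? A rough sketch (createdAt is a placeholder for my
> > actual date field):
> >
> >   <requestHandler name="/recent" class="solr.SearchHandler">
> >     <lst name="defaults">
> >       <!-- placeholder field name; a client can still pass its own fq to reach older data -->
> >       <str name="fq">createdAt:[NOW/DAY-15DAYS TO NOW]</str>
> >     </lst>
> >   </requestHandler>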
> >
> >
> >
> >
> >
> > Regards,
> >
> > Rohit
> >
> > Mobile: +91-9901768202
> >
> > About Me: http://about.me/rohitg
> >
> >
> >
> >
>
>


-- 
Regards,

Dmitry Kan
