Yes, 200 individual Solr instances, not Solr cores.

We get an average response time of below 1 second.

Most of the instances do not have many documents; some of the instances
have about 5 lakh (500,000) documents on average.

Regards
Sujatha

On Thu, Oct 20, 2011 at 3:35 AM, Jaeger, Jay - DOT <jay.jae...@dot.wi.gov> wrote:

> 200 instances of what?  The Solr application with Lucene, etc., as usual?
> Solr cores? ???
>
> Either way, 200 seems to be very very very many: unusually so.  Why so
> many?
>
> If you have 200 instances of Solr in a 20 GB JVM, that would only be 100MB
> per Solr instance.
>
> If you have 200 instances of Solr all accessing the same physical disk, the
> results are not likely to be satisfactory - the disk head will go nuts
> trying to handle all of the requests.
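>
> One quick way to confirm that kind of disk contention (assuming a Linux
> box with the sysstat package installed) is to watch the device holding
> the indexes while queries run:
>
>   iostat -x 5
>
> If %util sits near 100% and await keeps climbing, the disk is the
> bottleneck rather than the JVM.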
>
> JRJ
>
> -----Original Message-----
> From: Sujatha Arun [mailto:suja.a...@gmail.com]
> Sent: Wednesday, October 19, 2011 12:25 AM
> To: solr-user@lucene.apache.org; Otis Gospodnetic
> Subject: Re: OS Cache - Solr
>
> Thanks, Otis,
>
> This is our Solr cache allocation. We have the same cache allocation for
> all of our *200+ instances* on the single server. Is this too high?
>
> *Query Result Cache*: LRU Cache(maxSize=16384, initialSize=4096,
> autowarmCount=1024)
>
> *Document Cache*: LRU Cache(maxSize=16384, initialSize=16384)
>
> *Filter Cache*: LRU Cache(maxSize=16384, initialSize=4096,
> autowarmCount=4096)
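>
> For context, these values come from the <query> section of each
> instance's solrconfig.xml. Shrinking them would mean declarations along
> these lines (the numbers below are only an illustrative sketch, not our
> current values):
>
>   <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
>   <documentCache class="solr.LRUCache" size="512" initialSize="512"/>
>   <filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
>
> Since the same allocation is repeated across all 200+ instances, each of
> these entries is effectively multiplied by 200 on this server.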
>
> Regards
> Sujatha
>
> On Wed, Oct 19, 2011 at 4:05 AM, Otis Gospodnetic <
> otis_gospodne...@yahoo.com> wrote:
>
> > Maybe your Solr Document cache is big and that's consuming a big part of
> > that JVM heap?
> > If you want to be able to run with a smaller heap, consider making your
> > caches smaller.
> >
> > Otis
> > ----
> > Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> > Lucene ecosystem search :: http://search-lucene.com/
> >
> >
> > >________________________________
> > >From: Sujatha Arun <suja.a...@gmail.com>
> > >To: solr-user@lucene.apache.org
> > >Sent: Tuesday, October 18, 2011 12:53 AM
> > >Subject: Re: OS Cache - Solr
> > >
> > >Hello Jan,
> > >
> > >Thanks for your response and clarification.
> > >
> > >We are monitoring the JVM heap utilization and are currently using about
> > >18 GB of the 20 GB assigned to the JVM. Our total index size is about 14 GB.
> > >
> > >Regards
> > >Sujatha
> > >
> > >On Tue, Oct 18, 2011 at 1:19 AM, Jan Høydahl <jan....@cominvent.com>
> > wrote:
> > >
> > >> Hi Sujatha,
> > >>
> > >> Are you sure you need 20 GB for Tomcat? Have you profiled using JConsole
> > >> or similar? Try with 15 GB and see how it goes. The reason this is
> > >> beneficial is that you WANT your OS to have memory available for disk
> > >> caching. If you have 17 GB free after starting Solr, your OS will be able
> > >> to cache all index files in memory and you get very high search
> > >> performance. With your current settings, there is only 12 GB free for
> > >> both caching the index and for your MySQL activities. Chances are that
> > >> when you back up MySQL, the cached part of your Solr index gets flushed
> > >> from the disk cache and needs to be re-cached later.
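> > >>
> > >> As a rough sketch (assuming your Tomcat startup picks up CATALINA_OPTS;
> > >> adjust to however the instances are actually launched), capping the heap
> > >> at 15 GB would look like:
> > >>
> > >>   export CATALINA_OPTS="-Xms15g -Xmx15g"
> > >>
> > >> Whatever the heap gives back becomes available to the OS page cache for
> > >> your index files.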
> > >>
> > >> How to interpret memory stats varies between OSes, and seeing 163 MB free
> > >> may simply mean that your OS has used most RAM for various caches and
> > >> paging, but will release it once an application asks for more memory.
> > >> Have you seen http://wiki.apache.org/solr/SolrPerformanceFactors ?
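> > >>
> > >> On a Linux host (an assumption; other OSes report this differently) you
> > >> can see how much of the "used" memory is really just page cache with:
> > >>
> > >>   free -m
> > >>
> > >> The "-/+ buffers/cache" row shows what is actually available to
> > >> applications once the page cache is excluded, so a tiny "free" figure on
> > >> the first row is normal on its own.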
> > >>
> > >> You should also slim down your index as much as possible by setting
> > >> stored=false and indexed=false wherever possible. I would also upgrade
> > >> to a more current Solr version.
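> > >>
> > >> Purely as an illustration (the field name below is hypothetical, not from
> > >> your schema), a field that is searched but never returned can be declared
> > >> in schema.xml as:
> > >>
> > >>   <field name="body" type="text" indexed="true" stored="false"/>
> > >>
> > >> and a display-only field the other way around, with indexed="false" and
> > >> stored="true".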
> > >>
> > >> --
> > >> Jan Høydahl, search solution architect
> > >> Cominvent AS - www.cominvent.com
> > >> Solr Training - www.solrtraining.com
> > >>
> > >> On 17. okt. 2011, at 19:51, Sujatha Arun wrote:
> > >>
> > >> > Hello
> > >> >
> > >> > I am trying to understand the OS cache utilization of Solr. Our server
> > >> > has several Solr instances. The total combined index size of all
> > >> > instances is about 14 GB, and the largest single index is about 2.5 GB.
> > >> >
> > >> > Our server has a quad processor with 32 GB RAM, of which 20 GB has been
> > >> > assigned to the JVM. We are running Solr 1.3 on Tomcat 5.5 and Java 1.6.
> > >> >
> > >> > Our current statistics indicate that Solr uses 18-19 GB of the 20 GB
> > >> > assigned to the JVM. However, the free physical memory seems to remain
> > >> > constant, as below:
> > >> > Free physical memory = 163 MB
> > >> > Total physical memory = 32,232 MB
> > >> >
> > >> > The server also serves as a backup server for MySQL, where the
> > >> > application DB is backed up and restored. During this activity we see a
> > >> > lot of queries that take 10+ minutes to execute, but otherwise the
> > >> > maximum query time is less than 1-2 seconds.
> > >> >
> > >> > The free physical memory seems to be constant. Why is it constant, and
> > >> > how will it be shared between the MySQL backup and Solr while the backup
> > >> > is running? How much free physical memory should be available to the OS,
> > >> > given our stats?
> > >> >
> > >> > Any pointers would be helpful.
> > >> >
> > >> > Regards
> > >> > Sujatha
> > >>
> > >>
> > >
> > >
> > >
> >
>
