Hi Jesús,

Others have already asked a number of relevant questions.  If I had to guess, 
I'd guess this is simply a disk IO issue, but of course there may be room for 
improvement without getting more RAM or SSDs, so tell us more about your 
queries, about the disk IO you are seeing, etc.

Otis
----
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/


>________________________________
>From: Jesús Martín García <jmar...@cesca.cat>
>To: solr-user@lucene.apache.org
>Sent: Monday, October 17, 2011 6:19 AM
>Subject: millions of records problem
>
>Hi,
>
>I've got 500 million documents in Solr, each with the same number of 
>fields and of similar size. The version of Solr I'm using is 1.4.1 with Lucene 
>2.9.3.
>
>I don't have the option to use shards, so the whole index has to live on a 
>single machine...
>
>The size of the index is about 50 GB and the machine has 8 GB of RAM... Everything 
>is working, but the searches are very slow, even though I have tried different 
>configurations in solrconfig.xml, such as:
>
>- configuring a firstSearcher listener with the most frequently used searches
>- configuring the caches (query, filter and document) with large sizes 
>(roughly along the lines of the sketch further down)...
>
>but everything is still slow, so do you have any ideas to speed up the 
>searches without having to use much more RAM?
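>
>For reference, the kind of solrconfig.xml settings I mean look roughly like the 
>sketch below; the cache sizes and the warm-up query are only placeholders, not 
>my real values:
>
><query>
>  <!-- caches (query, filter, document) with large sizes; numbers are placeholders -->
>  <filterCache class="solr.FastLRUCache" size="4096" initialSize="1024" autowarmCount="512"/>
>  <queryResultCache class="solr.LRUCache" size="4096" initialSize="1024" autowarmCount="512"/>
>  <documentCache class="solr.LRUCache" size="16384" initialSize="4096"/>
>
>  <!-- firstSearcher warming with the most used searches; the query is a placeholder -->
>  <listener event="firstSearcher" class="solr.QuerySenderListener">
>    <arr name="queries">
>      <lst><str name="q">example_common_query</str></lst>
>    </arr>
>  </listener>
></query>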
>
>Thanks in advance,
>
>Jesús
>
>-- .......................................................................
>      __
>    /   /       Jesús Martín García
>C E / S / C A   Tècnic de Projectes
>  /__ /         Centre de Serveis Científics i Acadèmics de Catalunya
>
>Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
>T. 93 551 6213 · F. 93 205 6979 · jmar...@cesca.cat
>.......................................................................