Hello Zqzuk,

It's true that this index is probably too big for a single shard, but
make sure you heed Shawn's advice and use a 64-bit JVM in any case!
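
For what it's worth, here's the kind of sanity check I'd run first (the
version string and heap size below are just placeholders, not taken from
your setup):

    $ java -version
    # a 64-bit HotSpot JVM reports something like "... 64-Bit Server VM ..." here

    # then start Solr's example Jetty with an explicit heap, e.g.
    $ java -Xmx8g -jar start.jar

With MMapDirectory the index data mostly sits in the OS page cache rather
than on the Java heap, but a 32-bit JVM can't map anywhere near 170G of
address space, which is why the 64-bit JVM matters regardless of how much
heap you give it. If you do end up re-indexing into SolrCloud, there's a
rough example of creating a multi-shard collection below your quoted message.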

Michael Della Bitta

------------------------------------------------
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


On Mon, Feb 25, 2013 at 2:45 PM, zqzuk <ziqizh...@hotmail.co.uk> wrote:
> Thanks again for your kind input!
>
> I followed Tim's advice and tried to use MMapDirectory. Then I got an
> out-of-memory error on Solr startup (I tried giving the JVM only 8G, then 4G).
>
> I guess this truly indicates that there isn't sufficient memory for such a
> huge index.
>
> On another thread I posted a few days ago, about splitting an index for
> SolrCloud, the answer I got was that it is not possible to split an existing
> index into a SolrCloud configuration.
>
> I will try re-indexing with SolrCloud instead...
>
> thanks
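
P.S. If you do re-index under SolrCloud, you can pick the shard count up
front when you create the collection via the Collections API. Something
along these lines (the collection name, shard count, and port are
placeholders; check the parameter names against your Solr version):

    curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=4&replicationFactor=1'

With 1.5 billion documents you'd want enough shards that each one ends up
well under the size that's currently blowing up on a single node.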
