It really should be unlimited: the virtual memory (address space) limit
has nothing to do with how much RAM is on the computer.

See http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
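To make that concrete, here is a minimal sketch of checking and raising the
virtual address space limit before launching Solr. This assumes a POSIX shell
and that your batch/job script is allowed to change soft limits; the exact
mechanism on a managed cluster may differ.

```shell
#!/bin/sh
# Show the current virtual memory (address space) limit, in KB
# ("unlimited" is what MMapDirectory wants on 64-bit systems).
ulimit -v

# Raise the soft limit to unlimited for this shell and its children,
# then start Solr from the same session so it inherits the limit.
# (Raising it may fail if a lower hard limit was imposed by the admin.)
ulimit -v unlimited
```

Note this only affects address space reservations for memory-mapped index
files, not actual RAM usage; the JVM heap is still bounded separately by -Xmx.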

Mike McCandless

http://blog.mikemccandless.com

On Tue, Feb 26, 2013 at 12:18 PM, zqzuk <ziqizh...@hotmail.co.uk> wrote:
> Hi
> sorry, I couldn't check this directly... the way I do this is by submitting
> the job, with the required memory, to a cluster of computers in our
> organisation. Once executed, it gets randomly allocated to a node (a single
> server in the cluster), and it is not possible to connect to that specific
> node to check.
>
> But I'm pretty sure it won't be "unlimited"; it will match the figure I
> requested, which was 40G (the maximum memory on a single node is 48G
> anyway). So Solr only gets a maximum of 40G of memory for this index.
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/170G-index-1-5-billion-documents-out-of-memory-on-query-tp4042696p4043110.html
> Sent from the Solr - User mailing list archive at Nabble.com.
