My sense is that you're heading down the wrong path by trying to fit
such a large index on one server. Even if you resolve this current
issue, you're not likely to be happy with query performance, as one
thread searching a 1.5B-doc index is going to be slower than 10
threads concurrently searching ten ~150M-doc indexes. Solr is designed
to scale out. There is overhead to support distributed search, but in
my experience the benefit outweighs the cost 10x. My index is about
the same size on disk but holds only about 500M docs. I also think
you'll get much better support from the Solr community if you use
SolrCloud. So I recommend breaking your index up into multiple shards
and distributing them across more nodes - start with:
http://wiki.apache.org/solr/SolrCloud
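
To make that concrete, here is a rough, untested sketch using SolrJ's
CloudSolrServer against a hypothetical 10-shard collection - the
ZooKeeper addresses, collection name and shard count below are just
placeholders, not a recommendation for your exact numbers:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    // Sketch: query a SolrCloud collection through ZooKeeper. Assumes the
    // collection was created with something like the Collections API, e.g.
    //   /admin/collections?action=CREATE&name=bigindex&numShards=10&replicationFactor=1
    public class CloudQueryExample {
        public static void main(String[] args) throws Exception {
            // Placeholder ZK ensemble and collection name.
            CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("bigindex");

            SolrQuery q = new SolrQuery("*:*");
            q.setRows(10);

            // SolrJ fans the query out to all shards and merges the results.
            QueryResponse rsp = server.query(q);
            System.out.println("numFound: " + rsp.getResults().getNumFound());

            server.shutdown();
        }
    }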

Cheers,
Tim

On Mon, Feb 25, 2013 at 7:02 AM, Artem OXSEED <a.karpe...@oxseed.com> wrote:
> Hello,
>
> adding my 5 cents here as well: it seems we experienced a similar
> problem, one that was supposed to be fixed (or not appear at all) on
> 64-bit systems. Our current solution is a custom build of Solr with
> DEFAULT_READ_CHUNK_SIZE set to 10MB in the FSDirectory class. This fix
> was done not by me, however, and back in the days of Solr 1.4.1, so
> I'm not sure it's still valid considering the vast changes in the
> Lucene/Solr code and JVM improvements since then - I'd very much like
> to hear suggestions from experienced users.
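>
> In case it helps, below is roughly the Lucene-level call involved - a
> minimal, untested sketch only, assuming a Lucene version that still
> exposes FSDirectory.setReadChunkSize (it was deprecated and later
> removed) and a made-up index path. The setter is the per-directory
> equivalent of the DEFAULT_READ_CHUNK_SIZE constant our build changes;
> in Solr itself a custom DirectoryFactory registered via
> <directoryFactory> in solrconfig.xml would probably be the place to
> make the same call.
>
>     import java.io.File;
>     import org.apache.lucene.store.FSDirectory;
>
>     // Sketch only: open a Lucene index directory and lower the read
>     // chunk size from the 64-bit default (Integer.MAX_VALUE).
>     public class ChunkSizeExample {
>         public static void main(String[] args) throws Exception {
>             // "/path/to/index" is a placeholder, not our real path.
>             FSDirectory dir = FSDirectory.open(new File("/path/to/index"));
>             dir.setReadChunkSize(10 * 1024 * 1024); // 10MB instead of 2147483647
>             System.out.println("read chunk size: " + dir.getReadChunkSize());
>             dir.close();
>         }
>     }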
>
> --
> Warm regards,
> Artem Karpenko
>
>
> On 25.02.2013 14:33, zqzuk wrote:
>>
>> Just to add... I noticed this line in the stack trace in particular:
>>
>> *try calling FSDirectory.setReadChunkSize with a value smaller than the
>> current chunk size (2147483647)*
>>
>> I had a look at the javadoc and solrconfig.xml, but I cannot see a way
>> to call this method from within Solr. If that would be a possible fix,
>> how can I do it in Solr?
>>
>> Thanks
>>
>>
>>
>
>
