Split up the index into, say, 100 cores, then route each search to a specific 
core using a mod operation on the user id:

core_number = userid % num_cores

core_name = "core"+core_number

That way each index core is relatively small (maybe 100 million docs or less).
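
For illustration, here is a minimal Java sketch of that routing, assuming 100 
cores named "core0" .. "core99" on a single Solr host (the host, port, and 
core names are placeholders, not anything from this thread):

    public class CoreRouter {
        private static final int NUM_CORES = 100;  // assumed core count
        private static final String SOLR_BASE = "http://localhost:8983/solr/";  // assumed host

        // Map a user id to its core name with the mod operation above.
        static String coreName(long userId) {
            return "core" + (userId % NUM_CORES);
        }

        // Build the per-core search URL a client would query.
        static String searchUrl(long userId) {
            return SOLR_BASE + coreName(userId) + "/select";
        }

        public static void main(String[] args) {
            System.out.println(searchUrl(1234567L));  // -> .../core67/select
        }
    }

The client only needs the user id and the core count; note that changing 
num_cores later means re-sharding, so leave yourself some headroom.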


On Mar 9, 2012, at 2:02 PM, Glen Newton wrote:

> millions of cores will not work...
> ...yet.
> 
> -glen
> 
> On Fri, Mar 9, 2012 at 1:46 PM, Lan <dung....@gmail.com> wrote:
>> Solr has no limitation on the number of cores. It's limited by your hardware,
>> inodes, and how many files you can keep open.
>> 
>> I think even if you went the Lucene route you would run into the same hardware
>> limits.
>> 
> 
> 
> 
> -- 
> -
> http://zzzoot.blogspot.com/
> -
