Keeping track of 1000+ indices is actually not that hard.  I've implemented 
Simpy - http://simpy.com - in a way that keeps each member's index (or indices 
- some users have multiple indices) separate.  I can't give out the total 
number of Simpy users, but I can tell you it is weeeeeeellllll beyond 1000 :)
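
The per-member layout Otis describes could be sketched as a simple mapping from user ID to index directories. Everything here is hypothetical for illustration (the root path, the naming scheme, and the helper name are made up, not Simpy's actual implementation):

```python
# Hypothetical sketch of keeping a separate index (or indices) per member:
# derive each member's index directory from their user ID.
import os

INDEX_ROOT = "/var/indexes"  # assumed base directory, purely illustrative

def index_dirs_for(user_id: str, num_indices: int = 1) -> list:
    """Return the index director(ies) belonging to one member."""
    return [os.path.join(INDEX_ROOT, user_id, "index%d" % n)
            for n in range(num_indices)]

print(index_dirs_for("alice", 2))
# → ['/var/indexes/alice/index0', '/var/indexes/alice/index1']
```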

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

----- Original Message ----
From: Erick Erickson <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, December 11, 2007 4:33:45 PM
Subject: Re: Solr, Multiple processes running

How much data are we talking about here? Because it seems *much* simpler
to just index a field with each document indicating the user and then just
AND that user's ID in with your query.
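
The shared-index approach above could look like the following sketch: keep one index with a per-document user field, and restrict each search with a Solr filter query (fq). The field name "user_id" and the request path are assumptions for illustration, not part of the original thread:

```python
# Sketch of "one index + user field": AND the user's ID into every query
# via a filter query, so each client only sees their own documents.
# The "user_id" field name and "/solr/select" path are hypothetical.
from urllib.parse import urlencode

def solr_query_for_user(user_query, user_id):
    """Build a Solr select URL restricted to one user's documents."""
    params = {
        "q": user_query,                 # the user's actual search
        "fq": "user_id:%s" % user_id,    # filter to this client's docs
        "wt": "json",
    }
    return "/solr/select?" + urlencode(params)

print(solr_query_for_user("title:report", "client42"))
```

A filter query also lets Solr cache the per-user restriction separately from the main query, which is part of why this tends to be simpler than an index per client.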

Or think about facets (although I admit I don't know enough about facets
to weigh in on their merits; it's just been mentioned a lot).

Keeping track of 1,000+ indexes seems like a maintenance headache, but
much depends upon how much data you're talking about.

When replying, note that the number of documents is almost, but not quite
totally, useless unless combined with the number of fields you're
storing per doc, the average length of each field, etc <G>.....

Erick

On Dec 11, 2007 4:01 PM, Owens, Martin <[EMAIL PROTECTED]> wrote:

> Hello everyone,
>
> The system we're moving from (dtSearch) allows each of our clients to have
> a search index. So far I have yet to find the options required to set this;
> it seems I can only set the directory path before run time.
>
> Each of the indexes uses the same schema and configuration, just
> different data in each. What kind of performance penalty would I have from
> running a new Solr instance per required database? What is the best way to
> track which port or which index is being used? Would I be able to run 1,000
> or more Solr instances without performance degradation?
>
> Thanks for your help.
>
> Best regards, Martin Owens
>


