Thanks for the advice, everyone. I'll take a look at the API mentioned and
do some benchmarking over the weekend.
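
For reference, here's the rough, untested sketch I'm planning to start from,
using CoreAdminRequest's static helpers from the SolrJ API of this era (the
URL, core name, and instanceDir below are just placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CoreAdminSketch {
  public static void main(String[] args) throws Exception {
    // Core admin requests go against the Solr root URL, not an individual core.
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

    // Create a core for one user/folder; the instanceDir must already exist
    // on disk with a conf/ directory (solrconfig.xml, schema.xml).
    CoreAdminRequest.createCore("folder_12345", "/var/solr/folder_12345", server);

    // Check that it came up...
    CoreAdminRequest.getStatus("folder_12345", server);

    // ...and unload it when it is no longer needed.
    CoreAdminRequest.unloadCore("folder_12345", server);
  }
}

If the per-core overhead turns out to be too high at tens of thousands of
cores, I'll fall back to the single-index approach suggested below.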

-Mike
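
P.S. As a baseline for the benchmark, the single-index route would look
roughly like this on the query side - one shared index where every document
carries a user id field (the field name user_id and the example query are
placeholders, not from anyone's mail):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FilterQuerySketch {
  public static void main(String[] args) throws Exception {
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

    SolrQuery q = new SolrQuery("title:lucene"); // the user's actual search
    q.addFilterQuery("user_id:12345");           // restrict results to this user's docs;
                                                 // filter queries are cached separately
    QueryResponse rsp = server.query(q);
    System.out.println("hits: " + rsp.getResults().getNumFound());
  }
}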


On Fri, Oct 22, 2010 at 8:50 AM, Mark Miller <markrmil...@gmail.com> wrote:

> On 10/22/10 1:44 AM, Tharindu Mathew wrote:
> > Hi Mike,
> >
> > I've also considered using separate cores in a multi-tenant
> > application, i.e. a separate core for each tenant/domain. But cores
> > don't really suit that purpose.
> >
> > If you check the documentation, no real API support exists for this, so
> > it can't be done dynamically through SolrJ. And all the use cases I found
> > only had users configuring cores statically and then using them - maybe
> > 2 or 3 cores. Please correct me if I'm wrong, Solr folks.
>
> You can dynamically manage cores with solrj. See
> org.apache.solr.client.solrj.request.CoreAdminRequest's static methods
> for a place to start.
>
> You probably want to turn solr.xml's persist option on so that your
> cores survive restarts.
>
> >
> > So you're better off using a single index with a user id field, and
> > applying a filter query on the user id when fetching data.
>
> Many times this is probably the case - pros and cons to each, depending
> on what you are up to.
>
> - Mark
> lucidimagination.com
>
> >
> > On Fri, Oct 22, 2010 at 1:12 AM, Jonathan Rochkind <rochk...@jhu.edu> wrote:
> >> No, it does not seem reasonable.  Why do you think you need a separate
> >> core for every user?
> >> mike anderson wrote:
> >>>
> >>> I'm exploring the possibility of using cores as a solution to "bookmark
> >>> folders" in my solr application. This would mean I'll need tens of
> >>> thousands of cores... does this seem reasonable? I have plenty of CPUs
> >>> available for scaling, but I wonder about the memory overhead of adding
> >>> cores (aside from needing to fit the new index in memory).
> >>>
> >>> Thoughts?
> >>>
> >>> -mike
> >>>
> >>>
> >>
> >
> >
> >
>
>
