We set it up like this:
+ individual Solr instances are set up
+ external mapping/routing allocates users to instances. This mapping
can be stored in an external data store
+ all cores are created with transient=true and loadOnStartup=false
+ cores come online on demand
+ as and when a user's data gets bigger (or a host gets hot), cores are
migrated to less-loaded hosts using the built-in replication
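
The transient/loadOnStartup bullet maps to per-core properties. A minimal
sketch, assuming Solr's core discovery (the core name here is hypothetical):

```
# core.properties for one user's core
name=user_12345
transient=true
loadOnStartup=false
```

With this, the core is only loaded when a request arrives for it, and Solr
may evict it again once the transient cache fills up.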

Keep in mind we had the same schema for all users. Currently there is no
way to upload a new schema to Solr.
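
The external mapping/routing described above can be sketched as a simple
lookup against a user-to-host table; the store, function names, and
least-loaded assignment policy here are all hypothetical, not part of the
actual deployment:

```python
# Sketch of external user -> Solr instance routing. In practice the
# mapping would live in an external data store (e.g. a database table);
# a plain dict stands in for it here.

ROUTING: dict[str, str] = {}  # user_id -> Solr base URL


def assign_user(user_id: str, hosts: list[str]) -> str:
    """Pin a new user to the least-loaded host (here: fewest users)."""
    counts = {h: 0 for h in hosts}
    for host in ROUTING.values():
        if host in counts:
            counts[host] += 1
    host = min(hosts, key=lambda h: counts[h])
    ROUTING[user_id] = host
    return host


def route(user_id: str) -> str:
    """Return the base URL of the core serving this user."""
    host = ROUTING[user_id]
    return f"{host}/solr/user_{user_id}"
```

Migrating a user between hosts then amounts to replicating the core and
updating this one mapping entry.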
On Jun 8, 2013 1:15 AM, "Aleksey" <bitterc...@gmail.com> wrote:

> > Aleksey: What would you say is the average core size for your use case -
> > thousands or millions of rows? And how sharded would each of your
> > collections be, if at all?
>
> Average core/collection size wouldn't even be thousands; hundreds, more
> likely. And the largest would be half a million or so, but that's a
> pathological case. I don't need sharding or queries that fan out to
> different machines. In fact, I'd like to avoid that so I don't have to
> collate the results.
>
>
> > The Wiki page was not written for Cloud Solr.
> >
> > We have done such a deployment where fewer than a tenth of the cores
> > were active at any given point in time. Though there were tens of
> > millions of indices, they were split among a large number of hosts.
> >
> > If you don't insist on a Cloud deployment it is possible. I'm not
> > sure if it is possible with Cloud.
>
> By Cloud you mean specifically SolrCloud? I don't have to have it if I
> can do without it. Bottom line is I want a bunch of small cores
> distributed over a fleet, each core fitting completely on one server.
> Would you be willing to provide a little more detail on your setup?
> In particular, how are you managing the cores?
> How do you route requests to the proper server?
> If you scale the fleet up and down, does reshuffling of the cores
> happen automatically, or is it an involved manual process?
>
> Thanks,
>
> Aleksey
>
