Thanks as always for the great answers!

Jim


On 6/19/15, 11:57 AM, "Erick Erickson" <erickerick...@gmail.com> wrote:

>Jim:
>
>This is by design. There's no way to tell Solr to find all the nodes
>available and put one replica on each. In fact, you're explicitly
>telling it to create one and only one replica, and one and only one shard.
>That is, your collection will have exactly one low-level core. But you
>realized that...
>
>As to the reasoning: consider heterogeneous collections all hosted on
>the same Solr cluster. I have big collections, little collections,
>some with high QPS rates, some not, etc. Having Solr do things like
>this automatically would make managing all of that difficult.
>
>Probably the "real" reason is "nobody thought it would be useful in
>the general case", and I probably concur. Adding a new node to an
>existing cluster would still result in an unbalanced cluster, etc.
>
>I suppose a stop-gap would be to query the cluster's "live_nodes" and
>fold that into the URL; I don't know how much of a pain that would be,
>though.
>
>Best,
>Erick
>
>On Fri, Jun 19, 2015 at 10:15 AM, Jim.Musil <jim.mu...@target.com> wrote:
>> I noticed that when I issue the CREATE collection command to the API,
>>it does not automatically put a replica on every live node connected to
>>ZooKeeper.
>>
>> So, for example, if I have 3 Solr nodes connected to a ZooKeeper
>>ensemble and create a collection like this:
>>
>> 
>> /admin/collections?action=CREATE&name=my_collection&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=my_config
>>
>> It will only create a core on one of the three nodes. I can make it
>>work if I change replicationFactor to 3. When standing up an entire
>>stack using Chef, this all gets a bit clunky. I don't see any option
>>such as "ALL" that would just create a replica on all nodes regardless
>>of how many there are.
>>
>> I'm guessing this is intentional, but curious about the reasoning.
>>
>> Thanks!
>> Jim
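
For reference, the stop-gap Erick mentions above (query "live_nodes", then size
the CREATE request to match) might look roughly like the sketch below. This is
a minimal illustration that assumes the Collections API's CLUSTERSTATUS and
CREATE actions over plain HTTP; the base URL, helper names, and the choice to
fold the node count into replicationFactor are assumptions for the example, not
something from the thread, and error handling is omitted.

    # Sketch: create a collection with one replica per currently-live node.
    # Assumes any reachable Solr node exposes the Collections API; the base
    # URL and function names below are illustrative only.
    import requests

    SOLR = "http://localhost:8983/solr"  # assumption: any live Solr node

    def live_node_count():
        """Ask the cluster how many nodes are currently live (CLUSTERSTATUS)."""
        resp = requests.get(
            SOLR + "/admin/collections",
            params={"action": "CLUSTERSTATUS", "wt": "json"},
        )
        resp.raise_for_status()
        return len(resp.json()["cluster"]["live_nodes"])

    def create_collection(name, config_name):
        """Issue CREATE with replicationFactor sized to the live node count."""
        resp = requests.get(
            SOLR + "/admin/collections",
            params={
                "action": "CREATE",
                "name": name,
                "numShards": 1,
                "replicationFactor": live_node_count(),  # one replica per node
                "maxShardsPerNode": 1,
                "collection.configName": config_name,
                "wt": "json",
            },
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(create_collection("my_collection", "my_config"))

Note that this only reflects the nodes that are live at creation time; as Erick
points out, a node added later would still leave the collection unbalanced until
replicas are placed on it by hand (e.g. via ADDREPLICA).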
