Thanks for your reply, Erick.

This is what I'm doing at the moment with Solr 6.2 (I was mistaken earlier
when I said 6.1):

1. A new instance comes online
2. Systemd starts Solr with a custom start.sh script (rough sketch of the
whole script below)
3. This script creates a core.properties file that looks like this:
```
name=blah
shard=shard1
```
4. The script then starts Solr via the jar:
```
java -DzkHost=....... -jar start.jar
```
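
To make that concrete, here is roughly what start.sh boils down to. The
directory layout, ZooKeeper hosts and the -Dsolr.solr.home flag are
placeholders / assumptions on my part, not the real values:

```
#!/bin/bash
# Rough sketch of start.sh -- paths and ZooKeeper hosts are placeholders.

SOLR_HOME=/var/solr/data                      # hypothetical solr home
CORE_DIR="$SOLR_HOME/blah_shard1_replica1"    # core discovery directory

# Step 3: write core.properties so this node registers itself as a core of
# collection "blah" on shard1 when it joins the cluster.
mkdir -p "$CORE_DIR"
cat > "$CORE_DIR/core.properties" <<EOF
name=blah
shard=shard1
EOF

# Step 4: start Solr in cloud mode, pointing it at the ZooKeeper ensemble.
cd /opt/solr/server                           # hypothetical install dir
exec java -DzkHost=zk1:2181,zk2:2181,zk3:2181 \
          -Dsolr.solr.home="$SOLR_HOME" \
          -jar start.jar
```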

If I understand correctly, what happens is that the node joins the cluster
and runs a 'repair' from the leader, copying the core over. Once the repair
is done, the node is a healthy member of the cluster?

Is there a way to do something similar in 7.1?
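
I'm guessing the 7.1 equivalent would be for the start script to call the
Collections API explicitly instead of relying on core discovery, something
along these lines (host, port and node name are just placeholders):

```
# Hypothetical replacement for the core.properties step in 7.1: once Solr is
# up, explicitly ask the cluster to add this node as a replica of shard1.
# The node parameter is the node name as it appears under live_nodes.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=blah&shard=shard1&node=$(hostname):8983_solr"
```

But I'm not sure whether that, or an autoscaling nodeAdded trigger, is the
recommended approach.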



On 19 December 2017 at 16:55, Erick Erickson <erickerick...@gmail.com>
wrote:

> What have you configured to add the replica when a new node is spun up?
>
> If you're just copying the entire directory including the core.properties
> file,
> you're just getting lucky. The legacyCloud=true default is _probably_ adding
> the replica with a new URL and thus making it distinct.
>
> Please detail exactly what you do when you add a new node.....
>
> Best,
> Erick
>
> On Mon, Dec 18, 2017 at 9:03 PM, Greg Roodt <gro...@gmail.com> wrote:
> > Hi
> >
> > Background:
> > * I am looking to upgrade from Solr 6.1 to Solr 7.1.
> > * Currently the system is run in cloud mode with a single collection and
> > single shard per node.
> > * Currently when a new node is added to the cluster, it becomes a replica
> > and copies the data / core "automagically".
> >
> > Question:
> > Is it possible to have this dynamic / automatic behaviour for replicas in
> > Solr 7.1? I've seen mention of autoscale APIs and the Collections API and
> > also legacyCloud = true. I'm a little confused about what the best
> approach
> > is.
> >
> > Right now, our system is very flexible and we can scale-up by adding new
> > nodes to the cluster. I would really like to keep this behaviour when we
> > upgrade to 7.1.
> >
> > Is anybody able to point me in the right direction or describe how to
> > achieve this?
> >
> > Kind Regards
> > Greg
>
