Right, there is a shared filesystem requirement.  It would be nice if this
Solr feature could be enhanced to have more options like backing up
directly to another SolrCloud using replication/fetchIndex like your cool
solrcloud_manager thing.
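
Roughly what I have in mind, per core (a sketch only -- the host names, ports,
and core names below are made up; it just leans on the replication handler's
fetchindex command):

# Sketch: have a core on the target cluster pull its index directly from the
# matching core on the source cluster via the replication handler.
# (host and core names are placeholders)
import requests

SOURCE_CORE = "http://source-host:8983/solr/main_index_shard1_replica1"
TARGET_CORE = "http://target-host:8983/solr/main_index_shard1_replica_n1"

resp = requests.get(TARGET_CORE + "/replication", params={
    "command": "fetchindex",
    "masterUrl": SOURCE_CORE,
    "wt": "json",
})
resp.raise_for_status()
print(resp.json())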

On Wed, Mar 28, 2018 at 12:34 PM Jeff Wartes <jwar...@whitepages.com> wrote:

> The backup/restore still requires setting up a shared filesystem on all
> your nodes though right?
>
> I've been using the fetchindex trick in my solrcloud_manager tool for ages
> now: https://github.com/whitepages/solrcloud_manager#cluster-commands
> Some of the original features in that tool have been incorporated into
> Solr itself these days, but I still use clonecollection/copycollection
> regularly. (most recently with Solr 7.2)
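>
> The underlying trick is nothing fancy -- roughly the following (a simplified
> sketch, not the tool's actual code; the cluster URLs and collection name are
> placeholders), assuming the source and target collections have the same
> shard layout:
>
> # Simplified sketch of the fetchindex "clone" trick: for each shard, point a
> # replica in the target collection at a replica in the source collection and
> # have it pull the index over the replication handler.
> # (cluster URLs and collection name are placeholders)
> import requests
>
> def replicas_by_shard(solr_url, collection):
>     """Map shard name -> list of (base_url, core) using CLUSTERSTATUS."""
>     status = requests.get(solr_url + "/admin/collections", params={
>         "action": "CLUSTERSTATUS", "collection": collection, "wt": "json",
>     }).json()
>     shards = status["cluster"]["collections"][collection]["shards"]
>     return {
>         shard: [(r["base_url"], r["core"]) for r in info["replicas"].values()]
>         for shard, info in shards.items()
>     }
>
> SOURCE = "http://source-cluster:8983/solr"
> TARGET = "http://target-cluster:8983/solr"
>
> src = replicas_by_shard(SOURCE, "main_index")
> dst = replicas_by_shard(TARGET, "main_index")
>
> for shard, targets in dst.items():
>     src_base, src_core = src[shard][0]   # any live replica of the source shard
>     for base_url, core in targets:
>         requests.get(base_url + "/" + core + "/replication", params={
>             "command": "fetchindex",
>             "masterUrl": src_base + "/" + src_core,
>         }).raise_for_status()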
>
>
> On 3/27/18, 9:55 PM, "David Smiley" <david.w.smi...@gmail.com> wrote:
>
>     The backup/restore API is intended to address this.
>
> https://builds.apache.org/job/Solr-reference-guide-master/javadoc/making-and-restoring-backups.html
>
>     Erick's advice is good (and I once drafted docs for the same scheme years
>     ago as well), but I consider it dated -- it's what people had to do before
>     the backup/restore API existed.  Internally, backup/restore is doing
>     similar stuff.  It's easy to give backup/restore a try; surely you have by
>     now?
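>
>     In case it helps, the two calls look roughly like this (the collection
>     name, backup name, and shared location are placeholders for your setup):
>
>     # Rough sketch of the Collections API backup/restore calls.
>     # (collection name, backup name, and the shared 'location' are
>     # placeholders; every node must be able to read/write that location)
>     import requests
>
>     SOLR = "http://localhost:8983/solr"
>
>     # Take a backup of the source collection onto the shared filesystem.
>     requests.get(SOLR + "/admin/collections", params={
>         "action": "BACKUP",
>         "name": "main_index_backup",
>         "collection": "main_index",
>         "location": "/mnt/shared/solr_backups",
>     }).raise_for_status()
>
>     # Restore it on the target cluster (same location mounted there).
>     requests.get(SOLR + "/admin/collections", params={
>         "action": "RESTORE",
>         "name": "main_index_backup",
>         "collection": "main_index",
>         "location": "/mnt/shared/solr_backups",
>     }).raise_for_status()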
>
>     ~ David
>
>     On Tue, Mar 6, 2018 at 9:47 AM Patrick Schemitz <p...@solute.de> wrote:
>
>     > Hi List,
>     >
>     > so I'm running a bunch of SolrCloud clusters (each cluster is: 8 shards
>     > on 2 servers, with 4 instances per server, no replicas, i.e. 1 shard per
>     > instance).
>     >
>     > Building the index afresh takes 15+ hours, so when I have to deploy a new
>     > index, I build it once, on one cluster, and then copy (scp) over the
>     > data/<main_index>/index directories (shutting down the Solr instances
>     > first).
>     >
>     > I could get Solr 6.5.1 to number the shard/replica directories nicely via
>     > the createNodeSet and createNodeSet.shuffle options:
>     >
>     > Solr 6.5.1 /var/lib/solr:
>     >
>     > Server node 1:
>     > instance00/data/main_index_shard1_replica1
>     > instance01/data/main_index_shard2_replica1
>     > instance02/data/main_index_shard3_replica1
>     > instance03/data/main_index_shard4_replica1
>     >
>     > Server node 2:
>     > instance00/data/main_index_shard5_replica1
>     > instance01/data/main_index_shard6_replica1
>     > instance02/data/main_index_shard7_replica1
>     > instance03/data/main_index_shard8_replica1
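>     >
>     > For reference, those collections were created with something along these
>     > lines (the node names/ports below are placeholders for my actual hosts):
>     >
>     > # Sketch of the CREATE call (node names are placeholders); with
>     > # createNodeSet.shuffle=false the shards are assigned to the listed
>     > # nodes in order, which is what gives the tidy layout above.
>     > import requests
>     >
>     > nodes = [
>     >     "node1:8983_solr", "node1:8984_solr", "node1:8985_solr", "node1:8986_solr",
>     >     "node2:8983_solr", "node2:8984_solr", "node2:8985_solr", "node2:8986_solr",
>     > ]
>     >
>     > requests.get("http://node1:8983/solr/admin/collections", params={
>     >     "action": "CREATE",
>     >     "name": "main_index",
>     >     "numShards": 8,
>     >     "replicationFactor": 1,
>     >     "maxShardsPerNode": 1,
>     >     "createNodeSet": ",".join(nodes),
>     >     "createNodeSet.shuffle": "false",
>     > }).raise_for_status()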
>     >
>     > However, while attempting to upgrade to 7.2.1, this numbering has changed:
>     >
>     > Solr 7.2.1 /var/lib/solr:
>     >
>     > Server node 1:
>     > instance00/data/main_index_shard1_replica_n1
>     > instance01/data/main_index_shard2_replica_n2
>     > instance02/data/main_index_shard3_replica_n4
>     > instance03/data/main_index_shard4_replica_n6
>     >
>     > Server node 2:
>     > instance00/data/main_index_shard5_replica_n8
>     > instance01/data/main_index_shard6_replica_n10
>     > instance02/data/main_index_shard7_replica_n12
>     > instance03/data/main_index_shard8_replica_n14
>     >
>     > This new numbering breaks my copy script, and furthermore, I'm worried
>     > about what happens when the numbering differs among the target clusters.
>     >
>     > How can I switch this back to the old numbering scheme?
>     >
>     > Side note: is there a recommended way of doing this? Is the
>     > backup/restore mechanism suitable for this? The ref guide is kind of terse
>     > here.
>     >
>     > Thanks in advance,
>     >
>     > Ciao, Patrick
>     >
>     --
>     Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>     LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>     http://www.solrenterprisesearchserver.com
>
>
--
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
