How brave are you? ;)....

I'll defer to Scott on the internals of ZK and why it
might be necessary to delete the ZK data dirs, but
what happens if you just correct your configuration and
drive on?

If that doesn't work here's something to try....
Shut down your Solr instances, then:

- bin/solr zk cp -r zk:/ some_local_dir

- fix your ZK, perhaps blowing the data directories away, and bring
the ZK servers back up.

- bin/solr zk cp -r some_local_dir zk:/

Start your Solr instances.
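
Roughly, the whole cycle would look like this (a sketch only -- the
"file:" prefix, the -z option and exact flags may vary by Solr version,
and the ZK hosts and backup directory below are made-up examples; I'm
also assuming ZK_HOST isn't already set in solr.in.sh, hence the
explicit -z):

# 1. stop every Solr node, however you normally do it
#    (e.g. "bin/solr stop -all" on each host)

# 2. copy everything Solr keeps in ZK down to local disk
bin/solr zk cp -r zk:/ file:/tmp/zk-backup/ -z zk1:2181,zk2:2181,zk3:2181

# 3. fix the ZK ensemble (myid files, data dirs, ...) and restart the
#    ZK servers

# 4. copy the saved tree back into the now-empty ensemble
bin/solr zk cp -r file:/tmp/zk-backup/ zk:/ -z zk1:2181,zk2:2181,zk3:2181

# 5. start the Solr nodes again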

NOTE: if you've configured your Solr ZK connection string with a
"chroot", the ZK path will be slightly different.
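
For example, with a hypothetical chroot of /solr you'd either put the
chroot in the connection string or name the subtree explicitly (both
from memory, so double-check against your version):

bin/solr zk cp -r zk:/ file:/tmp/zk-backup/ -z zk1:2181,zk2:2181,zk3:2181/solr
  or
bin/solr zk cp -r zk:/solr file:/tmp/zk-backup/ -z zk1:2181,zk2:2181,zk3:2181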

NOTE: I'm going from memory on the exact form of those commands.
bin/solr -help
should show you the info....

WARNING: This worked at some point in the past, but is _not_
"officially" supported; it was just a happy consequence of the code
added to copy data from ZK and back, which replaced the zkCli
functionality and gave Solr users one less thing to keep track of.

What that does is copy the cluster state relevant to Solr out of ZK
and then back into ZK.

DO NOT change your Solr data in any way when doing this. What this is
trying to do is copy all the topology information in ZK. Assuming the
Solr nodes haven't changed and have the same IP addresses, etc., it
_might_ work for you.
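
If you want to sanity-check what got copied back, a recursive listing
shows the Solr-related znodes (the exact set varies by version; these
are the usual suspects, not a guarantee):

bin/solr zk ls -r zk:/ -z zk1:2181,zk2:2181,zk3:2181
# typically: /collections, /configs, /live_nodes, /aliases.json,
# /overseer, /security.json, and /clusterstate.json on older versions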

Best,
Erick

On Fri, Jan 4, 2019 at 4:25 AM Joe Lerner <joeler...@gmail.com> wrote:
>
> wrt, "You'll probably have to delete the contents of the zk data directory
> and rebuild your collections."
>
> Rebuild my *SOLR* collections? That's easy enough for us.
>
> If this is how we're incorrectly configured now:
>
> server #1 = myid#1
> server #2 = myid#2
> server #3 = myid#2
>
> My plan would be to do the following, while users are still online (it's a
> big [bad] deal if we need to take search offline):
>
> 1. Take zk #3 down.
> 2. Fix zk #3 by deleting the contents of the zk data directory and assign it
> myid#3
> 3. Bring zk#3 back up
> 4. Do a full re-build of all collections
>
> Thanks!
>
> Joe
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
