Hi Markus and Jan,

Thanks for the quick response and the good ideas.
I will look into the Puppet direction. We already use Puppet, so this is
easy to add.
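
For reference, the setting Puppet would have to keep in sync on the Solr
nodes is the ZK_HOST line in solr.in.sh; a minimal sketch, with placeholder
hostnames and an optional /solr chroot:

  # /etc/default/solr.in.sh (path depends on how Solr was installed)
  ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr"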

Thanks a lot,
David

On Thu, Jan 26, 2017 at 3:38 PM Markus Jelsma <markus.jel...@openindex.io>
wrote:

> Or you can administer the nodes via configuration management software
> such as Salt, Puppet, etc. If we add a ZooKeeper to our list of ZooKeepers,
> it is automatically updated in the solr.in.sh file on all nodes and across
> separate clusters.
>
> If you're looking for easy maintenance, that is :)
>
> Markus
>
> -----Original message-----
> > From: Jan Høydahl <jan....@cominvent.com>
> > Sent: Thursday 26th January 2017 14:34
> > To: solr-user@lucene.apache.org
> > Subject: Re: Solr Cloud - How to maintain the addresses of the zookeeper servers
> >
> > Hi,
> >
> > Hardcoding your ZK server addresses is a key factor in the stability of
> > your cluster.
> > If this were some kind of magic, and the magic failed, EVERYTHING would
> > come to a halt :)
> > And since changing ZK is something you do very seldom, I think it is not
> > too hard to:
> >
> > 1. push the new solr.in.sh file to all nodes
> > 2. restart all nodes
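> >
> > A rough sketch of those two steps, assuming the solr.in.sh installed by
> > Solr's service installer (so it sits in /etc/default and Solr runs as a
> > service named "solr"; hostnames and paths are placeholders for your own):
> >
> >   # copy the updated ZK_HOST setting to every node, then restart Solr
> >   for host in solr1 solr2 solr3; do
> >     scp solr.in.sh "$host":/etc/default/solr.in.sh
> >     ssh "$host" sudo service solr restart
> >   done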
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> > > On 26 Jan 2017, at 14:30, David Michael Gang <michaelg...@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > I want to set up a SolrCloud cluster with x nodes and 3 ZooKeeper
> > > servers.
> > > As I understand it, the following parties need to know all the ZooKeeper
> > > servers (example below):
> > > * All ZooKeeper servers
> > > * All SolrCloud nodes
> > > * All SolrJ cloud smart clients
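> > >
> > > For concreteness, a minimal sketch of where that list lives today, with
> > > placeholder hostnames and ports:
> > >
> > >   # on each Solr node, either in solr.in.sh ...
> > >   ZK_HOST="zk1:2181,zk2:2181,zk3:2181"
> > >   # ... or on the command line at startup
> > >   bin/solr start -cloud -z "zk1:2181,zk2:2181,zk3:2181"
> > >
> > > and the SolrJ CloudSolrClient is handed the same connect string.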
> > >
> > > So let's say I hard-code it and then want to add 2 ZooKeeper nodes:
> > > I would have to update it in many places, which makes it hard to
> > > maintain.
> > > How do you manage this? Is there a way to get the list of ZooKeeper
> > > servers dynamically? Any other idea?
> > > I wanted to hear from your experience how to achieve this task
> > > effectively.
> > >
> > > Thanks,
> > > David
> >
> >
>
