bq: What tools do you use for the "auto setup"? How do you get your config
automatically uploaded to zk?

Both uploading the config to ZK and creating collections are one-time
operations, usually done manually. Currently, uploading the config set is
accomplished with zkCli (yes, it's a little clumsy). There's a JIRA to add
this as a command under solr/bin, though. In any given situation they'd be
easy enough to automate with a shell script or wizard....
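A rough sketch of those two one-time steps, for the archives. The hostnames, ports, paths, and collection parameters below are placeholders, not values from this thread; zkcli.sh ships inside the Solr distribution (under server/scripts/cloud-scripts/ in recent releases). The commands are echoed rather than executed so you can eyeball them before pointing them at a live cluster:

```shell
#!/bin/sh
# Sketch only -- hosts, paths, and shard counts are assumptions.
ZK_HOST="zk1:2181,zk2:2181,zk3:2181"
CONF_NAME="mycollection"
CONF_DIR="/opt/solr/configs/${CONF_NAME}"

# 1. One-time upload of the config set to ZooKeeper via zkCli:
UPCONFIG_CMD="server/scripts/cloud-scripts/zkcli.sh -zkhost ${ZK_HOST} \
-cmd upconfig -confdir ${CONF_DIR} -confname ${CONF_NAME}"

# 2. One-time collection creation via the Collections API:
CREATE_URL="http://solr-host:8983/solr/admin/collections?action=CREATE\
&name=${CONF_NAME}&numShards=2&replicationFactor=2\
&collection.configName=${CONF_NAME}"

# Echoed rather than run, so this stays inert outside a real cluster:
echo "$UPCONFIG_CMD"
echo "curl '${CREATE_URL}'"
```

Both steps are idempotent enough to drop into whatever wizard or provisioning script you already have.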

Best,
Erick

On Wed, Sep 23, 2015 at 7:33 PM, Steve Davids <sdav...@gmail.com> wrote:

> What tools do you use for the "auto setup"? How do you get your config
> automatically uploaded to zk?
>
> On Tue, Sep 22, 2015 at 2:35 PM, Gili Nachum <gilinac...@gmail.com> wrote:
>
> > Our auto setup sequence is:
> > 1. Deploy 3 ZK nodes.
> > 2. Deploy Solr nodes and start them, connecting to ZK.
> > 3. Upload the collection config to ZK.
> > 4. Call the create-collection REST API.
> > 5. Done. SolrCloud is ready to work.
> >
> > Don't yet have automation for replacing or adding a node.
> > On Sep 22, 2015 18:27, "Steve Davids" <sdav...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I am trying to come up with a repeatable process for deploying a Solr
> > > Cloud cluster from scratch, along with the appropriate security groups,
> > > auto scaling groups, and custom Solr plugin code. I saw that LucidWorks
> > > created a Solr Scale Toolkit, but that seems to be more of a one-shot
> > > deal than really setting up your environment for the long haul. Here is
> > > where we are right now:
> > >    1. ZooKeeper ensemble is easily brought up via a CloudFormation
> > >       script
> > >    2. We have an RPM built to lay down the Solr distribution + custom
> > >       plugins + configuration
> > >    3. Solr machines come up and connect to ZK
> > >
> > > Now, we are using Puppet, which could easily create the core.properties
> > > file for the corresponding core and have ZK get bootstrapped, but that
> > > seems to be a no-no these days... So, can anyone think of a way to get
> > > ZK bootstrapped automatically with pre-configured collection
> > > configurations? Also, is there a recommendation on how to deal with
> > > machines that are coming/going? As I see it, machines will be getting
> > > spun up and terminated from time to time, and we need a process for
> > > dealing with that. The first idea was to just use a common node name,
> > > so that if a machine was terminated a new one could come up and replace
> > > that particular node, but on second thought that would seem to require
> > > an auto scaling group *per* node (so it knows which node name it is).
> > > For a large cluster this seems crazy from a maintenance perspective,
> > > especially if you want to be elastic with regard to the number of live
> > > replicas for peak times. So, the next idea was to have some outside
> > > observer listen for when new EC2 instances are created or terminated
> > > (via CloudWatch/SQS) and make the appropriate API calls to either add
> > > the replica or delete it. This seems doable, but perhaps not the
> > > simplest solution that could work.
> > >
> > > I was hoping others have already gone through this and have valuable
> > > advice to give; we are trying to set up Solr Cloud the "right way" so
> > > we don't get nickel-and-dimed to death from an O&M perspective.
> > >
> > > Thanks,
> > >
> > > -Steve
> > >
> >
>
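For readers landing on this thread: the "outside observer" idea Steve floats above boils down to two standard Collections API actions, DELETEREPLICA and ADDREPLICA. A bare sketch of the reaction to a termination event follows; the host, collection, shard, and replica names are placeholders, and the CloudWatch/SQS event plumbing that would supply them is elided entirely. The calls are echoed rather than executed:

```shell
#!/bin/sh
# Sketch only -- all names below are assumptions, and the event source
# (CloudWatch/SQS message identifying the dead node) is left out.
SOLR="http://solr-host:8983"
COLLECTION="mycollection"
SHARD="shard1"
DEAD_REPLICA="core_node3"   # would come from the termination event

# Drop the replica that lived on the terminated instance:
DELETE_URL="${SOLR}/solr/admin/collections?action=DELETEREPLICA\
&collection=${COLLECTION}&shard=${SHARD}&replica=${DEAD_REPLICA}"

# Add a fresh replica; Solr places it on an available live node:
ADD_URL="${SOLR}/solr/admin/collections?action=ADDREPLICA\
&collection=${COLLECTION}&shard=${SHARD}"

echo "curl '${DELETE_URL}'"
echo "curl '${ADD_URL}'"
```

This avoids the one-auto-scaling-group-per-node problem, since new instances don't need a fixed node name; the observer just repairs the replica count after the fact.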
