With AWS auto scaling, one can specify a minimum number of instances for an
auto scaling group, so there should never be an insufficient number of
replicas.  One can also specify a termination policy so that the newly
added nodes are removed first.
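As a rough sketch, the two settings above would look something like this with the AWS CLI (the group name, launch configuration name, and availability zone here are placeholders, not anything from an actual setup):

```shell
# Sketch: an auto scaling group that never drops below 3 instances and,
# when scaling in, terminates the most recently launched instances first.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name solr-asg \
  --launch-configuration-name solr-launch-config \
  --min-size 3 \
  --max-size 10 \
  --availability-zones us-east-1a \
  --termination-policies NewestInstance
```

With `NewestInstance`, a scale-in event should remove the nodes that auto scaling added most recently, leaving the original replicas in place.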

But with SolrCloud, as long as there are enough replicas, there is no wrong
node to remove, right?
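One way to sanity-check which nodes SolrCloud currently considers alive is to look at the ephemeral `/live_nodes` znodes in ZooKeeper; a terminated instance should drop out of that list on its own, even if a stale entry lingers in the cluster state. A minimal sketch, assuming a ZooKeeper ensemble reachable at the placeholder address `zk-host:2181`:

```shell
# Sketch: list the Solr nodes currently registered as live in ZooKeeper.
# /live_nodes entries are ephemeral, so instances removed by auto scaling
# disappear here once their ZooKeeper sessions expire.
zkCli.sh -server zk-host:2181 ls /live_nodes
```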

AWS Elastic Beanstalk seems to be a wrapper around AWS auto scaling and
other AWS elastic services.  I am not sure if it offers the fine-grained
control that you get when using auto scaling directly.


On Wed, Jan 2, 2013 at 11:14 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:

> We've considered using AWS Beanstalk (hmm, what's the difference between
> AWS auto scaling and elastic beanstalk? not sure.) for search-lucene.com ,
> but the idea of something adding and removing nodes seems scary.  The
> scariest part to me is automatic removal of wrong nodes that ends up in
> data loss or insufficient number of replicas.
>
> But if somebody has done this and has written up a how-to, I'd love to see
> it!
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext.com/
>
> On Wed, Jan 2, 2013 at 5:51 PM, Bill Au <bill.w...@gmail.com> wrote:
>
> > Is anyone running Solr 4.0 SolrCloud with AWS auto scaling?
> >
> > My concern is that as AWS auto scaling add and remove instances to
> > SolrCloud, the number of nodes in SolrCloud Zookeeper config will grow
> > indefinitely as removed instances will never be used again.  AWS auto
> > scaling will keep on adding new instances, and there is no way to remove
> > them from Zookeeper, right?  What's the effect of have all these phantom
> > nodes?
> >
> > Bill
> >
>
