I would like to be able to use spark-ec2 to launch new slaves and add them
to an existing, running cluster. Similarly, I would like to be able to
remove slaves from an existing cluster.

Use cases include:

   1. Oh snap, I sized my cluster incorrectly. Let me add/remove some
   slaves.
   2. During scheduled batch processing, I want to add some new slaves,
   perhaps on spot instances. When that processing is done, I want to kill
   them. (Cruel, I know.)
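Concretely, for those two use cases the shape of the interface I have in
mind is something like the following. To be clear, these subcommands do not
exist in spark-ec2 today; the cluster name, key pair, and spot price are
placeholders, and only the -k/-i/-s/--spot-price flags are borrowed from
the real script:

```shell
# Hypothetical interface -- spark-ec2 does NOT support add-slaves or
# remove-slaves today. Commands are echoed, not run, since they would
# need live AWS credentials.
CLUSTER=my-cluster
KEY=my-keypair
PEM=my-keypair.pem

# Use case 2: add five spot-instance slaves to a running cluster.
ADD_CMD="./spark-ec2 -k $KEY -i $PEM -s 5 --spot-price 0.05 add-slaves $CLUSTER"
echo "$ADD_CMD"

# When the batch job is done, kill those five slaves again.
REMOVE_CMD="./spark-ec2 -k $KEY -i $PEM -s 5 remove-slaves $CLUSTER"
echo "$REMOVE_CMD"
```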

I gather this is not possible at the moment. spark-ec2 appears to be able
to launch new slaves for an existing cluster only if the master is stopped.
I also do not see any ability to remove slaves from a cluster.
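For reference, the only workaround I can see today is a stop-and-relaunch
dance, roughly like this. The names are placeholders, the flags are the
standard spark-ec2 options, and whether restarting with a larger slave
count actually attaches new slaves to the existing cluster is exactly the
part I am unsure about:

```shell
# Sketch of the workaround, not a confirmed recipe. Commands are echoed,
# not run, since they would need live AWS credentials.
CLUSTER=my-cluster
KEY=my-keypair
PEM=my-keypair.pem

# 1. Stop the running cluster (spark-ec2 seems to require this first).
STOP_CMD="./spark-ec2 -k $KEY -i $PEM stop $CLUSTER"
echo "$STOP_CMD"

# 2. Start it again with a larger slave count; whether the extra slaves
#    actually get launched and joined here is the open question.
START_CMD="./spark-ec2 -k $KEY -i $PEM -s 10 start $CLUSTER"
echo "$START_CMD"
```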

Is that correct? Are there plans to add such functionality to spark-ec2 in
the future?

Nick

--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Having-spark-ec2-join-new-slaves-to-existing-cluster-tp3783.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.