This can’t be done through the script right now, but you can do it manually as 
long as the cluster is stopped. While the cluster is stopped, go into the AWS 
Console, right-click one of the slaves, and choose “launch more of these” to add 
machines, or select multiple slaves and delete them. The next time you run 
spark-ec2 start to bring the cluster up, it will set things up on all the 
machines it finds in the mycluster-slaves security group.
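For what it’s worth, the same manual steps can be driven from the AWS CLI instead of the Console. This is only a sketch: the AMI ID, instance type, key name, and instance IDs below are placeholders (only the mycluster-slaves group name comes from the example above), and spark-ec2 still does the actual setup on the next start.

```shell
# Sketch only: all IDs and names are placeholders. Run while the cluster
# is stopped.

# 1. Look up an existing slave to see which AMI and instance type to copy.
aws ec2 describe-instances \
    --filters "Name=group-name,Values=mycluster-slaves" \
    --query "Reservations[].Instances[].[InstanceId,ImageId,InstanceType]"

# 2. Launch additional instances into the same security group
#    (the CLI equivalent of "launch more of these" in the Console).
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m1.large \
    --count 2 \
    --security-groups mycluster-slaves \
    --key-name my-key-pair

# 3. Or terminate slaves you no longer want.
aws ec2 terminate-instances --instance-ids i-aaaaaaaa i-bbbbbbbb

# 4. Restart the cluster; spark-ec2 configures every instance it finds
#    in the mycluster-slaves security group.
./spark-ec2 -k my-key-pair -i my-key.pem start mycluster
```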

This is pretty hacky, so it would definitely be good to add this feature; feel 
free to open a JIRA about it.

Matei

On Apr 4, 2014, at 12:16 PM, Nicholas Chammas <nicholas.cham...@gmail.com> 
wrote:

> I would like to be able to use spark-ec2 to launch new slaves and add them to 
> an existing, running cluster. Similarly, I would also like to remove slaves 
> from an existing cluster.
> 
> Use cases include:
> - Oh snap, I sized my cluster incorrectly. Let me add/remove some slaves.
> - During scheduled batch processing, I want to add some new slaves, perhaps 
> on spot instances. When that processing is done, I want to kill them. 
> (Cruel, I know.)
> I gather this is not possible at the moment. spark-ec2 appears to be able to 
> launch new slaves for an existing cluster only if the master is stopped. I 
> also do not see any ability to remove slaves from a cluster.
> 
> Is that correct? Are there plans to add such functionality to spark-ec2 in 
> the future?
> 
> Nick
> 
> 
> View this message in context: Having spark-ec2 join new slaves to existing 
> cluster
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
