That's because you need to add the master's public key (~/.ssh/id_rsa.pub)
to the newly added slave's ~/.ssh/authorized_keys.

I add slaves this way:

- Launch a new instance by clicking on an existing slave instance and
choosing *launch more like this*

- Once it's launched, SSH into it and append the master's public key to
~/.ssh/authorized_keys

- Add the slave's internal IP to the master's conf/slaves file

- Rsync the Spark directory to the slave machine (rsync -za ~/spark SLAVE-IP:)

- Run sbin/start-all.sh and the new node will show up along with the other slaves.
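The steps above can be sketched as a short script. SLAVE_IP and SPARK_HOME
are placeholders you'd replace with your own values; the run() wrapper just
prints each command so the sequence can be reviewed before executing it for
real (drop the echo to actually run it):

```shell
#!/bin/sh
# Hypothetical values -- substitute your new slave's internal EC2 IP
# and your Spark install location.
SLAVE_IP="10.0.0.42"
SPARK_HOME="$HOME/spark"

# Print each command instead of executing it (dry run).
run() { echo "$@"; }

# 1. Authorize the master's key on the new slave
#    (ssh-copy-id appends ~/.ssh/id_rsa.pub to the slave's authorized_keys).
run ssh-copy-id -i ~/.ssh/id_rsa.pub "root@$SLAVE_IP"

# 2. Register the slave's internal IP with the master.
run "echo $SLAVE_IP >> $SPARK_HOME/conf/slaves"

# 3. Sync the Spark installation to the slave
#    (-a preserves permissions/timestamps, -z compresses in transit).
run rsync -za "$SPARK_HOME" "$SLAVE_IP:"

# 4. Restart so the new worker registers with the master.
run "$SPARK_HOME/sbin/start-all.sh"
```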


Thanks
Best Regards

On Thu, Jun 4, 2015 at 6:45 AM, barmaley <o...@solver.com> wrote:

> I have an existing, operating Spark cluster that was launched with the
> spark-ec2 script. I'm trying to add a new slave by following these
> instructions:
>
> Stop the cluster
> On AWS console "launch more like this" on one of the slaves
> Start the cluster
> Although the new instance is added to the same security group and I can
> successfully SSH to it with the same private key, spark-ec2 ... start call
> can't access this machine for some reason:
>
> Running setup-slave on all cluster nodes to mount filesystems, etc...
> [1] 00:59:59 [FAILURE] ec2-52-25-53-64.us-west-2.compute.amazonaws.com
> Exited with error code 255 Stderr: Permission denied (publickey).
>
> This is, obviously, followed by tons of other errors while trying to deploy
> Spark stuff on this instance.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Adding-new-Spark-workers-on-AWS-EC2-access-error-tp23143.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
