Hi,

This is because "ssh-ready" in the EC2 script means that all instances
are in the "running" state and all instance status checks report "OK".
In other words, the instances are ready for software to be downloaded
and installed, just as EMR is ready for bootstrap actions.
Previously, the script repeatedly printed a message while waiting for
every instance to launch. That was quite ugly, so the printed message
was changed.
However, you can SSH into an instance even while it is still in the
"pending" state. If you wait patiently a little longer, the script will
finish launching the cluster.
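As a rough sketch (not the actual spark-ec2 code, and with hypothetical helper names), the wait loop essentially polls until every instance reports both a "running" state and an "ok" status check, printing a dot each round:

```python
import time

def all_ssh_ready(instances):
    """True once every (state, status) pair is ('running', 'ok')."""
    return all(state == "running" and status == "ok"
               for state, status in instances)

def wait_for_cluster(poll_status, interval=5.0, timeout=600.0):
    """Poll until all instances are ssh-ready or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        instances = poll_status()  # in reality, an EC2 status API call
        if all_ssh_ready(instances):
            return True
        print(".", end="", flush=True)  # the terse progress output
        time.sleep(interval)
    return False

# Example with canned responses instead of real EC2 API calls:
responses = iter([
    [("pending", "initializing"), ("running", "initializing")],
    [("running", "initializing"), ("running", "ok")],
    [("running", "ok"), ("running", "ok")],
])
print(wait_for_cluster(lambda: next(responses), interval=0.01))
```

So the dots you see are just this loop waiting for the status checks, not a sign that SSH itself is unreachable.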

Cheers
Gen


On Sat, Jan 17, 2015 at 7:00 PM, Nathan Murthy <nathan.mur...@gmail.com>
wrote:

> Originally posted here:
> http://stackoverflow.com/questions/28002443/cluster-hangs-in-ssh-ready-state-using-spark-1-2-ec2-launch-script
>
> I'm trying to launch a standalone Spark cluster using its pre-packaged EC2
> scripts, but it just indefinitely hangs in an 'ssh-ready' state:
>
>     ubuntu@machine:~/spark-1.2.0-bin-hadoop2.4$ ./ec2/spark-ec2 -k
> <key-pair> -i <identity-file>.pem -r us-west-2 -s 3 launch test
>     Setting up security groups...
>     Searching for existing cluster test...
>     Spark AMI: ami-ae6e0d9e
>     Launching instances...
>     Launched 3 slaves in us-west-2c, regid = r-b_______6
>     Launched master in us-west-2c, regid = r-0______0
>     Waiting for all instances in cluster to enter 'ssh-ready'
> state..........
>
> Yet I can SSH into these instances without complaint:
>
>     ubuntu@machine:~$ ssh -i <identity-file>.pem root@master-ip
>     Last login: Day MMM DD HH:mm:ss 20YY from
> c-AA-BBB-CCCC-DDD.eee1.ff.provider.net
>
>            __|  __|_  )
>            _|  (     /   Amazon Linux AMI
>           ___|\___|___|
>
>     https://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/
>     There are 59 security update(s) out of 257 total update(s) available
>     Run "sudo yum update" to apply all updates.
>     Amazon Linux version 2014.09 is available.
>     [root@ip-internal ~]$
>
> I'm trying to figure out if this is a problem in AWS or with the Spark
> scripts. I've never had this issue before until recently.
>
>
> --
> Nathan Murthy // 713.884.7110 (mobile) // @natemurthy
>
