Re: Cluster hangs in 'ssh-ready' state using Spark 1.2 EC2 launch script

2015-01-18 Thread Nicholas Chammas
Nathan,

I posted a bunch of questions for you as a comment on your question
 on Stack Overflow. If you
answer them (don't forget to @ping me) I may be able to help you.

Nick

On Sat Jan 17 2015 at 3:49:54 PM gen tang  wrote:

> Hi,
>
> This is because "ssh-ready" in the ec2 script means that all the instances
> are in the "running" state and all their status checks are "OK".
> In other words, the instances are ready for the script to download and
> install software, just as EMR is ready for bootstrap actions.
> Previously, the script repeatedly printed a message for every instance it
> was still waiting on. That was quite ugly, so the message was changed.
> However, you can connect to an instance over ssh even while its status is
> still pending. If you wait patiently a little longer, the script will
> finish launching the cluster.
>
> Cheers
> Gen
>
>
> On Sat, Jan 17, 2015 at 7:00 PM, Nathan Murthy 
> wrote:
>
>> Originally posted here:
>> http://stackoverflow.com/questions/28002443/cluster-hangs-in-ssh-ready-state-using-spark-1-2-ec2-launch-script
>>
>> I'm trying to launch a standalone Spark cluster using its pre-packaged
>> EC2 scripts, but it just indefinitely hangs in an 'ssh-ready' state:
>>
>> ubuntu@machine:~/spark-1.2.0-bin-hadoop2.4$ ./ec2/spark-ec2 -k
>>  -i .pem -r us-west-2 -s 3 launch test
>> Setting up security groups...
>> Searching for existing cluster test...
>> Spark AMI: ami-ae6e0d9e
>> Launching instances...
>> Launched 3 slaves in us-west-2c, regid = r-b___6
>> Launched master in us-west-2c, regid = r-0__0
>> Waiting for all instances in cluster to enter 'ssh-ready'
>> state..
>>
>> Yet I can SSH into these instances without complaint:
>>
>> ubuntu@machine:~$ ssh -i .pem root@master-ip
>> Last login: Day MMM DD HH:mm:ss 20YY from
>> c-AA-BBB--DDD.eee1.ff.provider.net
>>
>>        __|  __|_  )
>>        _|  (     /   Amazon Linux AMI
>>       ___|\___|___|
>>
>> https://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/
>> There are 59 security update(s) out of 257 total update(s) available
>> Run "sudo yum update" to apply all updates.
>> Amazon Linux version 2014.09 is available.
>> root@ip-internal ~]$
>>
>> I'm trying to figure out if this is a problem in AWS or with the Spark
>> scripts. I've never had this issue before until recently.
>>
>>
>> --
>> Nathan Murthy // 713.884.7110 (mobile) // @natemurthy
>>
>
>


Re: Cluster hangs in 'ssh-ready' state using Spark 1.2 EC2 launch script

2015-01-17 Thread gen tang
Hi,

This is because "ssh-ready" in the ec2 script means that all the instances
are in the "running" state and all their status checks are "OK".
In other words, the instances are ready for the script to download and
install software, just as EMR is ready for bootstrap actions.
Previously, the script repeatedly printed a message for every instance it
was still waiting on. That was quite ugly, so the message was changed.
However, you can connect to an instance over ssh even while its status is
still pending. If you wait patiently a little longer, the script will
finish launching the cluster.

Cheers
Gen
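
For reference, the readiness condition described above can be sketched as a small predicate. This is a hedged illustration based on that description, not code from spark_ec2.py, and the dictionary keys are hypothetical:

```python
# Sketch of the "ssh-ready" condition described above (assumption: this
# mirrors the description in the reply, not the actual spark_ec2.py
# implementation; the dictionary keys are hypothetical).
def is_cluster_ssh_ready(instances):
    """Return True when every instance is running and both of its
    EC2 status checks (system and instance) report "ok"."""
    return all(
        inst["state"] == "running"
        and inst["system_status"] == "ok"
        and inst["instance_status"] == "ok"
        for inst in instances
    )
```

Because sshd often comes up before both status checks pass, the cluster can appear reachable by hand while the script is still, correctly, waiting.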


On Sat, Jan 17, 2015 at 7:00 PM, Nathan Murthy 
wrote:

> Originally posted here:
> http://stackoverflow.com/questions/28002443/cluster-hangs-in-ssh-ready-state-using-spark-1-2-ec2-launch-script
>
> I'm trying to launch a standalone Spark cluster using its pre-packaged EC2
> scripts, but it just indefinitely hangs in an 'ssh-ready' state:
>
> ubuntu@machine:~/spark-1.2.0-bin-hadoop2.4$ ./ec2/spark-ec2 -k
>  -i .pem -r us-west-2 -s 3 launch test
> Setting up security groups...
> Searching for existing cluster test...
> Spark AMI: ami-ae6e0d9e
> Launching instances...
> Launched 3 slaves in us-west-2c, regid = r-b___6
> Launched master in us-west-2c, regid = r-0__0
> Waiting for all instances in cluster to enter 'ssh-ready'
> state..
>
> Yet I can SSH into these instances without complaint:
>
> ubuntu@machine:~$ ssh -i .pem root@master-ip
> Last login: Day MMM DD HH:mm:ss 20YY from
> c-AA-BBB--DDD.eee1.ff.provider.net
>
>        __|  __|_  )
>        _|  (     /   Amazon Linux AMI
>       ___|\___|___|
>
> https://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/
> There are 59 security update(s) out of 257 total update(s) available
> Run "sudo yum update" to apply all updates.
> Amazon Linux version 2014.09 is available.
> root@ip-internal ~]$
>
> I'm trying to figure out if this is a problem in AWS or with the Spark
> scripts. I've never had this issue before until recently.
>
>
> --
> Nathan Murthy // 713.884.7110 (mobile) // @natemurthy
>


Cluster hangs in 'ssh-ready' state using Spark 1.2 EC2 launch script

2015-01-17 Thread Nathan Murthy
Originally posted here:
http://stackoverflow.com/questions/28002443/cluster-hangs-in-ssh-ready-state-using-spark-1-2-ec2-launch-script

I'm trying to launch a standalone Spark cluster using its pre-packaged EC2
scripts, but it just indefinitely hangs in an 'ssh-ready' state:

ubuntu@machine:~/spark-1.2.0-bin-hadoop2.4$ ./ec2/spark-ec2 -k
 -i .pem -r us-west-2 -s 3 launch test
Setting up security groups...
Searching for existing cluster test...
Spark AMI: ami-ae6e0d9e
Launching instances...
Launched 3 slaves in us-west-2c, regid = r-b___6
Launched master in us-west-2c, regid = r-0__0
Waiting for all instances in cluster to enter 'ssh-ready'
state..

Yet I can SSH into these instances without complaint:

ubuntu@machine:~$ ssh -i .pem root@master-ip
Last login: Day MMM DD HH:mm:ss 20YY from
c-AA-BBB--DDD.eee1.ff.provider.net

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/
There are 59 security update(s) out of 257 total update(s) available
Run "sudo yum update" to apply all updates.
Amazon Linux version 2014.09 is available.
root@ip-internal ~]$

I'm trying to figure out if this is a problem in AWS or with the Spark
scripts. I've never had this issue before until recently.
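
One way to tell the two apart is to look at the EC2 status checks directly: if they sit in "initializing" for a long time, the delay is on the AWS side rather than in the launch script. A sketch of how to summarize a DescribeInstanceStatus response; the response keys follow boto3's documented shape, but the sample data is made up:

```python
# Sketch: decide whether the hang is on the AWS side by inspecting the
# EC2 status checks in a DescribeInstanceStatus response. The response
# keys match boto3's documented shape; the sample data is illustrative.
def summarize_status(response):
    """Map instance id -> (state, system check, instance check)."""
    return {
        s["InstanceId"]: (
            s["InstanceState"]["Name"],
            s["SystemStatus"]["Status"],
            s["InstanceStatus"]["Status"],
        )
        for s in response["InstanceStatuses"]
    }

# Illustrative sample; in practice this would come from
# boto3.client("ec2").describe_instance_status(IncludeAllInstances=True).
sample = {
    "InstanceStatuses": [
        {
            "InstanceId": "i-0example1",
            "InstanceState": {"Name": "running"},
            "SystemStatus": {"Status": "initializing"},
            "InstanceStatus": {"Status": "initializing"},
        }
    ]
}
print(summarize_status(sample))
```

Instances that stay in "initializing" here are the ones the launch script is waiting on, even though sshd may already accept logins.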


-- 
Nathan Murthy // 713.884.7110 (mobile) // @natemurthy