Re: unable to bring up cluster with ec2 script

2015-07-08 Thread Akhil Das
It's showing connection refused; for some reason it was not able to connect
to the machine. It's either the machine's start-up time or an issue with the
security group.
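
If it is the security group, you could check that port 22 is open on the
groups spark-ec2 creates (it normally names them <cluster>-master and
<cluster>-slaves). A rough sketch with the AWS CLI, assuming it is configured
for us-east-1 and using the cluster name spark-training from your command:

 $ aws ec2 describe-security-groups --group-names spark-training-master \
     --query 'SecurityGroups[0].IpPermissions'
 $ aws ec2 authorize-security-group-ingress --group-name spark-training-master \
     --protocol tcp --port 22 --cidr 0.0.0.0/0

If it is just start-up time, you could give the nodes longer with -w (the
script defaults to 120 seconds) or, since the instances are already running,
re-run launch with --resume so it skips provisioning and retries the setup,
assuming your copy of the script supports that flag:

 $ ./spark-ec2 -i ../spark.pem -k spark --copy --resume launch spark-training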

Thanks
Best Regards

On Wed, Jul 8, 2015 at 2:04 AM, Pagliari, Roberto rpagli...@appcomsci.com
wrote:





 I'm following the tutorial about Apache Spark on EC2. The output is the
 following:





 $ ./spark-ec2 -i ../spark.pem -k spark --copy launch spark-training

 Setting up security groups...

 Searching for existing cluster spark-training...

 Latest Spark AMI: ami-19474270

 Launching instances...

 Launched 5 slaves in us-east-1d, regid = r-59a0d4b6

 Launched master in us-east-1d, regid = r-9ba2d674

 Waiting for instances to start up...

 Waiting 120 more seconds...

 Copying SSH key ../spark.pem to master...

 ssh: connect to host ec2-54-152-15-165.compute-1.amazonaws.com port
 22: Connection refused

 Error connecting to host Command 'ssh -t -o StrictHostKeyChecking=no
 -i ../spark.pem r...@ec2-54-152-15-165.compute-1.amazonaws.com 'mkdir -p
 ~/.ssh'' returned non-zero exit status 255, sleeping 30

 ssh: connect to host ec2-54-152-15-165.compute-1.amazonaws.com port
 22: Connection refused

 Error connecting to host Command 'ssh -t -o StrictHostKeyChecking=no
 -i ../spark.pem r...@ec2-54-152-15-165.compute-1.amazonaws.com 'mkdir -p
 ~/.ssh'' returned non-zero exit status 255, sleeping 30

 ssh: Could not resolve hostname
 ec2-54-152-15-165.compute-1.amazonaws.com: Name or service not known

 Error connecting to host Command 'ssh -t -o StrictHostKeyChecking=no
 -i ../spark.pem r...@ec2-54-152-15-165.compute-1.amazonaws.com 'mkdir -p
 ~/.ssh'' returned non-zero exit status 255, sleeping 30

 ssh: connect to host ec2-54-152-15-165.compute-1.amazonaws.com port
 22: Connection refused

Traceback (most recent call last):

   File "./spark_ec2.py", line 925, in <module>

     main()

   File "./spark_ec2.py", line 766, in main

     setup_cluster(conn, master_nodes, slave_nodes, zoo_nodes, opts, True)

   File "./spark_ec2.py", line 406, in setup_cluster

     ssh(master, opts, 'mkdir -p ~/.ssh')

   File "./spark_ec2.py", line 712, in ssh

     raise e

 subprocess.CalledProcessError: Command 'ssh -t -o StrictHostKeyChecking=no
 -i ../spark.pem r...@ec2-54-152-15-165.compute-1.amazonaws.com 'mkdir -p ~/.ssh''
 returned non-zero exit status 255





 However, I can see the six instances created in my EC2 console, and I
 could even get the name of the master. I'm not sure how to fix the SSH
 issue (my region is US East, us-east-1).
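
 For reference, the step that fails is just this SSH call, with the hostname
 and key taken from the output above (the script logs in as root by default):

 $ ssh -t -o StrictHostKeyChecking=no -i ../spark.pem root@ec2-54-152-15-165.compute-1.amazonaws.com 'mkdir -p ~/.ssh'

 Running it by hand once the instance has finished booting shows whether this
 is a timing problem or a blocked port.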





Re: unable to bring up cluster with ec2 script

2015-07-07 Thread Arun Ahuja
Sorry, I can't help with this issue, but if you are interested in a simple
way to launch a Spark cluster on Amazon, Spark is now offered as an
application in Amazon EMR.  With this you can have a full cluster with a
few clicks:

https://aws.amazon.com/blogs/aws/new-apache-spark-on-amazon-emr/
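
If you go that route, the same thing can be done from the command line; a
rough sketch along the lines of the blog post (the AMI version and instance
settings here are placeholders, assuming the AWS CLI is configured and you
have a key pair named spark):

 $ aws emr create-cluster --name SparkCluster \
     --ami-version 3.8 \
     --applications Name=Spark \
     --ec2-attributes KeyName=spark \
     --instance-type m3.xlarge --instance-count 3 \
     --use-default-roles

That gives you a master plus two core nodes with Spark installed, and you can
SSH to the master with the same key pair.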

- Arun
