[jira] [Assigned] (SPARK-5242) ec2/spark_ec2.py launch does not work with VPC if no public DNS or IP is available

2015-04-04 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-5242:
---

Assignee: (was: Apache Spark)

 ec2/spark_ec2.py launch does not work with VPC if no public DNS or IP is 
 available
 ---

 Key: SPARK-5242
 URL: https://issues.apache.org/jira/browse/SPARK-5242
 Project: Spark
  Issue Type: Bug
  Components: EC2
Reporter: Vladimir Grigor
  Labels: easyfix

 How to reproduce: a user launching a cluster in a VPC waits forever:
 {code}
 ./spark-ec2 -k key20141114 -i ~/aws/key.pem -s 1 --region=eu-west-1 
 --spark-version=1.2.0 --instance-type=m1.large --vpc-id=vpc-2e71dd46 
 --subnet-id=subnet-2571dd4d --zone=eu-west-1a  launch SparkByScript
 Setting up security groups...
 Searching for existing cluster SparkByScript...
 Spark AMI: ami-1ae0166d
 Launching instances...
 Launched 1 slaves in eu-west-1a, regid = r-e70c5502
 Launched master in eu-west-1a, regid = r-bf0f565a
 Waiting for cluster to enter 'ssh-ready' state..{forever}
 {code}
 The problem is that the current code wrongly assumes that a VPC instance has 
 a public_dns_name or a public ip_address. In fact, it is more common for a 
 VPC instance to have only a private_ip_address.
 The bug is already fixed in my fork; I am going to submit a pull request.
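The fallback the reporter describes can be sketched as follows. This is a hypothetical sketch, not the actual patch from the fork: the helper name get_dns_name and the private_ips flag are illustrative, while the attribute names mirror boto's EC2 Instance fields (public_dns_name, ip_address, private_ip_address).

```python
from types import SimpleNamespace

def get_dns_name(instance, private_ips=False):
    """Return the address to use for SSH, falling back to the private IP
    when the instance has no public DNS name or public IP (the VPC case)."""
    if private_ips:
        return instance.private_ip_address
    # Prefer public DNS, then public IP; a VPC-only instance has neither,
    # so fall back to the private IP instead of waiting forever.
    return (instance.public_dns_name
            or instance.ip_address
            or instance.private_ip_address)

# Stand-in for a boto Instance on a VPC subnet with no public addressing.
vpc_node = SimpleNamespace(public_dns_name="", ip_address=None,
                           private_ip_address="10.0.1.17")
print(get_dns_name(vpc_node))  # 10.0.1.17
```

With this fallback, the 'ssh-ready' wait loop can poll an address that actually exists for VPC-only instances instead of an empty public DNS name.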



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-5242) ec2/spark_ec2.py launch does not work with VPC if no public DNS or IP is available

2015-04-04 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-5242:
---

Assignee: Apache Spark

 ec2/spark_ec2.py launch does not work with VPC if no public DNS or IP is 
 available
 ---

 Key: SPARK-5242
 URL: https://issues.apache.org/jira/browse/SPARK-5242
 Project: Spark
  Issue Type: Bug
  Components: EC2
Reporter: Vladimir Grigor
Assignee: Apache Spark
  Labels: easyfix

 How to reproduce: a user launching a cluster in a VPC waits forever:
 {code}
 ./spark-ec2 -k key20141114 -i ~/aws/key.pem -s 1 --region=eu-west-1 
 --spark-version=1.2.0 --instance-type=m1.large --vpc-id=vpc-2e71dd46 
 --subnet-id=subnet-2571dd4d --zone=eu-west-1a  launch SparkByScript
 Setting up security groups...
 Searching for existing cluster SparkByScript...
 Spark AMI: ami-1ae0166d
 Launching instances...
 Launched 1 slaves in eu-west-1a, regid = r-e70c5502
 Launched master in eu-west-1a, regid = r-bf0f565a
 Waiting for cluster to enter 'ssh-ready' state..{forever}
 {code}
 The problem is that the current code wrongly assumes that a VPC instance has 
 a public_dns_name or a public ip_address. In fact, it is more common for a 
 VPC instance to have only a private_ip_address.
 The bug is already fixed in my fork; I am going to submit a pull request.


