[ https://issues.apache.org/jira/browse/SPARK-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Reynold Xin resolved SPARK-11991.
---------------------------------
    Resolution: Fixed

> spark_ec2.py does not perform sanity checks on hostnames
> --------------------------------------------------------
>
>                 Key: SPARK-11991
>                 URL: https://issues.apache.org/jira/browse/SPARK-11991
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.5.2
>            Reporter: Jeremy Derr
>
> `ec2/spark_ec2.py` does not perform any sanity checks on hostnames when
> testing connectivity in `is_ssh_available` and descendants.
> This causes unexpected behavior when running a cluster in a VPC subnet
> without public IPs if `--private-ips` is not given. While `--private-ips`
> should be required in this context, the failure mode currently present is
> suboptimal.
> [ ... ]
> All 1 slaves granted
> Launched master in us-west-1c, regid = r-redacted
> Waiting for AWS to propagate instance metadata...
> Waiting for cluster to enter 'ssh-ready' state…………Password:
> What has happened here is that the public DNS name for the instance is a
> null string, causing the ssh check later in the script to inadvertently
> connect to localhost to test connectivity to the cluster. The password
> prompt here is OS X's sshd asking to authenticate the user on that
> connection.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
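The failure mode described above comes from passing an empty hostname to `ssh`, which then resolves to the local machine. A minimal sketch of the kind of guard the report asks for is below; the function names, the `user` default, and the ssh options are illustrative assumptions, not the actual `spark_ec2.py` code:

```python
import subprocess


def is_valid_hostname(host):
    """Reject None, empty, or whitespace-only hostnames so an ssh probe
    cannot silently fall back to connecting to localhost."""
    return bool(host) and bool(host.strip())


def is_ssh_available(host, ssh_key, user="ec2-user"):
    """Return True if a short ssh connection to `host` succeeds.

    Raises ValueError on a blank hostname instead of probing, which is
    what happens when an instance in a VPC subnet has no public DNS name
    and --private-ips was not given.
    """
    if not is_valid_hostname(host):
        raise ValueError(
            "Blank hostname passed to ssh check; the instance may have no "
            "public DNS name. Try launching with --private-ips."
        )
    # Run a trivial remote command with a short timeout; exit code 0
    # means ssh connectivity is available.
    ret = subprocess.call(
        ["ssh", "-i", ssh_key,
         "-o", "StrictHostKeyChecking=no",
         "-o", "ConnectTimeout=3",
         "%s@%s" % (user, host), "true"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return ret == 0
```

With such a check in place, the cluster launch would fail fast with a clear error rather than prompting for a password against the local sshd.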