[ https://issues.apache.org/jira/browse/SPARK-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147146#comment-15147146 ]
Sean Owen commented on SPARK-13317:
-----------------------------------

That sounds correct. You may wish to double-check that the workers and masters did in fact stop. {{SPARK_LOCAL_IP}} should be set to the public IP you want to bind to, separately, on each machine. Do you see it binding to the private IP instead? What about setting {{SPARK_PUBLIC_DNS}} to the public DNS name of the workers?

> SPARK_LOCAL_IP does not bind on Slaves
> --------------------------------------
>
>     Key: SPARK-13317
>     URL: https://issues.apache.org/jira/browse/SPARK-13317
>     Project: Spark
>     Issue Type: Bug
>     Environment: Linux EC2, different VPC
>     Reporter: Christopher Bourez
>
> SPARK_LOCAL_IP does not bind to the provided IP on slaves.
> When launching a job or a spark-shell from a second network, the IP returned for the slave is still the slave's first IP.
> So the job fails with the message:
> Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
> It is not a question of resources; rather, the driver cannot connect to the slave because it is given the wrong IP.
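To illustrate the suggestion above, here is a minimal sketch of a per-worker {{conf/spark-env.sh}}, assuming a hypothetical public IP of 203.0.113.12 and a hypothetical DNS name worker-1.example.com (neither value comes from this issue):

{code}
# conf/spark-env.sh on each worker -- the values below are hypothetical placeholders
export SPARK_LOCAL_IP=203.0.113.12            # public IP this worker should bind to
export SPARK_PUBLIC_DNS=worker-1.example.com  # public DNS name advertised to drivers and the web UI
{code}

After editing the file on every machine, stop and restart the daemons (e.g. {{sbin/stop-all.sh}} followed by {{sbin/start-all.sh}}) so the new binding takes effect.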