[ https://issues.apache.org/jira/browse/SPARK-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147138#comment-15147138 ]
Christopher Bourez commented on SPARK-13317:
--------------------------------------------

To confirm: I stop everything with stop-all.sh, then set SPARK_LOCAL_IP to the public IP on all instances, and then run start-all.sh again. Am I missing something?

> SPARK_LOCAL_IP does not bind on Slaves
> --------------------------------------
>
>                 Key: SPARK-13317
>                 URL: https://issues.apache.org/jira/browse/SPARK-13317
>             Project: Spark
>          Issue Type: Bug
>         Environment: Linux EC2, different VPC
>            Reporter: Christopher Bourez
>
> SPARK_LOCAL_IP does not bind to the provided IP on slaves.
> When launching a job or a spark-shell from a second network, the IP
> returned for the slave is still the slave's first IP, so the job fails
> with the message:
> Initial job has not accepted any resources; check your cluster UI to
> ensure that workers are registered and have sufficient resources
> It is not a question of resources: the driver cannot connect to the
> slave because it is given the wrong IP.
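For reference, the steps described above amount to something like the following on each worker node. This is a minimal sketch for a Spark standalone cluster; the IP address is a placeholder to be replaced with each machine's own public address, and paths assume a default Spark layout.

```shell
# conf/spark-env.sh on each worker (sketch; 203.0.113.10 is a placeholder):
# SPARK_LOCAL_IP  - address the daemons should bind to and advertise
# SPARK_PUBLIC_DNS - address shown in the web UI (optional)
export SPARK_LOCAL_IP=203.0.113.10
export SPARK_PUBLIC_DNS=203.0.113.10

# Then, from the master, restart the whole cluster so workers re-register:
# sbin/stop-all.sh
# sbin/start-all.sh
```

Note that SPARK_LOCAL_IP must be set in the environment of the worker daemon itself (e.g. in conf/spark-env.sh on that machine), not only on the machine running the driver.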