[jira] [Commented] (SPARK-12422) Binding Spark Standalone Master to public IP fails
[ https://issues.apache.org/jira/browse/SPARK-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159904#comment-15159904 ]

Jakob Odersky commented on SPARK-12422:
---------------------------------------

This blocker issue is quite old now; can you still reproduce it? I tried it in a non-Docker environment (Debian 9) and everything worked fine (Spark versions 1.5.2 and 1.6.0).

> Binding Spark Standalone Master to public IP fails
> --------------------------------------------------
>
>                 Key: SPARK-12422
>                 URL: https://issues.apache.org/jira/browse/SPARK-12422
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.5.2
>         Environment: Fails on direct deployment on Mac OS X and also in a
>                      Docker environment (running on OS X or Ubuntu)
>            Reporter: Bennet Jeutter
>            Priority: Blocker
>
> Starting the Spark Standalone Master fails when the host specified equals
> the public IP address. For example, I created a Docker Machine with public
> IP 192.168.99.100, then ran:
>
>   /usr/spark/bin/spark-class org.apache.spark.deploy.master.Master -h 192.168.99.100
>
> It fails with:
>
>   Exception in thread "main" java.net.BindException: Failed to bind to:
>   /192.168.99.100:7093: Service 'sparkMaster' failed after 16 retries!
>     at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>     at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
>     at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
>     at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
>     at scala.util.Try$.apply(Try.scala:161)
>     at scala.util.Success.map(Try.scala:206)
>     at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>     at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>     at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>     at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>     at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>     at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>     at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>     at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>     at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>     at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
> So I thought, oh well, let's just bind to the local IP and access it via the
> public IP. That doesn't work either; it gives:
>
>   dropping message [class akka.actor.ActorSelectionMessage] for non-local
>   recipient [Actor[akka.tcp://sparkMaster@192.168.99.100:7077/]] arriving at
>   [akka.tcp://sparkMaster@192.168.99.100:7077] inbound addresses are
>   [akka.tcp://sparkMaster@spark-master:7077]
>
> So there is currently no way to run this at all. Related Stack Overflow
> questions:
> * http://stackoverflow.com/questions/31659228/getting-java-net-bindexception-when-attempting-to-start-spark-master-on-ec2-node
> * http://stackoverflow.com/questions/33768029/access-apache-spark-standalone-master-via-ip

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
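The BindException in the report typically means the address passed to `-h` is not assigned to any interface on the machine actually running the master (a Docker Machine's IP belongs to the VM, not to the container). A minimal sketch of a bindability probe, assuming `python3` is available; 192.168.99.100 is the example address from the report:

```shell
#!/bin/sh
# Probe whether this host can bind a given address. A failure here
# reproduces the root cause of the "Service 'sparkMaster' failed after
# 16 retries" BindException without starting Spark at all.
check_bindable() {
    python3 -c '
import socket, sys
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind((sys.argv[1], 0))  # port 0: any free port; we only test the address
    print(sys.argv[1], "bindable")
except OSError as e:
    print(sys.argv[1], "not bindable:", e)
finally:
    s.close()' "$1"
}

check_bindable 127.0.0.1        # loopback: always bindable
check_bindable 192.168.99.100   # fails unless this IP is on a local interface
```

If the probe reports "not bindable", the fix is on the networking side (assign the address to a local interface, or run the master in the environment that owns it) rather than in Spark configuration.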
[jira] [Commented] (SPARK-12422) Binding Spark Standalone Master to public IP fails
[ https://issues.apache.org/jira/browse/SPARK-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084913#comment-15084913 ]

Tommy Yu commented on SPARK-12422:
----------------------------------

Hi,

For Docker images, please check the /etc/hosts file and remove the first
line, which maps the container's IP to its hostname. If you want to set up a
cluster environment based on Docker, I suggest taking a look at this doc:
sometechshit.blogspot.ru/2015/04/running-spark-standalone-cluster-in.html

Regards.
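The /etc/hosts check suggested above can be sketched as follows. This operates on a sample file so it is safe to run anywhere; "spark-master" is the container hostname taken from the "inbound addresses" error line, and the sample IP 172.17.0.2 is an assumed Docker-default address:

```shell
#!/bin/sh
# Sample of the /etc/hosts layout many Docker images start with: the first
# line maps the container-internal IP to the hostname, which is why Akka
# advertises spark-master instead of the public IP.
cat > /tmp/hosts.sample <<'EOF'
172.17.0.2 spark-master
127.0.0.1 localhost
EOF

# Drop the container-hostname line. On a real container, target /etc/hosts
# instead; note that Docker bind-mounts that file, so 'sed -i' may fail
# with "Device or resource busy" and rewriting via a temp file plus cp is
# the safer route.
grep -v 'spark-master' /tmp/hosts.sample > /tmp/hosts.fixed
cat /tmp/hosts.fixed
```

After the edit, the hostname no longer resolves to the container-internal address, so Akka's advertised address and the bind address can agree.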