[jira] [Commented] (SPARK-12482) Spark fileserver not started on same IP as configured in spark.driver.host

2015-12-22 Thread Kyle Sutton (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068568#comment-15068568 ]

Kyle Sutton commented on SPARK-12482:
-------------------------------------

Thanks!  I did.  I think he's saying that the file server is listening on all 
interfaces, but if the Spark service can't reach the IP it's given by the 
Spark driver, the open ports are immaterial.

> Spark fileserver not started on same IP as configured in spark.driver.host
> ---------------------------------------------------------------------------
>
> Key: SPARK-12482
> URL: https://issues.apache.org/jira/browse/SPARK-12482
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.2.1, 1.5.2
>Reporter: Kyle Sutton
>
> The issue of the file server using the default IP instead of the IP address 
> configured through {{spark.driver.host}} still exists in _Spark 1.5.2_.
> The problem is that, while the file server is listening on all interfaces of 
> the file server host, the _Spark_ service attempts to call back to the 
> default IP of that host, to which it may or may not have connectivity.
> For instance, the following setup causes a 
> {{java.net.SocketTimeoutException}} when the _Spark_ service tries to contact 
> the _Spark_ driver host for a JAR:
> * Driver host has a default IP of {{192.168.1.2}} and a secondary LAN 
> connection IP of {{172.30.0.2}}
> * _Spark_ service is on the LAN with an IP of {{172.30.0.3}}
> * A connection is made from the driver host to the _Spark_ service
> ** {{spark.driver.host}} is set to the IP of the driver host on the LAN 
> {{172.30.0.2}}
> ** {{spark.driver.port}} is set to {{50003}}
> ** {{spark.fileserver.port}} is set to {{50005}}
> * Locally (on the driver host), the following listeners are active:
> ** {{0.0.0.0:50005}}
> ** {{172.30.0.2:50003}}
> * The _Spark_ service calls back to the file server host for a JAR file using 
> the driver host's default IP:  {{http://192.168.1.2:50005/jars/code.jar}}
> * The _Spark_ service, being on a different network than the driver host, 
> cannot see the {{192.168.1.0/24}} address space, and fails to connect to the 
> file server
> ** A {{netstat}} on the _Spark_ service host will show the connection to the 
> file server host as being in {{SYN_SENT}} state until the process gives up 
> trying to connect
> {code:title=Driver|borderStyle=solid}
> // Driver runs on 172.30.0.2; the Spark master is at 172.30.0.3 on the LAN
> SparkConf conf = new SparkConf()
>     .setMaster("spark://172.30.0.3:7077")
>     .setAppName("TestApp")
>     .set("spark.driver.host", "172.30.0.2")    // the driver's LAN-facing IP
>     .set("spark.driver.port", "50003")
>     .set("spark.fileserver.port", "50005");
> JavaSparkContext sc = new JavaSparkContext(conf);
> sc.addJar("target/code.jar");  // fetched by executors via the file server
> {code}
> {code:title=Stacktrace|borderStyle=solid}
> 15/12/22 12:48:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 172.30.0.3): java.net.SocketTimeoutException: connect timed out
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>   at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>   at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>   at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>   at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>   at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>   at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
>   at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
>   at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
>   at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
>   at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:555)
>   at org.apache.spark.util.Utils$.fetchFile(Utils.scala:356)
>   at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:405)
>   at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:397)
>   at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>   at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>   at scala.collection.mutable.HashMap.foreachEntr
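
For reference, the mismatch boils down to which address gets baked into the 
advertised JAR URL. The following is a minimal sketch, not Spark's actual 
implementation; the property names mirror the conf keys above and are read 
from system properties purely for illustration:

{code:title=AdvertisedUriSketch.java|borderStyle=solid}
import java.net.InetAddress;

public class AdvertisedUriSketch {
    public static void main(String[] args) throws Exception {
        // The default IP is whatever the local hostname resolves to,
        // e.g. 192.168.1.2 in the report above.
        String defaultIp = InetAddress.getLocalHost().getHostAddress();
        // The IP the executors can actually reach, e.g. 172.30.0.2.
        String configuredHost = System.getProperty("spark.driver.host", defaultIp);
        int port = Integer.getInteger("spark.fileserver.port", 50005);

        // Observed (buggy) URL vs. the URL the executors would need:
        System.out.println("advertised: http://" + defaultIp + ":" + port + "/jars/code.jar");
        System.out.println("reachable : http://" + configuredHost + ":" + port + "/jars/code.jar");
    }
}
{code}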

[jira] [Commented] (SPARK-12482) Spark fileserver not started on same IP as configured in spark.driver.host

2015-12-22 Thread Kyle Sutton (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068565#comment-15068565 ]

Kyle Sutton commented on SPARK-12482:
-------------------------------------

Actually, how do I reopen a ticket I didn't write?


[jira] [Commented] (SPARK-12482) Spark fileserver not started on same IP as configured in spark.driver.host

2015-12-22 Thread Kyle Sutton (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068562#comment-15068562 ]

Kyle Sutton commented on SPARK-12482:
-------------------------------------

Wasn't able to reopen the original one, so tried cloning to preserve as much 
info from the original as possible.  Should I still create a new one?
