Hello,

I recently upgraded my setup from Spark 1.1 to Spark 1.2.

I have a 4-node Ubuntu Spark cluster.
With Spark 1.1, I used to write Spark Scala programs in Eclipse on my Windows
development host and submit jobs to the Ubuntu cluster from Eclipse (the
Windows machine).

Because not all ports between the Spark cluster and the development machine
are open on my network, I pin the Spark processes to specific allowed ports.
On Spark 1.1 this works perfectly.
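For reference, the same port pinning can also be expressed in conf/spark-defaults.conf on the driver side; this is just a sketch using the port-related property names listed on the 1.2 configuration page (the port values are simply the ones I chose):

```
spark.driver.port            51810
spark.fileserver.port        51811
spark.broadcast.port         51812
spark.replClassServer.port   51813
spark.blockManager.port      51814
spark.executor.port          51815
```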

When I run the same program with the same user-defined ports on the Spark 1.2
cluster, it gives me a connection timeout for port *56117*.

I checked the Spark 1.2 configuration page
(http://spark.apache.org/docs/1.2.0/configuration.html), but no new ports are
mentioned there.

*Here is my code for reference:*

    val conf = new SparkConf()
      .setMaster(sparkMaster)
      .setAppName("Spark SVD")
      .setSparkHome("/usr/local/spark")
      .setJars(jars)
      .set("spark.driver.host", "consb2a")  // Windows host (development machine)
      .set("spark.driver.port", "51810")
      .set("spark.fileserver.port", "51811")
      .set("spark.broadcast.port", "51812")
      .set("spark.replClassServer.port", "51813")
      .set("spark.blockManager.port", "51814")
      .set("spark.executor.port", "51815")
      .set("spark.executor.memory", "2g")
      .set("spark.driver.memory", "4g")
    val sc = new SparkContext(conf)
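As a side note, to check from a worker node whether a given driver port is actually reachable through the firewall, a minimal TCP probe can be used. This is a sketch with a hypothetical helper (`portOpen` is my own function, not part of Spark):

```scala
import java.net.{InetSocketAddress, Socket}

// Hypothetical helper (not part of Spark): returns true if a TCP
// connection to host:port succeeds within timeoutMs milliseconds.
def portOpen(host: String, port: Int, timeoutMs: Int = 2000): Boolean = {
  val socket = new Socket()
  try {
    socket.connect(new InetSocketAddress(host, port), timeoutMs)
    true
  } catch {
    case _: java.io.IOException => false // refused or timed out
  } finally {
    socket.close()
  }
}
```

Running this from one of the cluster nodes against the driver host and each pinned port would confirm which connections the network actually allows.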

*Here is the exception:*
15/01/21 15:44:08 INFO BlockManagerMasterActor: Registering block manager wynchcs217.wyn.cnw.co.nz:37173 with 1059.9 MB RAM, BlockManagerId(2, wynchcs217.wyn.cnw.co.nz, 37173)
15/01/21 15:44:08 INFO BlockManagerMasterActor: Registering block manager wynchcs219.wyn.cnw.co.nz:53850 with 1059.9 MB RAM, BlockManagerId(1, wynchcs219.wyn.cnw.co.nz, 53850)
15/01/21 15:44:08 INFO BlockManagerMasterActor: Registering block manager wynchcs220.wyn.cnw.co.nz:35670 with 1060.3 MB RAM, BlockManagerId(0, wynchcs220.wyn.cnw.co.nz, 35670)
15/01/21 15:44:08 INFO BlockManagerMasterActor: Registering block manager wynchcs218.wyn.cnw.co.nz:46890 with 1059.9 MB RAM, BlockManagerId(3, wynchcs218.wyn.cnw.co.nz, 46890)
15/01/21 15:52:23 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, wynchcs217.wyn.cnw.co.nz): java.io.IOException: Connecting to CONSB2A.cnw.co.nz/143.96.130.27:56117 timed out (120000 ms)
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:188)
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
        at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:701)

15/01/21 15:52:23 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 2, wynchcs220.wyn.cnw.co.nz, NODE_LOCAL, 1366 bytes)
15/01/21 15:55:35 INFO TaskSchedulerImpl: Cancelling stage 0
15/01/21 15:55:35 INFO TaskSchedulerImpl: Stage 0 was cancelled
15/01/21 15:55:35 INFO DAGScheduler: Job 0 failed: count at RowMatrix.scala:76, took 689.331309 s
Exception in thread "main" org.apache.spark.SparkException: Job 0 cancelled because Stage 0 was cancelled


Can you please let me know how I can move port 56117 to a user-defined
port?

Thanks,
  Shailesh
