I looked at the 1.3.0 code and figured out where this could be added. In
org.apache.spark.deploy.yarn, ApplicationMaster.scala line 282 reads:

  actorSystem = AkkaUtils.createActorSystem("sparkYarnAM", Utils.localHostName, 0,
    conf = sparkConf, securityManager = securityMgr)._1

If I change it as below, then I can start it on the port I want:

  val port = sparkConf.getInt("spark.am.actor.port", 0) // New property
  ...
  actorSystem = AkkaUtils.createActorSystem("sparkYarnAM", Utils.localHostName, port,
    conf = sparkConf, securityManager = securityMgr)._1

Thoughts? Is there any other place where a change would be needed?

On Wed, Mar 25, 2015 at 4:44 PM, Shixiong Zhu <zsxw...@gmail.com> wrote:

> There is no configuration for it now.
>
> Best Regards,
> Shixiong Zhu
>
> 2015-03-26 7:13 GMT+08:00 Manoj Samel <manojsamelt...@gmail.com>:
>
>> There may be firewall rules limiting the ports between the host running
>> Spark and the Hadoop cluster. In that case, not all ports are allowed.
>>
>> Could a range of allowed ports be specified instead?
>>
>> On Wed, Mar 25, 2015 at 4:06 PM, Shixiong Zhu <zsxw...@gmail.com> wrote:
>>
>>> It's a random port to avoid port conflicts, since multiple AMs can run
>>> on the same machine. Why do you need a fixed port?
>>>
>>> Best Regards,
>>> Shixiong Zhu
>>>
>>> 2015-03-26 6:49 GMT+08:00 Manoj Samel <manojsamelt...@gmail.com>:
>>>
>>>> Spark 1.3, Hadoop 2.5, Kerberos
>>>>
>>>> When running spark-shell in yarn-client mode, it logs the following
>>>> message with a random port each time (44071 in the example below). Is
>>>> there a way to pin this to a specific port? It does not seem to be among
>>>> the spark.xxx.port settings listed at
>>>> http://spark.apache.org/docs/latest/configuration.html
>>>>
>>>> Thanks,
>>>>
>>>> 15/03/25 22:27:10 INFO Client: Application report for
>>>> application_1427316153428_0014 (state: ACCEPTED)
>>>> 15/03/25 22:27:10 INFO YarnClientSchedulerBackend: ApplicationMaster
>>>> registered as Actor[akka.tcp://sparkYarnAM@xyz
>>>> :44071/user/YarnAM#-1989273896]
>>>
>>
>
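On the port-range question raised above: one way to support a range (rather
than a single fixed port) would be to try each port in the range at the bind
site until one succeeds, which also preserves the original goal of avoiding
conflicts between multiple AMs on one machine. Below is a minimal, standalone
Scala sketch of that retry idea. The property names
spark.am.actor.minPort/maxPort are hypothetical (they do not exist in Spark),
and a plain ServerSocket stands in for the AkkaUtils.createActorSystem call so
the sketch is self-contained:

```scala
import java.net.ServerSocket
import java.io.IOException

object PortRangeSketch {
  // Try each port in [minPort, maxPort] until one binds.
  // In the real change, the successful port would be passed to
  // AkkaUtils.createActorSystem instead of opening a ServerSocket;
  // the range would come from hypothetical properties such as
  // spark.am.actor.minPort / spark.am.actor.maxPort.
  def bindInRange(minPort: Int, maxPort: Int): ServerSocket = {
    val bound = (minPort to maxPort).iterator.flatMap { p =>
      try Some(new ServerSocket(p)) // stand-in for the actor-system bind
      catch { case _: IOException => None } // port in use, try the next one
    }
    if (bound.hasNext) bound.next()
    else throw new IOException(s"No free port in $minPort-$maxPort")
  }

  def main(args: Array[String]): Unit = {
    val socket = bindInRange(49152, 49160)
    println(s"Bound AM stand-in on port ${socket.getLocalPort}")
    socket.close()
  }
}
```

This keeps the default behavior available too: a range of 0 to 0 degenerates
to "let the OS pick", matching the current random-port call.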