[ https://issues.apache.org/jira/browse/SPARK-9279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639242#comment-14639242 ]

Omar Padron edited comment on SPARK-9279 at 7/23/15 5:43 PM:
-------------------------------------------------------------

No, it is not *absolutely* necessary for me (hence the Minor priority).

I understand that my particular use case goes against normal best practices, but 
I don't think it's a developer's place to enforce these kinds of constraints on 
users just because they seem sensible to the developer.  It limits Spark's 
usability for no gain elsewhere.

For another data point, I'd point out that I've worked with the web UIs in HDFS 
as well as YARN in similar configurations without issue.  So if software is 
going to dictate these kinds of constraints, it should at least do so 
consistently across projects, in my opinion.

Thank you for your consideration.


> Spark Master Refuses to Bind WebUI to a Privileged Port
> -------------------------------------------------------
>
>                 Key: SPARK-9279
>                 URL: https://issues.apache.org/jira/browse/SPARK-9279
>             Project: Spark
>          Issue Type: Improvement
>          Components: Web UI
>    Affects Versions: 1.4.1
>         Environment: Ubuntu Trusty running in a docker container
>            Reporter: Omar Padron
>            Priority: Minor
>
> When trying to start a Spark master as root...
> {code}
> export SPARK_MASTER_PORT=7077
> export SPARK_MASTER_WEBUI_PORT=80   # a privileged port (< 1024)
> spark-class org.apache.spark.deploy.master.Master \
>     --host "$( hostname )" \
>     --port "$SPARK_MASTER_PORT" \
>     --webui-port "$SPARK_MASTER_WEBUI_PORT"
> {code}
> The process terminates with an IllegalArgumentException: "requirement failed: 
> startPort should be between 1024 and 65535 (inclusive), or 0 for a random 
> free port."
> But when SPARK_MASTER_WEBUI_PORT=8080 (or any value >= 1024), the process runs 
> fine.
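> The failure appears to come from a simple range check. Below is a minimal, 
> self-contained sketch reconstructed from the error message and the 
> Utils.startServiceOnPort frame in the stack trace; the object and method 
> names are hypothetical, and this is illustrative, not verbatim Spark source:
> {code}
> // Sketch of the guard implied by the error message; PortCheck and check()
> // are illustrative names, not Spark's.
> object PortCheck {
>   def check(startPort: Int): Unit = {
>     require(startPort == 0 || (startPort >= 1024 && startPort <= 65535),
>       "startPort should be between 1024 and 65535 (inclusive), or 0 for a random free port.")
>   }
>   def main(args: Array[String]): Unit = {
>     check(8080) // passes
>     check(80)   // throws IllegalArgumentException, matching the log below
>   }
> }
> {code}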
> I do not understand why the usable ports have been arbitrarily restricted to 
> the non-privileged range.  Users choosing to run Spark as root should be 
> allowed to choose their own ports.
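> If the intent is only to discourage privileged ports, a softer check could 
> warn instead of failing hard. A hypothetical relaxation follows (the names 
> and exact behavior are illustrative, not a patch against Spark's code):
> {code}
> // Hypothetical relaxation: accept any valid TCP port, but warn when a
> // privileged port (< 1024) is requested, since binding it requires root.
> object RelaxedPortCheck {
>   def check(startPort: Int): Unit = {
>     require(startPort >= 0 && startPort <= 65535,
>       "startPort should be between 0 and 65535 (inclusive); 0 picks a random free port.")
>     if (startPort != 0 && startPort < 1024) {
>       System.err.println(s"Warning: port $startPort is privileged; binding it requires root.")
>     }
>   }
>   def main(args: Array[String]): Unit = {
>     check(80) // permitted, with a warning
>   }
> }
> {code}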
> Full output from a sample run below:
> {code}
> 2015-07-23 14:36:50,892 INFO  [main] master.Master 
> (SignalLogger.scala:register(47)) - Registered signal handlers for [TERM, 
> HUP, INT]
> 2015-07-23 14:36:51,399 WARN  [main] util.NativeCodeLoader 
> (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> 2015-07-23 14:36:51,586 INFO  [main] spark.SecurityManager 
> (Logging.scala:logInfo(59)) - Changing view acls to: root
> 2015-07-23 14:36:51,587 INFO  [main] spark.SecurityManager 
> (Logging.scala:logInfo(59)) - Changing modify acls to: root
> 2015-07-23 14:36:51,588 INFO  [main] spark.SecurityManager 
> (Logging.scala:logInfo(59)) - SecurityManager: authentication disabled; ui 
> acls disabled; users with view permissions: Set(root); users with modify 
> permissions: Set(root)
> 2015-07-23 14:36:52,295 INFO  [sparkMaster-akka.actor.default-dispatcher-2] 
> slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
> 2015-07-23 14:36:52,349 INFO  [sparkMaster-akka.actor.default-dispatcher-2] 
> Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
> 2015-07-23 14:36:52,489 INFO  [sparkMaster-akka.actor.default-dispatcher-2] 
> Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening 
> on addresses :[akka.tcp://sparkMaster@sparkmaster:7077]
> 2015-07-23 14:36:52,497 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - 
> Successfully started service 'sparkMaster' on port 7077.
> 2015-07-23 14:36:52,717 INFO  [sparkMaster-akka.actor.default-dispatcher-4] 
> server.Server (Server.java:doStart(272)) - jetty-8.y.z-SNAPSHOT
> 2015-07-23 14:36:52,759 INFO  [sparkMaster-akka.actor.default-dispatcher-4] 
> server.AbstractConnector (AbstractConnector.java:doStart(338)) - Started 
> SelectChannelConnector@sparkmaster:6066
> 2015-07-23 14:36:52,759 INFO  [sparkMaster-akka.actor.default-dispatcher-4] 
> util.Utils (Logging.scala:logInfo(59)) - Successfully started service on port 
> 6066.
> 2015-07-23 14:36:52,760 INFO  [sparkMaster-akka.actor.default-dispatcher-4] 
> rest.StandaloneRestServer (Logging.scala:logInfo(59)) - Started REST server 
> for submitting applications on port 6066
> 2015-07-23 14:36:52,765 INFO  [sparkMaster-akka.actor.default-dispatcher-4] 
> master.Master (Logging.scala:logInfo(59)) - Starting Spark master at 
> spark://sparkmaster:7077
> 2015-07-23 14:36:52,766 INFO  [sparkMaster-akka.actor.default-dispatcher-4] 
> master.Master (Logging.scala:logInfo(59)) - Running Spark version 1.4.1
> 2015-07-23 14:36:52,772 ERROR [sparkMaster-akka.actor.default-dispatcher-4] 
> ui.MasterWebUI (Logging.scala:logError(96)) - Failed to bind MasterWebUI
> java.lang.IllegalArgumentException: requirement failed: startPort should be 
> between 1024 and 65535 (inclusive), or 0 for a random free port.
>         at scala.Predef$.require(Predef.scala:233)
>         at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1977)
>         at 
> org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:238)
>         at org.apache.spark.ui.WebUI.bind(WebUI.scala:117)
>         at org.apache.spark.deploy.master.Master.preStart(Master.scala:144)
>         at akka.actor.Actor$class.aroundPreStart(Actor.scala:470)
>         at 
> org.apache.spark.deploy.master.Master.aroundPreStart(Master.scala:52)
>         at akka.actor.ActorCell.create(ActorCell.scala:580)
>         at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
>         at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
>         at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:219)
>         at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>         at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2015-07-23 14:36:52,778 INFO  [Thread-1] util.Utils 
> (Logging.scala:logInfo(59)) - Shutdown hook called
> {code}


