Good catches, Robert.

I had actually typed up a draft email a couple of days ago citing those
same two blocks of code. I deleted it when I realized, as you did, that
the snippets didn’t explain why IP addresses weren’t working.

Something seems wrong here, but I’m not sure exactly what. Maybe this is a
documentation bug or a missing deprecation warning.

For example, this line
<https://github.com/apache/spark/blob/a337c235a12d4ea6a7d6db457acc6b32f1915241/core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala#L93>
seems to back up your finding that SPARK_MASTER_IP is deprecated (since the
--ip option it maps to is deprecated), yet no warning is displayed and the
docs make no mention of the deprecation. Here is the surrounding usage text:

  private def printUsageAndExit(exitCode: Int) {
    // scalastyle:off println
    System.err.println(
      "Usage: Master [options]\n" +
      "\n" +
      "Options:\n" +
      "  -i HOST, --ip HOST     Hostname to listen on (deprecated,
please use --host or -h) \n" +
      "  -h HOST, --host HOST   Hostname to listen on\n" +
      "  -p PORT, --port PORT   Port to listen on (default: 7077)\n" +
      "  --webui-port PORT      Port for web UI (default: 8080)\n" +
      "  --properties-file FILE Path to a custom Spark properties file.\n" +
      "                         Default is conf/spark-defaults.conf.")
    // scalastyle:on println
    System.exit(exitCode)
  }

Hopefully someone with better knowledge of this code can explain what’s
going on.

I’m beginning to think SPARK_MASTER_IP really is deprecated in favor of
SPARK_MASTER_HOST, but a warning actually needs to be logged here in the
code
<https://github.com/apache/spark/blob/a337c235a12d4ea6a7d6db457acc6b32f1915241/core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala#L52-L56>
and the template
<https://github.com/apache/spark/blob/a337c235a12d4ea6a7d6db457acc6b32f1915241/conf/spark-env.sh.template#L49>
and the docs need to be updated.
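
Roughly what I have in mind is something like this (just an untested sketch
on my part, not the actual change; it assumes MasterArguments mixes in
Logging, or can be made to, so that logWarning is available):

  // Sketch: keep honoring the deprecated variable, but warn about it.
  if (System.getenv("SPARK_MASTER_IP") != null) {
    logWarning("SPARK_MASTER_IP is deprecated, please use SPARK_MASTER_HOST")
    host = System.getenv("SPARK_MASTER_IP")
  }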

This code hasn’t been touched much since Spark’s genesis, so I’m not
expecting anyone to know off the top of their head whether this is wrong or
right. Perhaps I should just open a PR and take it from there.

Nick

On Sat, Oct 17, 2015 at 11:21 PM Robert Dodier <robert.dod...@gmail.com>
wrote:

> Nicholas Chammas wrote
> > The funny thing is that Spark seems to accept this only if the value of
> > SPARK_MASTER_IP is a DNS name and not an IP address.
> >
> > When I provide an IP address, I get errors in the log when starting the
> > master:
> >
> > 15/10/15 01:47:31 ERROR NettyTransport: failed to bind to
> > /54.210.XX.XX:7077, shutting down Netty transport
>
> A couple of things. (1) That log message appears to originate at line 434
> of NettyTransport.scala
> (https://github.com/akka/akka/blob/master/akka-remote/src/main/scala/akka/remote/transport/netty/NettyTransport.scala).
> It appears the exception is rethrown; is it caught somewhere else so we can
> see what the actual error was that triggered the log message? I don't see
> anything obvious in the code.
>
> (2) sbin/start-master.sh executes something.Master with --ip
> SPARK_MASTER_IP, which calls something.MasterArguments to handle its
> arguments, which says:
>
>       case ("--ip" | "-i") :: value :: tail =>
>         Utils.checkHost(value, "ip no longer supported, please use hostname " + value)
>         host = value
>         parse(tail)
>
>       case ("--host" | "-h") :: value :: tail =>
>         Utils.checkHost(value, "Please use hostname " + value)
>         host = value
>         parse(tail)
>
> So it would appear that the intent is that numerical IP addresses are
> disallowed, however, Utils.checkHost says:
>
>     def checkHost(host: String, message: String = "") {
>       assert(host.indexOf(':') == -1, message)
>     }
>
> which accepts numerical IP addresses just fine. Is there some other test
> that should be applied in MasterArguments? or maybe checkHost should be
> looking for some other pattern? Is it possible that MasterArguments was
> changed to disallow --ip without propagating that backwards into any
> scripts that call it?
>
> Hope this helps in some way.
>
> Robert Dodier