Specifically, something like this should probably do the trick:

  import com.google.common.net.HostAndPort

  // Passes only when `host` is a bare hostname or IP literal with no port.
  def checkHost(host: String, message: String = ""): Unit = {
    assert(!HostAndPort.fromString(host).hasPort, message)
  }

  // Passes only when `hostPort` carries an explicit port.
  def checkHostPort(hostPort: String, message: String = ""): Unit = {
    assert(HostAndPort.fromString(hostPort).hasPort, message)
  }
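
For reference, HostAndPort parses an unbracketed IPv6 literal as a host with
no port, and a bracketed literal like [::1]:8080 as host plus port, so the
assertions above should behave correctly for both address families. A quick
sketch of the parsing behavior (the demo object and addresses are just for
illustration):

  import com.google.common.net.HostAndPort

  object HostAndPortDemo {
    def main(args: Array[String]): Unit = {
      println(HostAndPort.fromString("example.com").hasPort)        // false
      println(HostAndPort.fromString("example.com:8080").hasPort)   // true
      // An unbracketed IPv6 literal is treated as a host with no port:
      println(HostAndPort.fromString("2001:db8::1").hasPort)        // false
      // A bracketed IPv6 literal can carry an explicit port:
      println(HostAndPort.fromString("[2001:db8::1]:8080").hasPort) // true
    }
  }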


On Wed, Oct 14, 2015 at 2:40 PM, Thomas Dudziak <tom...@gmail.com> wrote:

> It looks like Spark 1.5.1 does not work with IPv6. When
> adding -Djava.net.preferIPv6Addresses=true on my dual-stack server, the
> driver fails with:
>
> 15/10/14 14:36:01 ERROR SparkContext: Error initializing SparkContext.
> java.lang.AssertionError: assertion failed: Expected hostname
> at scala.Predef$.assert(Predef.scala:179)
> at org.apache.spark.util.Utils$.checkHost(Utils.scala:805)
> at org.apache.spark.storage.BlockManagerId.<init>(BlockManagerId.scala:48)
> at org.apache.spark.storage.BlockManagerId$.apply(BlockManagerId.scala:107)
> at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:190)
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
> at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
>
> Looking at the checkHost method, it clearly does not work for IPv6, as it
> assumes ':' cannot appear in a hostname, which is not true for IPv6
> literals. I think this code should use Guava's HostAndPort or related
> classes to properly handle both IPv4 and IPv6 (other parts of Utils
> already use Guava).
>
> cheers,
> Tom
>
