Github user gerashegalov commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20327#discussion_r175623955

    --- Diff: core/src/main/scala/org/apache/spark/ui/WebUI.scala ---
    @@ -126,7 +126,11 @@ private[spark] abstract class WebUI(
       def bind(): Unit = {
         assert(serverInfo.isEmpty, s"Attempted to bind $className more than once!")
         try {
    -      val host = Option(conf.getenv("SPARK_LOCAL_IP")).getOrElse("0.0.0.0")
    +      val host = if (Utils.isClusterMode(conf)) {
    --- End diff --

    I formulated the problem more broadly in the title of the PR: "NM host for driver end points". It is not an intuitive default to bind to `0.0.0.0` when the backend (YARN) is explicitly configured not to, and we need a mechanism that lets Spark inherit the YARN-determined bind address on the NM. You convinced me that client mode is less critical, and it is easy to override via the spark-submit environment (after the bug fix). That said, I would still prefer using `bindAddress` everywhere, both for consistency and because it is documented.
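    As a minimal sketch of the behavior this comment argues for (not the actual patch — `chooseHost` and its parameters are hypothetical stand-ins for `SparkConf` lookups and `Utils.isClusterMode`): in cluster mode, prefer the backend-determined bind address before falling back to `SPARK_LOCAL_IP` and finally `0.0.0.0`, while client mode keeps the environment-based override.

    ```scala
    // Hypothetical illustration of the bind-address resolution discussed above.
    // None of these names come from the actual Spark codebase.
    object BindAddressSketch {
      def chooseHost(
          isClusterMode: Boolean,
          sparkLocalIp: Option[String],        // stands in for env var SPARK_LOCAL_IP
          backendBindAddress: Option[String]   // stands in for the YARN/NM-provided address
      ): String = {
        if (isClusterMode) {
          // Inherit the backend-determined bind address on the NM when available.
          backendBindAddress.orElse(sparkLocalIp).getOrElse("0.0.0.0")
        } else {
          // Client mode: easy to override via the spark-submit environment.
          sparkLocalIp.getOrElse("0.0.0.0")
        }
      }

      def main(args: Array[String]): Unit = {
        assert(chooseHost(true, None, Some("nm-host.example")) == "nm-host.example")
        assert(chooseHost(true, None, None) == "0.0.0.0")
        assert(chooseHost(false, Some("10.0.0.5"), None) == "10.0.0.5")
        println("ok")
      }
    }
    ```

    The point of the fallback chain is that `0.0.0.0` remains only the last resort, rather than the silent default whenever `SPARK_LOCAL_IP` is unset.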