[ https://issues.apache.org/jira/browse/SPARK-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-15941:
---------------------------------
    Labels: bulk-closed  (was: )

> Netty RPC implementation ignores the executor bind address
> ----------------------------------------------------------
>
>                 Key: SPARK-15941
>                 URL: https://issues.apache.org/jira/browse/SPARK-15941
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.1
>            Reporter: Marco Capuccini
>            Priority: Major
>              Labels: bulk-closed
>
> When using the Netty RPC implementation, which is the default in Spark
> 1.6.x, the executor addresses shown in the Spark application UI (the one
> on port 4040) are the IP addresses of the machines, even if I start the
> slaves with the -H option in order to bind each slave to the hostname of
> the machine.
> This is a big deal when using Spark with HDFS, as the executor addresses
> need to match the hostnames of the DataNodes to achieve data locality.
> When setting spark.rpc=akka everything works as expected, and the
> executor addresses in the Spark UI match the hostnames the slaves are
> bound to.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
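For reference, the setup and workaround the report describes can be sketched roughly as follows. The hostnames, master URL, and application names below are placeholders; the -H flag and the spark.rpc=akka setting are taken directly from the report, not verified against the 1.6.x scripts.

```shell
# Start a standalone worker bound to the machine's hostname rather than its
# IP address (hostname and master URL are placeholder values).
./sbin/start-slave.sh -H datanode-01.example.com spark://master.example.com:7077

# Workaround reported for Spark 1.6.x: fall back to the Akka RPC
# implementation so that executor addresses in the UI match the bound
# hostnames (application jar and class are placeholders).
./bin/spark-submit --conf spark.rpc=akka --class org.example.App app.jar
```

With Netty RPC left as the default, the reporter observed IP addresses in the executor list instead of the bound hostnames, which defeats HDFS data-locality matching against DataNode hostnames.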