[ https://issues.apache.org/jira/browse/SPARK-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15341396#comment-15341396 ]

Marco Capuccini commented on SPARK-15941:
-----------------------------------------

[~tgraves] I ran Spark in standalone mode, and the UI page in question is the 
executors page. I am using 1.6.1; I didn't try 2.x. I suspect the problem lies 
in the Netty RPC implementation, since the Akka implementation correctly 
reports each executor in the application UI under its bind address. 

The Netty RPC implementation also seems to perform some reverse DNS lookup on 
the executor IPs: after I set up Consul (http://consul.io) in the cluster, the 
executor page shows the Consul domain names instead of the IPs. 
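
For what it's worth, the behaviour is consistent with a plain reverse lookup 
on each executor IP via the JDK, along the lines of the sketch below. This is 
only an assumption about what the RPC layer might be doing, not a pointer to 
Spark's actual code, and the IP is a made-up example: 

    import java.net.InetAddress

    // Reverse-resolve a (hypothetical) executor IP. With Consul serving
    // PTR records for the cluster, this returns the Consul domain name
    // rather than the raw IP, matching what the executors page shows.
    val executorIp = "10.0.0.12"
    val reported = InetAddress.getByName(executorIp).getCanonicalHostName
    println(reported)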
However, if the user sets a bind address with the -H option, that bind address 
should be the one Spark uses to send tasks to the executors; otherwise, 
matching these names against the names used by HDFS becomes a nightmare. 
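
Falling back to the Akka backend works around this on 1.6.x, as noted in the 
issue description below. A minimal sketch of that configuration (the 
application name is a placeholder): 

    import org.apache.spark.{SparkConf, SparkContext}

    // Fall back to the legacy Akka RPC backend on Spark 1.6.x (the default
    // is "netty"), so executors are reported under the -H bind address.
    val conf = new SparkConf()
      .setAppName("locality-check") // placeholder name
      .set("spark.rpc", "akka")
    val sc = new SparkContext(conf)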
 

> Netty RPC implementation ignores the executor bind address
> ----------------------------------------------------------
>
>                 Key: SPARK-15941
>                 URL: https://issues.apache.org/jira/browse/SPARK-15941
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.1
>            Reporter: Marco Capuccini
>
> When using the Netty RPC implementation, which is the default in Spark 
> 1.6.x, the executor addresses shown in the Spark application UI (the one on 
> port 4040) are the IP addresses of the machines, even if I start the slaves 
> with the -H option to bind each slave to the hostname of its machine.
> This is a big deal when using Spark with HDFS, as the executor addresses 
> need to match the DataNode hostnames to achieve data locality.
> When setting spark.rpc=akka everything works as expected, and the executor 
> addresses in the Spark UI match the hostnames to which the slaves are bound.


