[ https://issues.apache.org/jira/browse/SPARK-4563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215371#comment-15215371 ]

Joe Eloff commented on SPARK-4563:
----------------------------------

I have the same issue developing against Spark running on AWS while working from 
home on a private network.
The obvious workaround is to package the source each time and deploy it to the 
cluster, but that is a waste of time compared with simply submitting from your 
dev environment, along the lines of the sketch below.
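
For context, this is the kind of submission I mean, run straight from the dev 
box (the master URL and hostnames are placeholders). With the current behaviour 
spark.driver.host is both the bind address and the advertised address, which is 
exactly what breaks for a NATed client:

    import org.apache.spark.{SparkConf, SparkContext}

    object RemoteSubmitSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("nat-test")
          // Standalone master running on AWS (placeholder address).
          .setMaster("spark://ec2-master.example.com:7077")
          // Today this single property is used both to bind the driver's
          // listeners and to advertise an address to the cluster, so a
          // NATed dev box has no value that works for both purposes.
          .set("spark.driver.host", "dev-box.example.com")

        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 10).sum())
        sc.stop()
      }
    }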

Also, the way this is implemented seems like overkill. I can't see why the 
driver must run a listener of its own. It feels as if the implementation is 
trying to make TCP behave like UDP. I might not understand the whole problem, 
but in my opinion, once the driver (the client) has made the connection to the 
master (the server), the NAT mapping is already open. The master should simply 
talk back over the endpoint created by that client-to-server TCP connection.
That way you don't need another randomly chosen listener that has to be 
registered and is now stuck behind a NATed network; just reuse the connection 
you already got (whose source port is also random and network controlled) when 
the client makes the initial connection, as in the sketch below.
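
To make that concrete, here is a minimal sketch in plain java.net sockets 
(nothing Spark-specific; the names, port, and messages are made up). The server 
answers on the very connection the client opened, so the client never needs a 
listener of its own and nothing has to connect back through the NAT:

    import java.io.{BufferedReader, InputStreamReader, PrintWriter}
    import java.net.{ServerSocket, Socket}

    object ReplyOnSameConnection {
      // "Master" side: accept a connection and answer on that same socket.
      // The client's NATed address is just the source of an established TCP
      // connection; the server never dials back.
      def serve(port: Int): Unit = {
        val server = new ServerSocket(port)
        val conn = server.accept()
        val in = new BufferedReader(new InputStreamReader(conn.getInputStream))
        val out = new PrintWriter(conn.getOutputStream, true)
        val request = in.readLine()
        out.println(s"ack: $request")  // reply travels back through the open NAT mapping
        conn.close(); server.close()
      }

      // "Driver" side: open the connection, send, and read the reply on the
      // same socket. No ServerSocket here, so nothing to advertise.
      def submit(host: String, port: Int): String = {
        val sock = new Socket(host, port)
        val out = new PrintWriter(sock.getOutputStream, true)
        val in = new BufferedReader(new InputStreamReader(sock.getInputStream))
        out.println("register-driver")
        val reply = in.readLine()
        sock.close()
        reply
      }

      def main(args: Array[String]): Unit = {
        val t = new Thread(new Runnable { def run(): Unit = serve(9999) })
        t.start()
        Thread.sleep(200)  // crude wait for the server to come up
        println(submit("localhost", 9999))
        t.join()
      }
    }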

The only case where this won't work is when the connection drops, but at that 
point there is a very good chance the registered driver listener won't work 
either, since the network itself has gone away. It is TCP after all, so the 
connection should stay up for as long as the network does. Even so, you could 
build in a client-side keep-alive to make it more robust, with very little 
extra overhead, just to keep the NAT mapping alive (see the sketch below). That 
is only needed if there are going to be longer periods with no traffic, which 
is not really the case here, since the whole point is to run a job with a 
constant feedback loop.
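
The keep-alive itself could be as simple as enabling TCP keep-alive on the 
existing socket, plus a tiny application-level heartbeat for NATs whose idle 
timeout is shorter than the OS probe interval. A rough sketch (heartbeat 
message and interval are made up):

    import java.io.PrintWriter
    import java.net.Socket
    import java.util.concurrent.{Executors, ScheduledExecutorService, TimeUnit}

    object ClientKeepAlive {
      // Keep an already-established driver-to-master connection warm so the
      // NAT mapping does not expire during quiet periods.
      def keepAlive(sock: Socket): ScheduledExecutorService = {
        // Option 1: OS-level TCP keep-alive probes on the idle connection.
        sock.setKeepAlive(true)

        // Option 2: a tiny application-level heartbeat, for NATs whose idle
        // timeout is shorter than the OS keep-alive interval (often hours).
        val out = new PrintWriter(sock.getOutputStream, true)
        val scheduler = Executors.newSingleThreadScheduledExecutor()
        scheduler.scheduleAtFixedRate(new Runnable {
          def run(): Unit = out.println("ping")  // tiny payload keeps the mapping open
        }, 30, 30, TimeUnit.SECONDS)
        scheduler
      }
    }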

> Allow spark driver to bind to different ip then advertise ip
> ------------------------------------------------------------
>
>                 Key: SPARK-4563
>                 URL: https://issues.apache.org/jira/browse/SPARK-4563
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy
>            Reporter: Long Nguyen
>            Priority: Minor
>
> Spark driver bind ip and advertise is not configurable. spark.driver.host is 
> only bind ip. SPARK_PUBLIC_DNS does not work for spark driver. Allow option 
> to set advertised ip/hostname
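
For illustration only, a sketch of the shape of the option the issue asks for. 
spark.driver.host is the existing setting; spark.driver.advertisedHost is a 
made-up name standing in for the requested advertised address:

    import org.apache.spark.SparkConf

    object BindVsAdvertiseSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("bind-vs-advertise")
          // Existing setting: currently used as the bind address.
          .set("spark.driver.host", "10.0.0.12")             // private/NATed interface
          // Hypothetical setting illustrating the request: the address the
          // master and executors should be told to connect back to.
          .set("spark.driver.advertisedHost", "203.0.113.7") // public/NAT-forwarded address
        conf.getAll.foreach { case (k, v) => println(s"$k=$v") }
      }
    }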


