[ https://issues.apache.org/jira/browse/SPARK-25180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16596566#comment-16596566 ]

Steve Loughran commented on SPARK-25180:
----------------------------------------

Reviewing a bit more, I think the root cause was

* standalone came up with the laptop on network AP 1, with IP Addr 1
* laptop moved to a different room, different AP, and a different IP Addr
* so the old address no longer worked

Assumption: the standalone service is coming up on the external address, not the 
loopback, so the shell can't reach it once the external address changes.
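For a laptop-only standalone cluster, one way to avoid this class of failure is to pin everything to the loopback interface so a Wi-Fi/DHCP change can't invalidate the advertised address. A minimal sketch, assuming the documented SPARK_LOCAL_IP / SPARK_MASTER_HOST env vars and the spark.driver.host conf (the worker start script is start-slave.sh on 2.x, start-worker.sh on 3.x):

```shell
# Bind all Spark daemons and the driver to loopback, not the external NIC
export SPARK_LOCAL_IP=127.0.0.1
export SPARK_MASTER_HOST=127.0.0.1   # standalone master bind address

# Start the standalone master and a worker pointed at it
"$SPARK_HOME/sbin/start-master.sh"
"$SPARK_HOME/sbin/start-slave.sh" spark://127.0.0.1:7077

# The shell/driver should advertise loopback too
spark-shell --master spark://127.0.0.1:7077 \
  --conf spark.driver.host=127.0.0.1
```

With this, moving between access points doesn't matter because nothing ever advertised the roaming external address.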

I'd close that as a WONTFIX "don't do that", though I worry there's a security 
implication: if things really are coming up on the public IP address, then unless 
the ports are locked down, you've just granted malicious callers on the same 
network the ability to run anything they want on your system.
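To check whether the concern above actually applies on a given laptop, a quick hedged diagnostic (standard lsof/netstat, nothing Spark-specific) is to look at what addresses the JVM daemons are listening on:

```shell
# List listening TCP sockets for Java processes.
# Anything bound to 0.0.0.0 or the external IP (rather than 127.0.0.1)
# is reachable by other hosts on the same network.
lsof -iTCP -sTCP:LISTEN -nP | grep -i java

# Alternative if lsof isn't available:
netstat -an | grep LISTEN
```

If the master/worker RPC and web UI ports show up bound to the external address, the "same-network callers can submit work" risk is real, not theoretical.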

> Spark standalone failure in Utils.doFetchFile() if nslookup of local hostname 
> fails
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-25180
>                 URL: https://issues.apache.org/jira/browse/SPARK-25180
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.0
>         Environment: mac laptop running on a corporate guest wifi, presumably 
> a wifi with odd DNS settings.
>            Reporter: Steve Loughran
>            Priority: Minor
>
> Trying to save work on Spark standalone can fail if Netty RPC cannot 
> determine the hostname. While that's a valid failure on a real cluster, in 
> standalone mode falling back to localhost rather than the inferred 
> "hw13176.lan" value may be the better option.
> Note also: the abort* call failed with an NPE.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
