[ 
https://issues.apache.org/jira/browse/SPARK-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362932#comment-14362932
 ] 

Saisai Shao commented on SPARK-6304:
------------------------------------

As I said, users normally do not set the two configurations 
{{spark.driver.host}} and {{spark.driver.port}} themselves; they leave them to 
SparkContext. SparkContext internally chooses the driver's host name and a 
random port for these two configurations. The reason for doing so is to avoid 
port contention when multiple drivers run on the same machine. 

Spark Streaming relies on this assumption and removes these two configurations 
when recovering from a checkpoint file, again to avoid port contention. Yes, 
this is a bug for usage scenarios like yours.
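The behaviour described above can be sketched as follows. This is a minimal, hypothetical stand-in, not Spark's actual implementation: the two keys are real Spark configuration names, but the object {{CheckpointRecoverySketch}} and its filtering logic are simplified illustrations of what checkpoint recovery effectively does to the saved properties.

```scala
// Hypothetical sketch of checkpoint recovery dropping the driver
// endpoint settings (not Spark's real Checkpoint code).
object CheckpointRecoverySketch {
  // Keys deliberately dropped on recovery so a restarted driver picks a
  // fresh host and random port, avoiding contention with other drivers
  // running on the same machine.
  val propertiesToDrop: Set[String] =
    Set("spark.driver.host", "spark.driver.port")

  // Rebuild the configuration from a checkpoint, minus the dropped keys.
  def recoverConf(checkpointed: Map[String, String]): Map[String, String] =
    checkpointed.filter { case (key, _) => !propertiesToDrop(key) }
}
```

Under this sketch, a fixed {{spark.driver.port}} saved in the checkpoint is silently discarded on recovery, which is exactly the symptom reported in the issue.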

> Checkpointing doesn't retain driver port
> ----------------------------------------
>
>                 Key: SPARK-6304
>                 URL: https://issues.apache.org/jira/browse/SPARK-6304
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.2.1
>            Reporter: Marius Soutier
>
> In a checkpointed Streaming application running on a fixed driver port, the 
> setting "spark.driver.port" is not loaded when recovering from a checkpoint.
> (The driver then starts on a random port.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
