[ 
https://issues.apache.org/jira/browse/SPARK-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135116#comment-14135116
 ] 

Hari Shreedharan commented on SPARK-3129:
-----------------------------------------

It looks like Akka makes it difficult to connect back to a client (in this case 
a BlockManagerSlaveActor) from a new server (in this case, the 
BlockManagerMasterActor). Since ActorRefs are serializable, I am going to 
serialize the ActorRef to each BlockManagerSlaveActor to the HDFS location, 
rather than just their locations - so on startup we can simply read the refs 
back from there and connect to the slaves.
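
Roughly the shape of what I have in mind - a minimal sketch only, not the
actual patch. The object and method names (SlaveRefCheckpoint, writeSlaveRef,
readSlaveRef) are made up for illustration, and it assumes Akka's
Serialization.serializedActorPath plus the Hadoop FileSystem API:

import akka.actor.{ActorRef, ActorSelection, ActorSystem}
import akka.serialization.Serialization
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object SlaveRefCheckpoint {

  // Persist the slave actor's fully-qualified Akka path to HDFS so a
  // restarted BlockManagerMasterActor can find the slave again without
  // waiting for it to re-register.
  def writeSlaveRef(slaveRef: ActorRef, checkpointPath: Path, conf: Configuration): Unit = {
    val fs = FileSystem.get(conf)
    val out = fs.create(checkpointPath, true)
    try {
      // serializedActorPath yields the full remote path, e.g.
      // akka.tcp://sparkExecutor@host:port/user/BlockManagerSlaveActor
      out.writeUTF(Serialization.serializedActorPath(slaveRef))
    } finally {
      out.close()
    }
  }

  // On recovery, read the path back and resolve it from the new master's
  // ActorSystem; messages sent to the selection reach the still-running slave.
  def readSlaveRef(system: ActorSystem, checkpointPath: Path, conf: Configuration): ActorSelection = {
    val fs = FileSystem.get(conf)
    val in = fs.open(checkpointPath)
    try {
      system.actorSelection(in.readUTF())
    } finally {
      in.close()
    }
  }
}

An actorSelection should be enough here, since only the master side restarts
while the slaves' ActorSystems stay up.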

> Prevent data loss in Spark Streaming
> ------------------------------------
>
>                 Key: SPARK-3129
>                 URL: https://issues.apache.org/jira/browse/SPARK-3129
>             Project: Spark
>          Issue Type: New Feature
>            Reporter: Hari Shreedharan
>            Assignee: Hari Shreedharan
>         Attachments: SecurityFix.diff, StreamingPreventDataLoss.pdf
>
>
> Spark Streaming can lose small amounts of data when the driver goes down and 
> the sending system cannot re-send the data (or the data has already expired 
> on the sender side). The attached document has more details. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
