[ https://issues.apache.org/jira/browse/SPARK-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352102#comment-14352102 ]
Shixiong Zhu commented on SPARK-5124:
-------------------------------------

The problem is that the message may come from caller.receive, but the callee wants to send the reply to caller.receiveAndReply. However, I cannot find a use case for that right now. I did find that some RpcEndpoints may need to know the sender's address, so I added a sender method to RpcCallContext. I also removed replyWithSender, since it can now be replaced with RpcCallContext.sender.sendWithReply(msg, self).

> Standardize internal RPC interface
> ----------------------------------
>
>                 Key: SPARK-5124
>                 URL: https://issues.apache.org/jira/browse/SPARK-5124
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>            Reporter: Reynold Xin
>            Assignee: Shixiong Zhu
>         Attachments: Pluggable RPC - draft 1.pdf, Pluggable RPC - draft 2.pdf
>
>
> In Spark we use Akka as the RPC layer. It would be great if we could standardize the internal RPC interface to facilitate testing. This will also provide the foundation to try other RPC implementations in the future.
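
For illustration, a minimal Scala sketch of the pattern described in the comment above. Only receive, receiveAndReply, RpcCallContext.sender, sendWithReply, self, and the removed replyWithSender come from the comment; the trait shapes (RpcEndpointRef, reply) and the example messages and endpoint (RegisterWorker, RegisteredWorker, Heartbeat, MasterEndpoint) are assumptions made up for this sketch and are not the actual org.apache.spark.rpc API.

// Illustrative sketch only: trait shapes are assumptions, not Spark's real interface.
trait RpcEndpointRef {
  def send(message: Any): Unit                                    // fire-and-forget
  def sendWithReply(message: Any, sender: RpcEndpointRef): Unit   // delivered to receiveAndReply
}

trait RpcCallContext {
  def reply(response: Any): Unit   // reply to the caller of receiveAndReply
  def sender: RpcEndpointRef       // the sender method added per the comment above
}

trait RpcEndpoint {
  def self: RpcEndpointRef
  def receive: PartialFunction[Any, Unit]
  def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit]
}

// Hypothetical messages, for illustration only.
case class RegisterWorker(id: String)
case class RegisteredWorker(id: String)
case class Heartbeat(id: String)

// An endpoint using RpcCallContext.sender in place of the removed replyWithSender.
class MasterEndpoint(val self: RpcEndpointRef) extends RpcEndpoint {

  override def receive: PartialFunction[Any, Unit] = {
    case Heartbeat(id) =>
      // One-way message: there is no context to reply through.
      println(s"heartbeat from $id")
  }

  override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
    case RegisterWorker(id) =>
      // Reply directly to the caller of receiveAndReply.
      context.reply(RegisteredWorker(id))
      // Equivalent of the removed replyWithSender: contact the caller again,
      // identifying this endpoint via `self` so the caller can reply in turn.
      context.sender.sendWithReply(Heartbeat("master"), self)
  }
}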