[ https://issues.apache.org/jira/browse/SPARK-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348215#comment-14348215 ]

Shixiong Zhu commented on SPARK-5124:
-------------------------------------

I made some changes to the above APIs.

1. I added a new trait RpcResponse because RpcResponseCallback cannot be used 
directly. Both the `receive` and `receiveAndReply` methods can receive messages 
when the sender is an RpcEndpoint, so if the receiver wants to reply to the 
sender, it needs a way to specify which of the sender's methods should receive 
the reply. Here is the interface of RpcResponse:

{code}
private[spark] trait RpcResponse {
  def reply(response: Any): Unit
  def replyWithSender(response: Any, sender: RpcEndpointRef): Unit
  def fail(e: Throwable): Unit
}
{code}

Calling `reply` will send the message to RpcEndpoint.receive, and calling 
`replyWithSender` will send the message to RpcEndpoint.receiveAndReply.
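
To make the intended usage concrete, here is a minimal sketch of an endpoint 
using RpcResponse. The receiveAndReply signature and the `self` ref are my 
assumptions here, not the final API; the real signatures are in the PR:

{code}
// Sketch only: assumes RpcEndpoint declares `self: RpcEndpointRef` and a
// receiveAndReply(message, context) method.
private[spark] class EchoEndpoint(override val self: RpcEndpointRef) extends RpcEndpoint {
  override def receiveAndReply(message: Any, context: RpcResponse): Unit = message match {
    case "ping" =>
      context.reply("pong")                 // delivered to the sender's receive
    case "ping-and-continue" =>
      context.replyWithSender("pong", self) // delivered to the sender's receiveAndReply
    case unknown =>
      context.fail(new IllegalArgumentException(s"Unknown message: $unknown"))
  }
}
{code}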

2. Instead of adding a trait like ThreadSafeRpcEndpoint, I added a new method 
`setupThreadSafeEndpoint` to RpcEnv, so that each RpcEnv implementation can 
decide how to provide the thread-safe semantics internally. For 
`setupEndpoint`, RpcEnv won't guarantee that messages are delivered to the 
endpoint thread-safely.
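
The two registration methods would look roughly like this (the return type and 
parameters are a sketch, not the final signatures):

{code}
private[spark] trait RpcEnv {
  // No thread-safety guarantee: the endpoint may be invoked concurrently
  // from multiple threads and must synchronize its own state.
  def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef

  // RpcEnv serializes delivery so the endpoint sees one message at a time;
  // e.g. an Akka-based RpcEnv gets this for free from the actor mailbox.
  def setupThreadSafeEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef
}
{code}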

[~vanzin] For the local Endpoint idea, I think we need a local message 
dispatcher to send/receive messages between different Endpoints; it cannot be 
implemented in the Endpoint alone. We would also need an EndpointRef to refer 
to an Endpoint, and a generic trait shared by the local message dispatcher and 
RpcEnv. It looks a bit complex; a rough sketch is below. What do you think?
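
Roughly what I have in mind (all names here are hypothetical):

{code}
// A common trait that both the local dispatcher and the remote-capable RpcEnv
// would implement, so Endpoints can be written against one abstraction.
private[spark] trait MessageDispatcher {
  def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef
  def endpointRef(name: String): RpcEndpointRef
  def stop(ref: RpcEndpointRef): Unit
}
{code}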

Please help review my PR: https://github.com/apache/spark/pull/4588

> Standardize internal RPC interface
> ----------------------------------
>
>                 Key: SPARK-5124
>                 URL: https://issues.apache.org/jira/browse/SPARK-5124
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>            Reporter: Reynold Xin
>            Assignee: Shixiong Zhu
>         Attachments: Pluggable RPC - draft 1.pdf, Pluggable RPC - draft 2.pdf
>
>
> In Spark we use Akka as the RPC layer. It would be great if we can 
> standardize the internal RPC interface to facilitate testing. This will also 
> provide the foundation to try other RPC implementations in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
