[ https://issues.apache.org/jira/browse/SPARK-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337724#comment-14337724 ]

Shixiong Zhu commented on SPARK-5124:
-------------------------------------

[~vanzin], thanks for the suggestions. I agree with most of them. Just a small 
comment on the following point:

{quote}
The default Endpoint has no thread-safety guarantees. You can wrap an Endpoint 
in an EventLoop if you want messages to be handled using a queue, or 
synchronize your receive() method (although that can block the dispatcher 
thread, which could be bad). But this would easily allow actors to process 
multiple messages concurrently if desired.
{quote}

Every Endpoint wrapped in an EventLoop needs an exclusive thread, so this 
approach would significantly increase the number of threads and pay extra 
thread context-switch cost. However, I think we could instead have a global 
Dispatcher for the Endpoints that need thread-safety guarantees: an Endpoint 
registers itself with the Dispatcher, and the Dispatcher delivers messages to 
these Endpoints while guaranteeing thread safety. Such a Dispatcher only needs 
a few threads and queues for dispatching messages.
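
To make the idea concrete, here is a rough sketch of such a Dispatcher (the 
names Dispatcher, Endpoint, and Inbox are only illustrative, not a proposed 
API): a small fixed thread pool is shared by all registered Endpoints, and a 
per-Endpoint inbox guarantees that at most one pool thread processes a given 
Endpoint's messages at a time, so receive() never runs concurrently for the 
same Endpoint.

{code:scala}
import java.util.concurrent.{ConcurrentHashMap, Executors, LinkedBlockingQueue}

// Illustrative sketch only -- not a proposed API.
trait Endpoint {
  def receive(message: Any): Unit
}

class Dispatcher(numThreads: Int) {
  // One inbox per registered Endpoint. "scheduled" is guarded by the
  // inbox's monitor and ensures at most one pool thread drains the inbox.
  private class Inbox(val endpoint: Endpoint) {
    val messages = new LinkedBlockingQueue[Any]()
    var scheduled = false
  }

  private val inboxes = new ConcurrentHashMap[String, Inbox]()
  private val pool = Executors.newFixedThreadPool(numThreads)

  def register(name: String, endpoint: Endpoint): Unit = {
    inboxes.put(name, new Inbox(endpoint))
  }

  def post(name: String, message: Any): Unit = {
    val inbox = inboxes.get(name)
    if (inbox == null) return
    inbox.messages.put(message)
    inbox.synchronized {
      // Schedule the inbox only if no thread is already draining it, so
      // one Endpoint's messages are always processed sequentially.
      if (!inbox.scheduled) {
        inbox.scheduled = true
        pool.execute(new Runnable {
          override def run(): Unit = drain(inbox)
        })
      }
    }
  }

  private def drain(inbox: Inbox): Unit = {
    var message = inbox.messages.poll()
    while (message != null) {
      inbox.endpoint.receive(message)
      message = inbox.messages.poll()
    }
    inbox.synchronized {
      inbox.scheduled = false
      // A message may have arrived after the last poll; reschedule if so.
      if (!inbox.messages.isEmpty) {
        inbox.scheduled = true
        pool.execute(new Runnable {
          override def run(): Unit = drain(inbox)
        })
      }
    }
  }

  def shutdown(): Unit = pool.shutdown()
}
{code}

With this, the number of threads is bounded by numThreads no matter how many 
Endpoints are registered, and an Endpoint that wants to process messages 
concurrently can simply skip registering and handle its own synchronization.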

> Standardize internal RPC interface
> ----------------------------------
>
>                 Key: SPARK-5124
>                 URL: https://issues.apache.org/jira/browse/SPARK-5124
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>            Reporter: Reynold Xin
>            Assignee: Shixiong Zhu
>         Attachments: Pluggable RPC - draft 1.pdf, Pluggable RPC - draft 2.pdf
>
>
> In Spark we use Akka as the RPC layer. It would be great if we could 
> standardize the internal RPC interface to facilitate testing. This will also 
> provide the foundation for trying other RPC implementations in the future.


