Reynold Xin created SPARK-11095:
-----------------------------------

             Summary: Simplify Netty RPC implementation by using a separate thread pool for each endpoint
                 Key: SPARK-11095
                 URL: https://issues.apache.org/jira/browse/SPARK-11095
             Project: Spark
          Issue Type: Sub-task
          Components: Spark Core
            Reporter: Reynold Xin
            Assignee: Shixiong Zhu


The Dispatcher and Inbox classes of the current Netty-based RPC 
implementation are fairly complicated. The implementation uses a single, 
shared thread pool to process messages for all endpoints. This is similar to 
how Akka does actor message dispatching. The benefit of this design is that 
the RPC implementation can support a very large number of endpoints, since 
they are all multiplexed onto a single thread pool for execution. The downside 
is the complexity resulting from synchronization and coordination.

An alternative implementation is to have a separate message queue and thread 
pool for each endpoint. The dispatcher simply routes the messages to the 
appropriate message queue, and the threads poll the queue for messages to 
process.
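
A minimal sketch of this routing scheme (hypothetical, simplified types; not 
the actual Dispatcher/Inbox classes) could look like the following. Each 
endpoint owns its queue and thread pool, and the dispatcher only enqueues:

{code:scala}
import java.util.concurrent.{ConcurrentHashMap, Executors, LinkedBlockingQueue}

// Hypothetical simplified types for illustration only.
case class RpcMessage(content: Any)

trait Endpoint {
  def receive(content: Any): Unit
}

// One message queue and one thread pool per endpoint; worker threads only
// ever touch their own endpoint's queue.
class EndpointInbox(endpoint: Endpoint, numThreads: Int) {
  private val queue = new LinkedBlockingQueue[RpcMessage]()
  private val pool = Executors.newFixedThreadPool(numThreads)

  def post(message: RpcMessage): Unit = queue.put(message)

  def start(): Unit = (1 to numThreads).foreach { _ =>
    pool.execute(new Runnable {
      override def run(): Unit = {
        while (true) {
          val msg = queue.take()          // blocks until a message arrives
          endpoint.receive(msg.content)
        }
      }
    })
  }
}

// The dispatcher no longer executes anything itself; it only routes each
// incoming message to the target endpoint's own queue.
class Dispatcher {
  private val inboxes = new ConcurrentHashMap[String, EndpointInbox]()

  def register(name: String, inbox: EndpointInbox): Unit = {
    inboxes.put(name, inbox)
    inbox.start()
  }

  def postMessage(endpointName: String, message: RpcMessage): Unit = {
    val inbox = inboxes.get(endpointName)
    if (inbox != null) inbox.post(message)
  }
}
{code}

Because each worker thread only touches its own endpoint's queue, most of the 
cross-endpoint synchronization that complicates the current code goes away.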

If the endpoint is single-threaded, the thread pool should contain only a 
single thread. If the endpoint supports concurrent execution, the thread pool 
should contain more threads.
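
For example (again with hypothetical types), the pool size could be derived 
from whether the endpoint declares itself safe for concurrent processing:

{code:scala}
import java.util.concurrent.{ExecutorService, Executors}

// Hypothetical marker trait: endpoints mixing this in allow their handler
// to be invoked concurrently.
trait Endpoint
trait ThreadSafeEndpoint extends Endpoint

object EndpointPools {
  // Single-threaded endpoints get exactly one thread; thread-safe endpoints
  // get a larger pool so their messages can be processed concurrently.
  def poolFor(endpoint: Endpoint, maxThreads: Int): ExecutorService = endpoint match {
    case _: ThreadSafeEndpoint => Executors.newFixedThreadPool(maxThreads)
    case _                     => Executors.newSingleThreadExecutor()
  }
}
{code}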

Two additional things we need to be careful with are:

1. An endpoint should only process normal messages after OnStart is called. 
This can be done by having the thread that starts the endpoint process 
OnStart first.

2. An endpoint should process OnStop only after all normal messages have 
been processed. I think this can be done by having a busy loop that spins 
until the message queue is empty, as in the sketch below.
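
A sketch of a per-endpoint processing loop that respects both constraints 
(hypothetical names again; the poll-until-empty check mirrors point 2) might 
look like this:

{code:scala}
import java.util.concurrent.atomic.AtomicBoolean
import java.util.concurrent.{Executors, LinkedBlockingQueue, TimeUnit}

// Hypothetical simplified types for illustration only.
case class RpcMessage(content: Any)

trait Endpoint {
  def onStart(): Unit
  def receive(content: Any): Unit
  def onStop(): Unit
}

class EndpointLoop(endpoint: Endpoint, numThreads: Int) {
  private val queue = new LinkedBlockingQueue[RpcMessage]()
  private val pool = Executors.newFixedThreadPool(numThreads)
  private val stopRequested = new AtomicBoolean(false)
  private val stopped = new AtomicBoolean(false)

  // (1) OnStart runs on the thread that starts the endpoint, before any
  // worker thread begins polling normal messages.
  def start(): Unit = {
    endpoint.onStart()
    (1 to numThreads).foreach { _ =>
      pool.execute(new Runnable { override def run(): Unit = processLoop() })
    }
  }

  def post(message: RpcMessage): Unit = queue.put(message)

  def stop(): Unit = stopRequested.set(true)

  private def processLoop(): Unit = {
    var done = false
    while (!done) {
      val msg = queue.poll(10, TimeUnit.MILLISECONDS)
      if (msg != null) {
        endpoint.receive(msg.content)
      } else if (stopRequested.get() && queue.isEmpty) {
        // (2) Only once the queue has drained do we run OnStop, and only on
        // one thread even if the pool has several.
        if (stopped.compareAndSet(false, true)) {
          endpoint.onStop()
          pool.shutdown()
        }
        done = true
      }
    }
  }
}
{code}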
