[ https://issues.apache.org/jira/browse/SPARK-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-5375.
------------------------------
    Resolution: Duplicate

Sounds like an exact duplicate of SPARK-3607, which is resolved in master for 
1.3.0.

> Specify more clearly about the max thread meaning in the ConnectionManager
> --------------------------------------------------------------------------
>
>                 Key: SPARK-5375
>                 URL: https://issues.apache.org/jira/browse/SPARK-5375
>             Project: Spark
>          Issue Type: Improvement
>    Affects Versions: 1.1.0
>            Reporter: DjvuLee
>
> In ConnectionManager.scala there are three thread pools: 
> handleMessageExecutor, handleReadWriteExecutor, and handleConnectExecutor.
> For example:
> private val handleMessageExecutor = new ThreadPoolExecutor(
>   conf.getInt("spark.core.connection.handler.threads.min", 20),
>   conf.getInt("spark.core.connection.handler.threads.max", 60),
>   conf.getInt("spark.core.connection.handler.threads.keepalive", 60),
>   TimeUnit.SECONDS,
>   new LinkedBlockingDeque[Runnable](),
>   Utils.namedThreadFactory("handle-message-executor"))
> Since we use an unbounded LinkedBlockingDeque, the max thread parameter has 
> no effect. Every time I read this code it is confusing. Maybe we can add a 
> comment in those places?
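
As background on why the max setting is inert, here is an illustrative sketch
(not part of the ticket; the small core/max values of 2/6 stand in for the
20/60 defaults quoted above). ThreadPoolExecutor only creates threads beyond
corePoolSize when the work queue rejects a task, and an unbounded
LinkedBlockingDeque never rejects one, so the pool never grows past the
minimum:

    import java.util.concurrent.{LinkedBlockingDeque, ThreadPoolExecutor, TimeUnit}

    object MaxThreadsDemo {
      def main(args: Array[String]): Unit = {
        // core = 2, max = 6: stand-ins for the 20/60 defaults in ConnectionManager
        val pool = new ThreadPoolExecutor(
          2, 6, 60L, TimeUnit.SECONDS,
          new LinkedBlockingDeque[Runnable]())  // unbounded: offer() always succeeds

        // Queue far more work than two threads can drain. The extra tasks sit in
        // the deque instead of triggering new threads, because the executor only
        // grows beyond corePoolSize when the queue refuses a task.
        (1 to 100).foreach { _ =>
          pool.execute(new Runnable { def run(): Unit = Thread.sleep(100) })
        }
        Thread.sleep(500)
        println(s"pool size = ${pool.getPoolSize}")  // prints 2; never reaches the max of 6
        pool.shutdown()
      }
    }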



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

