[ https://issues.apache.org/jira/browse/THRIFT-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490710#comment-14490710 ]

James E. King, III commented on THRIFT-3084:
--------------------------------------------

The proper way to do this would be to deprecate TThreadedServer in 0.9.x with a 
comment noting that it disappears in 1.0.  I am okay with that as well.  The 
default ThreadFactory is essentially how TThreadedServer works anyway; however, 
I will be adding concurrent client limits via Semaphore in THRIFT-3084, so the 
standard servers should have everything they need to be production quality and 
resource-bound.
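
For reference, a minimal sketch of the Semaphore idea, under a couple of 
assumptions: the ConnectionLimiter name is made up for illustration, and it 
uses standard C++ primitives rather than Thrift's own concurrency classes, 
which an actual patch would use.

    #include <condition_variable>
    #include <mutex>

    // Hypothetical counting semaphore for bounding concurrent clients.
    // serve() would call acquire() before accept(); the disconnect path
    // would call release().
    class ConnectionLimiter {
    public:
        explicit ConnectionLimiter(int maxClients) : slots_(maxClients) {}

        // Blocks the serve() thread while the server is at its limit.
        void acquire() {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return slots_ > 0; });
            --slots_;
        }

        // Frees a slot when a client disconnects and wakes serve().
        void release() {
            std::lock_guard<std::mutex> lock(mutex_);
            ++slots_;
            cond_.notify_one();
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        int slots_;
    };

With this shape, a server at its limit simply parks the accept thread instead 
of exhausting file descriptors or threads.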

> C++ add concurrent client limit to threaded servers
> ---------------------------------------------------
>
>                 Key: THRIFT-3084
>                 URL: https://issues.apache.org/jira/browse/THRIFT-3084
>             Project: Thrift
>          Issue Type: Improvement
>          Components: C++ - Library
>    Affects Versions: 0.8, 0.9, 0.9.1, 0.9.2
>            Reporter: James E. King, III
>
> The TThreadedServer and TThreadPoolServer do not impose limits on the number 
> of simultaneous connections, which makes them unsuitable for production: 
> misbehaving clients can drive a server to exhaust file descriptors or spawn 
> too many threads.
> 1. Add a barrier to TServerTransport that will be checked before accept().
> 2. In the onClientConnected override (see THRIFT-3083), if the server has 
> reached its concurrent client limit, enable the barrier.
> 3. In the onClientDisconnected override, if the count of connected clients 
> falls below the maximum concurrent limit, clear the barrier.  This allows 
> the limit to be lowered dynamically at runtime: the server drains off 
> clients until more can be accepted (see the sketch after this quoted block).
> Alternate proposal: Implement a Semaphore and have the servers block the 
> serve() thread when an arriving client puts the server at the concurrent 
> client limit.
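
For concreteness, a sketch of the barrier bookkeeping in steps 1-3 above; 
ClientGate and its method names are hypothetical (the actual connect and 
disconnect hooks are the ones proposed in THRIFT-3083):

    #include <atomic>

    // Hypothetical gate consulted by TServerTransport before accept().
    class ClientGate {
    public:
        explicit ClientGate(int maxClients)
            : maxClients_(maxClients), connected_(0), barrier_(false) {}

        // The limit can be changed (e.g. lowered) at runtime.
        void setLimit(int maxClients) { maxClients_.store(maxClients); }

        // Step 2: raise the barrier when the limit is reached.
        void onClientConnected() {
            if (++connected_ >= maxClients_.load()) barrier_.store(true);
        }

        // Step 3: clear the barrier once the count drops below the limit,
        // so a lowered limit drains off clients until accepts resume.
        void onClientDisconnected() {
            if (--connected_ < maxClients_.load()) barrier_.store(false);
        }

        // Step 1: checked before accept().
        bool blocked() const { return barrier_.load(); }

    private:
        std::atomic<int> maxClients_;
        std::atomic<int> connected_;
        std::atomic<bool> barrier_;
    };

A real implementation would park the accept thread on a condition variable 
rather than poll blocked(), but the counting logic is the same.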


