[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585226#comment-14585226 ]

Hrishikesh Gadre commented on SOLR-7344:
----------------------------------------

>>did this solve the distributed deadlock issue

Yes, it would *solve* the distributed deadlock issue. Remember how the deadlock 
can happen in the first place?

- *All* worker threads are processing top-level requests (either request 
forwarding or scatter-gather querying).
- During request processing, they send sub-requests and wait for the results 
of those requests.
- These sub-requests cannot be processed, since no threads are available to 
process them.

How would this approach fix the problem? By allowing top-level requests to 
consume only a (configurable) portion of the thread pool. This guarantees that 
the remaining portion of the thread pool is always available for processing 
sub-requests. This is as good as having two thread pools.
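To make the mechanism concrete, here is a minimal Java sketch. This is NOT the 
actual SOLR-7344 patch; the class name, method names, and pool sizes are made 
up for illustration. Top-level requests must acquire a permit from a semaphore 
sized below the pool size, while sub-requests bypass the permit check:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

/**
 * Minimal sketch, not the real patch: one shared worker pool in which
 * top-level requests may occupy at most a configurable number of threads,
 * so threads are always left over for sub-requests.
 */
public class PartitionedPoolSketch {

    private static final int POOL_SIZE = 10;
    // Hypothetical cap: top-level requests may use at most 7 of the 10
    // threads; the remaining 3 are effectively reserved for sub-requests.
    private static final int TOP_LEVEL_LIMIT = 7;

    private final ExecutorService pool = Executors.newFixedThreadPool(POOL_SIZE);
    private final Semaphore topLevelPermits = new Semaphore(TOP_LEVEL_LIMIT);

    /** Entry point for external (top-level) requests. */
    public void handleTopLevel(Runnable request) throws InterruptedException {
        // Block (a real implementation might instead reject via tryAcquire)
        // until fewer than TOP_LEVEL_LIMIT top-level requests are in flight.
        topLevelPermits.acquire();
        pool.execute(() -> {
            try {
                request.run();
            } finally {
                topLevelPermits.release();
            }
        });
    }

    /**
     * Entry point for internal sub-requests: no permit required, so the
     * POOL_SIZE - TOP_LEVEL_LIMIT reserved threads can always serve them,
     * even when every permitted top-level request is blocked on results.
     */
    public void handleSubRequest(Runnable request) {
        pool.execute(request);
    }
}
{code}

Because at most TOP_LEVEL_LIMIT threads can ever be blocked waiting on 
sub-request results, the reserved threads are always free to drain 
sub-requests, which breaks the circular wait.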

>>did this address the need to limit concurrent requests without accidentally 
>>decreasing throughput for some request loads (think of the differences 
>>between high fanout and low fanout query request types for example).

It should. But this depends on choosing an appropriate limit for the various 
request types.
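To illustrate the sizing trade-off with hypothetical numbers (not taken from 
the patch): with a 100-thread pool, a high-fanout query hitting 50 shards 
turns each top-level request into 50 sub-requests, so a cap of 90 top-level 
threads leaves only 10 threads to drain a potentially long sub-request queue 
and can throttle throughput; a low-fanout workload could run safely with a 
much higher cap. Choosing the limit therefore requires knowing the fanout 
profile of the expected request mix.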

>>did it make life harder for clients

The clients are not aware of this change at all, so I don't think it would be 
a problem.


> Allow Jetty thread pool limits while still avoiding distributed deadlock.
> -------------------------------------------------------------------------
>
>                 Key: SOLR-7344
>                 URL: https://issues.apache.org/jira/browse/SOLR-7344
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Mark Miller
>         Attachments: SOLR-7344.patch
>
>




