[
https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14586349#comment-14586349
]
Mark Miller commented on SOLR-7344:
-----------------------------------
bq. Dude (mark), you're just being obstinate now. Relying on timeouts to break
distributed deadlock is horrible and will cause random unrelated requests to
also fail.
I don't agree. The first time you try a streaming API query that requires
enough recursion to use that many threads, it will fail and the user will
realize what is happening. I think that is better behavior!
I prefer that to just using unlimited threads. I can just as easily say you are
being obstinate about having Solr just eat up as many threads as it can no
matter what. Most of the devs I know hate that about Solr. I do too.
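To make the trade-off concrete, here is a minimal sketch (plain
java.util.concurrent, not Solr's actual request handling) of what a capped pool
with fail-fast rejection looks like. The 300 cap is purely illustrative. The
point is that when the limit is hit the caller gets an immediate
RejectedExecutionException instead of the cluster wedging and waiting on socket
timeouts to unwind.
{code:java}
import java.util.concurrent.*;

public class BoundedRequestPool {
    public static void main(String[] args) {
        int maxThreads = 300; // illustrative cap, not an actual Solr default

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            maxThreads, maxThreads,
            60, TimeUnit.SECONDS,
            new SynchronousQueue<>(),              // no unbounded backlog of waiting work
            new ThreadPoolExecutor.AbortPolicy()); // reject rather than block or queue forever

        try {
            pool.execute(() -> {
                // ... handle an inter-node request that may itself fan out ...
            });
        } catch (RejectedExecutionException e) {
            // Fails fast and visibly at the point of overload, instead of the
            // request hanging until a timeout breaks a distributed deadlock.
            System.err.println("Thread limit reached: " + e);
        } finally {
            pool.shutdown();
        }
    }
}
{code}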
bq. We need a separate queue with a high limit (or a higher limit)
Yes, the limit can be higher. It doesn't need to be *that* high by default.
That is my point. We could decide that Solr should not spin up more than 300
threads by default, and that heavily recursive streaming API requests that need
lots of threads might fail unless a user specifically says: yes, what I'm doing
requires ridiculous thread creation, so I'll allow it. I'm saying that could be
okay. You might not like it, but it's a perfectly acceptable path. I don't
agree it should be some silly high number like what we do now.
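As a sketch of what that opt-in could look like (the property name and default
below are made up for illustration, not an existing Solr setting), the
streaming expression work could run on its own bounded executor whose cap stays
at 300 unless the user explicitly raises it:
{code:java}
import java.util.concurrent.*;

public class StreamingExpressionThreads {
    // Hypothetical knob: a user who really needs heavy recursion raises this explicitly.
    private static final int DEFAULT_MAX = 300;

    public static ExecutorService newBoundedPool() {
        int max = Integer.getInteger("solr.streaming.maxThreads", DEFAULT_MAX);
        return new ThreadPoolExecutor(
            0, max,
            60, TimeUnit.SECONDS,
            new SynchronousQueue<>(),
            new ThreadPoolExecutor.AbortPolicy()); // over the cap => fail the expression, not the node
    }
}
{code}
Recursion-heavy expressions that blow past the default would fail with a clear
rejection pointing at the limit, and anyone who genuinely needs more threads
opts in deliberately instead of Solr quietly creating thousands of them.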
> Allow Jetty thread pool limits while still avoiding distributed deadlock.
> -------------------------------------------------------------------------
>
> Key: SOLR-7344
> URL: https://issues.apache.org/jira/browse/SOLR-7344
> Project: Solr
> Issue Type: Improvement
> Components: SolrCloud
> Reporter: Mark Miller
> Attachments: SOLR-7344.patch
>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)