[
https://issues.apache.org/jira/browse/LUCENE-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12976027#action_12976027
]
Earwin Burrfoot commented on LUCENE-2840:
-----------------------------------------
I use the following scheme:
* There is a fixed pool of threads, shared by all searches, that limits total
concurrency.
* Each new search acquires at most a fixed number of threads from this pool
(say, 2-3 of 8 in my setup),
* and these threads churn through the segments as through a queue (in maxDoc
order, though I think even that is unnecessary).
There is no special smart binding between threads and segments (e.g. one thread
for each big segment, one thread for all of the small ones). That means simpler
code, and zero possibility of stalling when there are threads to run and
segments to search but the binding policy does not connect them.
Using fewer threads per search than the total available is a precaution against
big searches blocking fast ones.
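
A rough sketch of that scheme, for illustration only (Segment, Query, and
SegmentResult are placeholder types here, not Lucene APIs; the pool size and
per-search cap just mirror the numbers above):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SegmentQueueSearch {

  // Placeholder types for the sketch; not Lucene classes.
  interface Query {}
  interface SegmentResult {}
  interface Segment { SegmentResult search(Query query); }

  // One pool shared by every search; its size caps total concurrency.
  private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

  // Per-search cap, e.g. 2-3 of the 8 pool threads.
  private static final int THREADS_PER_SEARCH = 3;

  public static List<SegmentResult> search(List<Segment> segments, Query query)
      throws InterruptedException, ExecutionException {
    // The segments themselves act as a work queue (optionally pre-sorted by maxDoc).
    Queue<Segment> work = new ConcurrentLinkedQueue<>(segments);
    List<SegmentResult> results = Collections.synchronizedList(new ArrayList<>());

    int workers = Math.min(THREADS_PER_SEARCH, segments.size());
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < workers; i++) {
      futures.add(POOL.submit(() -> {
        // No fixed binding of threads to segments: each worker just takes
        // the next segment until the queue is drained.
        Segment seg;
        while ((seg = work.poll()) != null) {
          results.add(seg.search(query));
        }
      }));
    }
    for (Future<?> f : futures) {
      f.get(); // wait for completion and propagate any failure
    }
    return results;
  }
}
{code}

Because workers pull from the queue rather than being assigned segments up
front, a worker can never sit idle while unsearched segments remain.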
> Multi-Threading in IndexSearcher (after removal of MultiSearcher and
> ParallelMultiSearcher)
> -------------------------------------------------------------------------------------------
>
> Key: LUCENE-2840
> URL: https://issues.apache.org/jira/browse/LUCENE-2840
> Project: Lucene - Java
> Issue Type: Sub-task
> Components: Search
> Reporter: Uwe Schindler
> Priority: Minor
> Fix For: 4.0
>
>
> Spin-off from parent issue:
> {quote}
> We should discuss how many threads should be spawned. If you have an
> index with many segments, even small ones, I think only the larger segments
> should get separate threads; all others should be handled sequentially. So
> maybe add a maxThreads count, then sort the IndexReaders by maxDoc and
> only spawn maxThreads-1 threads for the bigger readers, plus one
> additional thread for the rest?
> {quote}
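
For contrast, a minimal sketch of the approach quoted above (again with a
placeholder Reader type rather than a real Lucene API): sort by maxDoc, give
each of the biggest maxThreads-1 readers its own task, and search all the
remaining small readers sequentially in one extra task.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BigSegmentsFirst {

  // Placeholder for a per-segment reader; not a Lucene class.
  interface Reader {
    int maxDoc();
    void search();
  }

  public static void search(List<Reader> readers, int maxThreads)
      throws InterruptedException, ExecutionException {
    List<Reader> sorted = new ArrayList<>(readers);
    sorted.sort(Comparator.comparingInt(Reader::maxDoc).reversed());

    ExecutorService pool = Executors.newFixedThreadPool(maxThreads);
    try {
      List<Future<?>> futures = new ArrayList<>();

      // One task per "big" reader: the largest maxThreads-1 segments.
      int big = Math.min(maxThreads - 1, sorted.size());
      for (Reader r : sorted.subList(0, big)) {
        futures.add(pool.submit(r::search));
      }

      // One extra task searches all remaining small readers sequentially.
      List<Reader> rest = sorted.subList(big, sorted.size());
      if (!rest.isEmpty()) {
        futures.add(pool.submit(() -> rest.forEach(Reader::search)));
      }

      for (Future<?> f : futures) {
        f.get(); // wait for completion and propagate any failure
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}

This is the kind of fixed binding the comment above argues against: a thread
that finishes its big segment can sit idle while the "rest" task still has
segments left to search.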