[ https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771005#action_12771005 ]
Jake Mannix commented on LUCENE-1997:
-------------------------------------

That should speed things up, but that's way subleading in complexity. This is an additive term of O(numSegments * numDesiredResults) total operations when done "slowly" (as opposed to the best merge, which is O(numDesiredResults * log(numSegments))), in comparison to the primary subleading piece for multiPQ, which is O(numSegments * numDesiredResults * log(numDesiredResults) * log(numHitsPerSegment)). So it's taking a piece of the CPU time that is already 20-100x smaller than the total PQ insert time, and reducing it by a further factor of maybe 5-10.

If it's easy to code up, sure, why not. But I'd argue it's no longer a necessary "inner loop" optimization.

> Explore performance of multi-PQ vs single-PQ sorting API
> ---------------------------------------------------------
>
> Key: LUCENE-1997
> URL: https://issues.apache.org/jira/browse/LUCENE-1997
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Search
> Affects Versions: 2.9
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch
>
>
> Spinoff from the recent "lucene 2.9 sorting algorithm" thread on java-dev, where a simpler (non-segment-based) comparator API is proposed that gathers results into multiple PQs (one per segment) and then merges them in the end.
>
> I started from John's multi-PQ code and worked it into contrib/benchmark so that we could run perf tests. Then I generified the Python script I use for running search benchmarks (in contrib/benchmark/sortBench.py).
>
> The script first creates indexes with 1M docs (based on SortableSingleDocSource, and based on wikipedia, if available). Then it runs various combinations:
>
> * Index with 20 balanced segments vs index with the "normal" log segment size
> * Queries with different numbers of hits (only for the wikipedia index)
> * Different top N
> * Different sorts (by title for wikipedia, and by random string, random int, and country for the random index)
>
> For each test, 7 search rounds are run and the best QPS is kept. The script runs singlePQ then multiPQ, records the resulting best QPS for each, and produces a table (in Jira format) as output.
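To make the merge costs in the comment concrete, here is a minimal illustrative sketch (not Lucene's actual collector/TopDocs code) of merging per-segment top-N lists with a heap over the segment heads, which gives the O(numDesiredResults * log(numSegments)) merge, versus the "slow" alternative of scanning every segment head for each of the numDesiredResults picks. The ScoreDocLike and SegmentCursor names are hypothetical stand-ins, and each segment's hits are assumed to already be sorted best-first (e.g. by draining that segment's PQ).

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Deque;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative only: ScoreDocLike and SegmentCursor are hypothetical stand-ins,
// not Lucene classes.
final class MultiPQMergeSketch {

  static final class ScoreDocLike {
    final int doc;
    final float score;
    ScoreDocLike(int doc, float score) { this.doc = doc; this.score = score; }
  }

  // Wraps one segment's already-sorted (best-first) top-N hits.
  static final class SegmentCursor {
    final Deque<ScoreDocLike> hits;
    SegmentCursor(Deque<ScoreDocLike> hits) { this.hits = hits; }
  }

  static List<ScoreDocLike> merge(List<Deque<ScoreDocLike>> perSegmentTopN,
                                  int numDesiredResults) {
    // Heap over the segment heads, ordered so the segment whose next hit has
    // the highest score is polled first.
    PriorityQueue<SegmentCursor> heap = new PriorityQueue<>(
        Comparator.comparingDouble((SegmentCursor c) -> c.hits.peekFirst().score)
            .reversed());
    for (Deque<ScoreDocLike> segHits : perSegmentTopN) {
      if (!segHits.isEmpty()) {
        heap.add(new SegmentCursor(segHits));
      }
    }

    List<ScoreDocLike> merged = new ArrayList<>(numDesiredResults);
    // Each iteration costs O(log(numSegments)) heap work, so the whole merge
    // is O(numDesiredResults * log(numSegments)); the "slow" version would
    // scan all numSegments heads for every pick.
    while (merged.size() < numDesiredResults && !heap.isEmpty()) {
      SegmentCursor best = heap.poll();
      merged.add(best.hits.pollFirst());
      if (!best.hits.isEmpty()) {
        heap.add(best); // re-enter the heap keyed on its new head
      }
    }
    return merged;
  }
}
{code}

As the comment notes, either variant is a small additive term next to the per-segment PQ insert work, so this is a nicety rather than an inner-loop optimization.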