[ https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12772809#action_12772809 ]

Mark Miller edited comment on LUCENE-1997 at 11/3/09 1:07 AM:
--------------------------------------------------------------

Actually - while I cannot share any current info I have, I'll share an example 
from my last job. I worked on a system that librarians used to maintain a 
newspaper archive. The feed for the paper would come in daily and the 
librarians would "enhance" the data - adding keywords, breaking up stories, 
etc. Then reporters or end users could search this data. Librarians, who I 
learned are odd in their requirements by nature, insisted on bringing in 
thousands of results that they could scroll through at a time. This was 
demanded at paper after paper. So we regularly fed back up to 5,000 
results at a time with our software (though they'd have preferred no limit - 
"what are you talking about! I want them all!" - we made them click more 
buttons for that :) ). That's just one small example, but I know for a fact 
there are many, many more.

*edit* 

We also actually ran into many situations where there were lots of segments in 
this scenario as well - before I knew better, I'd regularly build the indexes 
with a high merge factor for speed, and then be stuck, unable to optimize: 
optimizing killed performance, and newspapers need to be up pretty much 24/7, 
so I couldn't bring their server to a crawl (this was before you could 
optimize down to n segments and work slowly over time). Not the greatest 
example, but a situation I found myself in.

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>
>                 Key: LUCENE-1997
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1997
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, 
> LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, 
> LUCENE-1997.patch, LUCENE-1997.patch
>
>
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/sortBench.py).
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces a table (in Jira format) as output.
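For readers skimming the thread: the multi-PQ idea described above can be sketched in plain Java. This is only an illustration of the shape of the algorithm - collect a per-segment top-N into its own priority queue, then merge the per-segment queues at the end - not the actual patch, which works against Lucene's comparator/collector APIs. The class and method names here are hypothetical, and plain `double` scores stand in for real sort fields.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MultiPQSketch {

    // Hypothetical stand-in for the multi-PQ approach: one min-heap per
    // segment during collection, merged into a single top-N afterwards.
    static List<Double> topNAcrossSegments(List<double[]> segments, int n) {
        List<PriorityQueue<Double>> perSegment = new ArrayList<>();
        for (double[] seg : segments) {
            // Min-heap of size <= n: the head is the weakest hit kept so far.
            PriorityQueue<Double> pq = new PriorityQueue<>(n);
            for (double score : seg) {
                if (pq.size() < n) {
                    pq.add(score);
                } else if (score > pq.peek()) {
                    pq.poll();
                    pq.add(score);
                }
            }
            perSegment.add(pq);
        }
        // Merge phase: fold every per-segment queue into one global top-N.
        PriorityQueue<Double> merged = new PriorityQueue<>(n);
        for (PriorityQueue<Double> pq : perSegment) {
            for (double score : pq) {
                if (merged.size() < n) {
                    merged.add(score);
                } else if (score > merged.peek()) {
                    merged.poll();
                    merged.add(score);
                }
            }
        }
        List<Double> result = new ArrayList<>(merged);
        result.sort(Comparator.reverseOrder());
        return result;
    }

    public static void main(String[] args) {
        List<double[]> segments = List.of(
            new double[] {0.2, 0.9, 0.5},
            new double[] {0.8, 0.1, 0.7});
        System.out.println(topNAcrossSegments(segments, 3));
    }
}
```

The trade-off being benchmarked is exactly what this shape suggests: each per-segment queue sees fewer hits (cheaper inserts, no cross-segment ord conversion), at the cost of an extra merge pass and more queues to fill when top N is large.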

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org
