Hi.
 
I couldn't find the answer to this question in the mailing list archive.
If I missed it, please point me to the keywords I should search for, or
to a direct link.
 
All the Lucene-powered implementations I have seen (primarily those
built on Solr) return the exact count of documents found. That means
the query is resolved precisely across the whole data set. If the
number of indexed documents is huge (e.g., > 1 billion), this should
present quite a problem. I wonder whether that is the default behaviour
of Lucene itself or of the frameworks that utilize it. Is it possible
to:
 
- get the top 1000 results WITHOUT executing the query across the whole
data set
- in other words, can Lucene:
  - chunk out the top X results with an 'approximate' fast search that
returns an _approximate_ total number of matching documents, similar to
Google's total-pages-found count (see the Java sketch below)
  - and then perform a more accurate search within that chunk?
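
To make the ask concrete, here is a rough Java sketch of the behaviour
I'm hoping for. The two-argument TopScoreDocCollector.create() and the
totalHits 'relation' flag are my guesses at how such an API might look,
and the index path and field name are placeholders, so please correct
me if the real hooks are different:

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.search.TotalHits;
import org.apache.lucene.store.FSDirectory;

public class ApproximateCountDemo {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader =
             DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
      IndexSearcher searcher = new IndexSearcher(reader);
      Query query = new TermQuery(new Term("body", "lucene"));

      // What I'd like: collect the top 1000 hits, but stop counting
      // precisely once 1000 matches have been seen; past that point the
      // reported total would only be a lower bound.
      TopScoreDocCollector collector = TopScoreDocCollector.create(1000, 1000);
      searcher.search(query, collector);
      TopDocs top = collector.topDocs();

      String qualifier =
          top.totalHits.relation == TotalHits.Relation.EQUAL_TO
              ? "exactly" : "at least";
      System.out.println("Found " + qualifier + " "
          + top.totalHits.value + " documents");
      for (ScoreDoc hit : top.scoreDocs) {
        System.out.println("doc=" + hit.doc + " score=" + hit.score);
      }
    }
  }
}

The "at least N" total is all I need for a Google-style pager; an exact
count would only matter if someone actually paged that deep.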
 
Is such functionality built in, or does it have to be customized? If
it's built in, what algorithms are used to 'chunk out' the results and
estimate the total document count? What classes should I look at?
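
And if it has to be customized, here is a minimal sketch of the kind of
collector I imagine writing. It assumes the searcher swallows
CollectionTerminatedException and carries on, which (along with the
class names) is exactly the kind of thing I'd like confirmed:

import java.io.IOException;
import org.apache.lucene.search.CollectionTerminatedException;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;

// Counts matches but gives up once maxHits have been seen, so the
// reported total becomes "at least maxHits" rather than an exact count.
public class CappedCountCollector extends SimpleCollector {
  private final int maxHits;
  private int count;

  public CappedCountCollector(int maxHits) {
    this.maxHits = maxHits;
  }

  @Override
  public void collect(int doc) throws IOException {
    if (count >= maxHits) {
      // My assumption: the searcher catches this per segment and moves
      // on, so the search ends early instead of scanning every match.
      throw new CollectionTerminatedException();
    }
    count++;
  }

  @Override
  public ScoreMode scoreMode() {
    return ScoreMode.COMPLETE_NO_SCORES; // no scores needed just to count
  }

  public int count() {
    return count;
  }
}

I'd call searcher.search(query, new CappedCountCollector(1000)) and
read count() afterwards, treating the result as a lower bound.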
 
Thanks!
 
Vlad
 
PS: This is pretty much the functionality Google has: you can't retrieve
more than 1000 matches per query (it may report, say, '10M' documents
found, but if you try to browse beyond the first 1000 results, you get
an error page).
