[
https://issues.apache.org/jira/browse/LUCENE-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12487595
]
Andy Liu commented on LUCENE-855:
---------------------------------
In your updated benchmark, you're combining the range filter with a term query
that matches one document. I don't believe that's the typical use case for a
range filter. Usually the user employs a range to filter a large document set.
I created a different benchmark to compare the standard RangeFilter,
MemoryCachedRangeFilter, and Matt's FieldCacheRangeFilter using
MatchAllDocsQuery, ConstantScoreQuery, and TermQuery (matching one doc, as in
the last benchmark). Here are the results:
Reader opened with 100000 documents. Creating RangeFilters...
RangeFilter w/MatchAllDocsQuery:
========================
* Bits: 4421
* Search: 5285
RangeFilter w/ConstantScoreQuery:
========================
* Bits: 4200
* Search: 8694
RangeFilter w/TermQuery:
========================
* Bits: 4088
* Search: 4133
MemoryCachedRangeFilter w/MatchAllDocsQuery:
========================
* Bits: 80
* Search: 1142
MemoryCachedRangeFilter w/ConstantScoreQuery:
========================
* Bits: 79
* Search: 482
MemoryCachedRangeFilter w/TermQuery:
========================
* Bits: 73
* Search: 95
FieldCacheRangeFilter w/MatchAllDocsQuery:
========================
* Bits: 0
* Search: 1146
FieldCacheRangeFilter w/ConstantScoreQuery:
========================
* Bits: 1
* Search: 356
FieldCacheRangeFilter w/TermQuery:
========================
* Bits: 0
* Search: 19
A few points:
1. When searching with a filter, bits() is called, so the search time includes
the bits() time.
2. Matt's FieldCacheRangeFilter is faster for ConstantScoreQuery, although not
by much. Using MatchAllDocsQuery, they run neck-and-neck. FCRF is much faster
for TermQuery, since MCRF has to build the BitSet for the range before the
search is executed.
3. I get fewer document hits when running FieldCacheRangeFilter with
ConstantScoreQuery. Matt, there may be a bug in getNextSetBit(). Not sure
whether this affects the benchmark.
4. I'd be interested to see performance numbers when FieldCacheRangeFilter is
used with ChainedFilter. I suspect that MCRF would be faster in this case,
since I'm assuming that FCRF has to reconstruct a standard BitSet during
clone().
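For reference, the sorted-cache plus binary-search scheme MemoryCachedRangeFilter
uses (per the issue description below) can be sketched roughly like this. This is
an illustrative sketch under my own naming, not code from the actual patch:

```java
import java.util.Arrays;
import java.util.BitSet;

// Rough sketch of the idea behind MemoryCachedRangeFilter's bits():
// the field's values are cached in sorted order alongside their docIds,
// so a range lookup is two binary searches plus setting the matching bits.
public class RangeBitsSketch {
    private final long[] sortedValues; // field values, sorted ascending
    private final int[] docIds;        // docIds[i] is the doc whose value is sortedValues[i]

    public RangeBitsSketch(long[] sortedValues, int[] docIds) {
        this.sortedValues = sortedValues;
        this.docIds = docIds;
    }

    public BitSet bits(long lower, long upper, int maxDoc) {
        BitSet bits = new BitSet(maxDoc);
        int start = lowerBound(lower);     // first index with value >= lower
        int end = lowerBound(upper + 1);   // first index with value > upper
        for (int i = start; i < end; i++) {
            bits.set(docIds[i]);
        }
        return bits;
    }

    // Index of the leftmost element >= key (insertion point if absent).
    private int lowerBound(long key) {
        int idx = Arrays.binarySearch(sortedValues, key);
        if (idx < 0) return -idx - 1;
        while (idx > 0 && sortedValues[idx - 1] == key) idx--;
        return idx;
    }
}
```

Two binary searches bound the range, so bits() is O(log n) plus the number of
hits, rather than a scan over every term, which is consistent with the Bits
timings above.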
> MemoryCachedRangeFilter to boost performance of Range queries
> -------------------------------------------------------------
>
> Key: LUCENE-855
> URL: https://issues.apache.org/jira/browse/LUCENE-855
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Search
> Affects Versions: 2.1
> Reporter: Andy Liu
> Assigned To: Otis Gospodnetic
> Attachments: FieldCacheRangeFilter.patch,
> FieldCacheRangeFilter.patch, FieldCacheRangeFilter.patch,
> MemoryCachedRangeFilter.patch, MemoryCachedRangeFilter_1.4.patch
>
>
> Currently RangeFilter uses TermEnum and TermDocs to find documents that fall
> within the specified range. This requires iterating through every single
> term in the index and can get rather slow for large document sets.
> MemoryCachedRangeFilter reads all <docId, value> pairs of a given field,
> sorts by value, and stores in a SortedFieldCache. During bits(), binary
> searches are used to find the start and end indices of the lower and upper
> bound values. The BitSet is populated by all the docId values that fall in
> between the start and end indices.
> TestMemoryCachedRangeFilterPerformance creates a 100K RAMDirectory-backed
> index with random date values within a 5 year range. Executing bits() 1000
> times on a standard RangeFilter using random date intervals took 63904ms.
> Using MemoryCachedRangeFilter, it took 876ms. The performance increase is less
> dramatic when a field has fewer unique terms or the index contains fewer
> documents.
> Currently MemoryCachedRangeFilter only works with numeric values (values are
> stored in a long[] array), but it can easily be changed to support Strings. A
> side "benefit" of storing the values as longs is that there's no longer any
> need to make the values lexicographically comparable, i.e. no padding of
> numeric values with zeros.
> The downside of MemoryCachedRangeFilter is that it has a fairly significant
> memory requirement, so it's designed for situations where range filter
> performance is critical and memory consumption is not an issue. The memory
> requirement is (sizeof(int) + sizeof(long)) * numDocs.
> MemoryCachedRangeFilter also requires a warmup step, which can take a while on
> large datasets (it took 40s on a 3M-document corpus). Warmup can be called
> explicitly, or it runs automatically the first time MemoryCachedRangeFilter is
> applied to a given field.
> So in summary, MemoryCachedRangeFilter can be useful when:
> - Performance is critical
> - Memory is not an issue
> - Field contains many unique numeric values
> - Index contains a large number of documents
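As a quick sanity check on the memory formula quoted above (assuming 4-byte ints
and 8-byte longs, and ignoring per-object JVM overhead), the cache cost works
out to:

```java
// Per-document cache cost from the formula in the issue description:
// (sizeof(int) + sizeof(long)) * numDocs, i.e. one docId plus one long value.
public class MemoryEstimate {
    public static long cacheBytes(long numDocs) {
        return (Integer.BYTES + Long.BYTES) * numDocs; // 12 bytes per doc
    }

    public static void main(String[] args) {
        System.out.println(cacheBytes(100_000));   // 100K-doc benchmark index
        System.out.println(cacheBytes(3_000_000)); // 3M-doc corpus
    }
}
```

That comes to roughly 1.2 MB for the 100K-document benchmark index and roughly
36 MB for the 3M-document corpus mentioned above.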
--
This message is automatically generated by JIRA.