[ 
https://issues.apache.org/jira/browse/LUCENE-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Liu updated LUCENE-855:
----------------------------

    Attachment: contrib-filters.tar.gz

I made a few changes to MemoryCachedRangeFilter:

- SortedFieldCache's values[] now contains only sorted unique values, while 
docId[] has been changed to a ragged 2D array with an array of docIds 
corresponding to each unique value.  Since there are no longer repeated values 
in values[], forward() and rewind() are no longer required.  This also 
addresses the O(n) special case that Hoss brought up where every value is 
identical.
- bits() now returns OpenBitSetWrapper, a subclass of BitSet that uses Solr's 
OpenBitSet as a delegate.  Wrapping OpenBitSet presents some challenges: since 
the internal bit store of BitSet is private, it's difficult to perform 
operations between a BitSet and an OpenBitSet (or, and, etc.).
- An in-memory OpenBitSet cache is kept.  During warmup, the global range is 
partitioned and OpenBitSet instances are created for each partition.  During 
bits(), these cached OpenBitSet instances that fall in between the lower and 
upper ranges are used.
- Moved MCRF to contrib/ due to the Solr dependency
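
To illustrate the first change above, here's a rough sketch (hypothetical 
names, not the code in the attached patch): values[] holds sorted unique field 
values, docIds[i] holds the docs sharing values[i], and a bits()-style lookup 
binary-searches the unique values.

```java
import java.util.BitSet;

// Hypothetical sketch of the revised SortedFieldCache layout:
// values[] holds sorted *unique* field values; docIds[i] is the ragged
// array of docIds whose field value equals values[i].
class SortedUniqueCache {
    final long[] values;   // sorted, unique
    final int[][] docIds;  // docIds per unique value

    SortedUniqueCache(long[] values, int[][] docIds) {
        this.values = values;
        this.docIds = docIds;
    }

    // Binary search for the first index with values[idx] >= target.
    private int lowerIndex(long target) {
        int lo = 0, hi = values.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (values[mid] < target) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    // bits()-style lookup: set every docId whose value is in [lower, upper].
    BitSet bits(long lower, long upper) {
        BitSet result = new BitSet();
        for (int i = lowerIndex(lower); i < values.length && values[i] <= upper; i++) {
            for (int doc : docIds[i]) result.set(doc);
        }
        return result;
    }
}
```

Because values[] contains no duplicates, a run of identical values costs a 
single binary-search probe, which is what removes the need for forward() and 
rewind() and the all-identical O(n) case.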

Using the current (and incomplete) benchmark, MemoryCachedRangeFilter is 
slightly faster than FCRF when used in conjunction with ConstantScoreQuery and 
MatchAllDocsQuery:

Reader opened with 100000 documents.  Creating RangeFilters...

TermQuery

FieldCacheRangeFilter
  * Total: 88ms
  * Bits: 0ms
  * Search: 14ms

MemoryCachedRangeFilter
  * Total: 89ms
  * Bits: 17ms
  * Search: 31ms

RangeFilter
  * Total: 9034ms
  * Bits: 4483ms
  * Search: 4521ms

Chained FieldCacheRangeFilter
  * Total: 33ms
  * Bits: 3ms
  * Search: 9ms

Chained MemoryCachedRangeFilter
  * Total: 77ms
  * Bits: 19ms
  * Search: 30ms


ConstantScoreQuery

FieldCacheRangeFilter
  * Total: 541ms
  * Bits: 2ms
  * Search: 485ms

MemoryCachedRangeFilter
  * Total: 473ms
  * Bits: 23ms
  * Search: 390ms

RangeFilter
  * Total: 13777ms
  * Bits: 4451ms
  * Search: 9298ms

Chained FieldCacheRangeFilter
  * Total: 12ms
  * Bits: 2ms
  * Search: 5ms

Chained MemoryCachedRangeFilter
  * Total: 80ms
  * Bits: 16ms
  * Search: 44ms


MatchAllDocsQuery

FieldCacheRangeFilter
  * Total: 1231ms
  * Bits: 3ms
  * Search: 1115ms

MemoryCachedRangeFilter
  * Total: 1222ms
  * Bits: 53ms
  * Search: 1149ms

RangeFilter
  * Total: 10689ms
  * Bits: 4954ms
  * Search: 5583ms

Chained FieldCacheRangeFilter
  * Total: 937ms
  * Bits: 1ms
  * Search: 862ms

Chained MemoryCachedRangeFilter
  * Total: 921ms
  * Bits: 19ms
  * Search: 894ms
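
As an aside, the partitioned bitset cache described in the changes above could 
look roughly like this (hypothetical names; java.util.BitSet stands in for 
Solr's OpenBitSet to keep the sketch self-contained).  Partitions fully 
covered by [lower, upper] reuse their cached bitset; only the two boundary 
partitions are checked doc by doc.

```java
import java.util.BitSet;

// Sketch of the warmup-time partition cache: the global value range is split
// into fixed-width partitions, each with a precomputed bitset of the docs
// whose value falls inside it.  Assumes queried bounds lie within the
// warmed-up global range.
class PartitionCache {
    final long min, width; // partition p covers [min + p*width, min + (p+1)*width)
    final BitSet[] partitions;
    final long[] values;   // values[doc] = field value of doc (for edge checks)

    PartitionCache(long[] values, long min, long width, int numPartitions) {
        this.values = values;
        this.min = min;
        this.width = width;
        partitions = new BitSet[numPartitions];
        for (int p = 0; p < numPartitions; p++) partitions[p] = new BitSet();
        // Warmup: file every doc into the partition covering its value.
        for (int doc = 0; doc < values.length; doc++) {
            partitions[(int) ((values[doc] - min) / width)].set(doc);
        }
    }

    // bits(): OR together cached partitions fully inside [lower, upper];
    // docs in the two boundary partitions are tested individually.
    BitSet bits(long lower, long upper) {
        BitSet result = new BitSet();
        int first = (int) ((lower - min) / width);
        int last = (int) ((upper - min) / width);
        for (int p = first; p <= last && p < partitions.length; p++) {
            if (p > first && p < last) {
                result.or(partitions[p]); // fully covered: reuse cached bitset
            } else {
                BitSet part = partitions[p];
                for (int doc = part.nextSetBit(0); doc >= 0; doc = part.nextSetBit(doc + 1)) {
                    if (values[doc] >= lower && values[doc] <= upper) result.set(doc);
                }
            }
        }
        return result;
    }
}
```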

Hoss, those were great comments you made.  I'd be happy to continue on and make 
those changes, although if the feeling around town is that Matt's range filter 
is the preferred implementation, I'll stop here.

> MemoryCachedRangeFilter to boost performance of Range queries
> -------------------------------------------------------------
>
>                 Key: LUCENE-855
>                 URL: https://issues.apache.org/jira/browse/LUCENE-855
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.1
>            Reporter: Andy Liu
>         Assigned To: Otis Gospodnetic
>         Attachments: contrib-filters.tar.gz, FieldCacheRangeFilter.patch, 
> FieldCacheRangeFilter.patch, FieldCacheRangeFilter.patch, 
> FieldCacheRangeFilter.patch, FieldCacheRangeFilter.patch, 
> MemoryCachedRangeFilter.patch, MemoryCachedRangeFilter_1.4.patch, 
> TestRangeFilterPerformanceComparison.java, 
> TestRangeFilterPerformanceComparison.java
>
>
> Currently RangeFilter uses TermEnum and TermDocs to find documents that fall 
> within the specified range.  This requires iterating through every single 
> term in the index and can get rather slow for large document sets.
> MemoryCachedRangeFilter reads all <docId, value> pairs of a given field, 
> sorts by value, and stores in a SortedFieldCache.  During bits(), binary 
> searches are used to find the start and end indices of the lower and upper 
> bound values.  The BitSet is populated by all the docId values that fall in 
> between the start and end indices.
> TestMemoryCachedRangeFilterPerformance creates a 100K RAMDirectory-backed 
> index with random date values within a 5 year range.  Executing bits() 1000 
> times on standard RangeQuery using random date intervals took 63904ms.  Using 
> MemoryCachedRangeFilter, it took 876ms.  The performance increase is less 
> dramatic when a field has fewer unique terms or the index contains fewer 
> documents.
> Currently MemoryCachedRangeFilter only works with numeric values (values are 
> stored in a long[] array), but it can easily be changed to support Strings.  
> A side "benefit" of storing the values as longs is that there's no longer a 
> need to make the values lexicographically comparable, i.e. by padding 
> numeric values with zeros.
> The downside of using MemoryCachedRangeFilter is there's a fairly significant 
> memory requirement.  So it's designed to be used in situations where range 
> filter performance is critical and memory consumption is not an issue.  The 
> memory requirements are: (sizeof(int) + sizeof(long)) * numDocs.  
> MemoryCachedRangeFilter also requires a warmup step, which can take a while 
> on large datasets (it took 40s to run on a 3M document corpus).  Warmup can 
> be called explicitly or is automatically triggered the first time 
> MemoryCachedRangeFilter is applied to a given field.
> So in summary, MemoryCachedRangeFilter can be useful when:
> - Performance is critical
> - Memory is not an issue
> - The field contains many unique numeric values
> - The index contains a large number of documents
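
To illustrate the scheme described above, a minimal sketch (hypothetical 
names, not the attached patch): <docId, value> pairs sorted by value, with 
binary searches locating the slice that falls inside [lower, upper].

```java
import java.util.BitSet;

// Illustrative sketch of the MemoryCachedRangeFilter idea: during warmup,
// all <docId, value> pairs for a field are read and sorted by value; at
// bits() time, binary searches find the start and end of the range.
class SortedPairs {
    final long[] values; // sorted ascending; may contain duplicates
    final int[] docIds;  // docIds[i] is the doc whose field value is values[i]

    SortedPairs(long[] values, int[] docIds) {
        this.values = values;
        this.docIds = docIds;
    }

    // First index with a[idx] >= key (standard lower-bound search).
    static int lowerBound(long[] a, long key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    // First index with a[idx] > key (standard upper-bound search).
    static int upperBound(long[] a, long key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] <= key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    // bits(): set every docId whose value lies in [lower, upper].
    BitSet bits(long lower, long upper) {
        BitSet result = new BitSet();
        int to = upperBound(values, upper);
        for (int i = lowerBound(values, lower); i < to; i++) result.set(docIds[i]);
        return result;
    }
}
```

Per the stated memory requirement of (sizeof(int) + sizeof(long)) * numDocs, 
that's 12 bytes per document, or roughly 36 MB for a 3M document corpus.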

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
