Hi Mike, hi Paul,

Mike: You are right, the algorithm has one big advantage, but also some disadvantages:
- There is not only a logarithmic bound, there is a hard upper bound: in my algorithm with 8 precisions (key lengths from 1 to 8 bytes) the number of terms to be visited is limited to 3825 (see the Javadocs or the cited paper). This upper limit is only reached if you query a very big range and have a lot of different, homogeneously distributed terms inside the range (without the optimization). Tests with a 500,000 document index (6 GB) with numeric values (doubles) have shown that even with large ranges mostly only about 300 terms need to be visited. This is not related to index size!

- The index gets bigger: for each numeric field you store not just one term in the index, but eight, so index size increases. In my opinion this is negligible, and for large indexes the speed increase is great.

- Inside Luke, the values of such "trie" fields are not human readable (because of the encoding). Even when stored, the current implementation uses the special encoding for the stored field. To display the field you have to use the decoder from the TrieUtils class (a rough sketch of the encoding/decoding idea is at the end of this mail). The same is true for the current DateUtils from Lucene (but those values are more readable :-) ).

Comparisons on the above 500,000 doc index showed that the old RangeQuery (with a raised BooleanQuery clause count) took about 30-40 seconds to complete, ConstantScoreRangeQuery took 5 seconds, and TrieRangeQuery took <100 ms (on an Opteron64 machine, Java 1.5). You can test it a little on http://www.pangaea.de/advanced/advsearch.php by entering something into the geographic bounding box or the temporal coverage. As you can see, this range query type is optimal for geographic searches using doubles (not fixed decimals!), longs or dates as keys.

I have no problem with including it in Lucene contrib. I just have some questions/comments:

- The code is Apache 2.0 licensed, so it is simple to include. I would change the package prefix, update the Javadocs and create a contrib patch out of it. References to commons-logging can be removed (they are just for debugging). The code is Java 1.5 (using StringBuilder), but this could easily be changed.

- I want to be able to develop the code further once it is in contrib; is this possible? What would be the best way to handle this: let the code stay in my SVN and you update it, or let me commit to the contrib folder in Lucene? Currently the code is in the SVN of panFMP (www.panfmp.org), which uses it. Once it is donated to Apache and released, I would add a dependency in panFMP on the contrib package and remove the code from my tree. I do not want the code to end up in a dead end or to start a fork of it inside contrib, because I want to actively maintain it.

My intention in giving the code to Lucene came from requests by other projects (geographic information systems) that would also like to use the optimized range queries for such geo queries; e.g. GeoNetwork opensource, which also uses Lucene, is interested. Maybe Solr wants to make use of it, too (using another field data type). Instead of distributing the code to different projects, I wanted to make it available as a plugin for everybody from Lucene itself.

I would start an issue in JIRA and attach the patch.
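To make the encoding a bit more concrete, here is a rough sketch of the general idea in Java. This is NOT the actual TrieUtils code: the method names and the hex "precision:prefix" term format are made up for illustration only; the real class uses a different character encoding. Only the IEEE 754 bit trick and the "one term per precision" scheme correspond to what is described above.

// Sketch only, not the real TrieUtils implementation.
public final class TrieSketch {

  // Map a double to a long whose unsigned (byte-wise lexicographic) order
  // matches the numeric order of the doubles, using the IEEE 754 bit layout.
  public static long doubleToSortableLong(double value) {
    long bits = Double.doubleToLongBits(value);
    if (bits < 0) {
      bits = ~bits;                        // negative: flip all bits
    } else {
      bits ^= 0x8000000000000000L;         // positive: flip only the sign bit
    }
    return bits;
  }

  // Reverse mapping, e.g. to display stored values in Luke.
  public static double sortableLongToDouble(long sortable) {
    if (sortable < 0) {                    // sign bit set -> was positive
      sortable ^= 0x8000000000000000L;
    } else {                               // sign bit clear -> was negative
      sortable = ~sortable;
    }
    return Double.longBitsToDouble(sortable);
  }

  // Index-time sketch: produce 8 terms per value by cutting the sortable
  // long down to 8, 7, ..., 1 byte prefixes. A shorter prefix stands for a
  // coarser bucket; the precision is prepended so that buckets of different
  // lengths cannot collide in the term dictionary. (Hypothetical format.)
  public static String[] toTrieTerms(double value) {
    long sortable = doubleToSortableLong(value);
    String[] terms = new String[8];
    for (int bytes = 8; bytes >= 1; bytes--) {
      long prefix = sortable >>> (8 * (8 - bytes));
      terms[8 - bytes] = bytes + ":" + Long.toHexString(prefix);
    }
    return terms;
  }
}

A range query then only needs the full-precision terms at the two edges of the range and can cover the large middle part with a few coarse, short-prefix terms; that is where the hard upper bound on the number of visited terms comes from.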
Uwe

> -----Original Message-----
> From: Michael McCandless [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 25, 2008 6:41 PM
> To: java-dev@lucene.apache.org
> Subject: Re: TrieRangeQuery for contrib? was: [jira] Commented:
> (LUCENE-1461) Cached filter for a single term field
>
> I would love to see this find its way into Lucene!
>
> It puts a logarithmic bound (I think?) on the number of terms whose
> docs must be visited to implement the range query, at the expense of
> an increase in index size for those fields that need range search.
>
> For large indexes this should make an incredible difference on
> RangeQuery performance.
>
> Mike
>
> Uwe Schindler wrote:
>
> > I do not want to attach this to the issue report, just for
> > completeness:
> >
> > I implemented (based on RangeFilter) another approach for faster
> > RangeQueries, based on longs stored in the index in a special format.
> >
> > The idea behind this is to store the longs in different precisions in
> > the index and to partition the query range in such a way that the
> > outer boundaries are searched using terms from the highest precision,
> > but the center of the search range with lower precision. The
> > implementation stores the longs in 8 different precisions (using a
> > class called TrieUtils). It also has support for doubles, using the
> > IEEE 754 floating-point "double format" bit layout with some bit
> > mappings to make them binary sortable. The approach is used in rather
> > big indexes; query times are <<100 ms (!) even on low-performance
> > desktop computers, for very big ranges on indexes with 500,000 docs.
> >
> > I called this RangeQuery variant and format "TrieRange" query because
> > the idea looks like the well-known trie structures (it is not
> > identical to real tries, but the algorithms are related).
> >
> > I was thinking about supplying this to Lucene contrib, any thoughts
> > about it?
> >
> > More infos from the JavaDoc of the project:
> > http://www.panfmp.org/apidocs/de/pangaea/metadataportal/search/TrieRangeQuery.html
> > http://www.panfmp.org/apidocs/de/pangaea/metadataportal/utils/TrieUtils.html
> >
> > Source:
> > http://panfmp.svn.sourceforge.net/viewvc/panfmp/main/trunk/src/de/pangaea/metadataportal/search/TrieRangeQuery.java?view=markup
> > http://panfmp.svn.sourceforge.net/viewvc/panfmp/main/trunk/src/de/pangaea/metadataportal/utils/TrieUtils.java?view=markup
> >
> > Uwe
> >
> > -----
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: [EMAIL PROTECTED]