[
https://issues.apache.org/jira/browse/LUCENE-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Robert Muir updated LUCENE-1606:
--------------------------------
Attachment: LUCENE-1606.patch
Here is an updated patch. In my opinion, all that is still needed is the random
testing and some code reorganization (but no algorithmic/feature changes), and
maybe better docs too.
This patch has:
* more tests, especially surrogate handling
* some more automaton paring
* the seek positions and common suffix are always valid UTF-8 strings
* backtracking is aware of U+FFFF, so terms can have it.
This patch works correctly on both trunk and flex, although I have not included
a test for U+FFFF since I can't put it in a trunk index.
I'm not terribly happy with the way the Unicode handling works here; it's
correct but could be coded better.
The code I wrote is not the easiest to read, and suggestions are welcome :)
But it is all localized to one method, cleanupPosition(). This is defined as
{code}
* if the seek position cannot be converted to valid UTF-8,
* then return the next valid String (in UTF-16 sort order) that
* can be converted to valid UTF-8.
{code}
If you have ideas on how to make this nicer, I am happy to hear them.
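For reference, here is a rough sketch of just the validity check this relies on, assuming the seek position is handled as a plain java.lang.String: a position can be converted to valid UTF-8 exactly when it contains no unpaired surrogates. This is only an illustration, not the patch's cleanupPosition() itself (which additionally has to produce the next valid String when the check fails):
{code}
// Illustration only: can this seek position (a UTF-16 String) be converted
// to valid UTF-8? It can iff it contains no unpaired surrogates.
static boolean convertsToValidUTF8(String pos) {
  for (int i = 0; i < pos.length(); i++) {
    char c = pos.charAt(i);
    if (Character.isHighSurrogate(c)) {
      if (i + 1 == pos.length() || !Character.isLowSurrogate(pos.charAt(i + 1))) {
        return false; // high surrogate not followed by a low surrogate
      }
      i++; // skip over the well-formed surrogate pair
    } else if (Character.isLowSurrogate(c)) {
      return false; // low surrogate with no preceding high surrogate
    }
  }
  return true;
}
{code}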
> Automaton Query/Filter (scalable regex)
> ---------------------------------------
>
> Key: LUCENE-1606
> URL: https://issues.apache.org/jira/browse/LUCENE-1606
> Project: Lucene - Java
> Issue Type: New Feature
> Components: Search
> Reporter: Robert Muir
> Assignee: Robert Muir
> Priority: Minor
> Fix For: 3.1
>
> Attachments: automaton.patch, automatonMultiQuery.patch,
> automatonmultiqueryfuzzy.patch, automatonMultiQuerySmart.patch,
> automatonWithWildCard.patch, automatonWithWildCard2.patch,
> BenchWildcard.java, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch,
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
> LUCENE-1606_nodep.patch
>
>
> Attached is a patch for an AutomatonQuery/Filter (the name can change if it's
> not suitable).
> While the out-of-the-box contrib RegexQuery is nice, I have some very large
> indexes (100M+ unique tokens) where such queries are quite slow (2 minutes, etc.).
> Additionally, all of the existing RegexQuery implementations in Lucene are
> really slow if there is no constant prefix. This implementation does not
> depend on a constant prefix, and runs the same query in 640ms.
> Some use cases I envision:
> 1. lexicography/etc on large text corpora
> 2. looking for things such as URLs where the prefix is not constant (http://
> or ftp://)
> The Filter uses the BRICS package (http://www.brics.dk/automaton/) to convert
> regular expressions into a DFA. Then, the filter "enumerates" terms in a
> special way, by using the underlying state machine. Here is my short
> description from the comments:
> The algorithm here is pretty basic: enumerate terms, but instead of a binary
> accept/reject, do:
>
> 1. Look at the portion that is OK (did not enter a reject state in the DFA).
> 2. Generate the next possible String and seek to that.
> The Query simply wraps the filter with ConstantScoreQuery.
> I did not include the automaton.jar inside the patch, but it can be downloaded
> from http://www.brics.dk/automaton/ and is BSD-licensed.
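As a side note for readers of this issue, here is a rough, hypothetical sketch of step 1 above using the brics RunAutomaton API directly; the term enumeration and seek plumbing from the patch are omitted, and the names (DfaPrefixExample, okPrefixLength) are made up for the example:
{code}
import dk.brics.automaton.RegExp;
import dk.brics.automaton.RunAutomaton;

public class DfaPrefixExample {
  public static void main(String[] args) {
    // Build a DFA for one of the use cases above: URLs with no constant prefix.
    RunAutomaton dfa = new RunAutomaton(new RegExp("(http|ftp)://.*").toAutomaton());
    // "httpd-logs" falls out of the live states after "http", so 4 chars were OK.
    System.out.println(okPrefixLength(dfa, "httpd-logs")); // prints 4
  }

  // Step 1: instead of a binary accept/reject per term, measure how much of
  // the term stays in live DFA states (step() returns -1 at the dead state).
  static int okPrefixLength(RunAutomaton dfa, String term) {
    int state = dfa.getInitialState();
    for (int i = 0; i < term.length(); i++) {
      state = dfa.step(state, term.charAt(i));
      if (state == -1) {
        return i;
      }
    }
    return term.length();
  }
}
{code}
The filter then uses that partial match to compute the next possible seek term instead of scanning every term in the dictionary.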