[
https://issues.apache.org/jira/browse/LUCENE-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12782075#action_12782075
]
Robert Muir commented on LUCENE-2090:
-------------------------------------
I guess now you have me starting to think about a byte[] contains().
Because really the worst case, which I bet a lot of users hit, is not
something like *foobar but instead *foobar*!
In UTF-8 you can do such things safely; I would have to extract the "longest
common constant sequence" out of the DFA.
This might be more generally applicable.
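Something like this is what I have in mind (just a sketch with a made-up helper, nothing that exists in the automaton pkg yet): pull the constant bytes out of the pattern, e.g. "foobar" for *foobar*, and do a plain byte[] contains before ever running the DFA.
{code}
// hypothetical pre-filter, not real Lucene code: reject a term unless it
// contains the pattern's longest constant byte sequence.
static boolean containsBytes(byte[] term, byte[] constant) {
  if (constant.length == 0) return true;
  outer:
  for (int i = 0; i + constant.length <= term.length; i++) {
    for (int j = 0; j < constant.length; j++) {
      if (term[i + j] != constant[j]) continue outer;
    }
    return true; // constant sequence found, worth running the full automaton
  }
  return false; // cannot possibly match *foobar*, skip the DFA entirely
}
{code}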
commonSuffix is easy... at least it makes progress for now, even if it lands
slightly later in trunk.
This could be a later improvement.
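For the commonSuffix case the check itself is trivial, assuming the suffix has already been computed from the DFA (again just a sketch, not the patch):
{code}
// hypothetical: skip terms that cannot end with the DFA's common suffix,
// e.g. "foobar" for *foobar.
static boolean endsWith(char[] term, int len, char[] suffix) {
  if (suffix.length > len) return false;
  for (int i = 0; i < suffix.length; i++) {
    if (term[len - suffix.length + i] != suffix[i]) return false;
  }
  return true;
}
{code}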
> convert automaton to char[] based processing and TermRef / TermsEnum api
> ------------------------------------------------------------------------
>
> Key: LUCENE-2090
> URL: https://issues.apache.org/jira/browse/LUCENE-2090
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Search
> Reporter: Robert Muir
> Priority: Minor
> Fix For: 3.1
>
>
> The automaton processing is currently done with String, mostly because
> TermEnum is based on String.
> It is easy to change the processing to work with char[], since behind the
> scenes this is what is used anyway.
> In general I think we should make sure char[]-based processing is exposed in
> the automaton pkg anyway, for things like pattern-based tokenizers and such.