[
https://issues.apache.org/jira/browse/LUCENE-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15952874#comment-15952874
]
Steve Rowe commented on LUCENE-7760:
------------------------------------
+1
Tests look good. I like your simplification of my explanation ("[long tokens]
are chopped up at [maxTokenLength] and emitted as multiple tokens"); a more
precise description of rule matching, including the possibility that emitted
tokens are not exactly maxTokenLength characters long, is not likely to help many people.
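The chopping behavior under discussion can be illustrated without Lucene. The following plain-Java sketch (class and method names are illustrative, and the whitespace-only split is a simplification of StandardTokenizer's rule matching) shows how a too-long token such as "toolong" is emitted as multiple chunks of at most maxTokenLength characters rather than being discarded:

```java
import java.util.ArrayList;
import java.util.List;

public class ChopDemo {
    // Simulates the documented-vs-actual behavior: tokens longer than
    // maxTokenLength are split into consecutive chunks, not dropped.
    // Real StandardTokenizer rule matching can yield chunks shorter
    // than maxTokenLength; this sketch always cuts at exact multiples.
    static List<String> chop(String text, int maxTokenLength) {
        List<String> out = new ArrayList<>();
        for (String tok : text.split("\\s+")) {
            for (int i = 0; i < tok.length(); i += maxTokenLength) {
                out.add(tok.substring(i, Math.min(tok.length(), i + maxTokenLength)));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Mirrors the test case below: "toolong" becomes "toolo" + "ng"
        System.out.println(chop("ab cd toolong xy z", 5));
    }
}
```

Running this prints `[ab, cd, toolo, ng, xy, z]`, matching the token stream the test case asserts.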
> StandardAnalyzer/Tokenizer.setMaxTokenLength's javadocs are lying
> -----------------------------------------------------------------
>
> Key: LUCENE-7760
> URL: https://issues.apache.org/jira/browse/LUCENE-7760
> Project: Lucene - Core
> Issue Type: Bug
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Fix For: master (7.0), 6.6
>
> Attachments: LUCENE-7760.patch
>
>
> The javadocs claim that too-long tokens are discarded, but in fact they are
> simply chopped up. The following test case unexpectedly passes:
> {noformat}
> public void testMaxTokenLengthNonDefault() throws Exception {
>   StandardAnalyzer a = new StandardAnalyzer();
>   a.setMaxTokenLength(5);
>   assertAnalyzesTo(a, "ab cd toolong xy z",
>                    new String[]{"ab", "cd", "toolo", "ng", "xy", "z"});
>   a.close();
> }
> {noformat}
> We should at least fix the javadocs ...
> (I hit this because I was trying to also add {{setMaxTokenLength}} to
> {{EnglishAnalyzer}}).
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)