[
https://issues.apache.org/jira/browse/LUCENE-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uwe Schindler updated LUCENE-2407:
----------------------------------
Fix Version/s: (was: 4.3)
4.4
> make CharTokenizer.MAX_WORD_LEN parametrizable
> ----------------------------------------------
>
> Key: LUCENE-2407
> URL: https://issues.apache.org/jira/browse/LUCENE-2407
> Project: Lucene - Core
> Issue Type: Improvement
> Components: modules/analysis
> Affects Versions: 3.0.1
> Reporter: javi
> Priority: Minor
> Fix For: 4.4
>
>
> As discussed in
> http://n3.nabble.com/are-long-words-split-into-up-to-256-long-tokens-tp739914p739914.html
> it would be nice to be able to parametrize that value.
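For illustration, a minimal sketch of what a parametrizable maximum token length could look like. This is not Lucene's actual CharTokenizer; the class name `SimpleCharTokenizer` and the `maxWordLen` parameter are hypothetical, and the example only mimics CharTokenizer's behavior of flushing the buffer and starting a new token once the length limit is reached:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a letter tokenizer whose maximum token length is a
// constructor parameter instead of a hard-coded MAX_WORD_LEN constant.
final class SimpleCharTokenizer {
    private final int maxWordLen;

    SimpleCharTokenizer(int maxWordLen) {
        this.maxWordLen = maxWordLen;
    }

    List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (Character.isLetter(c)) {
                current.append(c);
                // Once the buffer reaches the maximum length, emit it;
                // the rest of the word becomes a new token.
                if (current.length() == maxWordLen) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else if (current.length() > 0) {
                // Non-letter character ends the current token.
                tokens.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }
}
```

With `maxWordLen = 4`, the input `"abcdefghij xy"` would be split into the tokens `abcd`, `efgh`, `ij`, `xy`, which is the long-word-splitting behavior the linked thread describes, made configurable.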
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]