[
https://issues.apache.org/jira/browse/LUCENE-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12929936#action_12929936
]
Steven Rowe commented on LUCENE-2747:
-------------------------------------
Robert, your patch looks good - I have a couple of questions:
* You removed {{TestHindiFilters.testTokenizer()}},
{{TestIndicTokenizer.testBasics()}} and {{TestIndicTokenizer.testFormat()}},
but these would be useful in {{TestStandardAnalyzer}} and
{{TestUAX29Tokenizer}}, wouldn't they? (A rough sketch follows this list.)
* You did not remove {{ArabicLetterTokenizer}} and {{IndicTokenizer}},
presumably so that they can be used with Lucene 4.0+ when the supplied
{{Version}} is less than 3.1 -- good catch, I had forgotten this requirement --
but when can we actually get rid of these? Since they will be staying,
shouldn't their tests remain too, but using {{Version.LUCENE_30}} instead of
{{TEST_VERSION_CURRENT}}?
> Deprecate/remove language-specific tokenizers in favor of StandardTokenizer
> ---------------------------------------------------------------------------
>
> Key: LUCENE-2747
> URL: https://issues.apache.org/jira/browse/LUCENE-2747
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Analysis
> Affects Versions: 3.1, 4.0
> Reporter: Steven Rowe
> Fix For: 3.1, 4.0
>
> Attachments: LUCENE-2747.patch
>
>
> As of Lucene 3.1, StandardTokenizer implements the UAX#29 word boundary
> rules to provide language-neutral tokenization. Lucene contains several
> language-specific tokenizers that should be deprecated in 3.1 and removed in
> 4.0 in favor of the UAX#29-based StandardTokenizer. The
> language-specific *analyzers*, by contrast, should remain, because they
> contain language-specific post-tokenization filters. The language-specific
> analyzers should switch to StandardTokenizer in 3.1.
>
> Some usages of language-specific tokenizers will need additional work beyond
> just replacing the tokenizer in the language-specific analyzer.
> For example, PersianAnalyzer currently uses ArabicLetterTokenizer, and
> depends on the fact that this tokenizer breaks tokens on the ZWNJ character
> (zero-width non-joiner; U+200C), but in the UAX#29 word boundary rules, ZWNJ
> is not a word boundary. Robert Muir has suggested running a char filter
> that converts ZWNJ to spaces ahead of StandardTokenizer in the converted
> PersianAnalyzer.
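
For reference, here is a minimal sketch of the suggested ZWNJ char filter,
using the {{MappingCharFilter}}/{{NormalizeCharMap}} API; how it gets wired
into PersianAnalyzer is my assumption, not taken from the patch:

{code:java}
import java.io.Reader;

import org.apache.lucene.analysis.CharReader;
import org.apache.lucene.analysis.CharStream;
import org.apache.lucene.analysis.MappingCharFilter;
import org.apache.lucene.analysis.NormalizeCharMap;

/** Sketch only: maps ZWNJ (U+200C) to a space before tokenization. */
public class ZwnjMappingExample {
  public static CharStream wrap(Reader reader) {
    NormalizeCharMap map = new NormalizeCharMap();
    map.add("\u200C", " "); // zero-width non-joiner -> plain space
    return new MappingCharFilter(map, CharReader.get(reader));
  }
}
{code}

PersianAnalyzer would presumably hand the wrapped reader to StandardTokenizer
in place of ArabicLetterTokenizer, so ZWNJ again acts as a token break.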