[ https://issues.apache.org/jira/browse/LUCENE-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12930024#action_12930024 ]

Robert Muir commented on LUCENE-2747:
-------------------------------------

bq. You removed TestHindiFilters.testTokenizer(), 
TestIndicTokenizer.testBasics() and TestIndicTokenizer.testFormat(), but these 
would be useful in TestStandardAnalyzer and TestUAX29Tokenizer, wouldn't they?

Oh, I just deleted everything associated with that tokenizer...

bq. You did not remove ArabicLetterTokenizer and IndicTokenizer, presumably so 
that they can be used with Lucene 4.0+ when the supplied Version is less than 
3.1 - good catch, I had forgotten this requirement - but when can we actually 
get rid of these? Since they will be staying, shouldn't their tests remain too, 
but using Version.LUCENE_30 instead of TEST_VERSION_CURRENT?

I removed the IndicTokenizer (unreleased) and deleted its tests, but I kept 
and deprecated the Arabic one, since we have already released it.
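
For reference, the back-compat pattern this implies looks roughly like the 
following sketch (illustrative only, assuming the 3.1 Version-aware tokenizer 
constructors; not the actual patch):

{code:java}
import java.io.Reader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ar.ArabicLetterTokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

class TokenizerBackCompat {
  // Choose the tokenizer by compatibility version: StandardTokenizer for
  // 3.1+, the deprecated (but already released) ArabicLetterTokenizer before.
  static Tokenizer create(Version matchVersion, Reader reader) {
    if (matchVersion.onOrAfter(Version.LUCENE_31)) {
      return new StandardTokenizer(matchVersion, reader);
    } else {
      return new ArabicLetterTokenizer(matchVersion, reader);
    }
  }
}
{code}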


> Deprecate/remove language-specific tokenizers in favor of StandardTokenizer
> ---------------------------------------------------------------------------
>
>                 Key: LUCENE-2747
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2747
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>    Affects Versions: 3.1, 4.0
>            Reporter: Steven Rowe
>             Fix For: 3.1, 4.0
>
>         Attachments: LUCENE-2747.patch
>
>
> As of Lucene 3.1, StandardTokenizer implements UAX#29 word boundary rules to 
> provide language-neutral tokenization.  Lucene contains several 
> language-specific tokenizers that should be deprecated in 3.1 and removed in 
> 4.0 in favor of the UAX#29-based StandardTokenizer.  The 
> language-specific *analyzers*, by contrast, should remain, because they 
> contain language-specific post-tokenization filters.  The language-specific 
> analyzers should switch to StandardTokenizer in 3.1.
>
> Some usages of language-specific tokenizers will need additional work beyond 
> just replacing the tokenizer in the language-specific analyzer.  
> For example, PersianAnalyzer currently uses ArabicLetterTokenizer, and 
> depends on the fact that this tokenizer breaks tokens on the ZWNJ character 
> (zero-width non-joiner; U+200C), but in the UAX#29 word boundary rules, ZWNJ 
> is not a word boundary.  Robert Muir has suggested that the converted 
> PersianAnalyzer use a char filter that converts ZWNJ to spaces, applied 
> before StandardTokenizer.
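
A minimal sketch of such a char filter, assuming the 3.1 CharFilter/CharStream 
API (the class name is hypothetical; an alternative would be MappingCharFilter 
with a U+200C -> space mapping):

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.CharFilter;
import org.apache.lucene.analysis.CharStream;

public class ZwnjToSpaceCharFilter extends CharFilter {

  public ZwnjToSpaceCharFilter(CharStream in) {
    super(in);
  }

  @Override
  public int read(char[] cbuf, int off, int len) throws IOException {
    int n = input.read(cbuf, off, len);
    // Replace each ZWNJ (U+200C) with a space so that StandardTokenizer,
    // which does not treat ZWNJ as a word boundary, still splits there.
    // The substitution is 1:1, so no offset correction is needed.
    for (int i = off; i < off + n; i++) {
      if (cbuf[i] == '\u200C') {
        cbuf[i] = ' ';
      }
    }
    return n;
  }
}
{code}

It would be wrapped in front of the tokenizer, e.g. 
new StandardTokenizer(matchVersion, new ZwnjToSpaceCharFilter(CharReader.get(reader))).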

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


