[ https://issues.apache.org/jira/browse/LUCENE-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12929945#action_12929945 ]
Steven Rowe commented on LUCENE-2747:
-------------------------------------

bq. I'm not too keen on this. For classics and ancient texts the standard analyzer is not as good as the simple analyzer. I think it is important to have a tokenizer that does not try to be too smart.

I think it'd be good to have a SimpleAnalyzer based on UAX#29, too. {{UAX29Tokenizer}} could be combined with {{LowercaseFilter}} to provide that, no? (A minimal sketch of this composition follows the quoted issue below.) Robert is arguing in the reopened LUCENE-2167 for {{StandardTokenizer}} to be stripped down so that it implements only the UAX#29 rules (i.e., dropping URL+email recognition). If that comes to pass, {{StandardAnalyzer}} would be just UAX#29 + lowercase + stopwords (English stopwords by default, overridable in the constructor) -- would that make you happy?

> Deprecate/remove language-specific tokenizers in favor of StandardTokenizer
> ----------------------------------------------------------------------------
>
>                 Key: LUCENE-2747
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2747
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>    Affects Versions: 3.1, 4.0
>            Reporter: Steven Rowe
>             Fix For: 3.1, 4.0
>
>         Attachments: LUCENE-2747.patch
>
>
> As of Lucene 3.1, StandardTokenizer implements the UAX#29 word boundary rules to provide language-neutral tokenization. Lucene contains several language-specific tokenizers that should be replaced by the UAX#29-based StandardTokenizer (deprecated in 3.1 and removed in 4.0). The language-specific *analyzers*, by contrast, should remain, because they contain language-specific post-tokenization filters. The language-specific analyzers should switch to StandardTokenizer in 3.1.
> Some usages of language-specific tokenizers will need additional work beyond just replacing the tokenizer in the language-specific analyzer.
> For example, PersianAnalyzer currently uses ArabicLetterTokenizer and depends on the fact that this tokenizer breaks tokens on the ZWNJ character (zero-width non-joiner; U+200C), but under the UAX#29 word boundary rules, ZWNJ is not a word boundary. Robert Muir has suggested inserting a char filter that converts ZWNJ to spaces ahead of StandardTokenizer in the converted PersianAnalyzer (see the second sketch below).
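To make the SimpleAnalyzer-style composition concrete, here is a minimal sketch against the Lucene 3.1-era {{Analyzer}} API. It assumes {{StandardTokenizer}} as the UAX#29 tokenizer (the {{UAX29Tokenizer}} name above comes from the in-progress patch, so it may not exist under that name); the analyzer class name here is hypothetical.

{code:java}
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

/**
 * Hypothetical "simple" analyzer: UAX#29 word-boundary tokenization
 * plus lowercasing, with no stopword removal. Appending a StopFilter
 * after the LowerCaseFilter would approximate the stripped-down
 * StandardAnalyzer proposed above (UAX#29 + lowercase + stopwords).
 */
public final class UAX29SimpleAnalyzer extends Analyzer {
  @Override
  public TokenStream tokenStream(String fieldName, Reader reader) {
    // As of 3.1, StandardTokenizer implements the UAX#29 word boundary rules.
    TokenStream tokens = new StandardTokenizer(Version.LUCENE_31, reader);
    return new LowerCaseFilter(Version.LUCENE_31, tokens);
  }
}
{code}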
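And a sketch of the suggested ZWNJ handling for PersianAnalyzer, assuming the {{MappingCharFilter}}/{{NormalizeCharMap}} classes from the 3.x core analysis package; the wrapper class and method names are made up for illustration.

{code:java}
import java.io.Reader;

import org.apache.lucene.analysis.CharReader;
import org.apache.lucene.analysis.CharStream;
import org.apache.lucene.analysis.MappingCharFilter;
import org.apache.lucene.analysis.NormalizeCharMap;

/**
 * Hypothetical helper: rewrite ZWNJ (U+200C) to a plain space before
 * tokenization, so a UAX#29-based StandardTokenizer still breaks tokens
 * where ArabicLetterTokenizer used to.
 */
public final class ZwnjToSpace {
  public static CharStream wrap(Reader reader) {
    NormalizeCharMap map = new NormalizeCharMap();
    map.add("\u200C", " "); // ZWNJ -> space
    return new MappingCharFilter(map, CharReader.get(reader));
  }
}
{code}

A converted PersianAnalyzer would then pass {{ZwnjToSpace.wrap(reader)}}, rather than the raw reader, to StandardTokenizer.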