CharDelimiterTokenizer
----------------------

                 Key: LUCENE-1216
                 URL: https://issues.apache.org/jira/browse/LUCENE-1216
             Project: Lucene - Java
          Issue Type: Improvement
          Components: Analysis
            Reporter: Hiroaki Kawai


WhitespaceTokenizer is very useful for space-separated languages, but my 
Japanese text is not always separated by spaces. So I created an alternative 
Tokenizer for which the delimiter can be specified. The submitted file is an 
improvement on the current WhitespaceTokenizer.

I tried to extend CharTokenizer instead, but CharTokenizer has a limitation 
that a token can't be longer than 255 characters.
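To illustrate the idea, here is a minimal standalone sketch of delimiter-based tokenization (not the attached patch, and not tied to Lucene's Tokenizer API; the class name and method shapes are hypothetical). It reads characters from a Reader, splits on a configurable delimiter character, and accumulates each token in a growable buffer, so there is no fixed 255-character limit like CharTokenizer's:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a delimiter-based tokenizer, independent of
// Lucene's Tokenizer API. Tokens are built in a StringBuilder, which
// grows as needed, so token length is unbounded.
public class CharDelimiterTokenizerSketch {
    private final Reader input;
    private final char delimiter;
    private final StringBuilder buffer = new StringBuilder();

    public CharDelimiterTokenizerSketch(Reader input, char delimiter) {
        this.input = input;
        this.delimiter = delimiter;
    }

    // Returns the next token, or null at end of input.
    // Runs of consecutive delimiters produce no empty tokens.
    public String next() throws IOException {
        buffer.setLength(0);
        int c;
        while ((c = input.read()) != -1) {
            if (c == delimiter) {
                if (buffer.length() > 0) {
                    return buffer.toString();
                }
                // skip consecutive delimiters
            } else {
                buffer.append((char) c);
            }
        }
        return buffer.length() > 0 ? buffer.toString() : null;
    }

    // Convenience helper: tokenize a whole string at once.
    public static List<String> tokenize(String text, char delimiter)
            throws IOException {
        CharDelimiterTokenizerSketch t =
            new CharDelimiterTokenizerSketch(new StringReader(text), delimiter);
        List<String> tokens = new ArrayList<>();
        for (String tok; (tok = t.next()) != null; ) {
            tokens.add(tok);
        }
        return tokens;
    }
}
```

For Japanese text one would pass a punctuation character such as '、' as the delimiter instead of whitespace.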

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
