[ https://issues.apache.org/jira/browse/LUCENE-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13183775#comment-13183775 ]

Raimund Merkert commented on LUCENE-1227:
-----------------------------------------

This also works for me (for my purposes, at least):

        String str = ...; // read the contents of the Reader into a String

        TokenStream tokens = new KeywordTokenizer(new StringReader(str.trim()));
        tokens = new NGramTokenFilter(tokens, minNGram, maxNGram);
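
For reference, here is a self-contained sketch of the same workaround, assuming the Lucene 3.x-era API (attribute-based TokenStream, with NGramTokenFilter in the contrib analyzers module); the printNGrams helper and its class name are just illustrative:

    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringReader;

    import org.apache.lucene.analysis.KeywordTokenizer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.ngram.NGramTokenFilter;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class NGramWorkaround {

        // Prints all n-grams of sizes minNGram..maxNGram over the full
        // contents of the Reader. KeywordTokenizer grows its buffer to fit
        // the whole input, so NGramTokenizer's 1024-char limit never applies.
        public static void printNGrams(Reader reader, int minNGram, int maxNGram)
                throws IOException {
            // Drain the Reader into a String first.
            StringBuilder sb = new StringBuilder();
            char[] buf = new char[1024];
            for (int n; (n = reader.read(buf)) != -1; ) {
                sb.append(buf, 0, n);
            }

            // The whole (trimmed) input becomes a single keyword token,
            // which the filter then slices into n-grams.
            TokenStream tokens =
                new KeywordTokenizer(new StringReader(sb.toString().trim()));
            tokens = new NGramTokenFilter(tokens, minNGram, maxNGram);

            CharTermAttribute term = tokens.addAttribute(CharTermAttribute.class);
            tokens.reset();
            while (tokens.incrementToken()) {
                System.out.println(term.toString());
            }
            tokens.end();
            tokens.close();
        }
    }

For example, printNGrams(new StringReader("foobar"), 2, 3) prints all 2-grams and then all 3-grams of "foobar": fo, oo, ob, ba, ar, foo, oob, oba, bar.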


                
> NGramTokenizer to handle more than 1024 chars
> ---------------------------------------------
>
>                 Key: LUCENE-1227
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1227
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: modules/analysis
>            Reporter: Hiroaki Kawai
>            Priority: Minor
>         Attachments: LUCENE-1227.patch, NGramTokenizer.patch, 
> NGramTokenizer.patch
>
>
> The current NGramTokenizer can't handle a character stream longer than 
> 1024 characters. This is too short for non-whitespace-separated languages.
> I created a patch for this issue.
