[ https://issues.apache.org/jira/browse/LUCENE-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12477593 ]
Doron Cohen commented on LUCENE-759:
------------------------------------

Hi Otis,

> (and I think as result the input stream would remain dangling open)

I take this part back - closing tokenStream would close the reader, and at least for the case that I thought of, invertDocument, the tokenStream is properly closed.

Can you comment on the input length: is it correct to handle only the first 1024 characters?

Thanks,
Doron

> Add n-gram tokenizers to contrib/analyzers
> ------------------------------------------
>
>                 Key: LUCENE-759
>                 URL: https://issues.apache.org/jira/browse/LUCENE-759
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Otis Gospodnetic
>         Assigned To: Otis Gospodnetic
>            Priority: Minor
>             Fix For: 2.2
>
>         Attachments: LUCENE-759-filters.patch, LUCENE-759.patch, LUCENE-759.patch, LUCENE-759.patch
>
>
> It would be nice to have some n-gram-capable tokenizers in contrib/analyzers.
> Patch coming shortly.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
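[For readers following along: the 1024-character question refers to the patched tokenizer reading only a fixed-size buffer from its Reader. The sketch below is a hypothetical, standalone illustration of that behavior - class and method names are made up, not from the patch - showing how a single read into a 1024-char buffer silently drops the rest of the stream.]

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not the patch's code): a character n-gram producer
// that, like the behavior under discussion, reads only the first 1024
// characters of its input and ignores everything after.
public class NGramSketch {
    public static List<String> ngrams(Reader reader, int n) throws IOException {
        char[] buf = new char[1024];      // fixed buffer: at most 1024 chars are consumed
        int len = reader.read(buf);       // a single read; the remainder of the stream is dropped
        List<String> grams = new ArrayList<String>();
        if (len < n) {                    // covers empty input (len == -1) and too-short input
            return grams;
        }
        String text = new String(buf, 0, len);
        for (int i = 0; i + n <= len; i++) {
            grams.add(text.substring(i, i + n));
        }
        return grams;
    }

    public static void main(String[] args) throws IOException {
        // bigrams of "lucene"
        System.out.println(ngrams(new StringReader("lucene"), 2)); // [lu, uc, ce, en, ne]
    }
}
```

With this shape, an input of 2000 characters yields n-grams only over the first 1024, which is exactly the silent-truncation concern raised above.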