[ https://issues.apache.org/jira/browse/LUCENE-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12558717#action_12558717 ]
Steven Rowe commented on LUCENE-400:
------------------------------------

Removed the duplicate link (to LUCENE-759), since that issue is about character-level n-grams, and this issue is about word-level n-grams.

> NGramFilter -- construct n-grams from a TokenStream
> ---------------------------------------------------
>
>                 Key: LUCENE-400
>                 URL: https://issues.apache.org/jira/browse/LUCENE-400
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>    Affects Versions: unspecified
>         Environment: Operating System: All
>                      Platform: All
>            Reporter: Sebastian Kirsch
>            Priority: Minor
>             Fix For: 2.4
>
>         Attachments: LUCENE-400.patch, NGramAnalyzerWrapper.java, NGramAnalyzerWrapperTest.java, NGramFilter.java, NGramFilterTest.java
>
>
> This filter constructs n-grams (token combinations up to a fixed size, sometimes called "shingles") from a token stream.
> The filter sets start offsets, end offsets and position increments, so highlighting and phrase queries should work.
> Position increments > 1 in the input stream are replaced by filler tokens (tokens with termText "_" and endOffset - startOffset = 0) in the output n-grams. (Position increments > 1 in the input stream are usually caused by removing some tokens, e.g. stopwords, from a stream.)
> The filter uses CircularFifoBuffer and UnboundedFifoBuffer from Apache Commons-Collections.
> Filter, test case and an analyzer are attached.

--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
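As a rough illustration of the behavior the issue describes (word-level shingles with "_" filler tokens substituted for skipped positions), here is a minimal sketch in plain Java. `ShingleSketch` and its method names are hypothetical stand-ins for the attached NGramFilter; it uses an `ArrayDeque` in place of the Commons-Collections `CircularFifoBuffer`, operates on string lists rather than a Lucene `TokenStream`, and omits the offset and position-increment bookkeeping the real filter performs.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of the word n-gram ("shingle") construction described in
// LUCENE-400: emit all token combinations of size 2..maxSize, inserting the
// filler token "_" for each position skipped by a position increment > 1
// (e.g. where a stopword was removed).
public class ShingleSketch {
    static final String FILLER = "_";

    // tokens.get(i) is paired with increments.get(i):
    // 1 = adjacent to the previous token, >1 = gap left by removed tokens.
    public static List<String> shingles(List<String> tokens,
                                        List<Integer> increments,
                                        int maxSize) {
        // Expand the stream: one filler token per skipped position.
        List<String> expanded = new ArrayList<>();
        for (int i = 0; i < tokens.size(); i++) {
            for (int gap = 1; gap < increments.get(i); gap++) {
                expanded.add(FILLER);
            }
            expanded.add(tokens.get(i));
        }
        // Slide a bounded FIFO window (stand-in for CircularFifoBuffer)
        // and emit every shingle of size 2..window.size() ending here.
        List<String> out = new ArrayList<>();
        Deque<String> window = new ArrayDeque<>();
        for (String tok : expanded) {
            window.addLast(tok);
            if (window.size() > maxSize) window.removeFirst();
            List<String> snapshot = new ArrayList<>(window);
            for (int size = 2; size <= snapshot.size(); size++) {
                out.add(String.join(" ",
                        snapshot.subList(snapshot.size() - size, snapshot.size())));
            }
        }
        return out;
    }
}
```

For input tokens ["please", "divide", "sentence"] with increments [1, 1, 2] (a stopword removed before "sentence") and maxSize 2, this produces "please divide", "divide _", "_ sentence" -- the gap shows up as a filler in the shingles that span it, which is what keeps phrase-query positions honest.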