[ https://issues.apache.org/jira/browse/LUCENE-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12477649 ]

Otis Gospodnetic commented on LUCENE-759:
-----------------------------------------

Ah, I didn't see your comments here earlier, Doron.  Yes, I think you are correct
about the 1024 limit - when I wrote that Tokenizer I was thinking of a TokenFilter,
and so I assumed the input Reader represented a single Token, which was wrong.
Hence my thinking: "oh, 1024 chars per token, that will be enough".  I ended up
needing TokenFilters for SOLR-81, so those are what I checked in.  Since they
operate on individual tokens rather than the whole Reader, they don't have the
1024-character limitation.
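
For anyone hitting this later, here's a minimal sketch of why the filter approach
sidesteps the limit.  This is hypothetical simplified code against the Lucene 2.x
TokenStream API (Token next() / termText()); the class name SimpleNGramFilter and
the min/max gram handling are mine for illustration, not the actual SOLR-81 patch:

    import java.io.IOException;
    import java.util.LinkedList;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;

    // Hypothetical sketch, not the committed code: emits all n-grams of
    // each incoming token.  Nothing here ever buffers the whole Reader,
    // so there is no fixed 1024-char ceiling.
    public class SimpleNGramFilter extends TokenFilter {
      private final int minGram, maxGram;
      private final LinkedList<Token> pending = new LinkedList<Token>();

      public SimpleNGramFilter(TokenStream in, int minGram, int maxGram) {
        super(in);
        this.minGram = minGram;
        this.maxGram = maxGram;
      }

      public Token next() throws IOException {
        while (pending.isEmpty()) {
          Token token = input.next();      // one upstream token at a time
          if (token == null) return null;  // end of stream
          String text = token.termText();  // gram length bounded by token length
          for (int n = minGram; n <= maxGram; n++) {
            for (int i = 0; i + n <= text.length(); i++) {
              pending.add(new Token(text.substring(i, i + n),
                                    token.startOffset() + i,
                                    token.startOffset() + i + n));
            }
          }
        }
        return pending.removeFirst();
      }
    }

Because each call pulls a single token from the upstream stream, memory use scales
with token length, not with the size of the underlying Reader.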

Anyhow, feel free to slap your test + the fix in, and thanks for checking!


> Add n-gram tokenizers to contrib/analyzers
> ------------------------------------------
>
>                 Key: LUCENE-759
>                 URL: https://issues.apache.org/jira/browse/LUCENE-759
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Otis Gospodnetic
>         Assigned To: Otis Gospodnetic
>            Priority: Minor
>             Fix For: 2.2
>
>         Attachments: LUCENE-759-filters.patch, LUCENE-759.patch, 
> LUCENE-759.patch, LUCENE-759.patch
>
>
> It would be nice to have some n-gram-capable tokenizers in contrib/analyzers. 
>  Patch coming shortly.
