[ https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15878984#comment-15878984 ]

Amrit Sarkar commented on SOLR-10186:
-------------------------------------

Erick,

First draft, SOLR-10186.patch, is uploaded. It allows CharTokenizer-derived 
tokenizers and KeywordTokenizer to configure the maximum token length in their 
schema definitions. The modified files:

{code}
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/core/KeywordTokenizerFactory.java
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizer.java
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizerFactory.java
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/core/UnicodeWhitespaceTokenizer.java
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizer.java
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizerFactory.java
modified:   lucene/analysis/common/src/java/org/apache/lucene/analysis/util/CharTokenizer.java
{code}
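
With the patch applied, the limit becomes a per-field-type knob in schema.xml. 
A minimal sketch of what such a definition could look like, assuming the 
attribute ends up being named maxTokenLen (the attribute and field type names 
here are illustrative, not quoted from the patch):

{code:xml}
<!-- Hypothetical schema.xml snippet: "maxTokenLen" and "text_ws_long"
     are assumptions for illustration, not taken from the patch. -->
<fieldType name="text_ws_long" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- raise the per-token limit from the hard-coded 256 to 1024 characters -->
    <tokenizer class="solr.WhitespaceTokenizerFactory" maxTokenLen="1024"/>
  </analyzer>
</fieldType>
{code}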

I am currently finishing up the comments for the new arguments and for the 
modified and new constructors in the respective classes, along with thorough 
tests.
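
On the factory side, the wiring essentially amounts to reading the new 
attribute and handing it down to the tokenizer constructor. A rough sketch of 
that shape, not the patch itself (the attribute name, the new 
WhitespaceTokenizer(AttributeFactory, int) c'tor, and the 256 fallback are 
assumptions):

{code:java}
// Sketch only: "maxTokenLen", the WhitespaceTokenizer(AttributeFactory, int)
// c'tor, and the 256 fallback are assumptions about the patch.
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;

public class WhitespaceTokenizerFactory extends TokenizerFactory {
  private final int maxTokenLen;

  public WhitespaceTokenizerFactory(Map<String, String> args) {
    super(args);
    // fall back to the current hard-coded limit when the attribute is absent
    maxTokenLen = getInt(args, "maxTokenLen", 256);
    if (!args.isEmpty()) {
      throw new IllegalArgumentException("Unknown parameters: " + args);
    }
  }

  @Override
  public Tokenizer create(AttributeFactory factory) {
    return new WhitespaceTokenizer(factory, maxTokenLen);
  }
}
{code}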

Since all of these classes/tokenizers are part of Lucene core, I agree with 
Mr. Smiley that we should open a JIRA under the Lucene project and link this 
JIRA there.

Let me know your thoughts.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> ---------------------------------------------------------------------------------------------
>
>                 Key: SOLR-10186
>                 URL: https://issues.apache.org/jira/browse/SOLR-10186
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Erick Erickson
>            Priority: Minor
>
> Is there a good reason that we hard-code a 256-character limit for the 
> CharTokenizer? Changing this limit currently requires copy/pasting 
> incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> doing so requires code rather than being configurable in the schema (a sketch 
> of that code-only route follows this description).
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) and 
> their factories, it would take adding a c'tor to the base class in Lucene and 
> using it in the factories.
> Any objections?
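
For reference, the code-only route the description mentions looks roughly like 
this today. The factory class below is made up for illustration, but the 
KeywordTokenizer(AttributeFactory, int bufferSize) c'tor and 
DEFAULT_BUFFER_SIZE are existing Lucene API:

{code:java}
// Sketch of the status quo: raising KeywordTokenizer's 256-byte default
// currently means writing a custom factory in Java. The class name is
// made up for illustration.
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;

public class WideKeywordTokenizerFactory extends TokenizerFactory {
  public WideKeywordTokenizerFactory(Map<String, String> args) {
    super(args);
    if (!args.isEmpty()) {
      throw new IllegalArgumentException("Unknown parameters: " + args);
    }
  }

  @Override
  public Tokenizer create(AttributeFactory factory) {
    // 4096 instead of KeywordTokenizer.DEFAULT_BUFFER_SIZE (256)
    return new KeywordTokenizer(factory, 4096);
  }
}
{code}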


