[ https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Amrit Sarkar updated LUCENE-7705:
---------------------------------
    Attachment: LUCENE-7705.patch

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length
> ---------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7705
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7705
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Amrit Sarkar
>            Assignee: Erick Erickson
>            Priority: Minor
>         Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256-character limit in CharTokenizer? Changing this limit currently requires copying incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change its default (which is also 256), but doing so requires code rather than being configurable in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) and their factories, it would require adding a constructor to the base class in Lucene and using it in the factories.
> Any objections?
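For context, a minimal sketch of the behavior described above: because CharTokenizer's buffer limit is hard-coded, a single whitespace-free run longer than the limit is silently emitted as multiple tokens. This assumes Lucene's analyzers-common module on the classpath; the class name MaxTokenLenDemo is illustrative, and the exact split point depends on the buffer constant in the installed Lucene version.

{code:java}
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Shows CharTokenizer's hard-coded token-length cap: a single 300-char
// "word" comes back as two tokens (the cap-sized prefix, then the
// remainder) instead of one.
public class MaxTokenLenDemo {
  public static void main(String[] args) throws IOException {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 300; i++) {
      sb.append('a');
    }

    try (WhitespaceTokenizer tok = new WhitespaceTokenizer()) {
      tok.setReader(new StringReader(sb.toString()));
      CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
      tok.reset();
      while (tok.incrementToken()) {
        // Prints two lengths that sum to 300, not a single 300.
        System.out.println("token length: " + term.length());
      }
      tok.end();
    }
  }
}
{code}

Under the proposal, the limit would instead be exposed, e.g. via a constructor argument on the CharTokenizer base class (a hypothetical maxTokenLen parameter) plus a matching factory attribute, so the cap above could be raised in the schema rather than by copying incrementToken into a new class.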