[ https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated LUCENE-7705:
---------------------------------
    Attachment: LUCENE-7705

Yes Erick, I saw the "ant precommit" errors: tabs instead of whitespace. Got it.

I am still seeing this:
{code}
   [junit4] Tests with failures [seed: C3F5B66314F27B5E]:
   [junit4]   - org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers
{code}
{code}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestMaxTokenLenTokenizer -Dtests.method=testSingleFieldSameAnalyzers -Dtests.seed=C3F5B66314F27B5E -Dtests.slow=true -Dtests.locale=fr-CA -Dtests.timezone=Asia/Qatar -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.10s | TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers <<<
   [junit4]    > Throwable #1: java.lang.RuntimeException: Exception during query
   [junit4]    >        at __randomizedtesting.SeedInfo.seed([C3F5B66314F27B5E:A927890C4C11AB91]:0)
   [junit4]    >        at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:896)
   [junit4]    >        at org.apache.solr.util.TestMaxTokenLenTokenizer.testSingleFieldSameAnalyzers(TestMaxTokenLenTokenizer.java:104)
   [junit4]    >        at java.lang.Thread.run(Thread.java:745)
   [junit4]    > Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1]
   [junit4]    >        xml response was: <?xml version="1.0" encoding="UTF-8"?>
   [junit4]    > <response>
   [junit4]    > <lst name="responseHeader"><int name="status">0</int><int name="QTime">11</int></lst><result name="response" numFound="0" start="0"></result>
   [junit4]    > </response>
   [junit4]    >        request was:q=letter0:lett&wt=xml
   [junit4]    >        at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:889)
   [junit4]    >        ... 40 more
{code}
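For reference, the failure is in the assertQ at TestMaxTokenLenTokenizer.java:104. Judging from the request and xpath in the log, the check is roughly of this shape (a sketch only; the indexed value and the message string are illustrative, not copied from the patch):

{code}
// Inside a SolrTestCaseJ4 subclass. Field letter0 and the query term "lett"
// come from the log above; the document value is made up for illustration and
// is assumed to be longer than the configured maxTokenLen.
assertU(adoc("id", "1", "letter0", "letterpotato"));
assertU(commit());

// This is the style of assertion that comes back with numFound=0 instead of 1
// under seed C3F5B66314F27B5E:
assertQ("truncated token should still match",
    req("q", "letter0:lett", "wt", "xml"),
    "//result[@numFound=1]");
{code}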

But if it is working for you, I am good.

You didn't include the newly created files in the latest patch again; I have 
posted a new one with "precommit" sorted out and all the files included. 

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> ---------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7705
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7705
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Amrit Sarkar
>            Assignee: Erick Erickson
>            Priority: Minor
>         Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? Changing this limit requires people to copy/paste 
> incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
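
For anyone skimming the description above, here is a minimal sketch of the current state of things in plain Lucene (assuming stock 6.x classes; this is not code from the patch):

{code}
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;

public class MaxTokenLenExample {
  public static void main(String[] args) throws Exception {
    // KeywordTokenizer already has a constructor that overrides its default
    // buffer size of 256, but that is code-only; the Solr schema has no way
    // to reach it.
    KeywordTokenizer kw = new KeywordTokenizer(1024);

    // The CharTokenizer subclasses (WhitespaceTokenizer, LetterTokenizer,
    // UnicodeWhitespaceTokenizer) have no such hook: incrementToken() is final
    // and the maximum token length is hard-coded, which is what this issue
    // proposes to make configurable through the factories (e.g. a maxTokenLen
    // attribute in the schema; attribute name illustrative).
    WhitespaceTokenizer ws = new WhitespaceTokenizer();

    kw.close();
    ws.close();
  }
}
{code}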


