[ 
https://issues.apache.org/jira/browse/LUCENE-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12832076#action_12832076
 ] 

Michael McCandless commented on LUCENE-2257:
--------------------------------------------

OK, I'm glad to hear that.

The attached patch applies to 2.9, and I think it should apply fine to the 
revision of Lucene (779312) that you're using within Solr.  I'd recommend 
checking out that exact revision of Lucene (svn co -r779312 ...), applying 
this patch, building a JAR, and replacing Solr's Lucene JAR with it.
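As a rough sketch of those steps (the repository URL, patch path, and Ant target are my assumptions, not from the issue):

```shell
# Sketch only -- URL and filenames are assumptions; adjust to your setup.
# Check out the exact Lucene revision Solr was built against:
svn co -r779312 http://svn.apache.org/repos/asf/lucene/java/trunk lucene-r779312
cd lucene-r779312

# Apply the attached patch from the issue:
patch -p0 < LUCENE-2257.patch

# Build the core JAR with Ant (output lands under build/):
ant jar

# Finally, replace Solr's bundled lucene-core JAR with the one just built.
```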

Only queries that contain terms above the 2.1B mark (your last ~390 M terms) 
will hit the exception.  Once you find such a query, it should always hit the 
exception on this large segment.
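As a minimal sketch (not Lucene's actual code) of why the limit sits at ~2.1B, and what multiplying by the termIndexInterval buys: a signed 32-bit int tops out at Integer.MAX_VALUE, and int arithmetic on term ordinals past that point wraps negative, while widening to long raises the ceiling to interval * 2.1B.

```java
public class TermLimitSketch {
    // Lucene's default termIndexInterval (every 128th term is indexed)
    static final int TERM_INDEX_INTERVAL = 128;

    public static void main(String[] args) {
        // Old per-segment cap: max unique terms representable in a signed int
        int oldLimit = Integer.MAX_VALUE;                      // 2147483647, ~2.1B

        // With the one int->long fix, the ceiling becomes interval * 2.1B
        long newLimit = (long) TERM_INDEX_INTERVAL * oldLimit; // 274877906816, ~275B

        // Plain int arithmetic at the same scale silently wraps negative,
        // which is the kind of overflow the exception guards against:
        int wrapped = TERM_INDEX_INTERVAL * oldLimit;          // -128

        System.out.println(oldLimit);
        System.out.println(newLimit);
        System.out.println(wrapped);
    }
}
```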

> relax the per-segment max unique term limit
> -------------------------------------------
>
>                 Key: LUCENE-2257
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2257
>             Project: Lucene - Java
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.9.2, 3.0.1, 3.1
>
>         Attachments: LUCENE-2257.patch
>
>
> Lucene can't handle more than 2.1B (limit of signed 32 bit int) unique terms 
> in a single segment.
> But I think we can improve this to termIndexInterval (default 128) * 2.1B.  
> There is one place (internal API only) where Lucene uses an int but should 
> use a long.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org
