That looks spooky.  It looks like either the norms array is not
large enough or that docID is too large.  Do you know how many
docs you have in your index?
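
If you want to double-check, something along these lines (the index
path is just a placeholder) will print what the index thinks its doc
count is, so you can compare it against the 1226511 in the exception:

    import org.apache.lucene.index.IndexReader;

    public class CheckDocCount {
      public static void main(String[] args) throws Exception {
        // Open the same index Solr is searching (path is a guess).
        IndexReader reader = IndexReader.open("/path/to/solr/data/index");
        System.out.println("maxDoc  = " + reader.maxDoc());
        System.out.println("numDocs = " + reader.numDocs());
        reader.close();
      }
    }

If maxDoc() comes back smaller than 1226511 then the docID itself is
bogus; if it comes back larger, the norms array for that field is the
suspect.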

Is this easy to reproduce, maybe on a smaller index?
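
If you can hit the same index directly with Lucene (bypassing Solr),
that would help narrow it down.  A minimal sketch, assuming the path
and the "text" field name (use whatever field your schema actually
searches):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;

    public class Repro {
      public static void main(String[] args) throws Exception {
        // Query the index with the raw Lucene API, outside of Solr.
        IndexSearcher searcher = new IndexSearcher("/path/to/solr/data/index");
        Hits hits = searcher.search(new TermQuery(new Term("text", "brasil")));
        System.out.println("hits = " + hits.length());
        searcher.close();
      }
    }

If that throws the same ArrayIndexOutOfBoundsException then it's on
the Lucene side; if not, it may be something in how Solr is opening
or caching its searcher.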

There was a very large change recently (LUCENE-843) to speed
up indexing, and it's possible that this introduced a bug.  Is
the build you are using from after July 4?

Mike

"Rafael Rossini" <[EMAIL PROTECTED]> wrote:
> Hello all,
> 
> I'm using Solr in an app, but I'm getting an error that might be a
> Lucene problem. When I perform a simple query like q = brasil I'm
> getting this exception:
> 
> java.lang.ArrayIndexOutOfBoundsException: 1226511
>    at org.apache.lucene.search.TermScorer.score(TermScorer.java:74)
>    at org.apache.lucene.search.TermScorer.score(TermScorer.java:61)
>    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:146)
>    at org.apache.lucene.search.Searcher.search(Searcher.java:118)
>    at org.apache.lucene.search.Searcher.search(Searcher.java:97)
> 
> I'm using a very recent build of Lucene. In TermScorer.java, line 74
> is:
> 
> score *= normDecoder[norms[doc] & 0xFF]; // normalize for field
> 
> Thanks for any help, and sorry for cross-posting
