I don't know the exact date of the build, but it is certainly before July 4,
and before the LUCENE-843 patch was committed. My index has 1,119,934 docs
in it and is about 8.2 GB.

I really don't know how to reproduce this; so far, "brasil" is the only
query that triggers the error. And I'm not sure the docID is really too
large, because in my app I index more than 2,000 docs daily, and I can
access the newest ones with no problems...

Do you have any idea how I can debug this better, or how I can solve it?

Thanks a lot


On 7/24/07, Michael McCandless <[EMAIL PROTECTED]> wrote:


That looks spooky.  It looks like either the norms array is not
large enough or that docID is too large.  Do you know how many
docs you have in your index?
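
If it helps, a quick way to check is to open the index and print its doc
counts. This is just a minimal sketch; the index path is a placeholder:

    import org.apache.lucene.index.IndexReader;

    // Prints the index's document counts. If maxDoc() is smaller than
    // the docID in the exception (1226511), the scorer is handing out
    // docIDs that are out of range for the norms array.
    public class CheckDocCount {
        public static void main(String[] args) throws Exception {
            IndexReader reader = IndexReader.open("/path/to/index"); // placeholder path
            System.out.println("maxDoc  = " + reader.maxDoc());
            System.out.println("numDocs = " + reader.numDocs());
            reader.close();
        }
    }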

Is this easy to reproduce, maybe on a smaller index?

There was a very large change recently (LUCENE-843) to speed
up indexing and it's possible that this introduced a bug.  Is
the build you are using after July 4?

Mike

"Rafael Rossini" <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I'm using Solr in an app, but I'm getting an error that might be a
> Lucene problem. When I perform a simple query like q=brasil, I get
> this exception:
>
> java.lang.ArrayIndexOutOfBoundsException: 1226511
>    at org.apache.lucene.search.TermScorer.score(TermScorer.java:74)
>    at org.apache.lucene.search.TermScorer.score(TermScorer.java:61)
>    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:146)
>    at org.apache.lucene.search.Searcher.search(Searcher.java:118)
>    at org.apache.lucene.search.Searcher.search(Searcher.java:97)
>
> I'm using a very recent build of Lucene. In the TermScorer class,
> line 74 is:
>
> score *= normDecoder[norms[doc] & 0xFF]; // normalize for field
>
> Thanks for any help, and sorry for cross-posting
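
For context on that line: norms are stored as one byte per document and
decoded through a 256-entry float table at search time, and the norms
array has exactly maxDoc() entries, so any docID at or beyond maxDoc()
indexes past its end. A minimal sketch of the equivalent lookup (the
class and method names here are just for illustration):

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.Similarity;

    // Decodes the norm for one document the same way TermScorer does.
    // norms.length == reader.maxDoc(), so norms[doc] throws
    // ArrayIndexOutOfBoundsException for any doc >= maxDoc(), which is
    // exactly the failure in the trace above.
    public class NormCheck {
        public static float fieldNorm(IndexReader reader, String field, int doc)
                throws java.io.IOException {
            byte[] norms = reader.norms(field); // one byte per document
            return Similarity.decodeNorm(norms[doc]); // same as normDecoder[norms[doc] & 0xFF]
        }
    }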


