OK. Just to follow up, I performed the same steps with another of our indexes
and did not see the same issue:
Opening index @ /lucenedata/index4
Segments file=segments_85 numSegments=1 version=FORMAT_HAS_PROX [Lucene 2.4]
1 of 1: name=_42 docCount=3986767
compound=true
hasProx=true
numFiles=1
size (MB)=3,467.235
no deletions
test: open reader.........OK
test: fields, norms.......OK [6 fields]
test: terms, freq, prox...OK [6678265 terms; 285071252 terms/docs pairs; 335057297 tokens]
test: stored fields.......OK [23920602 total field count; avg 6 fields per doc]
test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
No problems were detected with this index.
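For reference, that output came from the stock CheckIndex tool; the exact jar
name depends on which Lucene build you have, so treat the classpath below as
an assumption:

  java -cp lucene-core-2.4.0.jar org.apache.lucene.index.CheckIndex /lucenedata/index4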
Opening and searching both work fine on this index.
This makes me wonder whether the root cause is some sort of memory/buffer
issue. I don't know exactly what Lucene does when it opens an index, but 18GB
is a pretty big file.
My admins say that Oracle has as much memory as it needs, but I am not sure.
Maybe Marcelo has some thoughts on this.
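In case it helps narrow down the memory angle, here is a minimal standalone
sketch (the class name and hard-coded path are just placeholders, not our real
code) that opens an index the way a search would and prints the heap the JVM
is actually allowed to use:

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSDirectory;

public class OpenIndexCheck {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "/lucenedata/index4";

        // How much heap is this JVM really allowed to grow to?
        System.out.println("JVM max heap (MB): "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024));

        // Open the index with a plain IndexReader, as a searcher would.
        IndexReader reader = IndexReader.open(FSDirectory.getDirectory(path));
        try {
            System.out.println("maxDoc=" + reader.maxDoc()
                    + ", numDocs=" + reader.numDocs());
        } finally {
            reader.close();
        }
    }
}

Running something like this against the problem index with different -Xmx
settings would at least show whether opening it is sensitive to how much heap
the process gets.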