Hi,

It seems this problem only happens when the index files get really large. Could it be that Java has trouble handling very large files on a Windows machine? (I believe Windows file systems cap the maximum file size; FAT32, for example, limits a single file to 4 GB.)

In Lucene, I think there is a parameter for this, IndexWriter.maxMergeDocs: once a segment grows to contain more than that many documents, Lucene will no longer try to merge it into one bigger file. Could this be used to stop the index files from growing forever?
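Something like the following might work (just a sketch against the Lucene 1.4.x API; the "C:/index" path and the 500,000-document cap are made-up values, and I am not sure whether optimize() itself honors maxMergeDocs):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class CappedMergeExample {
        public static void main(String[] args) throws Exception {
            // Open (or create) an index on disk -- "C:/index" is a placeholder path.
            IndexWriter writer = new IndexWriter("C:/index", new StandardAnalyzer(), true);

            // Once a segment holds this many documents, the normal merge
            // policy will not combine it with other segments, so no single
            // segment file should keep growing without bound.
            writer.maxMergeDocs = 500000;

            // ... writer.addDocument(...) calls as usual ...

            writer.close();
        }
    }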
Cheers,
Jian

On 8/27/05, Ouyang, Hui <[EMAIL PROTECTED]> wrote:
>
> Hi,
> I had lots of "docs out of order" issues when the index was optimized. I
> made the changes suggested in this link:
> http://nagoya.apache.org/bugzilla/show_bug.cgi?id=23650
>
> That issue seems to be solved, but some indexes now hit "read past EOF"
> when I run optimization. The index is over 2 GB and some documents have
> been deleted from it. It is based on Lucene 1.4.3 on Windows.
> Does anyone know how to avoid this issue? Thanks.
>
> Regards,
> hui
>
> > merging segments _1ny5 (38708 docs) _1ot0 (1000 docs) _1t2m (4810 docs)
> > java.io.IOException: read past EOF
> >         at org.apache.lucene.index.CompoundFileReader$CSInputStream.readInternal(CompoundFileReader.java:218)
> >         at org.apache.lucene.store.InputStream.readBytes(InputStream.java:61)
> >         at org.apache.lucene.index.SegmentReader.norms(SegmentReader.java:356)
> >         at org.apache.lucene.index.SegmentReader.norms(SegmentReader.java:323)
> >         at org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:429)
> >         at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
> >         at org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:510)
> >         at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:370)