I think I have a test case to reproduce the java.io.IOException: read past EOF
exception while merging. The attached code generates this exception when
executed.
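
The failure mode Mike describes below can be sketched with plain java.io,
independent of Lucene's actual file format: a header claims more entries than
were actually written, and the reader runs off the end of the data. (This is
only an illustration of the symptom, not Lucene's stored-fields code; the
class and method names here are made up for the sketch.)

```java
import java.io.*;

public class TruncatedWrite {
    /** Writes a header claiming three entries but only two bodies
     *  (as if an exception interrupted the writer), then reads back;
     *  returns true if the reader hits EOF before the claimed count. */
    static boolean readPastEof() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(3);           // header: "3 entries follow"
        out.writeUTF("entry-1");
        out.writeUTF("entry-2");   // writer dies before entry 3 is written
        out.close();

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        int claimed = in.readInt();
        try {
            for (int i = 0; i < claimed; i++) in.readUTF();
        } catch (EOFException e) {
            return true;           // same symptom as "read past EOF" on merge
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("read past EOF: " + readPastEof());
    }
}
```

As in the corrupted-segment case, nothing fails at write time; the truncation
only surfaces when something later reads the full claimed count.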

Suresh.


On 12/19/07, Michael McCandless <[EMAIL PROTECTED]> wrote:
>
>
> Grant Ingersoll wrote:
>
> > The field that is causing the problem in the stack trace is neither
> > binary nor compressed, nor is it even stored.
>
> This would also be possible with the one bug I found on hitting an
> exception in DocumentsWriter.addDocument.
>
> Basically the bug can cause only a subset of the stored fields to be
> added to the fdt file even though the vint header claimed more stored
> fields were written.  Grant, you're really sure you saw no exception
> in Solr's logs right?  Note that the exception would corrupt the
> index but would then not be detected until that corrupted segment
> gets merged, so it could have been in an earlier batch of added docs,
> for example.
>
> I've been testing various combinations of changing the schema, turning
> stored on/off for the same field, interspersed deletions, empty stored
> fields, etc., and can't otherwise get the bug to come out.  It's a
> sneaky one!
>
> Mike
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.document.Field;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.SerialMergeScheduler;
import org.apache.lucene.index.LogDocMergePolicy;

import java.io.IOException;

public class TestCase
{
    public static void main(String args[]) throws IOException
    {
        // Build a small index in /tmp/_tmp_index where some documents carry
        // only a stored field and one also carries term vectors.
        IndexWriter writer = new IndexWriter(
                FSDirectory.getDirectory("/tmp/_tmp_index"), false,
                new StandardAnalyzer());
        writer.setMaxBufferedDocs(2);
        writer.setRAMBufferSizeMB(IndexWriter.DISABLE_AUTO_FLUSH);
        writer.setMergeScheduler(new SerialMergeScheduler());
        writer.setMergePolicy(new LogDocMergePolicy());

        Document document = new Document();

        Field storedField = new Field("stored", "stored", Field.Store.YES,
                Field.Index.NO);
        Field termVectorField = new Field("termVector", "termVector",
                Field.Store.NO, Field.Index.UN_TOKENIZED,
                Field.TermVector.WITH_POSITIONS_OFFSETS);

        // Two documents with only the stored field ...
        document.add(storedField);
        writer.addDocument(document);
        writer.addDocument(document);

        // ... and a third that also has the term-vector field.
        document = new Document();
        document.add(storedField);
        document.add(termVectorField);
        writer.addDocument(document);
        writer.optimize();
        writer.close();

        // Merge the first index into a second one; this addIndexes call
        // is where the "read past EOF" exception surfaces.
        writer = new IndexWriter(FSDirectory.getDirectory("/tmp/_index"),
                false, new StandardAnalyzer());
        writer.setMaxBufferedDocs(2);
        writer.setRAMBufferSizeMB(IndexWriter.DISABLE_AUTO_FLUSH);
        writer.setMergeScheduler(new SerialMergeScheduler());
        writer.setMergePolicy(new LogDocMergePolicy());

        Directory[] indexDirs = { FSDirectory.getDirectory("/tmp/_tmp_index") };
        writer.addIndexes(indexDirs);
        writer.close();
    }
}