One helpful thing to do is call IndexWriter.setInfoStream(...) and
save the resulting output.  This prints details about which segments
were merged and what the merged segment's name is.  That might give
some useful detail, for example whether your deleted segments file
was one that had just been merged away, or one that had just been
created by merging.
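
For reference, a rough sketch of wiring that up, assuming a 2.x-era
Lucene where IndexWriter exposes setInfoStream(PrintStream); the index
path, analyzer, and log file name below are just placeholders:

    import java.io.FileOutputStream;
    import java.io.PrintStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class InfoStreamSetup {
        public static void main(String[] args) throws Exception {
            // Open the writer as usual against an existing index, then route
            // its diagnostics (flushes, merges, segment names) to a log file.
            IndexWriter writer =
                new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
            writer.setInfoStream(
                new PrintStream(new FileOutputStream("lucene-infostream.log", true)));
            // ... add / update / delete documents as usual ...
            writer.close();
        }
    }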


Great tip! I'll try that.

Alas, given that the exception is "permanent" once it happens, I
don't think it's a locking issue (even if the tmp dir sweeps away lock
files).  Locking issues usually result in transient exceptions (like
yours) when opening the reader, but once the writer is done
committing, readers open fine without exceptions.


OK -- so we rule out the lock dir hypothesis.

You are certain that only one writer is open at a time against the
index, right?  Lucene's write lock tries to ensure that, but it's
still good to verify.
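
If it helps, one crude way to double-check is to try opening a second
writer against the same directory while the first is still open; with
the default write lock that should fail.  A sketch, again assuming a
2.x-era API, with a scratch index path as a placeholder:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.LockObtainFailedException;

    public class WriteLockCheck {
        public static void main(String[] args) throws Exception {
            // Use a scratch directory here, not the production index.
            FSDirectory dir = FSDirectory.getDirectory("/tmp/scratch-index");
            IndexWriter first = new IndexWriter(dir, new StandardAnalyzer(), true);
            try {
                // A second writer on the same directory should fail to obtain
                // the write lock while the first one is still open.
                IndexWriter second = new IndexWriter(dir, new StandardAnalyzer(), false);
                second.close();
                System.out.println("WARNING: second writer opened; write lock not effective?");
            } catch (LockObtainFailedException e) {
                System.out.println("Write lock held as expected: " + e.getMessage());
            } finally {
                first.close();
            }
        }
    }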


Yes, the application uses one thread per index to update it... I've
been checking for possible misuses, but I am pretty sure that there is
no other writer hanging around.

When the exception occurs, is it possible to get a listing of all
files in the directory?


I've just added that to the source code!
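
Roughly, the added logging might look something like this (a sketch
only, not the actual module code; the index path is a placeholder and
the FileNotFoundException catch is an assumption about where the
failure shows up):

    import java.io.File;
    import java.io.FileNotFoundException;
    import org.apache.lucene.index.IndexReader;

    public class ListIndexOnFailure {
        private static final String INDEX_PATH = "/path/to/index";   // placeholder

        static IndexReader openWithDiagnostics() throws Exception {
            try {
                return IndexReader.open(INDEX_PATH);
            } catch (FileNotFoundException e) {
                // Dump the directory contents at the moment of failure so we
                // can see which segment files actually exist on disk.
                File dir = new File(INDEX_PATH);
                String[] names = dir.list();
                System.err.println("Index files at time of failure (" + INDEX_PATH + "):");
                if (names != null) {
                    for (String name : names) {
                        System.err.println("  " + name + "  "
                                + new File(dir, name).length() + " bytes");
                    }
                }
                throw e;
            }
        }
    }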

Are there any interesting customizations in your usage of Lucene?


I have a small library wrapped around Lucene to make it work with
MMBase (the MMBase Lucene module), but the usage of the Lucene classes
is pretty straightforward: opening, updating, and closing writers;
opening, searching, and closing readers.  This seems to work well most
of the time.  Like I said, the problem surfaces after a day or so of
running on the production server.  I could send some source code if
you think it would help.
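
To illustrate the pattern described above (not the actual MMBase
module code), a minimal open/update/close and open/search/close cycle
might look like this, assuming a 2.x-era API; the path, field name,
and analyzer are placeholders:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;

    public class IndexCycleSketch {
        private static final String INDEX_PATH = "/path/to/index";   // placeholder

        // Open a writer against the existing index, apply the change, close it.
        static void update(Document doc) throws Exception {
            IndexWriter writer =
                new IndexWriter(INDEX_PATH, new StandardAnalyzer(), false);
            try {
                writer.addDocument(doc);
            } finally {
                writer.close();
            }
        }

        // Open a searcher, run the query, close it again.
        static void search(String field, String text) throws Exception {
            IndexSearcher searcher = new IndexSearcher(INDEX_PATH);
            try {
                Hits hits = searcher.search(new TermQuery(new Term(field, text)));
                System.out.println(hits.length() + " hits");
            } finally {
                searcher.close();
            }
        }
    }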

Thanks for your help!

   Hes.
