Kiran Kumar K.G <[EMAIL PROTECTED]> wrote:

> I'm currently having a problem overwriting an old index. Every
> night, the contents of a database I'm using get updated, so the
> lucene indexes are also recreated every night. The technique I'm
> currently using is just to start a new index on top of the old one
> (IndexWriter writer = new IndexWriter(filePath, new
> StandardAnalyzer(), true) ) but sporadically I get an IO exception:
> couldn't delete _2oil.fdt or something to that effect.

     I ran into this as well; I didn't get it on Solaris, just when I
tried running it on a win2k laptop (didn't feel like being stuck at my
desk all the time :-).  
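
     For what it's worth, one blunt workaround is to retry the delete
a few times with a short pause, on the theory that the OS just hasn't
reclaimed a handle the JVM already released.  This is only a sketch of
that idea (the helper and its parameters are my own invention, not
anything in Lucene):

```java
import java.io.File;
import java.io.IOException;

// A sketch, not part of Lucene: retry a delete a few times, on the theory
// that Windows may briefly hold a file handle the JVM has already released.
public class RetryDelete {

    // Try to delete f, retrying up to 'attempts' times with a short pause.
    // Returns true once the file is gone (or if it never existed).
    static boolean deleteWithRetry(File f, int attempts, long pauseMillis) {
        for (int i = 0; i < attempts; i++) {
            if (!f.exists() || f.delete()) {
                return true;
            }
            try {
                Thread.sleep(pauseMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("_2oil", ".fdt");
        System.out.println(deleteWithRetry(f, 5, 100));
    }
}
```

     No guarantees it helps in every case, but it's cheap to try before
resorting to the directory-swapping scheme below.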

> I'm pretty sure nothing is using the index (but then again I am querying it
> using a COM+ wrapper, so who really knows what's going on behind the
> scenes)...

     I remember being reasonably sure that there was nothing holding
onto a file, but having the same experience.  I had some vague
suspicions about some very garbage-collection-like process going on,
where my application had released the file handles, but maybe the OS
hadn't reclaimed them yet.  But I know zip about windows programming,
and I don't even know if windows has anything remotely like garbage
collection...  Hm, I know a few people who know windows, maybe I'll
sound them out and see if they have any clues, just on general
principles.  It'd be nice to at least know why this is happening.

> anyone have any ideas how to avoid this? Worst case scenario I
> would at least like the old index to still be usable (the one from the night
> before). Is there a way to transactionally update an index or something to
> that effect?


     What I ended up doing to resolve this was to build the index in a
separate directory, then move that over to the new directory.  Then I
realized that was kind of ugly, so I built a scheme where I:

     keep it all in subdirectories,

     and build the updated index in a new subdirectory with a
     unique name,

     then write the name of the most-up-to-date directory in a config file,

     then poke the system to cause it to reread the config file and
     reload the index.
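
     In code, that scheme might look roughly like this.  This is a
sketch, not my actual code: the names ("index-<millis>", "current.txt")
are illustrative, and the real index build with IndexWriter would
happen inside the fresh subdirectory before the pointer is flipped:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

// Sketch of the scheme above: build each index in a uniquely named
// subdirectory, then flip a small config file to point at it.
public class IndexSwitcher {

    // Build each nightly index in a subdirectory with a unique name.
    static File newIndexDir(File root) {
        File dir = new File(root, "index-" + System.currentTimeMillis());
        dir.mkdirs();
        return dir;
    }

    // Record the most-up-to-date index directory in a small config file.
    static void publish(File root, File indexDir) throws IOException {
        FileWriter w = new FileWriter(new File(root, "current.txt"));
        w.write(indexDir.getName());
        w.close();
    }

    // Searchers reread this to find the live index, then reload from it.
    static File currentIndexDir(File root) throws IOException {
        BufferedReader r = new BufferedReader(
                new FileReader(new File(root, "current.txt")));
        String name = r.readLine();
        r.close();
        return new File(root, name);
    }

    public static void main(String[] args) throws IOException {
        File root = new File(System.getProperty("java.io.tmpdir"),
                "lucene-indexes-" + System.nanoTime());
        root.mkdirs();
        File fresh = newIndexDir(root);   // build the new index here
        publish(root, fresh);             // then flip the pointer
        System.out.println(currentIndexDir(root).equals(fresh));
    }
}
```

     The nice property is that the old index stays intact and usable
until the new one is completely built, which also answers the
worst-case concern above.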

     Then I realized that if I'm going to this much trouble, I should
probably just go the whole nine yards and update the index
incrementally instead of rebuilding it from scratch - note that I
wouldn't have bothered, except that in our application we have multiple
users both searching documents and editing them.  It really does make
more sense for us to reindex each document right after a user saves.

Steven J. Owens
[EMAIL PROTECTED]
