Hi Wu,

The simplest solution is to synchronize calls to a
ParallelWriter.addDocument() method that calls IndexWriter.addDocument()
for each sub-index.  This will work assuming there are no exceptions and
assuming you never refresh your IndexReader within
ParallelWriter.addDocument().  If an exception occurs while writing one
of the sub-indexes, you need to recover from it.  The best approach I've
found is to delete the unequal trailing sub-documents and optimize all
the sub-indexes to restore equal doc ids.
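Roughly, the idea looks like this.  Note this is an illustrative sketch,
not the actual class: SubWriter and MemWriter are stand-ins I'm using in
place of Lucene's real IndexWriter, and deleteLastDocs()/optimize() just
model "delete the unequal trailing docs, then compact":

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a per-sub-index writer; illustrative, not Lucene's API.
interface SubWriter {
    void addDocument(String doc) throws Exception;
    int docCount();
    void deleteLastDocs(int n);
    void optimize();
}

// In-memory SubWriter, for demonstration only.
class MemWriter implements SubWriter {
    final List<String> docs = new ArrayList<>();
    private final boolean failing;
    MemWriter(boolean failing) { this.failing = failing; }
    public void addDocument(String doc) throws Exception {
        if (failing) throw new Exception("simulated write failure");
        docs.add(doc);
    }
    public int docCount() { return docs.size(); }
    public void deleteLastDocs(int n) {
        for (int i = 0; i < n; i++) docs.remove(docs.size() - 1);
    }
    public void optimize() { /* no-op for the in-memory stand-in */ }
}

class ParallelWriter {
    private final List<SubWriter> subWriters;

    ParallelWriter(List<SubWriter> subWriters) {
        this.subWriters = subWriters;
    }

    // synchronized single-threads all writes, which is what keeps
    // doc ids aligned across the sub-indexes.
    public synchronized void addDocument(List<String> subDocs)
            throws Exception {
        try {
            for (int i = 0; i < subWriters.size(); i++) {
                subWriters.get(i).addDocument(subDocs.get(i));
            }
        } catch (Exception e) {
            recover();  // some sub-indexes got the doc, some didn't
            throw e;
        }
    }

    // Trim every sub-index back to the shortest one, then optimize
    // each so the doc ids are equal again.
    private void recover() {
        int min = Integer.MAX_VALUE;
        for (SubWriter w : subWriters) min = Math.min(min, w.docCount());
        for (SubWriter w : subWriters) {
            w.deleteLastDocs(w.docCount() - min);
            w.optimize();
        }
    }
}
```

The key point is that both the per-sub-index adds and the recovery run
under the same lock, so no reader can be reopened against half-written,
unequal sub-indexes in between.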

This approach has the consequence of single-threading all index
writing.  I'm working on a solution to avoid this, but it may require
deeper integration into the higher level IndexManager mechanism (which
does reader reopening, journaling, recovery, and a lot of other things).

If you can get by with single threading, I have a ParallelWriter class
now that I could contribute.  If not, I'm considering the more general
solution now, but will only be able to contribute it if it can be kept
separate from the much larger IndexManager mechanism (which is more
specific to my app and thus not likely a fit for your app anyway).

Chuck


wu fox wrote on 06/12/2006 02:43 AM:
> Hi Chuck:
>  I am still looking forward to a solution that ensures the
> constraints of ParallelReader are met, so that I can use it for my
> search program. I have tried a lot of methods, but none of them is
> good enough for me because of obvious bugs. Can you help me? Thanks
> in advance.
>

