On Tue, Apr 14, 2009 at 9:25 PM, Jeremy Volkman <[email protected]> wrote:

> Implementing this way allows me to write RAM indexes out to disk without
> blocking readers, and only block readers when I need to remap any filtered
> docs that may have been updated or deleted during the flushing process. I
> think this may beat using a straight IW for my requirements, but I'm not
> positive yet.

I think testing out-of-the-box NRT's performance should be your next
step: if it's sufficient, why take on all the complexity of tracking
these RAM indices?
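For reference, out-of-the-box NRT in 2.9 is just IndexWriter.getReader()
plus a cheap reopen between batches of updates. A minimal sketch (the
RAMDirectory and analyzer choices here are placeholders, not a
recommendation):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class NrtSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new StandardAnalyzer(Version.LUCENE_29),
        IndexWriter.MaxFieldLength.UNLIMITED);

    Document doc = new Document();
    doc.add(new Field("id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
    writer.addDocument(doc);

    // Near-real-time reader: sees buffered docs without a full commit.
    IndexReader reader = writer.getReader();

    // After more updates, reopen instead of opening from scratch;
    // reopen() returns the same instance if nothing changed.
    writer.addDocument(doc);
    IndexReader newReader = reader.reopen();
    if (newReader != reader) {
      reader.close();
      reader = newReader;
    }

    reader.close();
    writer.close();
  }
}
```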

> So I've currently got a SuppressedIndexReader extends FilterIndexReader, but
> due to 1483 and 1573 I had to implement IndexReader.getFieldCacheKey() to
> get any sort of decent search performance, which I'd rather not do since I'm
> aware it's only temporary.

It's temporary because it's needed for the current field cache API,
which we hope to replace with LUCENE-831.  Still, it will likely be
shipped w/ 2.9 and then removed in 3.0.

LUCENE-1313 aims to support the RAM buffering "for real", for cases
where performance of the current NRT is in fact limiting, but we still
have some iterating to do on that one.

> Is it possible to perform a bunch of adds and deletes from an IW in an
> atomic action? Should I use addIndexesNoOptimize?

IW doesn't support this, so you'll have to synchronize externally to
achieve it.  Earlier patches on LUCENE-1313 did have a Transaction
class for an atomic set of updates.
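Concretely, external synchronization amounts to one lock held across
the whole batch, with readers reopened only between batches. A rough
sketch; the AtomicUpdater class and the list standing in for the index
are illustrative, not Lucene API -- in a real app the batch body would
call IndexWriter.addDocument()/deleteDocuments():

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative wrapper: the list stands in for the index; a real
// version would delegate to an IndexWriter.
public class AtomicUpdater {
  private final ReentrantLock writeLock = new ReentrantLock();
  private final List<String> index = new ArrayList<String>();

  /** Applies adds and deletes as one unit; concurrent batches serialize. */
  public void applyBatch(List<String> adds, List<String> deletes) {
    writeLock.lock();
    try {
      index.removeAll(deletes);
      index.addAll(adds);
    } finally {
      writeLock.unlock();
    }
  }

  /** Point-in-time copy, analogous to reopening a reader between batches. */
  public List<String> snapshot() {
    writeLock.lock();
    try {
      return new ArrayList<String>(index);
    } finally {
      writeLock.unlock();
    }
  }
}
```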

> If I go the filtered searcher direction, my filter will have to be aware of
> the portion of the MultiReader that corresponds to the disk index. Can I
> assume that my disk index will populate the lower portion of doc id space if
> it comes first in the list passed to the MultiReader constructor? The code
> says yes but the docs don't say anything.

This is true today, but is an implementation detail that's free to
change from release to release.
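To make the implementation detail concrete: today MultiReader gives
each sub-reader a contiguous doc-id block, offset by the cumulative
maxDoc() of the readers before it. A small pure-Java sketch of that
base computation (the maxDocs array stands in for the sub-readers;
DocIdBases is a made-up name, not a Lucene class):

```java
public class DocIdBases {
  /** Mirrors how MultiReader computes per-sub-reader doc-id offsets. */
  public static int[] computeBases(int[] maxDocs) {
    int[] bases = new int[maxDocs.length];
    int base = 0;
    for (int i = 0; i < maxDocs.length; i++) {
      bases[i] = base;     // first doc id owned by sub-reader i
      base += maxDocs[i];  // next sub-reader starts after this one's range
    }
    return bases;
  }

  public static void main(String[] args) {
    // Disk index first (1000 docs), then two RAM indexes (10 and 5 docs):
    int[] bases = computeBases(new int[] {1000, 10, 5});
    // Disk index owns ids 0..999, so a filter could treat [0, 1000)
    // as "on disk" -- but only while this ordering behavior holds.
    System.out.println(java.util.Arrays.toString(bases));  // [0, 1000, 1010]
  }
}
```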

Also, I'd worry about search performance of the filtered searcher
approach, if that's an issue in your app.

Mike
