That's an interesting idea.

I always wonder, though, how much exactly we would gain vs. the effort spent
to develop, debug, and maintain it. Some thoughts we should consider
regarding this:

* For very large indices, which is where we think this will generally help,
I believe it's reasonable to assume that the search index will sit on its own
machine, or at least its own set of CPUs, RAM and disk. Given that very
little other than the search index will run on that OS, I assume the OS cache
will be enough (if not better)?

* In other cases, where the search app runs together w/ other apps, I'm not
sure how much we'll gain. I'd assume such apps use a smaller index, or don't
need to support a high query load? If so, will they really care whether we
cache their data or the OS does?

Like I said, these are just thoughts. I don't mean to shoot the idea down w/
them, just to think about how much it will improve performance (vs. maybe
even hurt it?). I often find that optimizations aimed at very large indices
don't end up mattering much, because such indices usually get their decent
share of resources anyway, and the JVM itself runs w/ a larger heap etc. And
for smaller indices, performance is usually not a problem (well ... they
might just fit entirely in RAM).
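Just so we're all picturing the same thing: the block cache you describe could
look roughly like the sketch below. This is only an illustration of the idea,
not anything in Lucene -- BlockCache and the filename + block-index key scheme
are made-up names; the real FSIndexInput subclass would consult something like
this before hitting disk.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache of fixed-size byte[] blocks, keyed on filename plus
// block index. Illustrative sketch only; not an actual Lucene class.
public class BlockCache {
    static final int BLOCK_SIZE = 1024; // BufferedIndexInput reads 1k chunks

    private final int maxBlocks;
    private final Map<String, byte[]> blocks;

    public BlockCache(int maxBlocks) {
        this.maxBlocks = maxBlocks;
        // accessOrder=true makes LinkedHashMap iterate in LRU order;
        // removeEldestEntry evicts the least-recently-used block on overflow.
        this.blocks = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > BlockCache.this.maxBlocks;
            }
        };
    }

    // Two positions inside the same 1k block map to the same key.
    private static String key(String filename, long position) {
        return filename + "@" + (position / BLOCK_SIZE);
    }

    // Returns the cached block covering 'position', or null on a miss
    // (in which case the IndexInput would read from disk and call put).
    public synchronized byte[] get(String filename, long position) {
        return blocks.get(key(filename, position));
    }

    public synchronized void put(String filename, long position, byte[] block) {
        blocks.put(key(filename, position), block);
    }

    public static void main(String[] args) {
        BlockCache cache = new BlockCache(2);
        cache.put("_0.frq", 0, new byte[BLOCK_SIZE]);
        cache.put("_0.prx", 0, new byte[BLOCK_SIZE]);
        System.out.println(cache.get("_0.frq", 512) != null); // hit: same block
        cache.put("_0.tis", 0, new byte[BLOCK_SIZE]); // evicts LRU (_0.prx)
        System.out.println(cache.get("_0.prx", 0) == null);   // evicted
    }
}
```

Even from this toy version you can see where my cost/benefit question comes
from: the synchronization, eviction policy, and heap pressure are all things
the OS page cache gives us for free.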

Shai

On Wed, Jul 22, 2009 at 6:21 PM, Nigel <nigelspl...@gmail.com> wrote:

> In discussions of Lucene search performance, the importance of OS caching
> of index data is frequently mentioned.  The typical recommendation is to
> keep plenty of unallocated RAM available (e.g. don't gobble it all up with
> your JVM heap) and try to avoid large I/O operations that would purge the OS
> cache.
>
> I'm curious if anyone has thought about (or even tried) caching the
> low-level index data in Java, rather than in the OS.  For example, at the
> IndexInput level there could be an LRU cache of byte[] blocks, similar to
> how a RDBMS caches index pages.  (Conveniently, BufferedIndexInput already
> reads in 1k chunks.) You would reverse the advice above and instead make
> your JVM heap as large as possible (or at least large enough to achieve a
> desired speed/space tradeoff).
>
> This approach seems like it would have some advantages:
>
> - Explicit control over how much you want cached (adjust your JVM heap and
> cache settings as desired)
> - Cached index data won't be purged by the OS doing other things
> - Index warming might be faster, or at least more predictable
>
> The obvious disadvantage for some situations is that more RAM would now be
> tied up by the JVM, rather than managed dynamically by the OS.
>
> Any thoughts?  It seems like this would be pretty easy to implement
> (subclass FSDirectory, return subclass of FSIndexInput that checks the cache
> before reading, cache keyed on filename + position), but maybe I'm
> oversimplifying, and for that matter a similar implementation may already
> exist somewhere for all I know.
>
> Thanks,
> Chris
>
