Greg Stark <[EMAIL PROTECTED]> writes:
> So I would suggest using something like 100us as the threshold for
> determining whether a buffer fetch came from cache.
I see no reason to hardwire such a number. On any hardware, the distribution is going to be double-humped, and it will be pretty easy to determine a cutoff after minimal accumulation of data. The real question is whether we can afford a pair of gettimeofday() calls per read(). This isn't a big issue if the read actually results in I/O, but if it doesn't, the percentage overhead could be significant.

If we assume that the effective_cache_size value isn't changing very fast, maybe it would be good enough to instrument only every N'th read (I'm imagining N on the order of 100) for this purpose. Or maybe we need only instrument reads of blocks that are close to where the ARC algorithm thinks the cache edge is.

One small problem is that the time measurement gives you only a lower bound on the time the read() actually took. In a heavily loaded system you might not get the CPU back promptly after the read completes, which could fool you about whether the block came from cache or not.

Another issue is what we do with the effective_cache_size value once we have a number we trust. We can't readily change the size of the ARC lists on the fly.

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org