On Mon, Oct 19, 2009 at 2:54 PM, Kevin Grittner
<kevin.gritt...@wicourts.gov> wrote:
> How about calculating an effective percentage based on other
> information. effective_cache_size, along with relation and database
> size, come to mind.
I think previous proposals for this have fallen down when you actually
try to work out a formula. The problem is that you could have a table
which is much smaller than effective_cache_size but is never in cache
because it is one of many such tables competing for the same space.

I think it would still be good to have some naive kind of heuristic
here, as long as it's fairly predictable for DBAs. But the long-term
strategy, I think, is to actually have some way to measure the real
cache hit rate on a per-table basis. Whether it's by timing I/O
operations, programmatic access to DTrace, or some other kind of OS
interface, if we could know the real cache hit rate it would be very
helpful.

Perhaps we could extrapolate from the shared buffer cache hit
percentage. If a relation shows a moderately high hit rate in shared
buffers, it seems reasonable to assume the filesystem cache would show
a similar distribution.

-- 
greg
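To make the two ideas concrete, here is a rough C sketch. It is purely
illustrative: none of these function or parameter names exist in the
PostgreSQL source, and the inputs (cache size, database size, and hit
counters such as those exposed by pg_statio_all_tables) are assumed to
be available in page/block units.

/*
 * Illustrative sketch only; all names here are invented.
 */
#include <math.h>

/*
 * Naive heuristic: assume the OS cache is spread across the database
 * in proportion to size, so a relation's cached fraction is just the
 * cache-to-database size ratio, capped at 1.0. Predictable for DBAs,
 * but wrong in exactly the way described above: a small table among
 * many competitors can still be entirely uncached.
 */
static double
naive_cached_fraction(double effective_cache_pages, double total_db_pages)
{
	if (total_db_pages <= 0.0)
		return 0.0;
	return fmin(effective_cache_pages / total_db_pages, 1.0);
}

/*
 * Extrapolation idea: take the relation's observed hit rate in shared
 * buffers (e.g. heap_blks_hit vs. heap_blks_read from
 * pg_statio_all_tables) as a proxy for its filesystem-cache hit rate.
 */
static double
extrapolated_cached_fraction(double shared_blks_hit, double shared_blks_read)
{
	double		total = shared_blks_hit + shared_blks_read;

	return (total > 0.0) ? shared_blks_hit / total : 0.0;
}

/*
 * Either estimate could then discount random_page_cost toward
 * seq_page_cost for the portion of a scan expected to be cached.
 */
static double
effective_random_page_cost(double random_page_cost, double seq_page_cost,
						   double cached_fraction)
{
	return cached_fraction * seq_page_cost +
		(1.0 - cached_fraction) * random_page_cost;
}

The last function is where either estimate would actually feed the
planner: a fully cached relation pays roughly seq_page_cost per random
fetch, a fully uncached one pays the configured random_page_cost.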