Decibel! wrote:
> we can just look at the hit rate for the object. But we'd also need stats for how often we find pages for a relation in the OS cache, which no one has come up with a good method for.
Makes me wonder if we could also save the average time it took for a block to become ready in pg_statio_all_tables (optionally, I guess, since timing is apparently slow on some systems). Or, if possible, save the averages for random and sequential pages separately. Then, rather than using guessed values from the config files, the planner could use the actual averages per table. That would address both poor guesses for random_page_cost, effective_cache_size, etc., and get things right on systems where some tablespaces are fast and some are slow.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers