Hi,

I find it a bit surprising that there are almost no results demonstrating the impact of the proposed changes on typical workloads. The patch touches code (syscache, ...) that is quite sensitive performance-wise, and adding even a little bit of overhead may hurt significantly, even on systems that don't have issues with cache bloat etc.
I think this is something we need: benchmarks measuring the overhead on a bunch of workloads, both typical ones and corner cases. Especially since there was a limit on cache size in the past, and it was removed because it was too expensive / hurting in some cases. I can't imagine committing any such changes without this information.

This is particularly important because the patch was initially about one particular issue (bloat due to negative entries), but the scope has since grown quite a bit. AFAICS the thread now talks about these workloads:

* negative entries (due to search_path lookups etc.)
* many tables accessed randomly
* many tables with only a small subset accessed frequently
* many tables with different subsets accessed by different connections (due to pooling)
* ...

Unfortunately, some of those cases seem somewhat contradictory (i.e. what works for one hurts the other), so I doubt it's possible to improve all of them at once. But that makes the benchmarking even more important.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
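PS: To illustrate what I have in mind for the "many tables accessed randomly" case, something like the following sketch could generate the schema and a pgbench custom script. This is just an illustration of the workload shape, not a tested harness; the table names t0..tN and the t:tid variable-substitution trick are my assumptions, nothing from the patch:

```python
# Generate DDL and a pgbench custom script stressing syscache with
# many tables accessed randomly. Table names t0..tN are hypothetical.

def make_ddl(num_tables):
    """CREATE TABLE statements for num_tables small tables."""
    return "\n".join(
        f"CREATE TABLE t{i} (id int PRIMARY KEY, val text);"
        for i in range(num_tables)
    )

def make_pgbench_script(num_tables):
    """pgbench custom script that picks a random table per transaction,
    so each backend keeps hitting cold catalog cache entries.
    Assumes pgbench substitutes :tid textually, turning t:tid into t42."""
    return (
        f"\\set tid random(0, {num_tables - 1})\n"
        "SELECT val FROM t:tid WHERE id = 1;\n"
    )

if __name__ == "__main__":
    print(make_ddl(10000))            # feed to psql once
    print(make_pgbench_script(10000)) # run via pgbench -f script.sql
```

The same generator could be parameterized to cover the "small hot subset" case, e.g. by drawing the table id from a skewed distribution instead of a uniform one.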