Robert Haas <robertmh...@gmail.com> writes:
> On Tue, Dec 20, 2016 at 10:09 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I don't understand why we'd make that a system-wide behavior at all,
>> rather than expecting each process to manage its own cache.

> Individual backends don't have a really great way to do time-based
> stuff, do they?  I mean, yes, there is enable_timeout() and friends,
> but I think that requires quite a bit of bookkeeping.

If I thought that "every ten minutes" was an ideal way to manage this,
I might worry about that, but it doesn't really sound promising at all.
Pruning every so many queries would likely work better, or, better yet,
make the trigger self-adaptive depending on how much is in the local
syscache.
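
For illustration only, here is a minimal standalone sketch of that kind
of trigger (none of these names are PostgreSQL internals;
PRUNE_CHECK_INTERVAL, maybe_prune_cache, etc. are hypothetical).  The
common path is just a counter increment, and pruning runs only when the
local cache has actually grown past a soft limit:

    #include <stdio.h>

    #define PRUNE_CHECK_INTERVAL 1000       /* re-check every N queries */
    #define PRUNE_SOFT_LIMIT     5000       /* prune only past this many entries */

    static unsigned long queries_since_check = 0;
    static unsigned long cache_entries = 0; /* stand-in for local syscache size */

    static void
    prune_local_cache(unsigned long target)
    {
        /* stand-in for evicting least-recently-used entries down to 'target' */
        cache_entries = target;
    }

    /* called once per query; the common path is just a counter bump */
    static void
    maybe_prune_cache(void)
    {
        if (++queries_since_check < PRUNE_CHECK_INTERVAL)
            return;
        queries_since_check = 0;

        /* self-adaptive: prune only if the cache has actually grown large */
        if (cache_entries > PRUNE_SOFT_LIMIT)
            prune_local_cache(PRUNE_SOFT_LIMIT);
    }

    int
    main(void)
    {
        for (int q = 0; q < 100000; q++)
        {
            cache_entries++;                /* pretend each query adds one entry */
            maybe_prune_cache();
        }
        printf("entries after 100000 queries: %lu\n", cache_entries);
        return 0;
    }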

The bigger picture here, though, is that we used to have limits on
syscache size, and we got rid of them (commit 8b9bc234a, see also
https://www.postgresql.org/message-id/flat/5141.1150327541%40sss.pgh.pa.us)
not only because of the problem you mentioned about performance falling
off a cliff once the working-set size exceeded the arbitrary limit, but
also because enforcing the limit added significant overhead, and did so
whether or not you got any benefit from it, i.e., even if the limit was
never reached.  Maybe the present patch avoids imposing a pile of
overhead in situations where no pruning is needed, but on a quick
once-over it doesn't look very promising from that angle.
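
As a hedged aside on where that per-lookup overhead tends to come from
(again just a generic sketch, not the actual catcache code): a hard
size limit usually needs some recency ordering, and maintaining it
costs a little work on every cache hit even if eviction never happens:

    #include <stddef.h>

    typedef struct cache_entry
    {
        struct cache_entry *prev;
        struct cache_entry *next;
        /* key and payload would live here */
    } cache_entry;

    static cache_entry *lru_head;           /* most recently used entry */

    /* paid on every lookup once a limit exists, even if it is never hit */
    static void
    touch_entry(cache_entry *e)
    {
        if (e == lru_head)
            return;
        if (e->prev)
            e->prev->next = e->next;        /* unlink ... */
        if (e->next)
            e->next->prev = e->prev;
        e->prev = NULL;                     /* ... and relink at the head */
        e->next = lru_head;
        if (lru_head)
            lru_head->prev = e;
        lru_head = e;
    }

    int
    main(void)
    {
        cache_entry a = {NULL, NULL}, b = {NULL, NULL};

        lru_head = &a;
        a.next = &b;
        b.prev = &a;                        /* list: a (head) -> b */
        touch_entry(&b);                    /* list: b (head) -> a */
        return lru_head == &b ? 0 : 1;
    }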

BTW, I don't see the point of the second patch at all?  Surely, if
an object is deleted or updated, we already have code that flushes
related catcache entries.  Otherwise the caches would deliver wrong
data.
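
For what it's worth, a minimal standalone sketch of that invalidation
idea (hypothetical names, not the real catcache code): the update or
delete path drops any cached entry for the changed object, so later
lookups rebuild it from the catalogs.  In PostgreSQL itself this is
driven by shared-cache-invalidation messages rather than direct calls,
but the effect is the same:

    #include <stdlib.h>

    #define NBUCKETS 64

    typedef struct entry
    {
        struct entry *next;
        unsigned int  key;                  /* e.g. the object's OID */
        int           payload;
    } entry;

    static entry *buckets[NBUCKETS];

    /* called from the update/delete path to drop any stale cached copy */
    static void
    invalidate_entry(unsigned int key)
    {
        entry **slot = &buckets[key % NBUCKETS];

        while (*slot)
        {
            if ((*slot)->key == key)
            {
                entry *dead = *slot;

                *slot = dead->next;         /* unlink the stale entry ... */
                free(dead);                 /* ... and discard it */
            }
            else
                slot = &(*slot)->next;
        }
    }

    int
    main(void)
    {
        entry *e = malloc(sizeof(entry));

        e->next = NULL;
        e->key = 12345;
        e->payload = 42;
        buckets[e->key % NBUCKETS] = e;     /* cache it ... */
        invalidate_entry(12345);            /* ... then the object is updated */
        return buckets[12345 % NBUCKETS] == NULL ? 0 : 1;
    }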

                        regards, tom lane
