From: "Jeff Janes" <jeff.ja...@gmail.com>
On Tue, Jun 18, 2013 at 3:40 PM, MauMau <maumau...@gmail.com> wrote:
Really?  Would the catcache be polluted with entries for nonexistent
tables?  I'm surprised by this.  I don't think it is necessary to speed up
queries that fail because of nonexistent tables, since such queries should
be eliminated during application development.


I was thinking the same thing: optimizing for failure is nice if there are
no tradeoffs, but not so nice if it leaks memory.  But apparently the
negative cache was added for real reasons, not just in theory.  See the
discussion from when it was added:

http://www.postgresql.org/message-id/19585.1012350...@sss.pgh.pa.us

Thanks for the info.  I think I now understand why negative catcache entries are necessary.
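
If I understand the linked discussion correctly (and I may well be wrong), one case is unqualified name lookup with a multi-schema search_path: a reference is probed against the earlier schemas and fails there, and negative entries keep those repeated failed lookups cheap.  Something like this, where "app", "public" and "orders" are just made-up names for illustration:

    -- Illustration only; "app", "public" and "orders" are made-up names.
    SET search_path = app, public;
    -- "orders" is looked up in schema "app" first and not found there,
    -- then found in "public"; a negative catcache entry remembers the
    -- miss in "app" so later statements don't repeat the failed lookup.
    SELECT count(*) FROM orders;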


Hmm.  I could reproduce this, and it seems that the catcache for
pg_statistic accumulates negative cache entries.  Those slowly eat up
memory.

It seems we should somehow flush those when the table is dropped.  Not
sure how, but I'll take a look.

As Heikki-san said above, there should be something wrong somewhere, shouldn't there?  In my testing, simply repeating CREATE (TEMPORARY) TABLE, a SELECT against it, and DROP TABLE led to more than 400 MB in CacheMemoryContext, at which point I stopped the test.  It seems that the catcache grows without bound just by repeating simple transactions.
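
For the record, one iteration of my test is essentially the following (the table definition here is only an example); I looped it over a single connection and watched the backend's memory keep growing:

    BEGIN;
    CREATE TEMPORARY TABLE t1 (id integer, val text);
    -- Planning this SELECT probes pg_statistic for the brand-new table's
    -- columns, finds no rows there, and (as Heikki observed) leaves
    -- negative catcache entries behind.
    SELECT * FROM t1;
    DROP TABLE t1;
    COMMIT;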

I would like to know the conditions under which this happens so that I can work around the problem in my application.  Any help would be much appreciated.
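
One workaround I am considering (untested, and assuming the growth really comes from the new OIDs of freshly created tables) is to create the temporary table once per session and let ON COMMIT DELETE ROWS empty it, instead of creating and dropping a table in every transaction.  Roughly:

    -- Untested idea: one temporary table per session, emptied automatically
    -- at commit, so each transaction creates no new table OIDs.
    CREATE TEMPORARY TABLE t1 (id integer, val text) ON COMMIT DELETE ROWS;

    BEGIN;
    INSERT INTO t1 VALUES (1, 'example');
    SELECT * FROM t1;
    COMMIT;  -- rows are removed automatically; no DROP TABLE needed

Would that avoid the catcache growth, or does it not help?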

Regards
MauMau


