From: "Heikki Linnakangas" <hlinnakan...@vmware.com>
On 18.06.2013 15:48, Heikki Linnakangas wrote:
Hmm. I can reproduce this, and it seems that the catcache for
pg_statistic accumulates negative cache entries. Those slowly take up
memory.

Digging a bit deeper, this is a rather common problem with negative catcache entries. In general, nothing stops you from polluting the cache with as many negative cache entries as you like. Just do "select * from table_that_doesnt_exist" for as many non-existent table names as you want, for example. Those entries are useful at least in theory; they speed up throwing the error the next time you try to query the same non-existent table.

Really? Is the catcache polluted with entries for nonexistent tables? That surprises me. I don't think it is necessary to speed up queries that fail because of nonexistent tables, since such queries should be eliminated during application development.
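If that is really the case, then I suppose a session can bloat its own catcaches just by looking up names that do not exist. Something like the following is what I have in mind (only an untested sketch; the table names and the loop count are made up):

DO $$
BEGIN
    FOR i IN 1..10000 LOOP
        BEGIN
            -- each distinct nonexistent name should leave a negative
            -- entry behind in the relation-name catcache
            EXECUTE format('SELECT * FROM nonexistent_table_%s', i);
        EXCEPTION
            WHEN undefined_table THEN
                NULL;  -- ignore the expected "relation does not exist" error
        END;
    END LOOP;
END;
$$;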


But there is a crucial difference in this case: the system created a negative cache entry for the pg_statistic row of the table, but once the relation is dropped, that cache entry, keyed on the relation's OID, is totally useless. It should be removed.

We have this problem with a few other catcaches too, which have what is effectively a foreign key relationship with another catalog. For example, the RELNAMENSP catcache is keyed on (pg_class.relname, pg_class.relnamespace), yet its negative entries are not cleaned up when the schema is dropped. If you execute this repeatedly in a session:

CREATE SCHEMA foo;
SELECT * FROM foo.invalid; -- throws an error
DROP SCHEMA foo;

it will leak similarly to the original test case, but this time the leak is into the RELNAMENSP catcache.

To fix that, I think we'll need to teach the catalog cache about the relationships between the caches.

Thanks for your concise explanation. Do you think it is difficult to fix that bug? It sounds difficult to me... though I don't know the design of the catcaches yet.

Could you tell me the conditions under which this bug occurs and how to avoid it? I came up with the following:

[Conditions]
1. Create and drop the same table repeatedly in the same session. Whether the table is a temporary table is irrelevant (a repro sketch follows after this list).
2. Run SELECT against the table; INSERT/DELETE/UPDATE do not cause the catcache leak.
3. Whether the processing happens in a PL/pgSQL function is irrelevant. The leak occurs even when you do not use PL/pgSQL.
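For example, I imagine that calling a function like the following repeatedly in one session would make the backend's memory usage grow slowly (only an untested sketch; the function and table names are made up, and per condition 3 the same thing should happen without PL/pgSQL):

CREATE OR REPLACE FUNCTION leak_once() RETURNS void AS $$
BEGIN
    -- create, query and drop the same table; the SELECT with a WHERE clause
    -- probes pg_statistic for the new relation's OID and should leave a
    -- negative catcache entry that the subsequent DROP does not remove
    CREATE TEMPORARY TABLE leak_test (id int);
    PERFORM * FROM leak_test WHERE id = 1;
    DROP TABLE leak_test;
END;
$$ LANGUAGE plpgsql;

-- run "SELECT leak_once();" many times in one session and watch the
-- backend process's memory usage (e.g. with top) creep upward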


[Workaround]
Use CREATE TABLE IF NOT EXISTS and TRUNCATE (or ON COMMIT DELETE ROWS in the case of temporary tables) to avoid repeated creation and deletion of the same table.
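Concretely, something like this is what I have in mind (again only an untested sketch with made-up names):

CREATE OR REPLACE FUNCTION no_leak_once() RETURNS void AS $$
BEGIN
    -- reuse one table instead of dropping and recreating it, so no new
    -- relation OID (and no new negative pg_statistic entry) is produced
    CREATE TEMPORARY TABLE IF NOT EXISTS leak_test (id int);
    PERFORM * FROM leak_test WHERE id = 1;
    TRUNCATE leak_test;
END;
$$ LANGUAGE plpgsql;

-- calling this repeatedly in one session should keep memory usage flat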

Regards
MauMau





