From: "Tom Lane" <t...@sss.pgh.pa.us>
> That's correct.  We used to have a limit on the size of catcache
> (if memory serves, it was something like 5000 entries).  We got rid of
> it after observing that performance fell off a cliff as soon as you had
> a working set larger than the cache limit.  Trust me, if we had a limit,
> you'd still be here complaining, the complaint would just take a
> different form ;-)

Yes, I can imagine. Now I see that caching catalog entries in backend-local memory without bound is a deliberate design choice PostgreSQL makes for performance, and 64-bit computing makes that approach practical. Oracle avoids duplicating catalog entries by keeping them in shared memory, but that necessitates some kind of locking whenever the shared entries are accessed. PostgreSQL's approach, which requires no locking, is better suited to many-core environments.
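To make the contrast concrete, here is a minimal sketch of the two designs. This is not PostgreSQL or Oracle source; all names (CatEntry, local_lookup, shared_lookup, CACHE_SLOTS) are invented for illustration, and a pthread mutex stands in for real shared-memory locking:

    /* Sketch only: per-process catalog cache vs. shared, locked cache. */
    #include <pthread.h>
    #include <stdio.h>

    #define CACHE_SLOTS 128

    typedef struct CatEntry
    {
        unsigned int relid;       /* key: table OID */
        char         relname[64]; /* cached catalog data */
        int          valid;
    } CatEntry;

    /* Per-backend cache: private to the process, so lookups need no lock. */
    static CatEntry local_cache[CACHE_SLOTS];

    /* Shared cache: one copy for everyone, guarded by a lock on every access. */
    static CatEntry shared_cache[CACHE_SLOTS];
    static pthread_mutex_t shared_cache_lock = PTHREAD_MUTEX_INITIALIZER;

    /* PostgreSQL-style lookup: index into private memory, no synchronization. */
    static CatEntry *
    local_lookup(unsigned int relid)
    {
        CatEntry *e = &local_cache[relid % CACHE_SLOTS];

        return (e->valid && e->relid == relid) ? e : NULL;
    }

    /*
     * Oracle-style lookup: the shared copy must be locked for every access.
     * A real implementation would pin or copy the entry before releasing
     * the lock; that is omitted here for brevity.
     */
    static CatEntry *
    shared_lookup(unsigned int relid)
    {
        CatEntry *e;

        pthread_mutex_lock(&shared_cache_lock);
        e = &shared_cache[relid % CACHE_SLOTS];
        if (!(e->valid && e->relid == relid))
            e = NULL;
        pthread_mutex_unlock(&shared_cache_lock);

        return e;
    }

    int
    main(void)
    {
        /* Seed both caches with one entry for a table with OID 42. */
        CatEntry  seed = {42, "orders", 1};
        CatEntry *l, *s;

        local_cache[42 % CACHE_SLOTS] = seed;
        shared_cache[42 % CACHE_SLOTS] = seed;

        l = local_lookup(42);
        s = shared_lookup(42);

        printf("local:  %s\n", l ? l->relname : "(miss)");
        printf("shared: %s\n", s ? s->relname : "(miss)");
        return 0;
    }

The trade-off is visible in the two lookup functions: the per-process cache pays nothing for synchronization but duplicates entries in every backend, which is exactly why memory grows without bound when 100000 tables are touched, while the shared cache keeps one copy but serializes every lookup on the lock.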

> I concur with Merlin's advice to rethink your schema.  100000 tables is
> far beyond what any sane design could require, and is costing you on
> many levels (I'm sure the OS and filesystem aren't that happy with it
> either).

I agree. I'll suggest that to the customer, too. Thank you very much.

Regards
MauMau



