Robert Haas <robertmh...@gmail.com> writes:
> ... It seems that we used to have
> some kind of LRU algorithm to prevent excessive memory usage, but we
> ripped it out because it was too expensive (see commit
> 8b9bc234ad43dfa788bde40ebf12e94f16556b7f).

Not only was it too expensive, but performance fell off a cliff as soon
as you had a catalog working set large enough to make the code actually
do something.  I'm not in favor of putting anything like that back in ---
people who have huge catalogs will just start complaining about
something different, i.e., why did their apps get so much slower.
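For illustration only (a toy sketch, not the code that commit removed):
any LRU scheme of this general shape pays for move-to-front bookkeeping
on every hit, and once the working set is even one entry larger than the
limit, every lookup turns into an eviction plus a fresh catalog load.

    /*
     * Toy LRU cache: hypothetical example, not PostgreSQL code.
     * CACHE_LIMIT and the key space are made up for the demonstration.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_LIMIT 4            /* deliberately tiny */

    typedef struct Entry
    {
        int           key;
        struct Entry *prev;
        struct Entry *next;
    } Entry;

    static Entry *head = NULL;       /* most recently used */
    static Entry *tail = NULL;       /* least recently used */
    static int    nentries = 0;
    static int    loads = 0;         /* simulated catalog reloads */

    static void
    unlink_entry(Entry *e)
    {
        if (e->prev) e->prev->next = e->next; else head = e->next;
        if (e->next) e->next->prev = e->prev; else tail = e->prev;
    }

    static void
    push_front(Entry *e)
    {
        e->prev = NULL;
        e->next = head;
        if (head) head->prev = e; else tail = e;
        head = e;
    }

    static void
    lookup(int key)
    {
        for (Entry *e = head; e; e = e->next)
        {
            if (e->key == key)
            {
                /* hit: we still pay for the move-to-front on every access */
                unlink_entry(e);
                push_front(e);
                return;
            }
        }

        /* miss: evict the LRU entry if at the limit, then "reload" */
        if (nentries >= CACHE_LIMIT)
        {
            Entry *victim = tail;

            unlink_entry(victim);
            free(victim);
            nentries--;
        }

        Entry *e = malloc(sizeof(Entry));

        e->key = key;
        push_front(e);
        nentries++;
        loads++;
    }

    int
    main(void)
    {
        /* working set of 5 keys against a limit of 4: every access misses */
        for (int round = 0; round < 1000; round++)
            for (int key = 0; key < 5; key++)
                lookup(key);

        printf("catalog loads: %d\n", loads);   /* 5000, not 5 */
        return 0;
    }

Raise CACHE_LIMIT to 5 or more and the same loop does only 5 loads;
one entry past the limit and it degenerates to a reload per lookup,
which is the cliff being described.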

The short answer here is "if you want a database with 100000 tables,
you'd better be running it on more than desktop-sized hardware".

                        regards, tom lane
