On Thu, Sep 29, 2011 at 10:22 AM, Robert Haas <[email protected]> wrote:
> I can't really explain why people seem to keep wanting to create
> hundreds of thousands or even millions of tables, but it's not like
> MauMau's customer is the first one to try to do this, and I'm sure
> they won't be the last. I don't want to de-optimize the more common
> (and sensible) cases too much, but "slow" still trumps "fails
> outright".
Yeah -- maybe baby steps in the right direction would be to track cache
memory usage and add instrumentation so the user could get a readout on
usage -- this would also help us diagnose memory issues in the field.

Also, thinking about it more, a DISCARD-based cache flush (DISCARD
CACHES TO xyz) wrapping a monolithic LRU sweep could help users deal
with these cases without having to figure out an implementation that
pleases everyone.
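To make that concrete, here's a rough standalone C sketch of what I
have in mind -- this is not backend code, and every name in it
(CacheEntry, cache_mem_used, discard_caches_to, and so on) is
hypothetical: a running memory counter maintained on insert/evict (the
instrumentation part), plus a sweep that evicts from the cold end of an
LRU list until usage drops to a caller-supplied target (the part a
DISCARD CACHES TO command would drive):

/*
 * Hypothetical sketch only -- not PostgreSQL code.  Illustrates
 * tracked cache memory plus a monolithic LRU sweep to a target size.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct CacheEntry
{
    struct CacheEntry *prev;    /* toward most-recently-used end */
    struct CacheEntry *next;    /* toward least-recently-used end */
    size_t      size;           /* bytes charged against the cache */
    char        key[64];        /* stand-in for a real cache key */
} CacheEntry;

/* LRU list: head = most recently used, tail = least recently used */
static CacheEntry *lru_head = NULL;
static CacheEntry *lru_tail = NULL;

/* Running total; this is what a usage readout would report. */
static size_t cache_mem_used = 0;

static void
cache_insert(const char *key, size_t size)
{
    CacheEntry *e = malloc(sizeof(CacheEntry));

    strncpy(e->key, key, sizeof(e->key) - 1);
    e->key[sizeof(e->key) - 1] = '\0';
    e->size = size;
    e->prev = NULL;
    e->next = lru_head;
    if (lru_head)
        lru_head->prev = e;
    lru_head = e;
    if (!lru_tail)
        lru_tail = e;
    cache_mem_used += size;     /* keep the accounting current */
}

/*
 * Monolithic LRU sweep: evict least-recently-used entries until
 * total usage is at or below the caller's target.
 */
static void
discard_caches_to(size_t target_bytes)
{
    while (cache_mem_used > target_bytes && lru_tail)
    {
        CacheEntry *victim = lru_tail;

        lru_tail = victim->prev;
        if (lru_tail)
            lru_tail->next = NULL;
        else
            lru_head = NULL;
        cache_mem_used -= victim->size;
        free(victim);
    }
}

int
main(void)
{
    char key[16];

    for (int i = 0; i < 1000; i++)
    {
        snprintf(key, sizeof(key), "rel%d", i);
        cache_insert(key, 4096);        /* pretend each entry costs 4 kB */
    }
    printf("before sweep: %zu bytes\n", cache_mem_used);
    discard_caches_to(1024 * 1024);     /* sweep down to ~1 MB */
    printf("after sweep:  %zu bytes\n", cache_mem_used);
    return 0;
}

The point of hanging it off DISCARD is that the user supplies the
target, so we sidestep arguing over an automatic eviction policy.

merlin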
