On Mon, Nov 12, 2012 at 5:17 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 12 November 2012 16:51, Robert Haas <robertmh...@gmail.com> wrote:
>
>> Although there may be some workloads that access very large numbers of
>> tables repeatedly, I bet that's not typical.
>
> Transactions with large numbers of DDL statements are typical at
> upgrade (application or database release level) and the execution time
> of those is critical to availability.
>
> I'm guessing you mean large numbers of tables and accessing each one
> multiple times?
Yes, that is what I meant.

>> Rather, I bet that a
>> session which accesses 10,000 tables is most likely to access them
>> just once each - and right now we don't handle that case very well;
>> this is not the first complaint about big relcaches causing problems.
>
> pg_restore frequently accesses tables more than once as it runs, but
> not more than a dozen times each, counting all types of DDL.

Hmm... yeah.  Some of those accesses are probably one right after
another, so any cache-flushing behavior would be fine; but index
creations, for example, might happen quite a bit later in the file,
IIRC.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
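[For context when reading the archive: a minimal sketch of the workload
Robert describes above, a single session that touches a large number of
tables just once each, so the backend's relcache keeps one entry per table
for the life of the session. The table names and the count of 10,000 are
hypothetical and purely illustrative, not taken from the thread.]

-- Hypothetical sketch: create and touch many tables once each in one
-- session; each access populates a relcache entry that is never reused.
DO $$
DECLARE
    i int;
BEGIN
    FOR i IN 1..10000 LOOP
        EXECUTE format('CREATE TABLE IF NOT EXISTS t_%s (id int)', i);
        -- one access per table, result discarded
        EXECUTE format('SELECT count(*) FROM t_%s', i);
    END LOOP;
END $$;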