"Kevin Grittner" <kevin.gritt...@wicourts.gov> writes:
> I'm afraid pg_dump didn't get very far with this before:
 
> pg_dump: WARNING:  out of shared memory
> pg_dump: SQL command failed
 
> Given how fast it happened, I suspect it failed 2672 tables into
> the dump rather than 26% of the way through 5.5 million tables.

Yeah, I didn't think about that.  You'd have to bump
max_locks_per_transaction up awfully far to get to where pg_dump
could dump millions of tables, because it wants to lock each one.
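
For context, pg_dump takes an AccessShareLock on every table it dumps,
and the shared lock table only has room for roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions)
entries.  A back-of-the-envelope sketch (the numbers here are purely
illustrative, not a recommendation):

    # postgresql.conf -- illustrative values only
    # lock table capacity ~= max_locks_per_transaction *
    #                        (max_connections + max_prepared_transactions)
    # to hold 5.5 million table locks with max_connections = 100:
    #   5500000 / 100 = 55000
    max_locks_per_transaction = 55000    # default is 64

Nobody is likely to run a production server with a setting anywhere
near that.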

It might be better to try a test case with lighter-weight objects,
say 5 million simple functions.
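
If anyone wants to set that up, here's a minimal sketch for generating
the test objects (assuming a server new enough to have DO blocks; the
f<n> function names are made up for the test):

    -- Create 5 million trivial functions.  pg_dump doesn't take
    -- heavyweight locks on functions the way it does on tables,
    -- so this exercises catalog size without hitting the lock table.
    DO $$
    BEGIN
      FOR i IN 1..5000000 LOOP
        EXECUTE format(
          'CREATE FUNCTION f%s() RETURNS int LANGUAGE sql AS $f$SELECT 1$f$;',
          i);
      END LOOP;
    END $$;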

                        regards, tom lane
