Tatsuo Ishii <is...@postgresql.org> writes:
>> OK, I modified the part of pg_dump where a tremendous number of LOCK
>> TABLE statements are issued, replacing them with a single LOCK TABLE
>> naming multiple tables. With 100k tables the LOCK statements took 13
>> minutes in total; now they take only 3 seconds. Comments?
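>>
>> (For illustration only, a minimal sketch of the batching being
>> described, with hypothetical table names; the statements pg_dump
>> actually generates may differ:)
>>
>>     -- Before: one LOCK TABLE statement per table
>>     LOCK TABLE public.t1 IN ACCESS SHARE MODE;
>>     LOCK TABLE public.t2 IN ACCESS SHARE MODE;
>>     -- ... repeated for every table being dumped
>>
>>     -- After: one statement listing many tables
>>     LOCK TABLE public.t1, public.t2, public.t3 IN ACCESS SHARE MODE;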

> Shall I commit to master and all supported branches?

I'm not excited by this patch.  It dodges the O(N^2) lock behavior for
the initial phase of acquiring the locks, but it does nothing for the
lock-related slowdown occurring in all pg_dump's subsequent commands.
I think we really need to get in the server-side fix that Jeff Janes is
working on, and then re-measure to see if something like this is still
worth the trouble.  I am also a tad concerned about whether we might
run into problems with memory usage during parsing, or some such, when
thousands of tables are listed in a single command.

                        regards, tom lane
