"Kevin Grittner" <kevin.gritt...@wicourts.gov> writes:
> Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Do you have the opportunity to try an experiment on hardware
>> similar to what you're running that on?  Create a database with 7
>> million tables and see what the dump/restore times are like, and
>> whether pg_dump/pg_restore appear to be CPU-bound or
>> memory-limited when doing it.
 
> If these can be empty (or nearly empty) tables, I can probably swing
> it as a background task.  You don't need to match the current 1.3
> TB database size, I assume?

Empty is fine.
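
For concreteness, a minimal sketch of the sort of experiment I have
in mind (database and table names here are arbitrary).  Issuing the
CREATE TABLEs as separate statements through psql keeps each one in
its own transaction, so you don't pile up millions of locks at once:

    seq 1 7000000 | sed 's/.*/CREATE TABLE junk_& ();/' | psql -q testdb

    time pg_dump -Fc -f testdb.dump testdb
    createdb testdb_restored
    time pg_restore -d testdb_restored testdb.dump

The interesting part is watching whether the pg_dump and pg_restore
processes peg a CPU or keep growing in memory while those run.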

>> If they aren't, we could conclude that millions of TOC entries
>> isn't a problem.
 
> I'd actually be rather more concerned about the effects on normal
> query planning times, or are you confident that won't be an issue?

This is only a question of what happens internally in pg_dump and
pg_restore --- I'm not suggesting we change anything on the database
side.

                        regards, tom lane
