Tom Lane <t...@sss.pgh.pa.us> wrote:
 
> Do you have the opportunity to try an experiment on hardware
> similar to what you're running that on?  Create a database with 7
> million tables and see what the dump/restore times are like, and
> whether pg_dump/pg_restore appear to be CPU-bound or
> memory-limited when doing it.
 
If these can be empty (or nearly empty) tables, I can probably swing
it as a background task.  I assume I don't need to match the current
1.3 TB database size?
 
> If they aren't, we could conclude that millions of TOC entries
> isn't a problem.
 
I'd actually be rather more concerned about the effect on planning
times for normal queries.  Or are you confident that won't be an issue?
 
> A compromise we could consider is some sort of sub-TOC-entry
> scheme that gets the per-BLOB entries out of the main speed
> bottlenecks, while still letting us share most of the logic.  For
> instance, I suspect that the first bottleneck in pg_dump would be
> the dependency sorting, but we don't really need to sort all the
> blobs individually for that.
 
That might also address the plan time issue, if it actually exists.
 
-Kevin
