On Wed, Feb 06, 2008 at 03:13:24PM +0000, Simon Riggs wrote:
> pg_dump allows you to specify -s/--schema-only or -a/--data-only.
> 
> The -s option creates the tables, as well as the constraints and
> indexes. These objects need to be dropped prior to loading if we are to
> follow the performance recommendations in the docs, but the only way to
> do that is to manually edit the script into a cut-down version.
> 
> So it would be good if we could dump objects in 3 groups
> 1. all commands required to re-create table
> 2. data
> 3. all commands required to complete table after data load
> 
> My proposal is to provide two additional modes:
> --schema-pre-load corresponding to (1) above
> --schema-post-load corresponding to (3) above
> 
> This would then allow this sequence of commands 
> 
> pg_dump --schema-pre-load
> pg_dump --data-only
> pg_dump --schema-post-load
> 
> to be logically equivalent, but faster than
> 
> pg_dump --schema-only
> pg_dump --data-only
> 
> both forms of which are equivalent to just
> 
> pg_dump
> 
> 
> [Assuming data isn't changing between invocations...]

I've been considering just this. Another option I came up with was a more
generic switch, where you'd have:
pg_dump --what=tables
pg_dump --what=indexes,constraints

or something like that. That would give more flexibility, but I'm not sure
it's worthwhile.
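
Just to sketch how that generic form might be used (to be clear, no
--what switch exists in pg_dump today; the database name and output
files here are only placeholders):

pg_dump --what=tables mydb               > pre.sql   # bare CREATE TABLE statements
pg_dump --data-only mydb                 > data.sql  # the table data
pg_dump --what=indexes,constraints mydb  > post.sql  # indexes, PKs, FKs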

Having the ability to do just the kind of filtering you're talking about
would be very handy - I've needed it more than once.

Where would you put primary keys and similar constraints? Pre- or
post-load? I think the case could be made for either one...
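
For what it's worth, the restore side of a three-way split would
presumably look something like this (just a sketch; the file and
database names are made up). If the primary keys end up in the
post-load script, pg_dump would emit them there as ALTER TABLE ...
ADD CONSTRAINT ... PRIMARY KEY, so the index is only built once the
data is already in:

createdb target
psql -d target -f pre.sql    # plain CREATE TABLEs, no indexes/constraints
psql -d target -f data.sql   # bulk COPY into unindexed tables
psql -d target -f post.sql   # indexes, primary/foreign keys, etc.

Doing the PKs afterwards means the index isn't maintained row by row
during the COPY, but it also means duplicates aren't caught until the
end - so yes, I can see arguments both ways.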

//Magnus
