Coming to this thread a bit late as I've been out of email connectivity for the past week...
On Tue, Jun 17, 2008 at 2:43 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> In any case, trying to define a module as a schema doesn't help at all
> to solve the hard problem, which is how to get this stuff to play nice
> with pg_dump. I think that the agreed-on solution was that pg_dump
> should emit some kind of "LOAD MODULE foo" command, and *not* dump any
> of the individual objects in the module. We can't have that if we try
> to equate modules with schemas instead of making them a new kind of
> object.

This is certainly the end result that I'd like, and intend to work towards. My main concern is the case where a module-owned table gets updated with data that would not be recreated by the LOAD MODULE command in the dump. The PostGIS support tables are one example of this; PL/Java's classpath and function information is another. There are probably many more.

I see two potential solutions:

a) Explicitly mark such tables as requiring their data to be dumped, and have pg_dump emit "upsert" statements for all rows in the table.

b) Allow modules to define a function that pg_dump can call to emit appropriate extra restore commands, beyond whatever LOAD MODULE foo does. This has the downside of requiring more work from module authors (though a default function that effectively does option a) could be provided), with the potential upside of making module dumps upgrade-friendly, since the dump would not be tied to a particular version's table layout.

Thoughts?

Tom

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
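For concreteness, option a) might amount to pg_dump emitting a delete-then-insert pair per row of a marked table, so restoring over an existing installation replaces rather than duplicates the data. A minimal sketch in Python follows; the table name, key columns, and quoting helper are all illustrative assumptions, not part of any actual pg_dump implementation:

```python
def quote(v):
    # Naive literal quoting for the sketch; a real implementation
    # would use proper server-side escaping.
    if isinstance(v, str):
        return "'" + v.replace("'", "''") + "'"
    return str(v)

def upsert_statements(table, key_cols, rows):
    """Emit a DELETE followed by an INSERT for each dumped row, keyed on
    key_cols. Plain SQL, so it needs no vendor-specific upsert syntax."""
    stmts = []
    for row in rows:
        where = " AND ".join(f"{k} = {quote(row[k])}" for k in key_cols)
        cols = ", ".join(row)
        vals = ", ".join(quote(v) for v in row.values())
        stmts.append(f"DELETE FROM {table} WHERE {where};")
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return stmts
```

For a PostGIS-style row such as {"srid": 4326, "auth_name": "EPSG"} keyed on srid, this would yield a DELETE FROM spatial_ref_sys WHERE srid = 4326; followed by the matching INSERT.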