On Tuesday 26 February 2008, Tom Lane wrote:
>  Or in more practical terms in this case, we have to balance
> speed against potentially-large costs in maintainability, datatype
> extensibility, and suchlike issues if we are going to try to get more
> than percentage points out of straight COPY.

Could COPY begin by checking the column types of the table involved and use some 
internal knowledge about in-core types to avoid the extensibility costs, if any? 
OK, that sounds like a maintainability cost :)

Or maybe just provide an option to pg_dump to force use of the binary COPY 
format, which would then allow pg_restore to skip the data parsing altogether. 
If that's not the case (if the binary format doesn't avoid the parsing), maybe 
it's time for another COPY format to be invented?
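
To make the idea concrete: the binary COPY format already exists at the SQL 
level, so such a pg_dump option would presumably just emit something along these 
lines (table and file names are made up for the example):

    -- on the source server: write the table out in binary COPY format
    COPY my_table TO '/tmp/my_table.copy' WITH BINARY;

    -- on the target server: read it back without going through the
    -- text input functions
    COPY my_table FROM '/tmp/my_table.copy' WITH BINARY;

The pg_dump side would put that binary stream into the archive, and pg_restore 
would feed it back with the matching COPY ... FROM ... BINARY.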

As for binary compatibility between architectures, I'm wondering whether running 
pg_dump in binary format from the new architecture couldn't be a solution.
Of course, when all you have left are the binary archives, server A is lost, and 
you need to get the data onto server B, which does not share A's architecture, 
you're not in a comfortable situation. But a pg_dump binary option would make it 
clear that you shouldn't use it for your regular backups...
And it wouldn't help the case where the data doesn't come from PostgreSQL. It 
could still be a common enough use case to bother with?

Just trying to put some ideas into play, hoping this is more helpful than 
not,
-- 
dim

They did not know it was impossible, so they did it! -- Mark Twain
