Jeff Davis <[EMAIL PROTECTED]> writes:
> On Tue, 2006-10-03 at 00:42 +0200, Alexander Staubo wrote:
>> Why does pg_dump serialize data less efficiently than PostgreSQL when  
>> using the "custom" format?

> What you're describing is more of a theoretical point. If pg_dump used
> specialized compression tailored to each column's data type, and everything
> were optimal, you'd be correct: there's no situation in which the dump *must*
> be bigger. However, since there is no practical demand for such
> compression, and it would be a lot of work ...

There are several reasons for not being overly tense about the pg_dump
format:

* We don't have infinite manpower

* Cross-version and cross-platform portability of the dump files is
  critical

* The more complicated it is, the more chance for bugs, which you'd
  possibly not notice until you *really needed* that dump.

In practice, pushing the data through gzip gets most of the potential
win, for a very small fraction of the effort it would take to have a
smart custom compression mechanism.
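
For illustration, a rough sketch of how to compare the two approaches
yourself (the database name "mydb" is hypothetical, and pg_dump is assumed
to be on PATH):

    import gzip, os, shutil, subprocess

    DB = "mydb"  # hypothetical database name

    # Plain-text dump, then push it through gzip (the simple approach above).
    subprocess.run(["pg_dump", "-Fp", "-f", "plain.sql", DB], check=True)
    with open("plain.sql", "rb") as src, gzip.open("plain.sql.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # Custom-format dump; pg_dump typically compresses its data members
    # with zlib by default when built with zlib support.
    subprocess.run(["pg_dump", "-Fc", "-f", "custom.dump", DB], check=True)

    for name in ("plain.sql", "plain.sql.gz", "custom.dump"):
        print(name, os.path.getsize(name), "bytes")

The two compressed sizes should come out in the same ballpark, which is the
point: generic gzip/zlib compression already captures most of the redundancy
a type-aware scheme would target.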

                        regards, tom lane
