On Tue, Apr 13, 2010 at 03:03:58PM -0400, Tom Lane wrote:
> Joachim Wieland <j...@mcknight.de> writes:
> > If we still cannot do this, then what I am asking is: What does the
> > project need to be able to at least link against such a compression
> > algorithm?
>
> Well, what we *really* need is a convincing argument that it's worth
> taking some risk for.  I find that not obvious.  You can pipe the output
> of pg_dump into your-choice-of-compressor, for example, and that gets
> you the ability to spread the work across multiple CPUs in addition to
> eliminating legal risk to the PG project.  And in any case the general
> impression seems to be that the main dump-speed bottleneck is on the
> backend side not in pg_dump's compression.
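For reference, the piped approach Tom describes would look roughly like the
sketch below (the database name, the choice of pigz as the external
compressor, and the thread count are only illustrative assumptions, nothing
from this thread):

    # Sketch: run an uncompressed plain-format pg_dump and feed it through
    # an external parallel compressor, rather than relying on -Fc's
    # built-in zlib compression.
    import subprocess

    with open("mydb.sql.gz", "wb") as out:
        dump = subprocess.Popen(["pg_dump", "-Fp", "mydb"],
                                stdout=subprocess.PIPE)
        comp = subprocess.Popen(["pigz", "-p", "8"],
                                stdin=dump.stdout, stdout=out)
        dump.stdout.close()   # let the compressor see EOF when pg_dump exits
        comp.wait()
        dump.wait()

The shell equivalent is just pg_dump -Fp mydb | pigz -p 8 > mydb.sql.gz,
which parallelizes the compression but gives up the -Fc catalog.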
My client uses pg_dump -Fc and produces about 700GB of compressed
PostgreSQL dumps nightly from multiple hosts.  They also depend on being
able to read and filter the dump catalog.  A faster compression algorithm
would be a huge benefit for dealing with this volume.

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.