On Mar 21, 2010, at 8:50 AM, David Newall wrote:
> Tom Lane wrote:
>> I would bet that the reason for the slow throughput is that gzip
>> is fruitlessly searching for compressible sequences. It won't find many.
> Indeed, I didn't expect much reduction in size, but I also didn't expect
Subject: pg_dump far too slow
To: "David Newall"
Cc: "Tom Lane" , pgsql-performance@postgresql.org,
robertmh...@gmail.com
Date: Sunday, March 21, 2010, 10:33 AM
One more from me
If you think that the pipe to GZIP may be causing pg_dump to stall, try
putting something like buffer(1) in the pipeline ... it doesn't generally
come with Linux, but you can download source or create your own very easily
... all it needs to do is asynchronously poll stdin an
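A minimal sketch of that kind of pipeline, assuming a buffer(1)-style program
(or an equivalent such as mbuffer) is installed; the database and file names
are placeholders:

    # the buffer process soaks up bursts so that neither pg_dump's writes
    # nor gzip's reads stall the other end of the pipe
    pg_dump database | buffer | gzip > database.dmp.gz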
Tom Lane wrote:
I would bet that the reason for the slow throughput is that gzip
is fruitlessly searching for compressible sequences. It won't find many.
Indeed, I didn't expect much reduction in size, but I also didn't expect
a four-order-of-magnitude increase in run-time (i.e. output at
Craig Ringer writes:
> On 21/03/2010 9:17 PM, David Newall wrote:
>> and wonder if I should read up on gzip to find why it would work so
>> slowly on a pure text stream, albeit a representation of PDF which
>> intrinsically is fairly compressed.
> In fact, PDF uses deflate compression, the same a
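That effect is easy to reproduce on any already-deflated file; sample.pdf
below is just a placeholder:

    # gzip -9 burns CPU hunting for matches but barely shrinks deflated data
    time gzip -9 -c sample.pdf | wc -c
    ls -l sample.pdf    # compare against the original size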
On 21/03/2010 9:17 PM, David Newall wrote:
Thanks for all of the suggestions, guys, which gave me some pointers on
new directions to look, and I learned some interesting things.
Unfortunately one of these processes dropped eventually, and, according
to top, the only non-idle process running w
Thanks for all of the suggestions, guys, which gave me some pointers on
new directions to look, and I learned some interesting things.
The first interesting thing was that piping (uncompressed) pg_dump into
gzip, instead of using pg_dump's internal compressor, does bring a lot
of extra paralle
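Roughly the two invocations being compared, with names as placeholders; with
the pipe, pg_dump and gzip run as separate processes and can occupy two cores
instead of one:

    # internal compression: one process does the dumping and the deflate work
    time pg_dump -Z9 -f database.dmp.gz database

    # external compression: the dump and the compression overlap on two cores
    time sh -c 'pg_dump database | gzip > database.dmp.gz'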
On Sun, 14 Mar 2010, David Newall wrote:
nohup time pg_dump -f database.dmp -Z9 database
I presumed pg_dump was CPU-bound because of gzip compression, but a test I
ran makes that seem unlikely...
There was some discussion about this a few months ago at
http://archives.postgresql.org/pgsql-
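One way to run that sort of test is to time the two halves separately;
uncompressed.dmp below stands in for an existing uncompressed dump:

    # raw gzip -9 throughput on the dump contents, with pg_dump out of the picture
    time gzip -9 -c uncompressed.dmp > /dev/null

    # raw dump throughput, with compression out of the picture
    time pg_dump database > /dev/null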
As a fellow PG newbie, some thoughts / ideas
1. What is the purpose of the dump (backup, migration, ETL, etc.)? Why
plain? Unless you have a need to load this into a different brand of
database at short notice, I'd use native format (see the example below).
2. If your goal is indeed to get the data into another DB,
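As a sketch of point 1, a custom-format dump (names are placeholders; -Z1
trades compression ratio for speed):

    # custom format keeps a per-object TOC, so pg_restore can restore selectively
    pg_dump -Fc -Z1 -f database.dump database

    # later, restore into an existing target database
    pg_restore -d newdb database.dump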
On Sun, Mar 14, 2010 at 4:01 AM, David Newall wrote:
> an expected 40 - 45GB of compressed output. CPU load is 100% on the core
> executing pg_dump, and negligible on all others cores. The system is
> read-mostly, and largely idle. The exact invocation was:
>
> nohup time pg_dump -f databas
David Newall writes:
> [ very slow pg_dump of table with large bytea data ]
Did you look at "vmstat 1" output to see whether the system was under
any large I/O load?
Dumping large bytea data is known to be slow for a couple of reasons:
1. The traditional text output format for bytea is a bit po
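For reference, the check looks something like this while the dump is running
(column names as printed by procps vmstat):

    # high "wa" (I/O wait) or heavy "bi"/"bo" points at the disks;
    # "us" pegged near 100% with low "wa" points at CPU, e.g. bytea
    # escaping in the backend or gzip -9 in the dump
    vmstat 1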
Evening all,
Maiden post to this list. I've a performance problem for which I'm
uncharacteristically in need of good advice.
I have a read-mostly database using 51GB on an ext3 filesystem on a
server running Ubuntu 9.04 and PG 8.3. Forty hours ago I started a
plain-format dump, compressed