Re: [GENERAL] Slow pgdump

2005-11-28 Thread Jim C. Nasby
I'm making a bit of a guess here, but I suspect the issue is that a
single large dump will hold a transaction open for the entire time. That
will affect vacuums at a minimum; not sure what else could be affected.
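
One way to check is to watch pg_stat_activity while the dump runs and see
how long the dump's backend has been busy. A rough sketch for 7.4 (this
assumes stats_command_string is enabled in postgresql.conf, otherwise
current_query and query_start stay empty; later releases add an explicit
xact_start column that would be even better here):

  -- Show the backends that have been running their current query longest.
  SELECT procpid, usename, query_start,
         now() - query_start AS query_age,
         current_query
    FROM pg_stat_activity
   WHERE query_start IS NOT NULL
   ORDER BY query_start
   LIMIT 5;

While the dump's transaction stays open, a plain VACUUM can't reclaim any
row versions deleted after it began, which is the vacuum impact I mean.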

On Tue, Nov 22, 2005 at 05:13:44PM -0800, Patrick Hatcher wrote:
 
 OS - RH3
 Pg - 7.4.9
 Ram - 8G
 Disk - 709G  RAID 0+1
 
 We are having a pg_dump issue that we can't seem to find an answer for.
 
 Background:
 Production server contains 11 databases, of which 1 comprises 85% of the
 194G used on the drive.  This one large db contains 12 schemas.
 Within the schemas of the large db, there may be 1 or 2 views that span
 2 schemas.
 
 If we do a backup using pg_dump against the entire database, it takes
 upwards of 8 hours to complete.
 
 If we split the backup up to do one pg_dump for the first 10 dbs and then a
 pg_dump per schema on the 1 large db, the backup takes only 3.5 hours.
 
 Other than using the schema switch, there is no compression happening
 on either dump.
 
 Any ideas why this might be happening or where we can check for issues?
 
 TIA
 Patrick Hatcher
 Development Manager  Analytics/MIO
 Macys.com
 
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



[GENERAL] Slow pgdump

2005-11-22 Thread Patrick Hatcher

OS - RH3
Pg - 7.4.9
Ram - 8G
Disk - 709G  RAID 0+1

We are having a pg_dump issue that we can't seem to find an answer for.

Background:
Production server contains 11 databases, of which 1 comprises 85% of the
194G used on the drive.  This one large db contains 12 schemas.
Within the schemas of the large db, there may be 1 or 2 views that span
2 schemas.

If we do a backup using pg_dump against the entire database, it takes
upwards of 8 hours to complete.

If we split the backup up to do one pg_dump for the first 10 dbs and then a
pg_dump per schema on the 1 large db, the backup takes only 3.5 hours.
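
Roughly, the split run looks like the following (database and schema names
here are placeholders, not our real ones; pg_dump's -n/--schema switch
takes one schema per run):

  #!/bin/sh
  # Dump the smaller databases whole (the real list has 10 of them).
  for db in salesdb ordersdb; do
      pg_dump "$db" > "/backups/$db.sql"
  done

  # Dump the one large database a schema at a time via -n/--schema.
  for schema in reporting staging; do
      pg_dump -n "$schema" bigdb > "/backups/bigdb.$schema.sql"
  done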

Other than using the schema switch, there is no compression happening
on either dump.
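
(For completeness: if we did want compression, a plain dump can simply be
piped through gzip, trading CPU for I/O; the path below is a placeholder.

  pg_dump bigdb | gzip > /backups/bigdb.sql.gz
)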

Any ideas why this might be happening or where we can check for issues?

TIA
Patrick Hatcher
Development Manager  Analytics/MIO
Macys.com

