Actually, I'd find this an interesting project, but we're on the verge of moving to 8.0 via Slony, and a replicated cluster will reduce the need for live dumps on the primary read/write database.

It's too bad round tuits are so expensive!

I was trying to think today of a way pg_dump might use the statistics collector in almost the opposite way pg_autovacuum does, steering clear of objects in heavy use rather than seeking them out, but I'm not familiar enough with the source to know how that might work.
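
Just to make the idea concrete: the kind of query I have in mind would
rank tables from least to most active, so a dump could visit the hot
ones last. This is only a sketch; it assumes the stats collector is
running, the weighting is a guess on my part, and "mydb" stands in for
a real database name:

    # List user tables from least to most active, using the
    # row-level counters in pg_stat_user_tables.
    psql -At mydb -c "
      SELECT schemaname || '.' || relname
        FROM pg_stat_user_tables
       ORDER BY coalesce(seq_scan, 0) + coalesce(idx_scan, 0)
              + n_tup_ins + n_tup_upd + n_tup_del;"

pg_dump could presumably consult something like this to order (or
defer) its COPY of each table, dependencies permitting.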

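In the meantime, as a stopgap for the throttling Tom mentions below,
the dump stream can be rate-limited outside pg_dump, say with pv(1):
when the pipe blocks, back-pressure should at least slow the server's
COPY. A sketch (the 1 MB/s limit and names are arbitrary):

    # Throttle a plain-format dump to roughly 1 MB/s; when pv stalls,
    # the client stops reading and TCP back-pressure slows the backend.
    pg_dump mydb | pv -L 1m | gzip > mydb.dump.gz
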
-tfo

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On May 23, 2005, at 11:12 PM, Tom Lane wrote:

> "Thomas F. O'Connell" <[EMAIL PROTECTED]> writes:
>> I'd like to use pg_dump to grab a live backup and, based on the
>> documentation, this would seem to be a realistic possibility. When I
>> try, though, during business hours, when people are frequently
>> logging in and otherwise using the application, the application
>> becomes almost unusable (to the point where logins take on the order
>> of minutes).
>
> The pg_dump sources contain some comments about throttling the rate
> at which data is pulled from the server, with a statement that this
> idea was discussed during July 2000 and eventually dropped.  Perhaps
> you can think of a better implementation.
>
>             regards, tom lane


