On Tue, Apr 27, 2010 at 11:13 AM, Kevin Grittner
<[email protected]> wrote:
> Merlin Moncure <[email protected]> wrote:
>
>> The proposal only seems a win to me if a fair percentage of the
>> larger files don't change, which strikes me as a relatively low
>> level case to optimize for.
>
> That's certainly a situation we face, with a relatively slow WAN in
> the middle.
>
> http://archives.postgresql.org/pgsql-admin/2009-07/msg00071.php
>
> I don't know how rare or common that is.
hm...interesting read. pretty clever. Your archiving requirements are
high. With the new stuff (HS/SR) taken into consideration, would you
have done your DR the same way if you had to do it all over again?
Part of my concern here is that manual filesystem-level backups are
going to become an increasingly arcane way of doing things as the
HS/SR train starts leaving the station.

hm, it would be pretty neat to see some of the things you do pushed
into logical (pg_dump) style backups...with some enhancements so that
it can skip tables that haven't changed and are already present in a
previously supplied dump (rough sketch of what I mean below). This is
more complicated but maybe more useful for a broader audience?

Side question: is it impractical to back up a hot standby via pg_dump
because of query conflict issues?
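To make the "skip unchanged tables" half concrete, here's a minimal
sketch done entirely outside the server. It leans on the
pg_stat_user_tables write counters as a cheap signal that a table
changed since the last run (approximate: the stats are asynchronous,
can reset after a crash, and miss DDL), and dumps only those tables.
The database name, paths, and file naming are placeholders, not a
real tool:

  #!/bin/bash
  # Rough sketch only: dump just the tables whose write counters moved
  # since the last run.  DB name and paths are placeholders; error
  # handling and quoting of unusual table names are ignored.
  DB=mydb
  BACKUPDIR=/var/backups
  STATE=$BACKUPDIR/$DB.tabstate

  # Snapshot per-table write activity from the statistics collector.
  # The counters are only a heuristic for "this table changed".
  psql -At -c "
    SELECT schemaname || '.' || relname,
           n_tup_ins + n_tup_upd + n_tup_del
    FROM pg_stat_user_tables" "$DB" | sort > "$STATE.new"

  touch "$STATE"

  # Lines that appear only in the new snapshot are tables that are new
  # or whose counters changed; dump each of those individually.
  comm -13 "$STATE" "$STATE.new" | cut -d'|' -f1 |
  while read tab; do
    pg_dump -Fc -t "$tab" -f "$BACKUPDIR/$DB.$tab.dump" "$DB"
  done

  mv "$STATE.new" "$STATE"

Of course, per-table dumps in a loop don't give you a single
consistent snapshot, which is part of why the interesting version of
this would have to live inside pg_dump itself, diffing against the
previously supplied archive.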
merlin