[continuous backup]

On Tuesday 15 June 2010 21.42:52 Oliver Kohll - Mailing Lists wrote:
> 1) Continuously ship the WAL records to somewhere on the test server
> unknown to Postgres but run the test machine as a normal database
> completely separately. If a backup is needed, delete the test database,
> restore to the last full backup (a filesystem backup?) and copy all WAL
> records into Postgres' directory so it can see them. Start it up
> configured to replay them, up to a certain time.
>
> 2) Run two instances of Postgres on the test/backup server on different
> ports, one configured as a replication slave, one normal. I'm not sure
> if this is possible with the RPM builds I'm using.
Both scenarios are possible. I don't know the RPM builds you're using; the
Debian packages allow configuring two instances on two different ports,
AFAIK. Possibly the RPM installation does, too. Even if not, hacking up a
2nd start script which runs postgres against a different data directory /
config file should be quite trivial.
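Roughly something like this - untested as written, and the data directory
and port 5433 are just examples, adjust to your layout:

    # create a second, independent cluster and start it on its own port
    initdb -D /srv/pgsql/standby
    pg_ctl -D /srv/pgsql/standby -o "-p 5433" \
        -l /srv/pgsql/standby/postmaster.log start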
Keeping the base backup plus all the WAL files for the case you need to
restore will need quite a bit of disk space if your database is reasonably
big. (On one database I administered, I scheduled weekly base backups and
kept a week of WAL - since we sometimes had quite a lot of changes in the db,
the WAL was quickly 10 times as big as the base backup.) So depending on your
DB load, keeping a 2nd installation of postgres running and continuously
reading the WAL files might be cheaper in terms of disk space.
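In config terms that boils down to something like the following (directory
names and the timestamp are made up, adjust to your setup):

    # postgresql.conf on the live server - ship each finished WAL segment
    # to an archive directory the backup box can read
    archive_mode = on
    archive_command = 'cp %p /srv/pgsql/walarchive/%f'

    # recovery.conf on the backup box, variant 1: one-off restore from the
    # base backup, replaying WAL up to a certain time
    restore_command = 'cp /srv/pgsql/walarchive/%f %p'
    recovery_target_time = '2010-06-15 12:00:00'

    # recovery.conf, variant 2: warm standby that keeps replaying WAL as it
    # arrives (pre-9.0 style, using the pg_standby contrib tool)
    restore_command = 'pg_standby /srv/pgsql/walarchive %f %p %r'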
(and with 9.0, you even get a near real-time read-only copy of the db for
free...)
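If you go the 9.0 route, the relevant knobs are roughly these - hostname
invented, the standby starts from a fresh base backup of the primary, and
you'll also need a 'replication' entry in the primary's pg_hba.conf:

    # postgresql.conf on the primary
    wal_level = hot_standby
    max_wal_senders = 1

    # postgresql.conf on the standby
    hot_standby = on

    # recovery.conf on the standby
    standby_mode = 'on'
    primary_conninfo = 'host=liveserver port=5432'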
cheers
-- vbi
--
90% of the people do not understand copyright,
the other 10% simply ignore it.
-- Aigars Mahinovs
