Hi,
I have set up replication on PostgreSQL 9.1 in a simple scenario with a
master and a slave. I now need a reliable (and preferably fast) way to
back up the database.
I tried using 'archive_command' on the master with a script that simply
bzips each WAL file and copies it to a directory, but after one day that
directory held so many files that even a plain 'rm' failed with
'argument list too long'.
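For reference, the script is essentially this (the paths here are
placeholders, the real ones differ):
===============
#!/bin/sh
# Sketch of my WAL archiving script; wired up in postgresql.conf as:
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
set -e
WAL_PATH="$1"   # %p: path of the WAL segment, relative to the data dir
WAL_FILE="$2"   # %f: file name of the WAL segment
ARCHIVE_DIR=/mnt/backup/wal_archive

# Never overwrite an existing archive file; a nonzero exit code tells
# the server the archive attempt failed so it retries later.
test ! -f "$ARCHIVE_DIR/$WAL_FILE.bz2" || exit 1
bzip2 -c "$WAL_PATH" > "$ARCHIVE_DIR/$WAL_FILE.bz2"
===============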
I can see that even though I have specified 30 minutes in
postgresql.conf, a new archive file is written every *minute*, resulting
in far too many files in the archive directory. This happens even when
*nobody* is using the DB, or under very light use. However, I have
noticed that sometimes this *does* work correctly and an archive is
written only every 30 minutes.
Is there a way to force a write to the archive directory *only* every 30
minutes? I only really need backups to be current to within the last 30
minutes or so.
===============
checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 30min # range 30s-1h
archive_timeout = 30 # force a logfile segment switch after this
wal_keep_segments = 14375
===============
If this is NOT the right way to back up the database, can you please
suggest an alternative method? I could write a script that calls
'pg_start_backup' and 'pg_stop_backup' with an rsync in between to a
backup server, and run it every 30 minutes. I am thinking this would
work even while people are busy using the DB (reads/writes)?
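Something along these lines, with the host name and paths made up for
illustration:
===============
#!/bin/sh
# Sketch of a 30-minute base backup via rsync (host/paths are made up).
set -e
PGDATA=/var/lib/postgresql/9.1/main
DEST=backupserver:/backups/pgdata

psql -U postgres -c "SELECT pg_start_backup('rsync_backup', true);"
# The rsync runs while the cluster stays open for reads and writes;
# pg_xlog is excluded since the WAL archive already covers it.
rsync -a --delete --exclude=pg_xlog "$PGDATA/" "$DEST/" || status=$?
psql -U postgres -c "SELECT pg_stop_backup();"
exit ${status:-0}
===============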
I want to avoid using pg_dump, as I think that would require pausing
writes to the DB until the backup is finished?
Any help appreciated, thanks,
Khusro