On Wed, Feb 15, 2012 at 4:42 PM, Khusro Jaleel <mailing-li...@kerneljack.com> wrote:

> Hi,
>
> I have set up replication on PostgreSQL 9.1 in a simple scenario with a
> master and a slave. I now need a reliable (and preferably fast) way to
> back up the database.
>
> I tried using 'archive_command' on the master and created a script that
> simply bzips the files and copies them to a directory, but after one day
> that directory was so huge that even a simple 'rm' complained that the
> argument list was too long.
>
> I can see that even though I have specified 30 minutes in
> postgresql.conf, a new archive file is written every *minute*, resulting
> in far too many files in the archive directory. This happens even when
> *nobody* is using the DB, or under very light use. However, I have
> noticed that sometimes this *does* work correctly and an archive is
> written out only every 30 minutes.
>
> Is there a way to force this to write to the archive directory *only*
> every 30 minutes, as I only really need backups to be current to within
> the last 30 minutes or so?
>

No, there's no way to force that. archive_timeout only sets an upper
bound on the time between WAL segment switches: a segment is archived as
soon as it fills up (16 MB by default) regardless of the timeout. So if
you have a good number of transactions per minute (and/or the
transactions are quite large), a lot of data gets written to the
transaction log and the WAL files rotate quite often.
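For reference, a minimal sketch of the relevant postgresql.conf settings
(the archive path here is hypothetical):

    # Force a segment switch at most every 30 minutes; full 16 MB
    # segments are still archived immediately, so this is only an
    # upper bound on the gap between archived files.
    archive_mode    = on
    archive_timeout = 1800   # seconds
    archive_command = 'bzip2 -c %p > /mnt/wal_archive/%f.bz2'

Also, when pruning a directory with that many files, something like
'find /mnt/wal_archive -name "*.bz2" -mtime +7 -delete' avoids the
shell's argument-list limit that a plain 'rm' runs into.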


> I want to avoid using pg_dump, as I think that would require pausing
> writes to the DB until the backup is finished?
>

pg_dump won't block writes, thanks to MVCC. It may increase bloat and it
will block DDL operations (ALTER TABLE, etc.), but if your database is
relatively small yet has a high load and you need frequent backups, this
may be the way to go.
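A minimal sketch of that approach (database name and backup path are
hypothetical); the custom format is compressed and can be restored
selectively with pg_restore:

    pg_dump -Fc -f /backups/mydb.$(date +%Y%m%d%H%M).dump mydb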

-- 
Vladimir Rusinov
http://greenmice.info/
