Performance.
On our production DB the fast-archiver transfers the datadir in about half
the time pg_basebackup takes. And the savings apply on every failover,
since clearing the datadir and resyncing as if from scratch also takes
about half the time of an rsync against an existing datadir.
--Mike
Just curious, is there a reason why you can't use pg_basebackup ?
On Wed, Sep 19, 2012 at 12:27 PM, Mike Roest wrote:
>
>> Is there any hidden issue with this that we haven't seen. Or does anyone
>> have suggestions as to an alternate procedure that will allow 2 slaves to
>> sync concurrently.
>
With some more testing I've done today, I seem to have found an issue with
this procedure.
When the slave starts up after t
Our sync script is set up to fail if pg_start_backup fails: if it errors
out for any reason, the sync won't be valid, because the backup_label file
will be missing and the slave won't have the correct location to restart
from.
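For illustration, the abort-on-failure behaviour described above might look
something like this (a minimal sketch, not the actual script; the host
name, the 'slave_sync' label, and the paths are all made up):

```shell
#!/usr/bin/env bash
# Sketch of a slave sync that refuses to proceed when pg_start_backup fails.
set -uo pipefail

sync_slave() {
    local master="$1" datadir="$2"

    # Start an exclusive backup on the master. If this errors out
    # (e.g. "a backup is already in progress"), abort the whole sync:
    # without a backup_label the copied datadir has no valid restart point.
    if ! psql -h "$master" -U postgres -Atc \
         "SELECT pg_start_backup('slave_sync', true);"; then
        echo "pg_start_backup failed; aborting sync" >&2
        return 1
    fi

    # Copy the datadir while the backup is open; close the backup even
    # if the copy fails, so the master isn't left in backup mode.
    if ! rsync -a --delete \
         --exclude pg_xlog --exclude postmaster.pid \
         "$master:$datadir/" "$datadir/"; then
        psql -h "$master" -U postgres -c "SELECT pg_stop_backup();"
        return 1
    fi

    # Close the backup so the master resumes normal operation.
    psql -h "$master" -U postgres -c "SELECT pg_stop_backup();"
}
```

The point is only the ordering: nothing touches the datadir until
pg_start_backup has returned successfully.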
Originally I had gone down the road of changing the sync script such tha
> Specifically, what is the error?
>
psql (9.1.5)
Type "help" for help.

postgres=# select pg_start_backup('hotbackup',true);
 pg_start_backup
-----------------
 61/B20
(1 row)

postgres=# select pg_start_backup('hotbackup',true);
ERROR:  a backup is already in progress
HINT:  Run pg_stop_backup() and try again.
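In 9.1, pg_start_backup() claims a single cluster-wide exclusive backup
slot, which is why the second call errors out and why two slaves can't
rsync-sync at once. pg_basebackup goes through the replication protocol
and takes a non-exclusive backup instead, so several standbys can seed
themselves concurrently (up to max_wal_senders). A hedged sketch, with a
hypothetical db-master host and a replication role:

```shell
# Sketch: seeding a standby with pg_basebackup instead of the
# pg_start_backup/rsync dance. Host, role, and directory are placeholders.
seed_standby() {
    local master="$1" datadir="$2"
    # -P shows progress; -x includes the WAL needed to make the copy
    # self-consistent (both options exist in 9.1's pg_basebackup).
    pg_basebackup -h "$master" -U replication -D "$datadir" -P -x
}
```

Because each run opens its own non-exclusive backup, two slaves can do
this at the same time without hitting "a backup is already in progress".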
On Wed, Sep 19, 2012 at 8:59 AM, Mike Roest wrote:
> Hey Everyone,
> We currently have a 9.1.5 postgres cluster using streaming
> replication. We have 3 nodes right now:
>
> 2 - local, set up with pacemaker as an HA master/slave failover
> cluster
> 1 - remote, as a DR.
>
> Currently we're syncing with the pretty standard routine
Hey Everyone,
We currently have a 9.1.5 postgres cluster using streaming
replication. We have 3 nodes right now:
2 - local, set up with pacemaker as an HA master/slave failover
cluster
1 - remote, as a DR.
Currently we're syncing with the pretty standard routine:
clear local