On Thu, Mar 22, 2012 at 5:58 PM, Debanjan Bhattacharyya <
b.deban...@gmail.com> wrote:
> Hi
> I have two queries on WAL archiving.
> I am using Postgres 8.2 and I have successfully set up a primary
> hot-standby (with continuous recovery from WAL files) db pair.
>
> I want to find the time stamp and
Hi
I have two queries on WAL archiving.
I am using Postgres 8.2 and I have successfully set up a primary
hot-standby (with continuous recovery from WAL files) db pair.
I want to find the time stamp and name of last WAL archive applied by
the HOT standby db in continuous recovery mode.
I also need to
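On an 8.2-era warm standby, one way to see this (a sketch only; the log and
data-directory paths below are placeholders, not taken from the thread) is to
grep the standby's server log, since recovery reports every segment it
restores, and to ask pg_controldata for the latest checkpoint time:

    # last WAL segment restored from the archive, with its log timestamp
    grep 'restored log file' /var/lib/pgsql/data/pg_log/postgresql-*.log | tail -1

    # when the most recently replayed checkpoint was taken
    pg_controldata /var/lib/pgsql/data | grep 'Time of latest checkpoint'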
Alanoly Andrews wrote:
> What I would like to tell the postgres engine on the primary is to
> be "satisfied" if the archiving to the primary location succeeded
> and to NOT re-try if the failure was in the remote copy
>
> Is there a way to achieve this through the "archive_command" or
> otherwis
In PostgreSQL, the instance itself cannot recognize a network failure; however,
I think you can handle it with an OS script called from "archive_command".
E.g.:
archive_command = '/home/scripts/arch_copy.sh %p %f'
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Mon, Ju
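A wrapper of the kind suggested above might look like the sketch below; the
script path matches the archive_command example, but the local archive
directory, the standby host name, and the "ignore a failed remote copy" policy
are placeholders describing Alanoly's requirement, not built-in server behaviour:

    #!/bin/sh
    # arch_copy.sh  <full path of WAL file (%p)>  <WAL file name (%f)>
    P="$1"; F="$2"

    # local archive copy: if this fails, return non-zero so the server retries
    cp "$P" /pgarchive/local/"$F" || exit 1

    # remote copy to the standby: attempt it, but do not fail the
    # archive_command when the network or remote host is unreachable;
    # a separate job would have to re-sync the missed files later
    scp "$P" standby:/pgarchive/remote/"$F" || true

    exit 0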
Hello,
I have a Warm Standby set up on two machines (running AIX and Korn shell) in a
Postgres 8.4.7 environment. The "archive_command" is set to copy a completed
WAL archive to two locations, one on the primary and the other on the Standby
machine. Thus:
archive_command = '/bin/cp %p /pgarcl
Mike Atkin writes:
> So... normally this would be curtains for the database but the missing
> segment is from a 4 hour period of practically zero activity. It was
> only archived because it hit the archive_timeout. Is there any way I
> can force postgres to ignore this segment and attempt to red
Hi all,
One of my postgres 8.2 databases had been getting critically low on
disk space and, with no extra suitable hardware available to
accommodate it, I decided I would use Amazon S3 to store the blob data
which was taking up most of the room. The plan was that I could then
do a vacuumlo and then
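For reference, vacuumlo is the contrib utility that removes large objects no
longer referenced from any table column; a minimal invocation, with the
database name as a placeholder, looks like:

    vacuumlo -n -v mydb    # -n only reports what would be removed
    vacuumlo -v mydb       # actually remove the orphaned large objects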
On Mon, 2011-03-07 at 07:48 +0100, A B wrote:
>
> Is it possible to copy archived WAL files and a base backup from a 64
> bit CentOS environment to a 32 bit CentOS environment and get the
> database up and running on the 32 bit machine by using the base backup
> and the WAL files?
No.
Regards,
-
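Since a base backup plus WAL is tied to the architecture it was produced on,
moving a database between 64-bit and 32-bit machines generally means a logical
dump and restore instead; a sketch, with the database and file names as
placeholders:

    # on the 64-bit machine
    pg_dump -Fc -f mydb.dump mydb

    # on the 32-bit machine
    createdb mydb
    pg_restore -d mydb mydb.dump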
Hello.
Is it possible to copy archived WAL files and a base backup from a 64
bit CentOS environment to a 32 bit CentOS environment and get the
database up and running on the 32 bit machine by using the base backup
and the WAL files?
On Tue, Jun 23, 2009 at 10:18:30PM +0200, Jakov Sosic wrote:
> On Fri, 19 Jun 2009 09:43:28 -0600
> torrez wrote:
>
> > Hello,
> > I'm implementing WAL archiving and PITR on my production DB.
> > I've set up my TAR, WAL archives and pg_xlog all to be stored on a
> > separate disk from my DB.
On Fri, 19 Jun 2009 09:43:28 -0600
torrez wrote:
> Hello,
> I'm implementing WAL archiving and PITR on my production DB.
> I've set up my TAR, WAL archives and pg_xlog all to be stored on a
> separate disk from my DB.
> I'm at the point where I'm running SELECT pg_start_backup('xxx');
>
On Freitag 19 Juni 2009 torrez wrote:
> time tar -czf /pbo/podbackuprecovery/tars/pod-backup-$
> {CURRDATE}.tar.gz /pbo/pod > /pbo/podbackuprecovery/pitr_logs/backup-
> tar-log-${CURRDATE}.log 2>&1
If you have a multi-core/multi-CPU machine, try to use pbzip2 (parallel
bzip2), which can use all
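One way to plug pbzip2 into the tar above is to let tar stream to stdout and
pipe it through pbzip2; the paths are the ones from torrez's command, the
thread count is a placeholder:

    tar -cf - /pbo/pod | pbzip2 -p4 -c \
        > /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar.bz2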
torrez wrote:
> The problem is that this tar took just over 25 hours to complete. I
> expected this to be a long process, since my DB is about 100
> gigs.
> But 25hrs seems a bit too long. Does anyone have any ideas how to cut
> down on this time?
Don't gzip it online?
--
Alvaro
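That is, take the tar uncompressed while the backup window is open and compress
it afterwards; a sketch reusing the paths from the original command:

    # plain tar between pg_start_backup() and pg_stop_backup(); far less CPU-bound
    tar -cf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar /pbo/pod

    # compress later, outside the backup window
    gzip /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar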
Hello,
I'm implementing WAL archiving and PITR on my production DB.
I've set up my TAR, WAL archives and pg_xlog all to be stored on a
separate disk from my DB.
I'm at the point where I'm running SELECT pg_start_backup('xxx');
Here's the command I've run for my tar:
time tar -czf /p
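The surrounding sequence, sketched with the full tar command quoted earlier in
the thread and a placeholder backup label, is roughly:

    psql -d postgres -c "SELECT pg_start_backup('pod-backup');"
    time tar -czf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar.gz /pbo/pod
    psql -d postgres -c "SELECT pg_stop_backup();"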
Assume we have a primary database server and a warm
standby. WAL files get shipped over to the standby and
applied as they arrive.
Questions:
1. I know the recommendation is to fail over to the
standby and re-configure the primary as a new standby.
Is it possible to do that without any data loss,
picking up from the point where it had left off
before the machine went down?
_
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Vishal Arora
Sent: Monday, February 25, 2008 11:30 AM
To: Shilpa Sudhakar
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] WAL archiving
> Subject: Re: [ADMIN] WAL archiving
>
> Thanks a lot Vishal for the info.
>
> I have one more query: postgres automatically cleans the WAL files
> present in the pg_xlog directory, right?
Yes! It actually recycles the files which are of no further interest.
So instead of me
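In other words, pg_xlog looks after itself: per the 8.x documentation the
number of segment files there normally stays at about
2 * checkpoint_segments + 1, with old files renamed for reuse, so only the
archived copies need manual housekeeping. The relevant knob, as a rough
illustration:

    checkpoint_segments = 3   # each segment is 16 MB; pg_xlog normally holds
                              # at most about 2 * checkpoint_segments + 1 files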
> Date: Mon, 25 Feb 2008 15:30:46 +1030
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] WAL archiving
>
> Thanks a lot Vishal for the info.
>
> I have one more query postgres automatically c
Feb 2008 09:54:34 +1030
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] WAL archiving
>
> Shilpa Sudhakar wrote:
> > Hi Vishal,
> >
> > Below is the setup in the postgresql.conf file
> >
> > fs
09:53:35 +1030
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: [ADMIN] WAL archiving
>
> Hi Vishal,
>
> Below is the setup in the postgresql.conf file
>
> fsync = true # turns forced synchronization on or off
> wal_sync_method = fsync # the default varie
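For comparison, the settings that actually drive WAL archiving on an 8.x
primary look roughly like the sketch below; the archive path is a placeholder:

    archive_command = 'cp %p /pgarchive/%f'   # must return 0 only if the copy succeeded
    archive_timeout = 300                     # force a segment switch at least every 5 minutes
    # on 8.3 and later, archive_mode = on is also required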
Thanks loads Dawid,
I'll test the process on the TEST box and note down the time it takes.
Dawid Kuroczko wrote:
On Fri, Feb 22, 2008 at 12:23 AM, Shilpa Sudhakar
<[EMAIL PROTECTED]> wrote:
Since the wal logs keep increasing, do we take the base backup every now
and then so that we can
You can have a warm standby system in place. Please check this link -
http://archives.postgresql.org/sydpug/2006-10/msg1.php
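On the standby side of such a setup the key piece is recovery.conf; a sketch,
with the archive path as a placeholder and pg_standby being the contrib program
shipped from 8.3 onwards (earlier releases used a similar hand-written waiting
script):

    # recovery.conf on the warm standby
    restore_command = 'pg_standby /pgarchive %f %p %r'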
> Date: Mon, 25 Feb 2008 09:54:34 +1030
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN
On Fri, Feb 22, 2008 at 12:23 AM, Shilpa Sudhakar
<[EMAIL PROTECTED]> wrote:
> Since the wal logs keep increasing, do we take the base backup every now
> and then so that we can delete the old log files?
> How often do we take a base filesystem backup keeping in mind that our
> systems are
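The usual rule of thumb: the backup history file written at pg_stop_backup()
records where a base backup starts, so archived WAL older than that starting
segment is only needed for restoring older base backups. A sketch, with a
purely illustrative file name and archive path:

    # START WAL LOCATION names the first segment this base backup depends on
    cat /pgarchive/000000010000000000000042.00000020.backup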
Date: Fri, 22 Feb 2008 09:53:25 +1030
> From: [EMAIL PROTECTED]
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] WAL archiving
>
> Hi All,
>
> I am new to postgres and have been slowly learning the concepts.
>
> Regarding WAL archiving, we first take a base backup and t
On 2/22/08, Shilpa Sudhakar <[EMAIL PROTECTED]> wrote:
>
> I am new to postgres and have been slowly learning the concepts.
>
> Regarding WAL archiving, we first take a base backup and then save all
> the wal logs for PITR.
Not exactly.
1. adjust archive_command to save WALs to a backup dir
2. exe
Hi All,
I am new to postgres and have been slowly learning the concepts.
Regarding WAL archiving, we first take a base backup and then save all
the wal logs for PITR.
Both the base backup and wal logs are stored in another disk.
Since the wal logs keep increasing, do we take the base backup
Hi,
I recently set up WAL archiving in a testing environment and all went
well. As a test I let the disk fill up until archiving was no longer
possible.
Then a co-worker noticed this disk filling up and removed some WAL
segments that were not yet archived. Now the archiving process hangs.
Me
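For background on what the archiver is stuck on: each completed segment gets a
<segment>.ready marker in pg_xlog/archive_status, renamed to .done once the
archive_command succeeds, so the segments that were deleted by hand still have
.ready markers and the archive_command keeps failing on them. A way to see the
backlog, with $PGDATA as a placeholder:

    ls $PGDATA/pg_xlog/archive_status/*.ready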