To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
*how* are the backups being generated?
On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq ssa...@whispir.com wrote:
Currently backups are performed on the master database and I want to
offload that load to the standby
Sabry
Hi All,
Has anyone been successful in offloading the database backup from the
production database to the standby database?
Kind Regards,
Sabry
Sabry Sadiq
Systems Administrator
Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:11 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
Yes. Works fine in 9.2.x.
On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
Hi All,
Has anyone been successful in offloading the database backup from the
production database to the standby database?
Kind Regards,
Sabry
--
Sent via pgsql-admin mailing list
Regards,
Sabry
Sabry Sadiq
Systems Administrator
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:13 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
There aren't any, assuming that all of the servers are using the same
postgresql.conf. I'm referring to running pg_basebackup.
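Since the thread points at pg_basebackup for backing this up from the standby, here is a minimal sketch of such an invocation (the host name, the replication role, and the target path are assumptions, not from the thread; backing up from a standby needs 9.2 or later, as noted above):

```shell
# Take the base backup from the standby instead of the master.
# 'standby.example.com', the 'replicator' role and the backup path
# are all hypothetical.
pg_basebackup -h standby.example.com -U replicator \
    -D /var/backups/base_$(date +%Y%m%d) \
    -F tar -z -P
```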
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:15 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
I don't know, I've never tried. If I had to guess, I'd say no, as that
version doesn't support cascading replication.
You never stated, how are you currently performing backups?
Sabry Sadiq wrote:
Does it work well with version 9.1.3?
It might work better in 9.1.6:
http://www.postgresql.org/support/versioning/
And it would probably pay to keep up-to-date as new minor releases
become available.
-Kevin
On 09/21/2012 01:01 AM, Kasia Tuszynska wrote:
Hi Everybody,
I am experimenting with backups and restores.
I am running into something curious and would appreciate any suggestions.
Backing up from:
Postgres 8.3.0
Windows 2003 sp1 server (32bit)
-Took a compressed binary backup of a single db (the default option in
pgAdminIII,
Hi
I am working as a SQL DBA. Recently our team had the opportunity to work on
Postgres databases. I have experience with SQL Server on the Windows
platform, and now our company has Postgres databases on the Solaris platform.
Can you please suggest, step by step, how to take a backup of Postgres
databases?
Hi,
I would recommend this:
http://www.postgresql.org/docs/9.1/static/backup.html
Very straightforward and easy reading ...
-fred
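To make the "step by step" request concrete, here is a minimal sketch of the routine the linked backup chapter describes (the database name, user, and paths are assumptions):

```shell
# Nightly logical backup: global objects (roles, tablespaces) plus a
# custom-format dump of one database. Names and paths are hypothetical.
pg_dumpall -U postgres --globals-only > /backup/globals.sql
pg_dump -U postgres -Fc mydb > /backup/mydb.dump

# To restore on another machine:
#   psql -U postgres -f /backup/globals.sql postgres
#   pg_restore -U postgres -C -d postgres /backup/mydb.dump
```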
Hello, everyone. I want to throw a scenario out there to see what y'all think.
Soon, my cluster backups will be increasing in size inordinately. They're going
to immediately go to 3x as large as they currently are with the potential to be
about 20x within a year or so.
My current setup uses
Hi Scott,
Why don't you replicate this master to the other location(s) using other
methods like Bucardo? You can pick the tables you really want
replicated there.
For the backup, turn to a hot backup (tar $PGDATA) + archiving: easier,
faster and more efficient than a logical copy with
Both good points, thanks, although I suspect that a direct network copy of the
pg_data directory will be faster than a tar/untar event.
On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:
I believe, then, that when I restart server #3 (the standby who is
replicating), he'll say oh, geez, I was down, let me catch up on all that
crap that happened while I was out of the loop, he'll replay the WAL files
that were written while
On 04/25/2012 09:11 AM, Scott Whitney wrote:
...
My current setup uses a single PG 8.x...
My _new_ setup will instead be 2 PG 9.x ...
It is best to specify actual major version. While 8.0.x or 9.1.x is
sufficient to discuss features and capabilities, 9.1 is a different
major release than 9.0,
I mean bucardo (even though there are more tools like this one) just
for the replication stuff and the hot database backup only for the
backup stuff and only one bounce is needed to turn the archiving on, you
do not need to turn anything at all down during the backup.
A.A
On Tue, 2011-12-27 at 13:01 +0530, nagaraj L M wrote:
Hi sir
Can you tell me how to back up an individual schema in
PostgreSQL?
Use the -n command line option
(http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html).
--
Guillaume
http://blog.guillaume.lelarge.info
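A hedged sketch of that -n invocation (the schema, database, and file names are made up for illustration):

```shell
# Dump only the schema named 'accounting' from database 'mydb'.
pg_dump -n accounting -Fc mydb > /backup/mydb_accounting.dump

# Restore just that schema later:
#   pg_restore -d mydb /backup/mydb_accounting.dump
```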
Hi,
I want to get a cold backup of the database cluster, but the cluster
contains four non-built-in tablespaces. When I take a cold backup of the
cluster, restore it on another machine, and check the tablespaces,
none of the non-built-in tablespaces are available.
So, please
I'm making a base backup with 9.1rc by following 24.3.3 in manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label');
2. perform file system backup with tar
3. SELECT pg_stop_backup();
But when I was performing step 2, I got warning from tar
OK, thank you.
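For reference, the three steps look roughly like this as commands (the data directory and archive path are assumptions). A tar warning that a file "changed as we read it" is expected during an online base backup; WAL replayed at recovery time makes the copy consistent:

```shell
# 1. Tell the server a base backup is starting.
psql -c "SELECT pg_start_backup('label');"

# 2. Copy the data directory while the server keeps running.
#    tar may warn "file changed as we read it" -- expected here.
tar -czf /backup/base.tar.gz -C /var/lib/postgresql/9.1 main

# 3. Finish the backup and archive the final WAL segment.
psql -c "SELECT pg_stop_backup();"
```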
On Fri, Mar 18, 2011 at 4:55 PM, Stephen Rees sr...@pandora.com wrote:
Robert,
Thank you for the reply. I had the wrong end of the stick regarding
pg_dump and hot standby.
I will take a look at omnipitr, as you suggest.
Per your comment
You have to stop replay while you are doing the dumps like this
how do I stop, then resume, replay with both the master and hot-
Using PostgreSQL 9.0.x
I cannot use pg_dump to generate a backup of a database on a hot-
standby server, because it is, by definition, read-only. However, it
seems that I can use COPY TO within a serializable transaction to
create a consistent set of data file(s). For example,
BEGIN
Stephen Rees sr...@pandora.com wrote:
I cannot use pg_dump to generate a backup of a database on a hot-
standby server, because it is, by definition, read-only.
That seems like a non sequitur -- I didn't think pg_dump wrote
anything to the source database. Have you actually tried? If so,
On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees sr...@pandora.com wrote:
Using PostgreSQL 9.0.x
I cannot use pg_dump to generate a backup of a database on a hot-standby
server, because it is, by definition, read-only.
That really makes no sense :-) You can use pg_dump on a read-only
slave, but
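Putting the corrections in this thread together: pg_dump does work against a read-only hot standby, and from 9.1 onward WAL replay can be paused around the dump so the snapshot does not conflict with recovery. A hedged sketch (host and database names are assumptions; the functions were renamed pg_wal_replay_* in version 10):

```shell
# Pause WAL replay on the standby, dump, then resume.
psql -h standby.example.com -c "SELECT pg_xlog_replay_pause();"
pg_dump -h standby.example.com -Fc mydb > /backup/mydb.dump
psql -h standby.example.com -c "SELECT pg_xlog_replay_resume();"
```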
Hello.
In the docs of 8.4 I read that one way of doing filesystem backup of
PostgreSQL is to
1. run rsync
2. stop the server
3. run second rsync
4. start server
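The four documented steps, roughly as commands (paths are assumptions; the second rsync only transfers what changed since the first pass, which keeps the downtime window short):

```shell
PGDATA=/var/lib/postgresql/data   # hypothetical data directory

rsync -a "$PGDATA/" /backup/data/   # 1. first pass, server still running
pg_ctl -D "$PGDATA" stop            # 2. stop the server
rsync -a "$PGDATA/" /backup/data/   # 3. second, much faster pass
pg_ctl -D "$PGDATA" start           # 4. start the server again
```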
But what would happen if you
1. run rsync
2. throw server through the window and buy new server
3. copy the rsynced data
4. start
On Mar 1, 2011, at 3:20 PM, A B wrote:
But what would happen if you
1. run rsync
2. throw server through the window and buy new server
3. copy the rsynced data
4. start server
now, what would happen?
I guess the server would think: uh-oh, it has crashed, I'll try to fix it.
This will
Hi All,
I am new to PostgreSQL. I have pgAdmin installed locally on my Windows machine,
which I use to connect to the client server and access the database. I
want to take a backup of the client database, but it seems hard: the database is
very large, and when I select any database and hit
Sorry for the delay.
On Thu, Mar 4, 2010 at 3:47 PM, Mikko Partio mpar...@gmail.com wrote:
Hi
I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
streaming replication) is working great. I tried to take a filesystem backup
from a hot standby, but I guess that is not
Hi
I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
streaming replication) is working great. I tried to take a filesystem backup
from a hot standby, but I guess that is not possible since executing SELECT
pg_start_backup('ss') returns an error? Or can I just tar $PGDATA
Hello,
I am curious if there is a way to know which databases have changed (any write
transaction) since a given timestamp? I use pg_dump nightly to backup several
databases within the cluster, but I would like to only pg_dump those databases
which have actually changed during the day. Is
Hello Postgres Gurus,
I have a restore problem.
If you do the backup as a text file:
pg_dump.exe -i -h machine -p 5432 -U postgres -F p -v -f
C:\dbname_text.dump.backup dbname
You can see the order in which the restore will happen. And the restore seems
to be happening in the following order
Kasia Tuszynska ktuszyn...@esri.com writes:
The problem arises, if data in lets say the adam schema is dependent on
tables in the public schema, since the data in the public schema does not
exist yet, being created later.
That's not supposed to happen. Are you possibly running an early 8.3
We have a server that backs up and then recreates our production database on
a nightly basis.
In order to drop and recreate the database we would stop and restart the
server - this would
effectively kick off any straggling users so we could get our refresh done.
No problem.
Now we have more than
Hi,
you can use
pg_ctl stop -m fast
pg_ctl start
which kills clients and aborts current transactions.
If you have multiple clusters, you can use the -D option to
specify the data directory.
-manu
Le 15 oct. 08 à 16:11, Mark Steben a écrit :
We have a server that backups and then recreates
Hello Mark,
I don't know a command in postgres to do that, but if you're running
postgres on Linux try it on the command line:
for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do
pg_ctl kill TERM $i; done
Best regards.
Ps: Sorry, but my english isn't so good.
--
Fabrízio
Hi all,
Sorry, but I found a little bug in the command line...
To solve just replace $i for $pid:
for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do
pg_ctl kill TERM $pid; done
Sorry... :-)
Fabrízio de Royes Mello escreveu:
Hello Mark,
I don't know a command in postgres
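On 8.4 and later the same effect is available server-side via pg_terminate_backend(), which avoids shelling out to pg_ctl kill; a sketch (note that pg_stat_activity.procpid was renamed to pid in 9.2):

```shell
psql -c "SELECT pg_terminate_backend(procpid)
         FROM pg_stat_activity
         WHERE procpid <> pg_backend_pid();"
```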
On Tue, Jul 15, 2008 at 11:08:27AM -0500, Campbell, Lance wrote:
1) On the primary server, all WAL files will be written to a backup
directory. Once a night I will delete all of the WAL files on the primary
server from the backup directory. I will create a full file SQL dump of the
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 9:46 PM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process
You can not mix WAL recovery/restore and pg_dump restores. To restore a
pg_dump, you
require a fully functioning postgresql server, which makes its own WAL
files
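For context, the WAL-shipping side of such a strategy hinges on two settings; a hedged sketch of what they might look like (the paths are assumptions, and on 8.2, the version in this thread, setting archive_command alone enables archiving; archive_mode appeared in 8.3):

```
# postgresql.conf on the primary:
archive_command = 'cp %p /backup/wal/%f'

# recovery.conf on the server being restored:
restore_command = 'cp /backup/wal/%f %p'
```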
Got it. Thanks a bunch. Your last email put it all together.
Thanks,
-Original Message-
From: Evan Rempel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 16, 2008 10:22 AM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process
postgres does not use time to determine
Campbell, Lance [EMAIL PROTECTED] wrote:
PostgreSQL: 8.2
I am about to change my backup and failover procedure from dumping a
full
file SQL dump of our data every so many minutes
You're currently running pg_dump every so many minutes?
to using WAL files.
Be sure you have read (and
Sent: Tuesday, July 15, 2008 12:24 PM
To: Campbell, Lance; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup and failover process
Campbell, Lance [EMAIL PROTECTED] wrote:
PostgreSQL: 8.2
I am about to change my backup and failover procedure from dumping a
full
file SQL dump of our data every so many
Campbell, Lance [EMAIL PROTECTED] wrote:
I have read this documentation.
I wanted to check if there was some type of timestamp
My previous email omitted the URL I meant to paste:
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#RECOVERY-CONFIG-SETTINGS
-Kevin
Campbell, Lance [EMAIL PROTECTED] wrote:
What happens if you take an SQL snapshot of a database while
creating WAL archives then later restore from that SQL snapshot and
apply those WAL files?
What do you mean by an SQL snapshot of a database? WAL files only
come into play for backup
PostgreSQL: 8.2
I am about to change my backup and failover procedure from dumping a full file
SQL dump of our data every so many minutes to using WAL files. Could someone
review the below strategy to identify if this strategy has any issues?
1) On the primary server, all WAL files will
Scott Marlowe [EMAIL PROTECTED] writes:
I wonder what it's meaning by invalid arg?
On my Fedora machine, man write explains EINVAL thusly:
EINVAL fd is attached to an object which is unsuitable for
writing; or
the file was opened with the O_DIRECT flag, and
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
PostgreSQL 8.2.4
RedHat ES4
I have a nightly cron job that is (supposed) to dump a specific
database to magnetic tape:
/usr/local/bin/pg_dump dbname > /dev/st0
This runs, and doesn't throw any errors, but
Phillip Smith [EMAIL PROTECTED] writes:
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
A couple of possible things to try; pg_dump to a text file and try
cat'ting that to the tape drive, or pipe it through tar and then to the
tape.
What would the correct syntax be
Tom Lane wrote:
Phillip Smith [EMAIL PROTECTED] writes:
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
A couple of possible things to try; pg_dump to a text file and try
cat'ting that to the tape drive, or pipe it through tar and then to the
tape.
What would the
Coming in the middle of this thread, so slap me if I'm off base here.
tar will accept standard in as:
tar -cf -
the '-f -' says take input.
That would be to write to stdout :) I can't figure out how to accept from
stdin :(
-f is where to send the output, either a file, a device (such
What would the correct syntax be for that - I can't figure out how to
make tar accept stdin:
I don't think it can. Instead, maybe dd with blocksize set equal to the
tape drive's required blocksize would do? You'd have to check what options
your
dd version has for padding out the last
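A hedged sketch of the dd suggestion (the 32k blocksize is an assumption; use whatever your tape drive requires). With a pipe, conv=sync would pad every short read mid-stream, so obs= is the safer way to coalesce pipe reads into full output blocks:

```shell
# Buffer the dump into fixed 32k output blocks for the tape device.
pg_dump dbname | dd obs=32k of=/dev/st0
```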
On Wed, 27 Feb 2008 13:48:38 +1100
Phillip Smith [EMAIL PROTECTED] wrote:
Coming in the middle of this thread, so slap me if I'm off base here.
tar will accept standard in as:
tar -cf -
the '-f -' says take input.
That would be to write to stdout :) I can't figure out how to
Sorry Steve, I missed the reply all by 3 pixels :)
tar -cf -
the '-f -' says take input.
That would be to write to stdout :) I can't figure out how to accept
from stdin :(
-f is where to send the output, either a file, a device (such as
tape) or stdout (aka '-')
Not
On Tue, Feb 26, 2008 at 9:38 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
Do we think this is a Postgres problem, a Linux problem or a problem
specific to my hardware setup? Was I wrong to think that I should be able to
stream directly from pg_dump to /dev/st0? I would have thought it
Do we think this is a Postgres problem, a Linux problem or a problem
specific to my hardware setup? Was I wrong to think that I should be
able to stream directly from pg_dump to /dev/st0? I would have
thought it *should* work, but maybe I was wrong in the first place
with that?
On Tue, Feb 26, 2008 at 10:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
Do we think this is a Postgres problem, a Linux problem or a problem
specific to my hardware setup? Was I wrong to think that I should be
able to stream directly from pg_dump to /dev/st0? I would have
Do we think this is a Postgres problem, a Linux problem or a
problem specific to my hardware setup? Was I wrong to think
that I should be able to stream directly from pg_dump to
/dev/st0? I would have thought it *should* work, but maybe
I was wrong in the first place
PostgreSQL 8.2.4
RedHat ES4
I have a nightly cron job that is (supposed) to dump a specific database to
magnetic tape:
/usr/local/bin/pg_dump dbname > /dev/st0
This runs, and doesn't throw any errors, but when I try to restore it fails
because the tape is incomplete:
[EMAIL
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
PostgreSQL 8.2.4
RedHat ES4
I have a nightly cron job that is (supposed) to dump a specific database to
magnetic tape:
/usr/local/bin/pg_dump dbname > /dev/st0
This runs, and doesn't throw any errors, but
Simon Riggs wrote:
On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
We have a center in Europe who has just started to use PostgreSQL and was
asking me if there are any Symantec product or other products that backup
this type of database.
It doesn't appear to.
The
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which both old and new data would be inaccessible.
Hmm? I thought the whole
Simon Riggs wrote:
On Thu, 2008-01-31 at 07:21 -0500, Chander Ganesan wrote:
If you don't mind if you lose some transactions
That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from
Simon Riggs wrote:
As far as I am concerned, if any Postgres user loses data then we're all
responsible.
Remember, our license says this software is given without any warranty
whatsoever, implicit or explicit, written or implied, given or sold,
alive or deceased.
--
Alvaro Herrera
Magnus Hagander wrote:
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and
On Thu, 2008-01-31 at 12:09 -0300, Alvaro Herrera wrote:
Simon Riggs wrote:
As far as I am concerned, if any Postgres user loses data then we're all
responsible.
Remember, our license says this software is given without any warranty
whatsoever, implicit or explicit, written or implied,
On Thu, 2008-01-31 at 10:02 -0500, Chander Ganesan wrote:
Magnus Hagander wrote:
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
That sentence has no place in any discussion about backup because
Subject: [ADMIN] Backup
Date: Thu, 24 Jan 2008 14:08:26 -0500
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; pgsql-admin@postgresql.org
CC: [EMAIL PROTECTED]
Hi,
We have a center in Europe who has just started to use PostgreSQL and
was asking me
Thank you very much Scott..
I'll keep you updated on my progress.
Thanks again.
Nuwan.
Scott Marlowe [EMAIL PROTECTED] wrote: On Jan 26, 2008 3:06 PM, NUWAN
LIYANAGE wrote:
Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
single file is going to be pretty large. I
Yes, I was thinking of doing a pg_dumpall, but my only worry was that the single
file is going to be pretty large. I guess I don't have to worry too much about
that.
But my question to you, sir, is: if I want to create the development db using
this pg_dump file, how do I actually edit create
On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
We have a center in Europe who has just started to use PostgreSQL and was
asking me if there are any Symantec product or other products that backup
this type of database.
It doesn't appear to.
The design of the PITR system allows a
On Jan 25, 2008 1:55 PM, NUWAN LIYANAGE [EMAIL PROTECTED] wrote:
Hello,
I have a 450gb production database, and was trying to create a development
database using a bkp.
I was following the instructions on postgres documentation, and came across
the paragraph that says...
If you are
Hello,
I have a 450gb production database, and was trying to create a
development database using a bkp.
I was following the instructions on postgres documentation, and came across
the paragraph that says...
If you are using tablespaces that do not reside underneath this (data)
Hi,
We have a center in Europe who has just started to use PostgreSQL
and was asking me if there are any Symantec product or other products
that backup this type of database. We presently run VERITAS ver9.1 on
windows2003 server. What is being used by users out there now. We are
thinking
We have a center in Europe who has just started to use PostgreSQL and was
asking me if there are any Symantec product or other products that backup
this type of database.
It doesn't appear to. I've just been through the whole rigmarole of
BackupExec for some Windows Servers, and I couldn't
Brian Modra wrote:
The documentation about WAL says that you can start a live backup, as
long as you use WAL backup also.
I'm concerned about the integrity of the tar file. Can someone help me
with that?
If you are using point in time recovery:
Sorry to be hammering this point, but I want to be totally sure it's OK,
rather than 5 months down the line attempt to recover, and it fails...
Are you absolutely certain that the tar backup of the file that changed, is
OK? (And that even if that file is huge, tar has managed to save the file as
Steve Holdoway [EMAIL PROTECTED] writes:
You can be absolutely certain that the tar backup of a file that's changed is
a complete waste of time. Because it changed while you were copying it.
That is, no doubt, the reasoning that prompted the gnu tar people to
make it do what it does, but it
Am Mittwoch, 16. Januar 2008 schrieb Tom Lane:
(Thinks for a bit...) Actually I guess there's one extra assumption in
there, which is that tar must issue its reads in multiples of our page
size. But that doesn't seem like much of a stretch.
There is something about that here:
Peter Eisentraut [EMAIL PROTECTED] writes:
Am Mittwoch, 16. Januar 2008 schrieb Tom Lane:
(Thinks for a bit...) Actually I guess there's one extra assumption in
there, which is that tar must issue its reads in multiples of our page
size. But that doesn't seem like much of a stretch.
There
Brian Modra wrote:
Sorry to be hammering this point, but I want to be totally sure it's
OK, rather than 5 months down the line attempt to recover, and it fails...
Are you absolutely certain that the tar backup of the file that
changed, is OK? (And that even if that file is huge, tar has
Hi, Brian
We have been doing PITR backups since the feature first became available
in postgresql. We first used tar, then, due to the dreadful warning
being emitted by tar (which made us doubt that it was actually archiving
that particular file) we decided to try CPIO, which actually emits
On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to
the first OS backup, or you will end up with an unusable data base.
Why does the recovery need WAL files from before the backup?
Tom
---(end of
Tom Davies [EMAIL PROTECTED] writes:
On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to
the first OS backup, or you will end up with an unusable data base.
Why does the recovery need WAL files from before the backup?
It doesn't,
On Jan 16, 2008 4:56 PM, Tom Davies [EMAIL PROTECTED] wrote:
On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to
the first OS backup, or you will end up with an unusable data base.
Why does the recovery need WAL files from before
If you don't start archiving log files, your first backup won't be valid
-- well I suppose you could do it the hard way and start the backup and
the log archiving at exactly the same time (can't picture how to time
that), but the point is you need the current log when you kick off the
backup.
Hi,
I use a script like the example below to generate a list of the WAL files
that have to be saved by the backup job. I take the the names of the first
and last WAL files from the backup HISTORYFILE generated by
pg_start_backup() and pg_stop_backup(). The names of the WAL files between
the
Sebastian Reitenbach [EMAIL PROTECTED] writes:
The WAL files have names like this:
00000001000000010000003C
I am wonder what the meaning of the two 1 in the filename is?
The first one (the first 8 hex digits actually) are the current
timeline number. The second one isn't very interesting,
Hi,
Tom Lane [EMAIL PROTECTED] wrote:
Sebastian Reitenbach [EMAIL PROTECTED] writes:
The WAL files have names like this:
00000001000000010000003C
I am wonder what the meaning of the two 1 in the filename is?
The first one (the first 8 hex digits actually) are the current
timeline
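Tom's breakdown can be checked mechanically: for a pre-9.3 segment name, the three 8-hex-digit fields are the timeline, the log file, and the segment within it (a bash sketch):

```shell
name=00000001000000010000003C
echo "timeline=$((16#${name:0:8})) log=$((16#${name:8:8})) seg=$((16#${name:16:8}))"
# prints: timeline=1 log=1 seg=60
```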