Re: [ADMIN] Backup

2012-11-30 Thread Lonni J Friedman
I can generate a backup using pg_dump on a slave with 9.2.1.  No clue
if it works with 9.1.x (or older) versions though.
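Roughly like this, with placeholder host and database names:

# Point pg_dump at the hot standby instead of the master; it only reads.
pg_dump -h standby.example.com -p 5432 -U postgres -Fc -f /backups/mydb.dump mydb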

On Thu, Nov 29, 2012 at 5:17 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Pg_dump



 Sabry Sadiq
 Systems Administrator

 Whispir
 Level 30 360 Collins Street
 Melbourne / Victoria 3000 / Australia
 GPO Box 130 / Victoria 3001 / Australia
 T +61 3 8630 9900 / M +61 428 571 636
 F +61 3 8630 9990 / E mailto:ssa...@whispir.com
 1300 WHISPIR / 1300 944 774
 www.whispir.com






 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:17 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 *how* are the backups being generated?

 On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Currently backups are performed on the master database and I want to
 offload that load to the standby



 Sabry Sadiq
 Systems Administrator

 Whispir
 Level 30 360 Collins Street
 Melbourne / Victoria 3000 / Australia
 GPO Box 130 / Victoria 3001 / Australia T +61 3 8630 9900 / M +61 428
 571 636 F +61 3 8630 9990 / E mailto:ssa...@whispir.com
 1300 WHISPIR / 1300 944 774
 www.whispir.com


 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:15 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 I don't know, I've never tried.  If I had to guess, I'd say no, as that 
 version doesn't support cascading replication.

 You never stated, how are you currently performing backups?

 On Thu, Nov 29, 2012 at 5:13 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Does it work well with version 9.1.3?



 Sabry Sadiq
 Systems Administrator

 Whispir




[ADMIN] Backup

2012-11-29 Thread Sabry Sadiq
Hi All,

Has anyone been successful in offloading the database backup from the 
production database to the standby database?

Kind Regards,
Sabry




Sabry Sadiq
Systems Administrator

Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
T +61 3 8630 9900 / M +61 428 571 636
F +61 3 8630 9990 / E ssa...@whispir.com
1300 WHISPIR / 1300 944 774

www.whispir.com










Re: [ADMIN] Backup

2012-11-29 Thread Sabry Sadiq
Hi Lonni,

I am new to Postgres. How do I go about doing it? What are the gotchas?

Kind Regards,
Sabry




Sabry Sadiq
Systems Administrator

Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
T +61 3 8630 9900 / M +61 428 571 636
F +61 3 8630 9990 / E mailto:ssa...@whispir.com
1300 WHISPIR / 1300 944 774
www.whispir.com






-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:11 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup

Yes.  Works fine in 9.2.x.

On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi All,



 Has anyone been successful in offloading the database backup from the
 production database to the standby database?



 Kind Regards,

 Sabry




-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Backup

2012-11-29 Thread Lonni J Friedman
Yes.  Works fine in 9.2.x.

On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi All,



 Has anyone been successful in offloading the database backup from the
 production database to the standby database?



 Kind Regards,

 Sabry





-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Backup

2012-11-29 Thread Lonni J Friedman
There aren't any, assuming that all of the servers are using the same
postgresql.conf.  I'm referring to running pg_basebackup.
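A minimal sketch, with placeholder host, user and paths (taking the base
backup *from* a standby needs 9.2):

# Stream a full base backup from the standby as compressed tar files.
pg_basebackup -h standby.example.com -U replicator -D /backups/base -Ft -z -P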

On Thu, Nov 29, 2012 at 5:11 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi Lonni,

 I am new to Postgres. How do I go about doing it? What are the gotchas?

 Kind Regards,
 Sabry




 Sabry Sadiq
 Systems Administrator
 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:11 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 Yes.  Works fine in 9.2.x.

 On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi All,



 Has anyone been successful in offloading the database backup from the
 production database to the standby database?



 Kind Regards,

 Sabry




Re: [ADMIN] Backup

2012-11-29 Thread Sabry Sadiq
Does it work well with version 9.1.3?



Sabry Sadiq
Systems Administrator

Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
T +61 3 8630 9900 / M +61 428 571 636
F +61 3 8630 9990 / E mailto:ssa...@whispir.com
1300 WHISPIR / 1300 944 774
www.whispir.com






-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:13 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup

There aren't any, assuming that all of the servers are using the same 
postgresql.conf.  I'm referring to running pg_basebackup.

On Thu, Nov 29, 2012 at 5:11 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi Lonni,

 I am new to Postgres. How do I go about doing it? What are the gotchas?

 Kind Regards,
 Sabry




 Sabry Sadiq
 Systems Administrator
 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:11 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 Yes.  Works fine in 9.2.x.

 On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi All,



 Has anyone been successful in offloading the database backup from the
 production database to the standby database?



 Kind Regards,

 Sabry



Re: [ADMIN] Backup

2012-11-29 Thread Lonni J Friedman
I don't know, I've never tried.  If I had to guess, I'd say no, as
that version doesn't support cascading replication.

You never stated, how are you currently performing backups?

On Thu, Nov 29, 2012 at 5:13 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Does it work well with version 9.1.3?



 Sabry Sadiq
 Systems Administrator

 Whispir
 Level 30 360 Collins Street
 Melbourne / Victoria 3000 / Australia
 GPO Box 130 / Victoria 3001 / Australia
 T +61 3 8630 9900 / M +61 428 571 636
 F +61 3 8630 9990 / E mailto:ssa...@whispir.com
 1300 WHISPIR / 1300 944 774
 www.whispir.com






 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:13 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 There aren't any, assuming that all of the servers are using the same 
 postgresql.conf.  I'm referring to running pg_basebackup.

 On Thu, Nov 29, 2012 at 5:11 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi Lonni,

 I am new to Postgres. How do I go about doing it? What are the gotchas?

 Kind Regards,
 Sabry




 Sabry Sadiq
 Systems Administrator
 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:11 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 Yes.  Works fine in 9.2.x.

 On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi All,



 Has anyone been successful in offloading the database backup from the
 production database to the standby database?



 Kind Regards,

 Sabry




Re: [ADMIN] Backup

2012-11-29 Thread Sabry Sadiq
Currently backups are performed on the master database and I want to offload 
that load to the standby



Sabry Sadiq
Systems Administrator

Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
T +61 3 8630 9900 / M +61 428 571 636
F +61 3 8630 9990 / E mailto:ssa...@whispir.com
1300 WHISPIR / 1300 944 774
www.whispir.com


-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:15 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup

I don't know, I've never tried.  If I had to guess, I'd say no, as that version 
doesn't support cascading replication.

You never stated, how are you currently performing backups?

On Thu, Nov 29, 2012 at 5:13 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Does it work well with version 9.1.3?



 Sabry Sadiq
 Systems Administrator

 Whispir
 Level 30 360 Collins Street
 Melbourne / Victoria 3000 / Australia
 GPO Box 130 / Victoria 3001 / Australia T +61 3 8630 9900 / M +61 428
 571 636 F +61 3 8630 9990 / E mailto:ssa...@whispir.com
 1300 WHISPIR / 1300 944 774
 www.whispir.com






 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:13 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 There aren't any, assuming that all of the servers are using the same 
 postgresql.conf.  I'm referring to running pg_basebackup.

 On Thu, Nov 29, 2012 at 5:11 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi Lonni,

 I am new to Postgres. How do I go about doing it? What are the gotchas?

 Kind Regards,
 Sabry




 Sabry Sadiq
 Systems Administrator
 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:11 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 Yes.  Works fine in 9.2.x.

 On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Hi All,



 Has anyone been successful in offloading the database backup from
 the production database to the standby database?



 Kind Regards,

 Sabry



Re: [ADMIN] Backup

2012-11-29 Thread Sabry Sadiq
Pg_dump



Sabry Sadiq
Systems Administrator

Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
T +61 3 8630 9900 / M +61 428 571 636
F +61 3 8630 9990 / E mailto:ssa...@whispir.com
1300 WHISPIR / 1300 944 774
www.whispir.com






-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:17 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup

*how* are the backups being generated?

On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Currently backups are performed on the master database and I want to
 offload that load to the standby



 Sabry Sadiq
 Systems Administrator

 Whispir
 Level 30 360 Collins Street
 Melbourne / Victoria 3000 / Australia
 GPO Box 130 / Victoria 3001 / Australia T +61 3 8630 9900 / M +61 428
 571 636 F +61 3 8630 9990 / E mailto:ssa...@whispir.com
 1300 WHISPIR / 1300 944 774
 www.whispir.com


 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:15 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 I don't know, I've never tried.  If I had to guess, I'd say no, as that 
 version doesn't support cascading replication.

 You never stated, how are you currently performing backups?

 On Thu, Nov 29, 2012 at 5:13 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Does it work well with version 9.1.3?



 Sabry Sadiq
 Systems Administrator

 Whispir



Re: [ADMIN] Backup

2012-11-29 Thread Lonni J Friedman
*how* are the backups being generated?

On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Currently backups are performed on the master database and I want to offload 
 that load to the standby



 Sabry Sadiq
 Systems Administrator

 Whispir
 Level 30 360 Collins Street
 Melbourne / Victoria 3000 / Australia
 GPO Box 130 / Victoria 3001 / Australia
 T +61 3 8630 9900 / M +61 428 571 636
 F +61 3 8630 9990 / E mailto:ssa...@whispir.com
 1300 WHISPIR / 1300 944 774
 www.whispir.com


 -Original Message-
 From: Lonni J Friedman [mailto:netll...@gmail.com]
 Sent: Friday, 30 November 2012 12:15 PM
 To: Sabry Sadiq
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup

 I don't know, I've never tried.  If I had to guess, I'd say no, as that 
 version doesn't support cascading replication.

 You never stated, how are you currently performing backups?

 On Thu, Nov 29, 2012 at 5:13 PM, Sabry Sadiq ssa...@whispir.com wrote:
 Does it work well with version 9.1.3?



 Sabry Sadiq
 Systems Administrator

 Whispir




Re: [ADMIN] Backup

2012-11-29 Thread Kevin Grittner
Sabry Sadiq wrote:

 Does it work well with version 9.1.3?

It might work better in 9.1.6:

http://www.postgresql.org/support/versioning/

And it would probably pay to keep up-to-date as new minor releases
become available.

-Kevin





[ADMIN] Backup and Restore from 8.3.0 to 9.1.3

2012-09-20 Thread Kasia Tuszynska
Hi Everybody,

I am experimenting with backups and restores
I am running into something curious and would appreciate any suggestions.

Backing up from:
Postgres 8.3.0
Windows 2003 sp1 server (32bit)

-Took a compressed binary backup of a single db (the default option in 
pgAdminIII, right-click Backup)

-Took a plain backup of the same single db (the plain option in the 
right-click Backup GUI in pgAdminIII)

Restoring to:
Postgres 9.1.3
Windows 2008 R2 (64bit)
Aside from not finding pldbgapi.dll when restoring, which seems to contain 
the debug functions


-Tried restoring the compressed binary backup with pg_restore - Error

o   I get an error message: pg_restore [archiver (db)] could not execute query: 
ERROR: permission denied for schema bob

§  The bob schema is owned by the bob user, which is a superuser; it is the user 
with which I am restoring

§  The object being restored :

·will be placed in and owned by a user/schema rob,

·the name for this object is derived from a query from a bob 
owned/stored table,

·rob is not superuser

·all schemas grant usage privs to public

·all users are part of public

-Tried restoring the plain backup as a .sql file in psql, as the bob 
user - Success!

o   The object in question was created

Question #1: what does the error mean?

-Does it mean that the bob user cannot create an object in the rob 
schema?

-Does it mean that the bob user cannot query from an object stored in 
the bob schema and owned by bob?
Question #2: if the restore was successful from an SQL file but not from a dump 
file, where is the difference in the restoration process from those two files?

-Is there a difference in locking or concurrency mechanisms employed by 
those two restoration methods?

-I listed out the objects in the dump file, does the list indicate the 
order of restoration?
Question #3: the object is a gist index, we are also restoring a btree index, 
the btree index name also derives from a query on a bob owned/stored table, and 
yet the btree index gets created with both restoration methods, but the gist 
does not

-Could this experience have anything to do with this being a gist index?
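For reference, a sketch of the listing step and of a retry that skips 
ownership (dump and database names are placeholders):

# Print the archive's table of contents; entries restore in this order.
pg_restore -l mydb.dump > mydb.list

# Retry as a superuser, skipping ownership and privilege restoration.
pg_restore --no-owner --no-privileges -d targetdb mydb.dump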

Thank you,
Sincerely,
Kasia


Re: [ADMIN] Backup and Restore from 8.3.0 to 9.1.3

2012-09-20 Thread Craig Ringer

On 09/21/2012 01:01 AM, Kasia Tuszynska wrote:

Hi Everybody,

I am experimenting with backups and restores….

I am running into something curious and would appreciate any suggestions.

Backing up from:

Postgres 8.3.0

Windows 2003 sp1 server (32bit)

-Took a compressed binary backup of a single db (the default option in
pgAdminIII, right-click Backup)

-Took a plain backup of the same single db (the plain option in the
right-click Backup GUI in pgAdminIII)


Did you back up using the old version of PgAdmin-III with the old 
pg_dump? If so, problems restoring into PostgreSQL 9.1 are likely.


If possible, dump the old database using the *new* pg_dump. If you're 
using the PgAdmin-III GUI, connecting to the old DB from the new PgAdmin 
on the new computer should do the trick.
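For example (binary path, hostname and database name are placeholders):

# Run the *new* 9.1 pg_dump against the old 8.3.0 server over the network.
"C:\Program Files\PostgreSQL\9.1\bin\pg_dump" -h old-server -p 5432 -U postgres -Fc -f mydb.dump mydb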


--
Craig Ringer





[ADMIN] backup

2012-06-18 Thread lohita nama
Hi,

I am working as a SQL Server DBA. Recently our team had the opportunity to
work on Postgres databases; my experience is with SQL Server on the Windows
platform, and now our company has Postgres databases on the Solaris platform.

Can you please suggest a step-by-step procedure for taking backups of
Postgres databases?

Awaiting your reply; thanks for your help.


Re: [ADMIN] backup

2012-06-18 Thread Kevin Grittner
lohita nama namaloh...@gmail.com wrote:
 
 I am working as a SQL Server DBA. Recently our team had the
 opportunity to work on Postgres databases; my experience is with SQL
 Server on the Windows platform, and now our company has Postgres
 databases on the Solaris platform.
 
 Can you please suggest a step-by-step procedure for taking backups of
 Postgres databases?
 
As a new user of PostgreSQL you may not yet have noticed how good
the manuals are.  You should find what you need in this chapter:
 
http://www.postgresql.org/docs/current/interactive/backup.html
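The simplest recipes from that chapter boil down to something like this
(database names are placeholders):

pg_dump mydb > mydb.sql       # dump one database to a SQL script
psql -d mydb -f mydb.sql      # restore it into an (empty) database later
pg_dumpall > cluster.sql      # or dump every database plus roles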
 
If you are on an older version of PostgreSQL, click the link at the
top for the version you are using.
 
If you still have questions after reading the manual, feel free to
post with a more specific question.
 
-Kevin



Re: [ADMIN] backup

2012-06-18 Thread Frederiko Costa
Hi,

I would recommend this:
http://www.postgresql.org/docs/9.1/static/backup.html

Very straightforward and easy reading ...

-fred




On Mon, Jun 18, 2012 at 10:50 AM, lohita nama namaloh...@gmail.com wrote:

 Hi,

 I am working as a SQL Server DBA. Recently our team had the opportunity
 to work on Postgres databases; my experience is with SQL Server on the
 Windows platform, and now our company has Postgres databases on the
 Solaris platform.

 Can you please suggest a step-by-step procedure for taking backups of
 Postgres databases?

 Awaiting your reply; thanks for your help.



[ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread Scott Whitney
Hello, everyone. I want to throw a scenario out there to see what y'all think. 

Soon, my cluster backups will be increasing in size inordinately. They're going 
to immediately go to 3x as large as they currently are with the potential to be 
about 20x within a year or so. 

My current setup uses a single PG 8.x server doing nightly dumps (not ideal but 
sufficient for the moment, and one of the main reasons to move to PG 9) which 
are then downloaded from my hosting center to our offices for DR purposes. Each 
night I pull down roughly 5GB of compressed pg_dump data. Dumping this takes 
about 1.5hrs. Downloading this at 15Mbps takes about an hour. Soon I'll be 
looking at somewhere around 7hrs for the dumps to complete and downloading a 
12GB file (which will take about 3 hrs). Oh, and I'll have to pay for 
significant bandwidth overage since I'm charged at the 95th percentile, and 
while an hour a day does NOT kick me up to 15Mbps usage at the 95th 
percentile, 3hrs per night certainly will, 
so there's a real cost associated with this strategy as well. 

While the time of the actual dumps is not a huge issue, the time of the 
download IS a large concern, especially since my support folks use that file 
daily to extract individual customer databases for restore in assisting 
customer support issues. 

So, while now I have my pg_dumps completed around 2AM and downloaded to my 
local network at about 3AM, with the increase in our database sizes, what will 
be happening is that my pg_dump will not be completed until around 7AM, and the 
download would not be completed until around 10AM, best-case scenario. Add into 
that support trying to restore a database...more on that in a moment. 

My _new_ setup will instead be 2 PG 9.x servers with hot-standby enabled (at my 
hosting center) and a 3rd PG 9.x server at my local office also replicating off 
of the master. Each one of those servers will perform his own pg_dumps of the 
individual databases for backup/disaster recovery purposes, and while each dump 
might not be consistent with one another, each SERVER will have dumps 
consistent to itself, which is viable for our situation, and does not require 
me to download 12GB (or more) each night with all of those associated 
nightmares, costs and other problems. 

Alright, well, I've got that part all thought out, and it seems like a good way 
to do it to me, but I'm _still_ running into the situation that I've got to 
take 8hrs-ish to run the pg_dump no matter where it runs, and when my support 
folks need it (which they do daily), this basically means that if they have to 
have a customer database up NOW NOW NOW for support reasons, they simply cannot 
have it within an hour in many cases. Specifically, one database takes between 
2 and 7.5hrs to pg_dump depending on which format I use, so if they need a 
CURRENT copy, they're at least 4 hours out. Additionally, they can't directly 
use the replicating server at my local office, because they need to test the 
problems the customers are having which include pesky things like INSERT, 
UPDATE and DELETE, so they have to restore this data to another internal PG 
backend. 

Enter my outside-the-box thinking. 

I rather assume that you cannot do a start/stop backup on a hot-standby server. 
HOWEVER, what if 

I set up a 4th database server internally at my office. Each night I stop PG on 
my 3rd server (the local one replicating off of the master) and rsync my 
pg_data directory to this new 4th server. I bring up the 4th server NOT as a 
standby, but as a master. They would then have all customer data on an 
internal, usable PG system from the time of the rsync, and while it might not 
reflect the immediate state of the database, that's pretty well always true, 
and they're used to that, since whenever they clone a site, they're using the 
dumps done around midnight anyway. 

I believe, then, that when I restart server #3 (the standby who is 
replicating), he'll say oh, geez, I was down, let me catch up on all that crap 
that happened while I was out of the loop, he'll replay the WAL files that 
were written while he was down, and then he'll catch back up. 
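In other words, something like this (paths and hostnames are placeholders):

# On server #3, the local standby:
pg_ctl -D /var/lib/pgsql/data stop -m fast
rsync -a --delete /var/lib/pgsql/data/ server4:/var/lib/pgsql/data/
pg_ctl -D /var/lib/pgsql/data start    # resumes replay and catches up

# On server #4, drop the standby configuration so it comes up as a master:
rm -f /var/lib/pgsql/data/recovery.conf
pg_ctl -D /var/lib/pgsql/data start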

Does this sound like a viable option? Or does someone have additional 
suggestions? 


Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread amador alvarez

Hi Scott,
Why don't you replicate this master to the other location(s) using other 
methods like Bucardo? You can pick the tables you really want 
replicated there.
For the backup, turn to a hot backup (tar $PGDATA) plus archiving: easier, 
faster and more efficient than a logical copy with pg_dump.


A.A

On 04/25/2012 09:11 AM, Scott Whitney wrote:
Hello, everyone. I want to throw a scenario out there to see what 
y'all think.


Soon, my cluster backups will be increasing in size inordinately. 
They're going to immediately go to 3x as large as they currently are 
with the potential to be about 20x within a year or so.


My current setup uses a single PG 8.x server doing nightly dumps (not 
ideal but sufficient for the moment, and one of the main reasons to 
move to PG 9) which are then downloaded from my hosting center to our 
offices for DR purposes. Each night I pull down roughly 5GB of 
compressed pg_dump data. Dumping this takes about 1.5hrs. Downloading 
this at 15Mbps takes about an hour. Soon I'll be looking at somewhere 
around 7hrs for the dumps to complete and downloading a 12GB file 
(which will take about 3 hrs). Oh, and I'll have to pay for 
significant bandwidth overage since I'm charged on a 95%, and while an 
hour a day does NOT kick me up to 15Mbps usage at 95%, 3hrs per night 
certainly will, so there's a real cost associated with this strategy 
as well.


While the time of the actual dumps is not a huge issue, the time of 
the download IS a large concern, especially since my support folks use 
that file daily to extract individual customer databases for restore 
in assisting customer support issues.


So, while now I have my pg_dumps completed around 2AM and downloaded 
to my local network at about 3AM, with the increase in our database 
sizes, what will be happening is that my pg_dump will not be completed 
until around 7AM, and the download would not be completed until around 
10AM, best-case scenario. Add into that support trying to restore a 
database...more on that in a moment.


My _new_ setup will instead be 2 PG 9.x servers with hot-standby 
enabled (at my hosting center) and a 3rd PG 9.x server at my local 
office also replicating off of the master. Each one of those servers 
will perform his own pg_dumps of the individual databases for 
backup/disaster recovery purposes, and while each dump might not be 
consistent with one another, each SERVER will have dumps consistent to 
itself, which is viable for our situation, and does not require me to 
download 12GB (or more) each night with all of those associated 
nightmares, costs and other problems.


Alright, well, I've got that part all thought out, and it seems like a 
good way to do it to me, but I'm _still_ running into the situation 
that I've got to take 8hrs-ish to run the pg_dump no matter where it 
runs, and when my support folks need it (which they do daily), this 
basically means that if they have to have a customer database up NOW 
NOW NOW for support reasons, they simply cannot have it within an hour 
in many cases. Specifically, one database takes between 2 and 7.5hrs 
to pg_dump depending on which format I use, so if they need a CURRENT 
copy, they're at least 4 hours out. Additionally, they can't directly 
use the replicating server at my local office, because they need to 
test the problems the customers are having which include pesky things 
like INSERT, UPDATE and DELETE, so they have to restore this data to 
another internal PG backend.


Enter my outside-the-box thinking.

I rather assume that you cannot do a start/stop backup on a 
hot-standby server. HOWEVER, what if


I set up a 4th database server internally at my office. Each night I 
stop PG on my 3rd server (the local one replicating off of the master) 
and rsync my pg_data directory to this new 4th server. I bring up the 
4th server NOT as a standby, but as a master. They would then have all 
customer data on an internal, usable PG system from the time of the 
rsync, and while it might not reflect the immediate state of the 
database, that's pretty well always true, and they're used to that, 
since whenever they clone a site, they're using the dumps done 
around midnight anyway.


I believe, then, that when I restart server #3 (the standby who is 
replicating), he'll say oh, geez, I was down, let me catch up on all 
that crap that happened while I was out of the loop, he'll replay the 
WAL files that were written while he was down, and then he'll catch 
back up.


Does this sound like a viable option? Or does someone have additional 
suggestions?


Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread Scott Whitney
Both good points, thanks, although I suspect that a direct network copy of the 
pg_data directory will be faster than a tar/untar event. 

- Original Message -

 Hi Scott,
 Why don't you replicate this master to the other location(s) using
 other methods like Bucardo? You can pick the tables you really want
 replicated there.
 For the backup, turn to a hot backup (tar $PGDATA) plus archiving:
 easier, faster and more efficient than a logical copy with pg_dump.

 A.A

 On 04/25/2012 09:11 AM, Scott Whitney wrote:
  Hello, everyone. I want to throw a scenario out there to see what
  y'all think.
 
  Soon, my cluster backups will be increasing in size inordinately.
  They're going to immediately go to 3x as large as they currently
  are
  with the potential to be about 20x within a year or so.
 
  My current setup uses a single PG 8.x server doing nightly dumps
  (not
  ideal but sufficient for the moment, and one of the main reasons to
  move to PG 9) which are then downloaded from my hosting center to
  our
  offices for DR purposes. Each night I pull down roughly 5GB of
  compressed pg_dump data. Dumping this takes about 1.5hrs.
  Downloading
  this at 15Mbps takes about an hour. Soon I'll be looking at
  somewhere
  around 7hrs for the dumps to complete and downloading a 12GB file
  (which will take about 3 hrs). Oh, and I'll have to pay for
  significant bandwidth overage since I'm charged on a 95%, and while
  an
  hour a day does NOT kick me up to 15Mbps usage at 95%, 3hrs per
  night
  certainly will, so there's a real cost associated with this
  strategy
  as well.
 
  While the time of the actual dumps is not a huge issue, the time of
  the download IS a large concern, especially since my support folks
  use
  that file daily to extract individual customer databases for
  restore
  in assisting customer support issues.
 
  So, while now I have my pg_dumps completed around 2AM and
  downloaded
  to my local network at about 3AM, with the increase in our database
  sizes, what will be happening is that my pg_dump will not be
  completed
  until around 7AM, and the download would not be completed until
  around
  10AM, best-case scenario. Add into that support trying to restore a
  database...more on that in a moment.
 
  My _new_ setup will instead be 2 PG 9.x servers with hot-standby
  enabled (at my hosting center) and a 3rd PG 9.x server at my local
  office also replicating off of the master. Each one of those
  servers
  will perform his own pg_dumps of the individual databases for
  backup/disaster recovery purposes, and while each dump might not be
  consistent with one another, each SERVER will have dumps consistent
  to
  itself, which is viable for our situation, and does not require me
  to
  download 12GB (or more) each night with all of those associated
  nightmares, costs and other problems.
 
  Alright, well, I've got that part all thought out, and it seems
  like a
  good way to do it to me, but I'm _still_ running into the situation
  that I've got to take 8hrs-ish to run the pg_dump no matter where
  it
  runs, and when my support folks need it (which they do daily), this
  basically means that if they have to have a customer database up
  NOW
  NOW NOW for support reasons, they simply cannot have it within an
  hour
  in many cases. Specifically, one database takes between 2 and
  7.5hrs
  to pg_dump depending on which format I use, so if they need a
  CURRENT
  copy, they're at least 4 hours out. Additionally, they can't
  directly
  use the replicating server at my local office, because they need to
  test the problems the customers are having which include pesky
  things
  like INSERT, UPDATE and DELETE, so they have to restore this data
  to
  another internal PG backend.
 
  Enter my outside-the-box thinking.
 
  I rather assume that you cannot do a start/stop backup on a
  hot-standby server. HOWEVER, what if
 
  I set up a 4th database server internally at my office. Each night
  I
  stop PG on my 3rd server (the local one replicating off of the
  master)
  and rsync my pg_data directory to this new 4th server. I bring up
  the
  4th server NOT as a standby, but as a master. They would then have
  all
  customer data on an internal, usable PG system from the time of the
  rsync, and while it might not reflect the immediate state of the
  database, that's pretty well always true, and they're used to that,
  since whenever they clone a site, they're using the dumps done
  around midnight anyway.
 
  I believe, then, that when I restart server #3 (the standby who is
  replicating), he'll say oh, geez, I was down, let me catch up on
  all
  that crap that happened while I was out of the loop, he'll replay
  the
  WAL files that were written while he was down, and then he'll catch
  back up.
 
  Does this sound like a viable option? Or does someone have
  additional
  suggestions?


Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread Scott Ribe
On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:

 I believe, then, that when I restart server #3 (the standby who is 
 replicating), he'll say oh, geez, I was down, let me catch up on all that 
 crap that happened while I was out of the loop, he'll replay the WAL files 
 that were written while he was down, and then he'll catch back up.
 
 Does this sound like a viable option? Or does someone have additional 
 suggestions?

Perfectly viable. However, the WAL files must exist for this to happen. So you 
need to set wal_keep_segments appropriately, or set up WAL archiving.
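For example, in the master's postgresql.conf (sizing and archive path are
placeholders):

wal_keep_segments = 512                 # retain 512 x 16MB of WAL for the standby
# ...or archive the WAL somewhere the standby's restore_command can read:
archive_mode = on
archive_command = 'cp %p /archive/%f'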

Note that you could even provide a more up-to-date database for your people to 
work with. If the testbed is nearly up to date, then an rsync to update it 
would take very little time. So you could shut down the replica, rsync, and 
bring the replica back up.


-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread Steve Crawford

On 04/25/2012 09:11 AM, Scott Whitney wrote:

...
My current setup uses a single PG 8.x...
My _new_ setup will instead be 2 PG 9.x ...
It is best to specify the actual major version. While 8.0.x or 9.1.x is 
sufficient to discuss features and capabilities, 9.1 is a different 
major release than 9.0, not a minor update to version 9.


I set up a 4th database server internally at my office. Each night I 
stop PG on my 3rd server (the local one replicating off of the master) 
and rsync my pg_data directory to this new 4th server. I bring up the 
4th server NOT as a standby, but as a master


Does this sound like a viable option? Or does someone have additional 
suggestions?


And speaking of major versions, what you really want is PostgreSQL 
version 9.2 with cascading replication:

http://www.postgresql.org/docs/devel/static/warm-standby.html#CASCADING-REPLICATION

Unfortunately that version is currently in development/testing and will 
probably not be released till toward the end of the year (based on my 
totally uninformed guesstimate method).


With cascading replication you could maintain a constantly up-to-date 
local copy which you could cascade-replicate to other clusters as 
necessary. Whether you maintain one or more constantly updated local 
cascaded replicas off your primary local replica or just spin one off as 
necessary will be determined by how quickly you need to access the 
test/dev/debug replica. It's likely that you can come up with a pretty 
fast method of spinning off a cascaded replica as needed.
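On 9.2 a cascaded standby is just a standby whose recovery.conf points at
another standby instead of the master, something like (connection details
are placeholders):

standby_mode = 'on'
primary_conninfo = 'host=local-replica port=5432 user=replicator'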


Note that there is no requirement for the various replicas to reside on 
different servers. You can run multiple clusters on a single machine and 
one cluster can replicate to one or more others. Not good as a backup 
strategy, obviously, but might be just the ticket for your needs - 
especially since the data copies necessary to bring up a replica are all 
on your local disks - no network transfer required.


You may want to experiment with a current development copy of 9.2 to see 
how it works (and provide bug reports to the developers). Perhaps 9.2 
final will be released before your database grows too big for current 
solutions.


Cheers,
Steve



Re: [ADMIN] Backup/disaster recovery and bandwidth (long)

2012-04-25 Thread amador alvarez
I mean Bucardo (even though there are more tools like this one) just 
for the replication, and the hot database backup only for the backup. 
Only one bounce is needed to turn archiving on; you do not need to 
shut anything down during the backup.


A.A

On 04/25/2012 10:23 AM, Scott Ribe wrote:

On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:


I believe, then, that when I restart server #3 (the standby who is replicating), he'll 
say oh, geez, I was down, let me catch up on all that crap that happened while I 
was out of the loop, he'll replay the WAL files that were written while he was 
down, and then he'll catch back up.

Does this sound like a viable option? Or does someone have additional 
suggestions?

Perfectly viable. However, the WAL files must exist for this to happen. So you 
need to set wal_keep_segments appropriately, or set up WAL archiving.

Note that you could even provide a more up-to-date database for your people to 
work with. If the testbed is nearly up to date, then an rsync to update it 
would take very little time. So you could shut down the replica, rsync, and 
bring the replica back up.






Re: [ADMIN] backup of schema

2011-12-27 Thread Guillaume Lelarge
On Tue, 2011-12-27 at 13:01 +0530, nagaraj L M wrote:
 Hi sir,
   Can you tell me how to back up an individual schema in
 PostgreSQL?
 

Use the -n command line option
(http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html).
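For example (schema and database names are placeholders):

pg_dump -n myschema -Fc -f myschema.dump mydb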


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
  PostgreSQL Sessions #3: http://www.postgresql-sessions.org




[ADMIN] backup of schema

2011-12-27 Thread nagaraj L M
Hi sir,
  Can you tell me how to back up an individual schema in PostgreSQL?



Re: [ADMIN] backup non-built-in tablespace

2011-12-02 Thread Kevin Grittner
Karuna Karpe karuna.ka...@os3infotech.com wrote:
 
 I want to take a cold backup of a database cluster, but the cluster
 contains four non-built-in tablespaces. When I take the cold backup,
 restore it on another machine, and check the tablespaces, none of the
 non-built-in tablespaces are available.
 
 Please can anyone let me know how to back up the non-built-in
 tablespaces?
 
From this page:
 
http://www.postgresql.org/docs/current/interactive/continuous-archiving.html
 
There is this:
 
| Be certain that your backup dump includes all of the files under
| the database cluster directory (e.g., /usr/local/pgsql/data). If
| you are using tablespaces that do not reside underneath this
| directory, be careful to include them as well (and be sure that
| your backup dump archives symbolic links as links, otherwise the
| restore will corrupt your tablespaces).
 
You might want to search that page for the other mentions of
tablespace to find the descriptions of when to make a backup, how
to restore, and caveats.
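A sketch of such a cold backup, with the data directory from the docs and
four example tablespace paths (GNU tar stores the pg_tblspc symlinks as
links by default):

pg_ctl -D /usr/local/pgsql/data stop -m fast
tar -czf /backups/cluster.tar.gz /usr/local/pgsql/data /mnt/ts1 /mnt/ts2 /mnt/ts3 /mnt/ts4
pg_ctl -D /usr/local/pgsql/data start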
 
-Kevin



[ADMIN] backup non-built-in tablespace

2011-12-01 Thread Karuna Karpe
Hi,

   I want to take a cold backup of a database cluster, but the cluster
contains four non-built-in tablespaces. When I take the cold backup, restore
it on another machine, and check the tablespaces, none of the non-built-in
tablespaces are available.

Please can anyone let me know how to back up the non-built-in tablespaces?

Thank You.

Regards,
Karuna Karpe.


[ADMIN] Backup Question for Point-in-Time Recovery

2011-09-10 Thread Rural Hunter
I'm making a base backup with 9.1rc by following section 24.3.3 in the manual: 
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html

1. SELECT pg_start_backup('label');
2. perform file system backup with tar
3. SELECT pg_stop_backup();

But while performing step 2, I got warnings from the tar command that a 
file changed while tar was reading it, for some files such as 
'postgres/base/16405/5659097'.  Will I get a good backup in this case?






Re: [ADMIN] Backup Question for Point-in-Time Recovery

2011-09-10 Thread Guillaume Lelarge
On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:
 I'm making a base backup with 9.1rc by following section 24.3.3 in the manual: 
 http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
 1. SELECT pg_start_backup('label');
 2. perform file system backup with tar
 3. SELECT pg_stop_backup();
 
 But while performing step 2, I got warnings from the tar command that a 
 file changed while tar was reading it, for some files such as 
 'postgres/base/16405/5659097'.  Will I get a good backup in this case?
 

Yes. But note that you can use pg_basebackup in 9.1. It does everything
needed to have a complete file backup.
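For example (host and user are placeholders):

# pg_basebackup does the start/stop bookkeeping itself; -x also fetches
# the WAL needed to make the backup self-contained.
pg_basebackup -h master.example.com -U replicator -D /backups/base -Ft -z -P -x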


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [ADMIN] Backup Question for Point-in-Time Recovery

2011-09-10 Thread Rural Hunter

OK, thank you.

On 2011-09-11 at 01:30, Guillaume Lelarge wrote:

On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:

I'm making a base backup with 9.1rc by following section 24.3.3 in the manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label');
2. perform file system backup with tar
3. SELECT pg_stop_backup();

But while performing step 2, I got warnings from the tar command that a
file changed while tar was reading it, for some files such as
'postgres/base/16405/5659097'.  Will I get a good backup in this case?



Yes. But note that you can use pg_basebackup in 9.1. It does everything
needed to have a complete file backup.








Re: [ADMIN] Backup hot-standby database.

2011-03-21 Thread Robert Treat
On Fri, Mar 18, 2011 at 4:55 PM, Stephen Rees sr...@pandora.com wrote:
 Robert,

 Thank you for the reply. I had the wrong end of the stick regarding pg_dump and
 hot-standby.
 I will take a look at omnipitr, as you suggest.

 Per your comment

 You have to stop replay while you are doing the dumps like this

 how do I stop, then resume, replay with both the master and hot-standby
 available throughout?


If you are using WAL file based replication, you need some logic in
your restore script that will enable it to stop feeding xlog segments
into the slave (think if pause file exists, return, else cp xlog
file). This would leave the slave available, just with no updates
coming in.
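Something like this (the pause-file path is arbitrary; restore_command
passes %f as $1 and %p as $2):

#!/bin/sh
# While the pause file exists, report "segment not available";
# the standby keeps retrying, so replay is effectively paused.
if [ -f /tmp/pause_replay ]; then
    exit 1
fi
cp /archive/"$1" "$2"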

If you are using streaming, I think it's much harder. There are some
new functions to pause and resume streaming WAL replay coming in 9.1; it
might be possible to back-patch those, but we haven't looked at it yet.


Robert Treat
play: xzilla.net
work: omniti.com
hiring: l42.org/lg



Re: [ADMIN] Backup hot-standby database.

2011-03-18 Thread Stephen Rees

Robert,

Thank you for the reply. I had the wrong end of the stick regarding
pg_dump and hot-standby.

I will take a look at omnipitr, as you suggest.

Per your comment

You have to stop replay while you are doing the dumps like this
how do I stop, then resume, replay with both the master and
hot-standby available throughout?


- Steve

On Mar 15, 2011, at 3:04 PM, Robert Treat wrote:

On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees sr...@pandora.com wrote:

Using PostgreSQL 9.0.x

I cannot use pg_dump to generate a backup of a database on a
hot-standby server, because it is, by definition, read-only.


That really makes no sense :-)  You can use pg_dump on a read-only
slave, but I think the issue that people tend to run into is that the
pg_dump operations get canceled out by incoming changes before it can
finish. You can of course modify the configs to work around this
somewhat, but eventually it becomes a problem.


However, it seems that I can use COPY TO within a serializable
transaction to create a consistent set of data file(s). For example,

BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
COPY t1 TO '/tmp/t1';
COPY t2 TO '/tmp/t2';

... etc ...

COPY tn TO '/tmp/tn';
COMMIT TRANSACTION;

I can then use pg_dump to export the corresponding database schema
from the master DBMS.

Is this going to scale to a multi-GB database, where it will take
hours to export the data from all of the tables, or are there
scalability issues of which I should be aware?



Well, basically that's it in a nutshell. You have to stop replay while
you are doing the dumps like this, so eventually that delay becomes
unbearable for most people (especially on the order of hours).

There are several ways to work around this... you can use filesystem
snapshots to make copies and dump from there; great if you have the
option. If you don't you might want to look into omnipitr, it can
create filesystem level backups from a slave (not the same as a
logical export, but it might do).


Robert Treat
play: xzilla.net
work: omniti.com
hiring: l42.org/lg





[ADMIN] Backup hot-standby database.

2011-03-15 Thread Stephen Rees

Using PostgreSQL 9.0.x

I cannot use pg_dump to generate a backup of a database on a
hot-standby server, because it is, by definition, read-only. However,
it seems that I can use COPY TO within a serializable transaction to
create a consistent set of data file(s). For example,


BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
COPY t1 TO '/tmp/t1';
COPY t2 TO '/tmp/t2';

... etc ...

COPY tn TO '/tmp/tn';
COMMIT TRANSACTION;

I can then use pg_dump to export the corresponding database schema  
from the master DBMS.


Is this going to scale to a multi-GB database, where it will take  
hours to export the data from all of the tables, or are there  
scalability issues of which I should be aware?


Thanks in advance,

- SteveR



Re: [ADMIN] Backup hot-standby database.

2011-03-15 Thread Kevin Grittner
Stephen Rees sr...@pandora.com wrote:
 
 I cannot use pg_dump to generate a backup of a database on a
 hot-standby server, because it is, by definition, read-only.
 
That seems like a non sequitur -- I didn't think pg_dump wrote
anything to the source database.  Have you actually tried?  If so,
please show your commands and the error.
 
-Kevin



Re: [ADMIN] Backup hot-standby database.

2011-03-15 Thread Robert Treat
On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees sr...@pandora.com wrote:
 Using PostgreSQL 9.0.x

 I cannot use pg_dump to generate a backup of a database on a hot-standby
 server, because it is, by definition, read-only.

That really makes no sense :-)  You can use pg_dump on a read-only
slave, but I think the issue that people tend to run into is that the
pg_dump operation gets canceled by incoming changes before it can
finish. You can of course modify the configs to work around this
somewhat, but eventually it becomes a problem.
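
(A sketch of the settings involved, using the 9.1+ parameter names; 9.0
exposed a single max_standby_delay instead, and the values here are
examples only:)

# postgresql.conf on the standby
max_standby_streaming_delay = 30min  # let standby queries hold off replay this long
max_standby_archive_delay = 30min    # the same, for WAL restored from the archive
hot_standby_feedback = on            # 9.1+: standby reports its oldest snapshot to the master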

 However, it seems that I
 can use COPY TO within a serializable transaction to create a consistent set
 of data file(s). For example,

 BEGIN TRANSACTION;
 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
 COPY t1 TO '/tmp/t1';
 COPY t2 TO '/tmp/t2';

 ... etc ...

 COPY tn TO '/tmp/tn';
 COMMIT TRANSACTION;

 I can then use pg_dump to export the corresponding database schema from the
 master DBMS.

 Is this going to scale to a multi-GB database, where it will take hours to
 export the data from all of the tables, or are there scalability issues of
 which I should be aware?


Well, basically that's it in a nutshell. You have to stop replay while
you are doing the dumps like this, so eventually that delay becomes
unbearable for most people (especially on the order of hours).
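
(On 9.1 and later the pause can even be done explicitly from a session on
the standby; 9.0 itself has no such call. A minimal sketch:)

SELECT pg_xlog_replay_pause();   -- hold off WAL replay while the dump runs
-- ... take the COPY/pg_dump output here ...
SELECT pg_xlog_replay_resume();  -- let the standby catch up again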

There are several ways to work around this... you can use filesystem
snapshots to make copies and dump from there; great if you have the
option. If you don't, you might want to look into omnipitr; it can
create filesystem-level backups from a slave (not the same as a
logical export, but it might do).


Robert Treat
play: xzilla.net
work: omniti.com
hiring: l42.org/lg



[ADMIN] Backup question

2011-03-01 Thread A B
Hello.

In the docs of 8.4 I read that one way of doing filesystem backup of
PostgreSQL is to
1. run rsync
2. stop the server
3. run second rsync
4. start server

But what would happen if you
1. run rsync
2. throw server through the window and buy new server
3. copy the rsynced data
4. start server

now, what would happen?
I guess the server would think: uh-oh, it has crashed, I'll try to fix it.

I understand that you will lose data, but will it be able to get the
database in shape and running?
And to reduce the amount of lost data, is there any clever sync
command I can run before step 1 to write more stuff to disk?
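
(For what it's worth, the closest thing to a clever sync command is a
manual checkpoint plus an OS-level sync before the first rsync; a minimal
sketch, with example paths:)

psql -U postgres -c "CHECKPOINT;"             # push dirty shared buffers out to the data files
sync                                          # flush the OS write cache
rsync -a /var/lib/pgsql/data/ /backup/data/   # then step 1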



Re: [ADMIN] Backup question

2011-03-01 Thread Vibhor Kumar

On Mar 1, 2011, at 3:20 PM, A B wrote:

 
 But what would happen if you
 1. run rsync
 2. throw server through the window and buy new server
 3. copy the rsynced data
 4. start server
 now, what would happen?
 I guess the server would think: uh-oh, it has crashed, I'll try to fix it.

This will give you an inconsistent backup, and PG will try to recover the database.

 I understand that you will loose data, but will it be able to get
 database in shape and running?
Maybe, or maybe not. I have seen scenarios where people had to use
pg_resetxlog.

 And to reduce the amount of lost data,  is there any clever sync
 command I can run before step 1 to write more stuff to disk?

Have a look at Hot/Online Backup.
http://www.postgresql.org/docs/8.4/interactive/continuous-archiving.html
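
(The core of that recipe, as a minimal sketch; 8.4-era calls, and the
paths and label are examples:)

psql -U postgres -c "SELECT pg_start_backup('nightly');"
rsync -a /var/lib/pgsql/data/ /backup/base/   # file-level copy while the server runs
psql -U postgres -c "SELECT pg_stop_backup();"
# keep the WAL files archived between the two calls; they are needed at restore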

Thanks & Regards,
Vibhor


[ADMIN] Backup Postgres database remotely

2010-11-29 Thread Manasi Save
Hi All,

I am new to postgresql. I have pgAdmin installed on my Windows machine
locally, which I am using to connect to the client server and access the
database. I want to take a backup of the client database, but it seems
hard because the database is very large: when I select any database and
hit backup for that database, it does not give me any error message but
hangs at that point. To fix this I have to restart pgAdmin.

Am I missing something? Please help.

Thanks in advance. 
Regards,
Manasi. 


Re: [ADMIN] Backup from a hot standby

2010-04-05 Thread Fujii Masao
Sorry for the delay.

On Thu, Mar 4, 2010 at 3:47 PM, Mikko Partio mpar...@gmail.com wrote:
 Hi
 I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
 streaming replication) is working great. I tried to take a filesystem backup
 from a hot standby, but I guess that is not possible since executing SELECT
 pg_start_backup('ss') returns an error? Or can I just tar $PGDATA and copy
 the xlog files generated during the backup, since the server is in constant
 recovery mode?  In Oracle it is possible to take backups from a data guard
 standby, which is very useful since that way making the backups does not
 stress the master server.

You can take a base backup of the standby while recovery is in progress,
without calling pg_start_backup() and pg_stop_backup(). Please see the
following document.
http://developer.postgresql.org/pgdocs/postgres/backup-incremental-updated.html
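
(In practice that boils down to a plain file-level copy taken on the
standby; a sketch under the assumptions in that document, with example
paths:)

rsync -a --exclude=postmaster.pid /var/lib/pgsql/data/ /backup/standby-base/
# plus the WAL segments generated while the copy ran, so the restored
# copy can reach a consistent state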

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[ADMIN] Backup from a hot standby

2010-03-03 Thread Mikko Partio
Hi

I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
streaming replication) is working great. I tried to take a filesystem backup
from a hot standby, but I guess that is not possible since executing SELECT
pg_start_backup('ss') returns an error? Or can I just tar $PGDATA and copy
the xlog files generated during the backup, since the server is in constant
recovery mode?  In Oracle it is possible to take backups from a data guard
standby, which is very useful since that way making the backups does not
stress the master server.

Anyway, I must say that we are very pleased to see HS and SR features, and
we hope that 9.0.0 goes to beta (and to an official release) soon!

Regards

Mikko


[ADMIN] Backup only changed databases

2009-04-14 Thread Benjamin Minshall

Hello,

I am curious if there is a way to know which databases have changed (any write 
transaction) since a given timestamp?  I use pg_dump nightly to backup several 
databases within the cluster, but I would like to only pg_dump those databases 
which have actually changed during the day.  Is there a solution to this?  I 
have to use pg_dump, because I need to be able to selectively restore databases 
without disrupting the rest of the cluster.  I'm running PostgreSQL 8.3.7.
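
(There is no per-database "last written" timestamp, but one low-tech
approach is to compare the write counters in pg_stat_database between
backup runs; a sketch, using the 8.3 column names:)

SELECT datname,
       tup_inserted + tup_updated + tup_deleted AS writes
FROM pg_stat_database;
-- store these numbers after each backup; a database whose writes value has
-- grown since the stored snapshot has seen write activity (the counters are
-- approximate and reset if the stats collector is reset)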

Thank you.

--
Benjamin Minshall minsh...@intellicon.biz



[ADMIN] backup question

2009-03-31 Thread Kasia Tuszynska
Hello Postgres Gurus,

I have a restore problem.

If you do the backup as a text file:
pg_dump.exe -i -h machine -p 5432 -U postgres -F p -v -f 
C:\dbname_text.dump.backup dbname

You can see the order in which the restore will happen. And the restore seems
to be happening in the following order:

1. create objects (in alphabetical order of schema name.tablename):
   adam.table is created before public.table
2. populate objects (in alphabetical order of schema name.tablename):
   adam.table is populated before public.table
3. grant privs to objects (in alphabetical order of schema name.tablename):
   adam.table is granted privs before public.table

The problem arises if data in, let's say, the adam schema is dependent on
tables in the public schema, since the data in the public schema does not
exist yet, being created later.

If there were an option to do a backup and restore of a single schema, that
would solve my problem.
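
(For what it's worth, pg_dump does have a switch for exactly this; a
minimal sketch, though cross-schema dependencies still need the referenced
tables to exist at restore time:)

pg_dump -n adam -h machine -p 5432 -U postgres dbname > adam_schema.sql
# -n/--schema restricts the dump to the named schema only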

Has anyone dealt with this issue before? Any workarounds?

Thank you,
Sincerely,
Kasia


Re: [ADMIN] backup question

2009-03-31 Thread Tom Lane
Kasia Tuszynska ktuszyn...@esri.com writes:
 The problem arises, if data in lets say the adam schema is dependent on 
 tables in the public schema, since the data in the public schema does not 
 exist yet, being created later.

That's not supposed to happen.  Are you possibly running an early 8.3
release?  pg_dump 8.3.0-5 had a problem with user-defined operator
classes that could cause misordering of the output.  If it's not that,
we'll need a lot more information.

BTW, the -i switch is dangerous.  Don't use it as a routine measure.

regards, tom lane



[ADMIN] backup server - how to disconnect

2008-10-15 Thread Mark Steben
We have a server that backs up and then recreates our production database on
a nightly basis.

In order to drop and recreate the database we would stop and restart the
server - this would effectively kick off any straggling users so we could
get our refresh done.  No problem.

Now we have more than one database, and stopping and restarting the server
to clean up the backup database would also kill any process going on in the
new database.

Is there a command in postgres to disconnect all users short of restarting
the server?  I looked at pg_ctl kill TERM but there I would need to
specify specific PIDs.  Not something I want to do manually at 1 am every
morning.

We are at postgres 8.2.5.  Thanks for your help

 

Mark Steben│Database Administrator│ http://www.autorevenue.com/ @utoRevenue™
95 Ashley Ave. West Springfield, MA., 01089
413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)

A Division of Dominion Enterprises



Re: [ADMIN] backup server - how to disconnect

2008-10-15 Thread Emmanuel BERTHOULE

Hi,

you can use

pg_ctl stop -m fast
pg_ctl start

which kills clients and aborts current transactions,
and if you have multiple clusters (data directories) you can use the -D
option to specify the data directory



-manu


Le 15 oct. 08 à 16:11, Mark Steben a écrit :

We have a server that backs up and then recreates our production
database on a nightly basis.
In order to drop and recreate the database we would stop and restart
the server - this would effectively kick off any straggling users so we
could get our refresh done.  No problem.
Now we have more than one database, and stopping and restarting the
server to clean up the backup database would also kill any process
going on in the new database.


Is there a command in postgres to disconnect all users short of
restarting the server?  I looked at pg_ctl kill TERM but there I would
need to specify specific PIDs.  Not something I want to do manually at
1 am every morning.


We are at postgres 8.2.5.  Thanks for your help


Mark Steben│Database Administrator│ @utoRevenue™
95 Ashley Ave. West Springfield, MA., 01089
413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)
A Division of Dominion Enterprises








Re: [ADMIN] backup server - how to disconnect

2008-10-15 Thread Fabrízio de Royes Mello

Hello Mark,

I don't know a command in postgres to do that, but if you're running 
postgres on Linux try it on the command line:


for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do 
pg_ctl kill TERM $i; done



Best regards.

Ps: Sorry, but my english isn't so good.

--
Fabrízio de Royes Mello
Coordenador Desenvolvimento de Software
[EMAIL PROTECTED]
DBSeller Informática Ltda. - http://www.dbseller.com.br
(51) 3076-5101



Re: [ADMIN] backup server - how to disconnect

2008-10-15 Thread Fabrízio de Royes Mello

Hi all,

Sorry, but I found a little bug in the command line...

To solve just replace $i for $pid:

for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do 
pg_ctl kill TERM $pid; done


Sorry... :-)
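
(A SQL-only variant, assuming a superuser: pg_cancel_backend() exists in
8.2, while pg_terminate_backend() only arrived in 8.4, and note that it
cancels the backend's current query rather than closing the connection:)

SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE procpid <> pg_backend_pid();  -- skip our own session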

Fabrízio de Royes Mello escreveu:

Hello Mark,

I don't know a command in postgres to do that, but if you're running 
postgres on Linux try it on the command line:


for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do 
pg_ctl kill TERM $i; done



Best regards.

Ps: Sorry, but my english isn't so good.



Cordialmente,

--
Fabrízio de Royes Mello
Coordenador Desenvolvimento de Software
[EMAIL PROTECTED]
DBSeller Informática Ltda. - http://www.dbseller.com.br
(51) 3076-5101



Re: [ADMIN] Backup and failover process

2008-07-16 Thread Andrew Sullivan
On Tue, Jul 15, 2008 at 11:08:27AM -0500, Campbell, Lance wrote:
 1) On the primary server, all WAL files will be written to a backup
 directory.  Once a night I will delete all of the WAL files on the primary
 server from the backup directory.  I will create a full file SQL dump of the
 database and put it into the same backup folder that the WAL files are put
 in.  The backup directory will be rsynced to the failover server.  This will
 cause the failover server to delete all of the WAL files it has copies of
 each night.
 2) On the primary server, I will then check periodically with cron
 during the day to see if there is a new WAL file.  If there is a new WAL file
 I will then copy it to the fail over server.
 3) At the end of the day I will repeat step #1.

I think your outline sounds rather fragile.  Moreover, I don't
understand why you're planning to delete WAL files from the target
server.  It seems to me you'd be better off using pg_standby along
with some helper applications.  (Command Prompt has a tool we use for
this, and I believe it's been released, but I'll have to check.)

A

-- 
Andrew Sullivan
[EMAIL PROTECTED]
+1 503 667 4564 x104
http://www.commandprompt.com/



Re: [ADMIN] Backup and failover process

2008-07-16 Thread Campbell, Lance
Is this a correct understanding?

When restoring using archiving, it is only possible to restore to a
database using WAL files if the database you are restoring to was
created before the first WAL file you wish to apply was created.

So based on the above, if one were to create a backup of a database
using pg_dump and then at a later time restore the database using the
SQL backup, any WAL files that were created after the pg_dump was
created would not work because the database would see itself as existing
after the WAL files.
 
Is this correct? 

-Original Message-
From: Evan Rempel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 9:46 PM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process

You can not mix WAL recovery/restore and pg_dump restores. To restore a
pg_dump, you
require a fully functioning postgresql server, which makes its own WAL
files. After the
restore of the pg_dump, you can not interject the WAL archive files.

The WAL archive files can only be used to roll-forward from known
checkpoints (known by the recovery mode internal to postgresql).

You can use pg_dump to get specific snapshots of databases, but you can
not roll transactions
forward from the pg_dump using WAL files.

We use pg_dump to get daily snapshots of databases so that if a user
accidentally
breaks their database, we can upon request, restore to the previous
nights backup.

We also use filesystem backups and WAL files to allow us to recover to
any point in
time given a disaster.

It would be VERY nice to use filesystem backups and WAL files to recover
a SINGLE database,
but that is currently on my wish list.

Evan Rempel.


Campbell, Lance wrote:
 Kevin,
 I have read this documentation.  It still does not answer my basic
 question.  What happens if you take an SQL snapshot of a database
while
 creating WAL archives then later restore from that SQL snapshot and
 apply those WAL files?  Will there be a problem if the transactions
 within the newest WAL file after the SQL snapshot was taken cause
 problems when they are applied?  I would assume yes but I wanted to
 check if there was some type of timestamp that would prevent an issue
 from occurring?
 
 Thanks,
 
 -Original Message-
 From: Kevin Grittner [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, July 15, 2008 12:24 PM
 To: Campbell, Lance; pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup and failover process
 
 Campbell, Lance [EMAIL PROTECTED] wrote: 
 PostgreSQL: 8.2
 I am about to change my backup and failover procedure from dumping a
 full 
 file SQL dump of our data every so many minutes
  
 You're currently running pg_dump every so many minutes?
  
 to using WAL files.
  
 Be sure you have read (and understand) this section of the docs:
  
 http://www.postgresql.org/docs/8.2/interactive/backup.html
  
 -Kevin
 




Re: [ADMIN] Backup and failover process

2008-07-16 Thread Campbell, Lance
Got it.  Thanks a bunch.  Your last email put it all together.

Thanks,

-Original Message-
From: Evan Rempel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 16, 2008 10:22 AM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process

postgres does not use time to determine if a WAL file contains
transactions
that come before or after a pg-dump. It uses transaction numbers.

pg_dump does NOT dump the transaction numbers. When reloading from a
pg-dump
file, the instance of postgresql that you are loading into will generate
all of the transaction numbers, and they will NOT be appropriate for
using
the WAL files. You will not be able to copy WAL files into the xlog
directory
and roll them into a different instance of postgresql.

To make use of the WAL files, you MUST have the filesystem level restore
of the entire
postgresql cluster/instance, not just the pg_dump of a single database.

In my opinion, to get reliable point in time failover, you need one of

1. The system administrators provide failover as part of their postgres
offering
2. You need assistance from the system administrators to set up failover
3. You need to administer the machines yourself.

Just my $0.02

Evan.

Campbell, Lance wrote:
 Evan,
 During failover my idea was to load my database from a nightly dump
 created from pg_dump.  Then apply all of the WAL files from when the
 full backup was created by pg_dump.
 
 Are you saying that the database would not be able to load the WAL
 files?
 
 I thought I might have to start PostgreSQL in standard mode.  Restore
 from the pg_dump.  Then start it using archiving.  Then restore using
 WAL files.  Does this not work?
 
 Thanks,
 
 -Original Message-
 From: Evan Rempel [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, July 15, 2008 9:46 PM
 To: Campbell, Lance
 Subject: Re: [ADMIN] Backup and failover process
 
 You can not mix WAL recovery/restore and pg_dump restores. To restore
a
 pg_dump, you
 require a fully functioning postgresql server, which makes its own WAL
 files. After the
 restore of the pg_dump, you can not interject the WAL archive files.
 
 The WAL archive files can only be used to roll-forward from known
 checkpoints (known by
 the recovery mode internal to postgresql.
 
 You can use pg_dump to get specific snapshots of databases, but you
can
 not roll transactions
 forward from the pg_dump using WAL files.
 
 We use pg_dump to get daily snapshots of databases so that if a user
 accidentally
 breaks their database, we can upon request, restore to the previous
 nights backup.
 
 We also use filesystem backups and WAL files to allow us to recover to
 any point in
 time given a disaster.
 
 It would be VERY nice to use filesystem backups and WAL files to
recover
 a SINGLE database,
 but that is currently on my wish list.
 
 Evan Rempel.
 
 
 Campbell, Lance wrote:
 Kevin,
 I have read this documentation.  It still does not answer my basic
 question.  What happens if you take an SQL snapshot of a database
 while
 creating WAL archives then later restore from that SQL snapshot and
 apply those WAL files?  Will there be a problem if the transactions
 within the newest WAL file after the SQL snapshot was taken cause
 problems when they are applied?  I would assume yes but I wanted to
 check if there was some type of timestamp that would prevent an issue
 from occurring?

 Thanks,

 -Original Message-
 From: Kevin Grittner [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, July 15, 2008 12:24 PM
 To: Campbell, Lance; pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Backup and failover process

 Campbell, Lance [EMAIL PROTECTED] wrote: 
 PostgreSQL: 8.2
 I am about to change my backup and failover procedure from dumping a
 full 
 file SQL dump of our data every so many minutes
  
 You're currently running pg_dump every so many minutes?
  
 to using WAL files.
  
 Be sure you have read (and understand) this section of the docs:
  
 http://www.postgresql.org/docs/8.2/interactive/backup.html
  
 -Kevin

 
 


-- 
Evan Rempel[EMAIL PROTECTED]
Senior Programmer Analyst250.721.7691
Computing Services
University of Victoria



Re: [ADMIN] Backup and failover process

2008-07-15 Thread Kevin Grittner
 Campbell, Lance [EMAIL PROTECTED] wrote: 
 PostgreSQL: 8.2
 I am about to change my backup and failover procedure from dumping a
full 
 file SQL dump of our data every so many minutes
 
You're currently running pg_dump every so many minutes?
 
 to using WAL files.
 
Be sure you have read (and understand) this section of the docs:
 
http://www.postgresql.org/docs/8.2/interactive/backup.html
 
-Kevin



Re: [ADMIN] Backup and failover process

2008-07-15 Thread Campbell, Lance
Kevin,
I have read this documentation.  It still does not answer my basic
question.  What happens if you take an SQL snapshot of a database while
creating WAL archives and then later restore from that SQL snapshot and
apply those WAL files?  Will there be a problem when the transactions
within the newest WAL file, from after the SQL snapshot was taken, are
applied?  I would assume yes, but I wanted to check whether there was
some type of timestamp that would prevent an issue from occurring.

Thanks,

-Original Message-
From: Kevin Grittner [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 12:24 PM
To: Campbell, Lance; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup and failover process

 Campbell, Lance [EMAIL PROTECTED] wrote: 
 PostgreSQL: 8.2
 I am about to change my backup and failover procedure from dumping a
full 
 file SQL dump of our data every so many minutes
 
You're currently running pg_dump every so many minutes?
 
 to using WAL files.
 
Be sure you have read (and understand) this section of the docs:
 
http://www.postgresql.org/docs/8.2/interactive/backup.html
 
-Kevin



Re: [ADMIN] Backup and failover process

2008-07-15 Thread Kevin Grittner
 Campbell, Lance [EMAIL PROTECTED] wrote: 
 
 I have read this documentation.
 
 I wanted to check if there was some type of timestamp
 
My previous email omitted the URL I meant to paste:
 
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#RECOVERY-CONFIG-SETTINGS
 
-Kevin



Re: [ADMIN] Backup and failover process

2008-07-15 Thread Kevin Grittner
 Campbell, Lance [EMAIL PROTECTED] wrote: 
 
 What happens if you take an SQL snapshot of a database while
 creating WAL archives then later restore from that SQL snapshot and
 apply those WAL files?
 
What do you mean by an SQL snapshot of a database?  WAL files only
come into play for backup techniques which involve file copies, not
dumps done using SQL commands (like pg_dump).
 
 Will there be a problem if the transactions
 within the newest WAL file after the SQL snapshot was taken cause
 problems when they are applied?
 
Point In Time Recovery (PITR) backup techniques allow you to restore
to a specified point in time, so you could restore up to the point
immediately before the problem transactions.
 
 I would assume yes but I wanted to
 check if there was some type of timestamp that would prevent an
issue
 from occurring?
 
Take another look at this section -- in particular,
recovery_target_time.
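 
(For example, a minimal recovery.conf sketch in the 8.2-era syntax; the
archive path and the timestamp are placeholders:)
 
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
recovery_target_time = '2008-07-15 11:00:00 CDT'  # stop replay just before the problem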
 
I hope this helps.
 
-Kevin



[ADMIN] Backup and failover process

2008-07-15 Thread Campbell, Lance
PostgreSQL: 8.2
I am about to change my backup and failover procedure from dumping a full file 
SQL dump of our data every so many minutes to using WAL files.  Could someone 
review the below strategy to identify if this strategy has any issues?
 
1) On the primary server, all WAL files will be written to a backup
directory.  Once a night I will delete all of the WAL files on the primary
server from the backup directory.  I will create a full file SQL dump of the
database and put it into the same backup folder that the WAL files are put in.
The backup directory will be rsynced to the failover server.  This will cause
the failover server to delete all of the WAL files it has copies of each night.
2) On the primary server, I will then check periodically with cron
during the day to see if there is a new WAL file.  If there is a new WAL file I
will then copy it to the fail over server.
3) At the end of the day I will repeat step #1.
 
 
In the event of a failure, a script is run that converts the failover server to
the primary server.  After starting PostgreSQL the server would load the full
file SQL dump.  The server would then apply all of the WAL files it has in the
backup directory.
 
Are there any problems with the process I am considering?  My only concern is in
step one.  If I create a full file SQL dump, how do I know that some of the
transactions have not already been applied by the first WAL file that is
created each night?  What will happen if I try to restore from the first WAL
file?  Will PostgreSQL somehow know that some of the transactions have already
been applied from the first WAL file?  Will it just ignore those transactions?
Or will PostgreSQL just fail to reload the WAL file?
 
Thanks, 
 
Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu
My e-mail address has changed to [EMAIL PROTECTED]
 


Re: [ADMIN] Backup to Tape Incomplete

2008-02-27 Thread Phillip Smith
Scott Marlowe [EMAIL PROTECTED] writes:
  I wonder what it's meaning by invalid arg?
 
 On my Fedora machine, man write explains EINVAL thusly:
 
    EINVAL fd is attached to an object which is unsuitable for writing; or
           the file was opened with the O_DIRECT flag, and either the
           address specified in buf, the value specified in count, or the
           current file offset is not suitably aligned.
 
 I'm not sure that writing to a tape is quite like O_DIRECT, but the
 mention of an un-aligned count seems pretty relevant.  If you grant the
 assumption that the underlying problem is that the tape drive has to be
 written to in multiples of its blocksize, then this supports the idea
 that a violation of that rule would be reported as EINVAL.

Interesting... If I set the block-size to 32, everything goes without any
error, but the dump does not end with "PostgreSQL database dump complete":

[EMAIL PROTECTED] ~]$ mt setblk 32
[EMAIL PROTECTED] ~]$ mt stat
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 32 bytes. Density code 0x25 (DDS-3).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN
[EMAIL PROTECTED] ~]$ pg_dump dbname > /dev/st0
[EMAIL PROTECTED] ~]$ tail - < /dev/st0
REVOKE ALL ON TABLE stock_test FROM dbuser;
GRANT ALL ON TABLE stock_test TO dbuser;
GRANT ALL ON TABLE stock_test TO PUBLIC;


--
-- Name: stores; Type: ACL; Schema: public; Owner: dbuser
--

[EMAIL PROTECTED] ~]$


If I set the block-size to 0... Well it's really not happy:

[EMAIL PROTECTED] ~]$ mt setblk 0
[EMAIL PROTECTED] ~]$ mt stat
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x25 (DDS-3).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN
[EMAIL PROTECTED] ~]$ pg_dump dbname > /dev/st0
[EMAIL PROTECTED] ~]$ tail - < /dev/st0
tail: error reading `standard input': Cannot allocate memory
[EMAIL PROTECTED] ~]$ cat /dev/st0 > /tmp/test-restore.sql
cat: /dev/st0: Cannot allocate memory
[EMAIL PROTECTED] ~]$




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Phillip Smith
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
 PostgreSQL 8.2.4
 RedHat ES4

 I have a nightly cron job that is (supposed) to dump a specific
 database to magnetic tape:
 /usr/local/bin/pg_dump dbname > /dev/st0

 This runs, and doesn't throw any errors, but when I try to restore it
 fails because the tape is incomplete:

A couple of possible things to try: pg_dump to a text file and try
cat'ting that to the tape drive, or pipe it through tar and then to the
tape.

What would the correct syntax be for that - I can't figure out how to make
tar accept stdin:

[EMAIL PROTECTED] ~]$ pg_dump dbname | tar cvf /dev/st0
tar: Cowardly refusing to create an empty archive
Try `tar --help' for more information.
[EMAIL PROTECTED] ~]$ pg_dump dbname | tar cvf /dev/st0 -
tar: -: Cannot stat: No such file or directory
[EMAIL PROTECTED] ~]$ 




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Tom Lane
Phillip Smith [EMAIL PROTECTED] writes:
 On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
 [EMAIL PROTECTED] wrote:
 A couple of possible things to try; pg_dump to a text file and try
 cat'ting that to the tape drive, or pipe it through tar and then to the
 tape.

 What would the correct syntax be for that - I can't figure out how to make
 tar accept stdin:

I don't think it can.  Instead, maybe dd with blocksize set equal to the
tape drive's required blocksize would do?  You'd have to check what
options your dd version has for padding out the last partial block.
Padding with spaces should work fine, not totally sure if nulls would
be OK.
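
(A sketch of that with GNU dd, assuming the drive's 512-byte block size:)

pg_dump dbname | dd of=/dev/st0 bs=512 conv=sync iflag=fullblock
# iflag=fullblock stops short pipe reads from being padded mid-stream;
# conv=sync then pads only the final partial block, with NULs (see the
# caveat above about whether nulls are OK)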

regards, tom lane



Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Geoffrey

Tom Lane wrote:

Phillip Smith [EMAIL PROTECTED] writes:

On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:

 A couple of possible things to try: pg_dump to a text file and try
 cat'ting that to the tape drive, or pipe it through tar and then to the
 tape.

What would the correct syntax be for that - I can't figure out how to make
tar accept stdin:

I don't think it can.


Coming in the middle of this thread, so slap me if I'm off base here. 
tar will accept standard in as:


tar -cf -

the '-f -' says take input.

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Phillip Smith
 Coming in the middle of this thread, so slap me if I'm off base here. 
 tar will accept standard in as:
 
 tar -cf -
 
 the '-f -' says take input.

That would be to write to stdout :) I can't figure out how to accept from
stdin :(

-f is where to send the output, either a file, a device (such as tape) or
stdout (aka '-')




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Phillip Smith
 What would the correct syntax be for that - I can't figure out how to
 make tar accept stdin:

 I don't think it can.  Instead, maybe dd with blocksize set equal to the
 tape drive's required blocksize would do?  You'd have to check what
 options your dd version has for padding out the last partial block.
 Padding with spaces should work fine, not totally sure if nulls would be OK.

I don't think it can either, which kind of makes sense. Tar is an archiving
utility to create an archive of files; not to create an archive of the
contents of files. Subtle difference, but makes sense. If I created a tar
archive from the stream coming out of pg_dump, what would be listed when I did
'tar tvf /dev/st0'?

I think I'll hack the backup to output it to a temp file, then tar that file
to tape.

At the very least:

#!/bin/bash

DBNAME='dbname'                            # no $ when assigning in bash
TMPFILE=/tmp/$DBNAME.$(date +%y%m%d).sql   # e.g. /tmp/dbname080225.sql
TAPE_DRIVE='/dev/st0'

/usr/local/bin/pg_dump $DBNAME > $TMPFILE
/bin/tar cvf $TAPE_DRIVE --label=$TMPFILE $TMPFILE
/bin/rm -f $TMPFILE




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Steve Holdoway
On Wed, 27 Feb 2008 13:48:38 +1100
Phillip Smith [EMAIL PROTECTED] wrote:

  Coming in the middle of this thread, so slap me if I'm off base here. 
  tar will accept standard in as:
  
  tar -cf -
  
  the '-f -' says take input.
 
 That would be to write to stdout :) I can't figure out how to accept from
 stdin :(
 
 -f is where the send the output, either a file, a device (such as tape) or
 stdout (aka '-')

Not quite. tar cf - will pipe to stdout, but tar xf - will pipe from stdin. 

For the OP's problem, I'd try piping through dd as a file buffer, and run sync
on completion. Is there a /dev/rst0, like on some of the old unixes (doesn't
look like it after checking my CentOS 4 server)? The difference was that rst0
was unbuffered, and st0 was buffered. Either way, the sync may help.

Worth a try (:

Steve.





Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Phillip Smith
Sorry Steve, I missed the reply all by 3 pixels :)

   tar -cf -
   
   the '-f -' says take input.
  
  That would be to write to stdout :) I can't figure out how to accept
  from stdin :(
  
  -f is where the send the output, either a file, a device (such as
  tape) or stdout (aka '-')
 
 Not quite. tar cf - will pipe to stdout, but tar xf - will pipe from 
 stdin.

Yes, true; my head was in tar c mode :)

 For the OP's problem, I'd try piping through dd as a file buffer, and 
 run sync on completion. Is there a /dev/rst0, like on some of the old 
 unixes (doesn't look like it after checking my CentOS 4 server)? The 
 difference was that rst0 was unbuffered, and st0 was buffered. Either 
 way, the sync may help.

There's no /dev/rst0 block device, but I'm more familiar with tar than dd,
so I think I'll just rewrite the script to tar to a temp file first :)

Do we think this is a Postgres problem, a Linux problem or a problem
specific to my hardware setup? Was I wrong to think that I should be able to
stream directly from pg_dump to /dev/st0? I would have thought it *should*
work, but maybe I was wrong in the first place with that?




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Scott Marlowe
On Tue, Feb 26, 2008 at 9:38 PM, Phillip Smith
[EMAIL PROTECTED] wrote:

  Do we think this is a Postgres problem, a Linux problem or a problem
  specific to my hardware setup? Was I wrong to think that I should be able to
  stream directly from pg_dump to /dev/st0? I would have thought it *should*
  work, but maybe I was wrong in the first place with that?

If you can dd the file onto your tape drive, then it's some weird
interaction between pg_dump and your system, I'd think.  Could there be
some maximum size that you can buffer through pipes / redirects on
your machine?

I'd test to see if cat pgdumpfile.sql > /dev/st0 works or not.  If it
fails at the same approximate size, then it's something to do with
redirection.  If tar works but redirects fail, then the problem isn't
with postgresql.  i.e. do something similar to what you're doing with
pgsql and see which things fail and which ones don't.



Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Phillip Smith
 Do we think this is a Postgres problem, a Linux problem or a problem
 specific to my hardware setup? Was I wrong to think that I should be
 able to stream directly from pg_dump to /dev/st0? I would have
 thought it *should* work, but maybe I was wrong in the first place
 with that?

 If you can dd the file onto your tape drive, then it's some weird
 interaction between pg_dump and your system, I'd think.  Could there be
 some maximum size that you can buffer through pipes / redirects on your
 machine?

 I'd test to see if cat pgdumpfile.sql > /dev/st0 works or not.  If it
 fails at the same approximate size, then it's something to do with
 redirection.  If tar works but redirects fail, then the problem isn't
 with postgresql.  i.e. do something similar to what you're doing with
 pgsql and see which things fail and which ones don't.

It appears to be me :(

[EMAIL PROTECTED] ~]$ cat /tmp/dbname080225.sql > /dev/st0
cat: write error: Invalid argument

It ran for a good 30 minutes, then died with that.




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Scott Marlowe
On Tue, Feb 26, 2008 at 10:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:

 Do we think this is a Postgres problem, a Linux problem or a problem
 specific to my hardware setup? Was I wrong to think that I should be
 able to stream directly from pg_dump to /dev/st0? I would have
 thought it *should* work, but maybe I was wrong in the first place
 with that?

  If you can dd the file onto your tape drive, then it's some weird
  interaction between pg_dump and your system, I'd think.  Could there be
  some maximum size that you can buffer through pipes / redirects on
  your machine?

  I'd test to see if cat pgdumpfile.sql > /dev/st0 works or not.  If it
  fails at the same approximate size, then it's something to do with
  redirection.  If tar works but redirects fail, then the problem isn't
  with postgresql.  i.e. do something similar to what you're doing with
  pgsql and see which things fail and which ones don't.

  It appears to be me :(

  [EMAIL PROTECTED] ~]$ cat /tmp/dbname080225.sql > /dev/st0
  cat: write error: Invalid argument

  It ran for a good 30 minutes, then died with that.

I wonder what it's meaning by invalid arg?  Is something in the .sql
file somehow coming across as an argument?  Can you cat the sql file
to /dev/null successfully?



Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Phillip Smith
 Do we think this is a Postgres problem, a Linux problem or a
 problem specific to my hardware setup? Was I wrong to think
 that I should be able to stream directly from pg_dump to
 /dev/st0? I would have thought it *should* work, but maybe
 I was wrong in the first place with that?

 If you can dd the file onto your tape drive, then it's some
 weird interaction between pg_dump and your system, I'd think.
 Could there be some maximum size that you can buffer through
 pipes / redirects on your machine?

 I'd test to see if cat pgdumpfile.sql > /dev/st0 works or not.
 If it fails at the same approximate size, then it's something
 to do with redirection.  If tar works but redirects fail, then
 the problem isn't with postgresql.  i.e. do something similar
 to what you're doing with pgsql and see which things fail and
 which ones don't.

 It appears to be me :(

 [EMAIL PROTECTED] ~]$ cat /tmp/dbname080225.sql > /dev/st0
 cat: write error: Invalid argument

 It ran for a good 30 minutes, then died with that.

 I wonder what it's meaning by invalid arg?  Is something in the
 .sql file somehow coming across as an argument?  Can you cat the
 sql file to /dev/null successfully?

Yes.
[EMAIL PROTECTED] ~]$ cat /tmp/dbname080225.sql > /dev/null
[EMAIL PROTECTED] ~]$




Re: [ADMIN] Backup to Tape Incomplete

2008-02-26 Thread Tom Lane
Scott Marlowe [EMAIL PROTECTED] writes:
 I wonder what it's meaning by invalid arg?

On my Fedora machine, man write explains EINVAL thusly:

   EINVAL fd is attached to an object which is unsuitable for writing; or
          the file was opened with the O_DIRECT flag, and either the
          address specified in buf, the value specified in count, or the
          current file offset is not suitably aligned.

I'm not sure that writing to a tape is quite like O_DIRECT, but the
mention of an un-aligned count seems pretty relevant.  If you grant
the assumption that the underlying problem is that the tape drive
has to be written to in multiples of its blocksize, then this supports
the idea that a violation of that rule would be reported as EINVAL.

regards, tom lane



[ADMIN] Backup to Tape Incomplete

2008-02-24 Thread Phillip Smith
PostgreSQL 8.2.4
RedHat ES4

I have a nightly cron job that is (supposed) to dump a specific database to
magnetic tape:
/usr/local/bin/pg_dump dbname > /dev/st0

This runs, and doesn't throw any errors, but when I try to restore it fails
because the tape is incomplete:
[EMAIL PROTECTED] ~]$ cat /dev/st0 | tail


--
-- Name: rewards_points; Type: ACL; Schema: public; Owner: dbname
--

REVOKE ALL ON TABLE rewards_points FROM PUBLIC;
REVOKE ALL ON TABLE rewards_points FROM dbname;
GRANT ALL ON TABLE rewards_points TO dbname;
GRANT SELECT,INSERT,UPDATE ON TABLE rewards_points TO
[EMAIL PROTECTED] ~]$

As you can see, the end of file is half-way through a SQL statement, and
doesn't even have a new-line marker.

The database is not too big for the tape - it's a DDS-3 tape drive (12/24gb)
and the database is not even 1gb:
[EMAIL PROTECTED] ~]$ pg_dump dbname > /tmp/dbname080225.sql
[EMAIL PROTECTED] ~]$ ls -lh /tmp/dbname080225.sql
-rw-r--r--  1 postgres root 957M Feb 25 13:42 /tmp/dbname080225.sql

Is this a PostgreSQL issue or a tape drive issue? I can use tar to read and
write without any problems.

Do I need to change the block size on the tape drive?

[EMAIL PROTECTED] ~]$ mt stat
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 512 bytes. Density code 0x25 (DDS-3).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN




Re: [ADMIN] Backup to Tape Incomplete

2008-02-24 Thread Scott Marlowe
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
[EMAIL PROTECTED] wrote:
 PostgreSQL 8.2.4
  RedHat ES4

  I have a nightly cron job that is (supposed) to dump a specific database to
  magnetic tape:
 /usr/local/bin/pg_dump dbname > /dev/st0

  This runs, and doesn't throw any errors, but when I try to restore it fails
  because the tape is incomplete:

A couple of possible things to try: pg_dump to a text file and try
cat'ting that to the tape drive, or pipe it through tar and then to
the tape.



Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Chander Ganesan

Simon Riggs wrote:

 On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:

  We have a center in Europe who has just started to use PostgreSQL and was
  asking me if there are any Symantec products or other products that back up
  this type of database.

 It doesn't appear to.

 The design of the PITR system allows a product-agnostic backup. Anything
 that can backup a file can backup PostgreSQL. There is no need for
 special certifications or hologram logos.

 You may need to write a few lines of script to do it, but that's not a
 problem surely?

 So you can use pg_dump or PITR, as you choose.

If you don't mind losing some transactions, you can also use file
system snapshotting, which would work just as well as pg_dump, and
probably have less impact on the running database (assuming you had
decent enough hardware).


So long as the product you are using can snapshot the file system prior
to performing a backup, you can use just about any product (without
having to really do much/any work).  When you restore a snapshotted file
system that contains your postgresql database, postgresql (on restart)
will enter auto-recovery mode and recover (as it does in the case of a
crash) to the last transaction that was completed successfully prior to
the snapshot being created.  Note that this only works if all your
tablespaces are on the same file system, and you would be unable to roll
forward using WAL files generated after the backup.
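
(As a concrete sketch with LVM: the volume and mount names are examples,
and the data directory and WAL are assumed to live on the one volume, per
the caveat above.)

lvcreate --snapshot --size 10G --name pgsnap /dev/vg0/pgdata
mount /dev/vg0/pgsnap /mnt/pgsnap
tar -C /mnt -czf /backup/pgdata-snapshot.tar.gz pgsnap   # the frozen cluster image
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap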


However, you should keep in mind that - like a pg_dump - you won't be 
able to perform PITR recovery from such a backup.  Also, the recovery 
time may be non-trivial depending on your WAL settings.


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com
Ask me about Expert PostgreSQL & PostGIS Training delivered worldwide.



Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Martijn van Oosterhout
On Thu, Jan 31, 2008 at 01:28:48PM +0000, Simon Riggs wrote:
 That sentence has no place in any discussion about backup because the
 risk is not just a few transactions, it is a corrupt and inconsistent
 database from which both old and new data would be inaccessible.

Hmm? I thought the whole point of a filesystem snapshot was that it's
the same as if the system crashed. And I was fairly sure we could
recover from that...

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Chander Ganesan

Simon Riggs wrote:

 On Thu, 2008-01-31 at 07:21 -0500, Chander Ganesan wrote:

  If you don't mind losing some transactions

 That sentence has no place in any discussion about backup because the
 risk is not just a few transactions, it is a corrupt and inconsistent
 database from which both old and new data would be inaccessible.

 As far as I am concerned, if any Postgres user loses data then we're all
 responsible.

I understand your point, but indicating that you can't trust a
point-in-time snapshot of the database is, IMHO, the same as saying you
can't trust PostgreSQL's automatic crash recovery, since the two are
essentially the same thing...


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com




Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Magnus Hagander
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
 On Thu, Jan 31, 2008 at 01:28:48PM +0000, Simon Riggs wrote:
  That sentence has no place in any discussion about backup because the
  risk is not just a few transactions, it is a corrupt and inconsistent
  database from which both old and new data would be inaccessible.
 
 Hmm? I thought the whole point of a filesystem snapshot was that it's
 the same as if the system crashed. And I was fairly sure we could
 recover from that...

That was my assumption as well. *Assuming* that the filesystem snapshot is
consistent. There are a bunch of solutions that don't do consistent
snapshots between different partitions, so if your WAL or one tablespace is
on a different partition, you'll get corruption anyway... (seen this in
Big Commercial Database, so that's not a pg problem)

//Magnus



Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Alvaro Herrera
 Simon Riggs wrote:

 As far as I am concerned, if any Postgres user loses data then we're all
 responsible.

Remember, our license says this software is given without any warranty
whatsoever, implicit or explicit, written or implied, given or sold,
alive or deceased.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Chander Ganesan

Magnus Hagander wrote:

On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
  

On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:


That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which both old and new data would be inaccessible.
  

Hmm? I thought the whole point of a filesystem snapshot was that it's
the same as if the system crashed. And I was fairly sure we could
recover from that...



That was my assumption as well. *Assuming* that the filesystem snapshot is
consistent. There are a bunch of solutions that don't do consistent
snapshots between different partitions, so if your WAL or one tablespace is
on a different partition, you'll get corruption anyway... (seen this in
Big Commercial Database, so that's not a pg problem)
  
Agreed.  That's why I made it a point to mention that all of your 
tablespaces should be on the same file system...  In hindsight, I should 
have stated that your WAL logs need to be on the same file system as 
well...


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com




Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Simon Riggs
On Thu, 2008-01-31 at 12:09 -0300, Alvaro Herrera wrote:
  Simon Riggs wrote:
 
  As far as I am concerned, if any Postgres user loses data then we're all
  responsible.
 
 Remember, our license says this software is given without any warranty
 whatsoever, implicit or explicit, written or implied, given or sold,
 alive or deceased.

Yes! ...I meant via the free press, not via the courts.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 




Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Simon Riggs
On Thu, 2008-01-31 at 10:02 -0500, Chander Ganesan wrote:
 Magnus Hagander wrote:
  On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:

  On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
  
  That sentence has no place in any discussion about backup because the
  risk is not just a few transactions, it is a corrupt and inconsistent
  database from which both old and new data would be inaccessible.

  Hmm? I thought the whole point of a filesystem snapshot was that it's
  the same as if the system crashed. And I was fairly sure we could
  recover from that...
  
 
  That was my assumption as well. *Assuming* that the filesystem snapshot is
  consistent. There are a bunch of solutions that don't do consistent
  snapshots between different partitions, so if your WAL or one tablespace is
  on a different partition, you'll get corruption anyway... (seen this in
  Big Commercial Database, so that's not a pg problem)

 Agreed.  That's why I made it a point to mention that all of your 
 tablespaces should be on the same file system...  In hindsight, I should 
 have also stated that your WAL logs should be on the same file system as 
 well...

I think we all understand and agree, I just start twitching when anyone
talks about it being OK to lose transactions when backing up. You meant
the ones currently in progress, not the ones already committed and on
disk.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 




Re: [ADMIN] Backup

2008-01-29 Thread Guido Barosio
AFAIK Dominic would need a plug-in certified by Symantec, and no such plug-in exists.

As you may have read prior to this mail, the common way is to pg_dump
against a file, picking up that file later with BackupExec as a
regular file.

We are currently (www.globant.com) using it that way, no problems at all.
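
A sketch of that setup (the dump directory and database name below are
placeholders, not our actual configuration):

#!/bin/sh
# Dump the database to a date-stamped file; the backup agent later
# picks the dump up as a regular file.
DATABASE=mydb                      # placeholder
DUMPDIR=/var/backups/pgsql         # placeholder
pg_dump -Fc -f ${DUMPDIR}/${DATABASE}-`date +%Y%m%d`.dump ${DATABASE}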

gb.-

On Jan 30, 2008 2:15 AM, Vishal Arora [EMAIL PROTECTED] wrote:

 PostgreSQL has its own inbuilt mechanism for backing up the database. You
 can refer to the postgres manual online for more information.

  http://www.postgresql.org/docs/8.2/interactive/backup.html

  - Vishal





  
  Subject: [ADMIN] Backup
 Date: Thu, 24 Jan 2008 14:08:26 -0500
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; pgsql-admin@postgresql.org
 CC: [EMAIL PROTECTED]




 Hi,
 We have a center in Europe that has just started to use PostgreSQL and
 was asking me whether there are any Symantec or other products that
 back up this type of database.  We presently run VERITAS ver9.1 on a
 Windows 2003 server.  What is being used by users out there now?  We are
 thinking of upgrading to the latest Symantec Backup Exec software but I am not
 sure whether that version backs up PostgreSQL. When I called for support they
 told me to go to the Symantec site and search the compatibility list. I
 didn't find anything dealing with PostgreSQL databases.
 Any answer would help me simplify my backup situation.
 Thanks,


 Dominic Carlucci
 Production Control Analyst

 CAE Inc.
 8585 Cote de Liesse
 St. Laurent, Quebec H4T 1G6

 (514)341-2000 #2936
 [EMAIL PROTECTED]




 




-- 
Guido Barosio
---
http://www.globant.com
[EMAIL PROTECTED]



Re: [ADMIN] backup including symbolic links?

2008-01-27 Thread NUWAN LIYANAGE
Thank you very much Scott.. 
I'll keep you updated on my progress.
Thanks again.
Nuwan.

Scott Marlowe [EMAIL PROTECTED] wrote: On Jan 26, 2008 3:06 PM, NUWAN 
LIYANAGE  wrote:
 Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
 single file is going to be pretty large. I guess I don't have to worry too
 much about that.
 But my question to you sir is, if I want to create the development db using
 this pg_dump file, how do I actually edit the create tablespace statements so
 they will be created in the directory I want them in? Or should I even
 worry about this? (I want my data directory to be on the E: drive including all
 the tablespaces.)

Take a look through the docs on pg_dump and pg_dumpall.

A few of the options to look at are the ones to dump just data or just
schema.  Quite often you don't need the data, just the schema, for
development.  pg_dump -s dbname will dump just the schema, and
pg_dumpall --globals will dump just the global database info, i.e.
usernames, tablespaces, etc...

Then just edit in your favorite text editor and point the resulting
file(s) at your dev db with psql -f mydump.sql, where mydump.sql is the
file you got from one or more of the above operations.
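
For instance, a sketch of that sequence (dbname, devdb, and the file
names are placeholders; --globals-only is the long spelling of the
globals switch):

# dump the globals (roles, tablespaces) and the schema only
pg_dumpall --globals-only > globals.sql
pg_dump -s dbname > schema.sql
# edit the CREATE TABLESPACE paths in globals.sql as needed, then:
psql -f globals.sql postgres   # loads roles and tablespaces
createdb devdb
psql -f schema.sql devdb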

After that, I highly recommend that any changes you wish to make, do
so with .sql scripts (stored in your favorite version control system),
so that you can then apply them to your production database later with
minimal fuss.

Note that any changes that should be all or nothing to the
production database can be applied in a transaction (i.e. wrapped in a
begin/commit pair) and then either all or none of the changes will be
made...

example (with placeholder table and column names):
begin;
create table abc (id integer primary key);   -- placeholder DDL
alter table xyz add column note text;        -- placeholder change
insert into abc (id) values (1);             -- placeholder data
commit;

if any of those commands fail (things like creating unique indexes
might fail on production where they didn't in development) then no
harm, no foul, just figure out what went wrong and update your script
so it takes care of those problems.


   

Re: [ADMIN] backup including symbolic links?

2008-01-26 Thread NUWAN LIYANAGE
Yes, I was thinking of doing a pg_dumpall, but my only worry was that the single 
file is going to be pretty large. I guess I don't have to worry too much about 
that.
  But my question to you sir is, if I want to create the development db using 
this pg_dump file, how do I actually edit the create tablespace statements so they 
will be created in the directory I want them in? Or should I even worry 
about this? (I want my data directory to be on the E: drive, including all the 
tablespaces.)
  Thank you very much for your reply.
  Nuwan.
  
Scott Marlowe [EMAIL PROTECTED] wrote:
  On Jan 25, 2008 1:55 PM, NUWAN LIYANAGE wrote:
 Hello,

 I have a 450gb production database, and was trying to create a development
 database using a bkp.
 I was following the instructions on postgres documentation, and came across
 the paragraph that says...
  If you are using tablespaces that do not reside underneath this (data)
 directory, be careful to include them as well (and be sure that your backup
 dump archives symbolic links as links, otherwise the restore will mess up
 your tablespaces).
 I have a separate pg_tablespaces folder under E:\ drive since there wasn't
 enough space in my C:\ drive to put them.

 My db is 8.2 and is running on windows 2003.

 Can anyone tell me how to backup my database (including the symbolic links
 as links).

A standard pg_dumpall --globals will dump the create tablespace
statement, which you can edit as needed for your new machine. A plain
pg_dump of the database will dump out the tables with tablespace
statements. A pg_dumpall of the whole database cluster can also be
used to do this.



   

Re: [ADMIN] Backup

2008-01-25 Thread Simon Riggs
On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
  We have a center in Europe who has just started to use PostgreSQL and was
  asking me if there are any Symantec product or other products that backup
  this type of database.
 
 It doesn't appear to. 

The design of the PITR system allows a product-agnostic backup. Anything
that can back up a file can back up PostgreSQL. There is no need for
special certifications or hologram logos.

You may need to write a few lines of script to do it, but that's not a
problem surely?

So you can use pg_dump or PITR, as you choose.
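
For illustration, those few lines of script might look like this (a
sketch only: the paths, the user, and the label are placeholders, and
WAL archiving must already be configured as the manual describes):

#!/bin/sh
# PITR base backup: mark the start, copy the cluster, mark the stop.
# tar may warn that files changed during the copy; with WAL archiving
# in place that is expected and harmless.
psql -U postgres -c "SELECT pg_start_backup('weekly_base');"
tar cf /backup/base-`date +%Y%m%d`.tar /var/lib/pgsql/data
psql -U postgres -c "SELECT pg_stop_backup();"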

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com




Re: [ADMIN] backup including symbolic links?

2008-01-25 Thread Scott Marlowe
On Jan 25, 2008 1:55 PM, NUWAN LIYANAGE [EMAIL PROTECTED] wrote:
 Hello,

  I have a 450gb production database, and was trying to create a development
 database using a bkp.
  I was following the instructions on postgres documentation, and came across
 the paragraph that says...
   If you are using tablespaces that do not reside underneath this (data)
 directory, be careful to include them as well (and be sure that your backup
 dump archives symbolic links as links, otherwise the restore will mess up
 your tablespaces).
  I have a separate pg_tablespaces folder under E:\ drive since there wasn't
 enough space in my C:\ drive to put them.

  My db is 8.2 and is running on windows 2003.

  Can anyone tell me how to backup my database (including the symbolic links
 as links).

A standard pg_dumpall --globals will dump the create tablespace
statement, which you can edit as needed for your new machine.  A plain
pg_dump of the database will dump out the tables with tablespace
statements.  A pg_dumpall of the whole database cluster can also be
used to do this.



[ADMIN] backup including symbolic links?

2008-01-25 Thread NUWAN LIYANAGE
Hello, 
  
I have a 450gb production database, and was trying to create a 
development database using a bkp.
  I was following the instructions on postgres documentation, and came across 
the paragraph that says...
If you are using tablespaces that do not reside underneath this (data)  
directory, be careful to include them as well (and be sure that your  backup 
dump archives symbolic links as links, otherwise the restore  will mess up your 
tablespaces).
  I have a separate pg_tablespaces folder under E:\ drive since there wasn't 
enough space in my C:\ drive to put them.
  
  My db is 8.2 and is running on windows 2003.
  
  Can anyone tell me how to backup my database (including the symbolic links as 
links).
  Thank you
  Nuwan
  
   
-
Looking for last minute shopping deals?  Find them fast with Yahoo! Search.

[ADMIN] Backup

2008-01-24 Thread Dominic Carlucci
Hi,
We have a center in Europe that has just started to use PostgreSQL
and was asking me whether there are any Symantec or other products
that back up this type of database.  We presently run VERITAS ver9.1 on a
Windows 2003 server.  What is being used by users out there now?  We are
thinking of upgrading to the latest Symantec Backup Exec software but I am
not sure whether that version backs up PostgreSQL. When I called for support
they told me to go to the Symantec site and search the compatibility
list. I didn't find anything dealing with PostgreSQL databases.
Any answer would help me simplify my backup situation.
Thanks, 

Dominic Carlucci
Production Control Analyst

CAE Inc.
8585 Cote de Liesse
St. Laurent, Quebec H4T 1G6 

(514)341-2000 #2936
[EMAIL PROTECTED] 

 

 


Re: [ADMIN] Backup

2008-01-24 Thread Phillip Smith
 We have a center in Europe who has just started to use PostgreSQL and was
 asking me if there are any Symantec product or other products that backup
 this type of database.

It doesn't appear to. I've just been through the whole rigmarole of
BackupExec for some Windows Servers, and I couldn't find anything to deal
with PostgreSQL.

Just dump Postgres to the file system and back up that dump. It depends on
what your Recovery Point requirements are.





Re: [ADMIN] Backup of live database

2008-01-16 Thread Joshua D. Drake

Brian Modra wrote:
The documentation about WAL says that you can start a live backup, as 
long as you use WAL backup also.
I'm concerned about the integrity of the tar file. Can someone help me 
with that?


If you are using point in time recovery:

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html

You do not have to worry about it.

Joshua D. Drake








On 16/01/2008, *Joshua D. Drake* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Brian Modra wrote:
  Hi,
  If tar reports that a file was modified while it was being archived,
  does that mean that the file was archived correctly, or is it
corrupted
  in the archive?
  Does tar take a snapshot of the file so that even if it is
modified, at
  least the archive is safe?

You can not use tar to backup postgresql if it is running.

http://www.postgresql.org/docs/8.2/static/backup.html
http://www.postgresql.org/docs/8.2/static/backup.html

Sincerely,

Joshua D. Drake

  Thanks
 
  --
  Brian Modra   Land line: +27 23 5411 462
  Mobile: +27 79 183 8059
  6 Jan Louw Str, Prince Albert, 6930
  Postal: P.O. Box 2, Prince Albert 6930
  South Africa




--
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa





Re: [ADMIN] Backup of live database

2008-01-16 Thread Brian Modra
Sorry to keep hammering this point, but I want to be totally sure it's OK
now, rather than attempt to recover 5 months down the line and have it fail...

Are you absolutely certain that the tar backup of the file that changed is
OK? Even if that file is huge, has tar managed to save the file as it was
before it was changed? Otherwise I'm afraid that the first part of the file
is saved to tar, then the file is modified, and the last part of the file is
saved to tar from the point it was modified, and so is not consistent with
the first part. The file would then have lost its integrity, so even a WAL
restore won't help, because the base files themselves are corrupt in the tar
file.

On 16/01/2008, Joshua D. Drake [EMAIL PROTECTED] wrote:

 Brian Modra wrote:
  The documentation about WAL says that you can start a live backup, as
  long as you use WAL backup also.
  I'm concerned about the integrity of the tar file. Can someone help me
  with that?

 If you are using point in time recovery:

 http://www.postgresql.org/docs/8.2/static/continuous-archiving.html

 You do not have to worry about it.

 Joshua D. Drake






 
  On 16/01/2008, *Joshua D. Drake* [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED] wrote:
 
  Brian Modra wrote:
Hi,
If tar reports that a file was modified while it was being
 archived,
does that mean that the file was archived correctly, or is it
  corrupted
in the archive?
Does tar take a snapshot of the file so that even if it is
  modified, at
least the archive is safe?
 
  You can not use tar to backup postgresql if it is running.
 
  http://www.postgresql.org/docs/8.2/static/backup.html
  http://www.postgresql.org/docs/8.2/static/backup.html
 
  Sincerely,
 
  Joshua D. Drake
 
Thanks
   
--
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa
 
 
 
 
  --
  Brian Modra   Land line: +27 23 5411 462
  Mobile: +27 79 183 8059
  6 Jan Louw Str, Prince Albert, 6930
  Postal: P.O. Box 2, Prince Albert 6930
  South Africa




-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa


Re: [ADMIN] Backup of live database

2008-01-16 Thread Tom Lane
Steve Holdoway [EMAIL PROTECTED] writes:
 You can be absolutely certain that the tar backup of a file that's changed is 
 a complete waste of time. Because it changed while you were copying it. 

That is, no doubt, the reasoning that prompted the gnu tar people to
make it do what it does, but it has zero to do with reality for
Postgres' usage in PITR base backups.  What we care about is consistency
on the page level: as long as each page of the backed-up file correctly
represents *some* state of that page while the backup was in progress,
everything is okay, because replay of the WAL log will correct any pages
that are out-of-date, missing, or shouldn't be there at all.  And
Postgres always writes whole pages.  So as long as write() and read()
are atomic --- which is the case on all Unixen I know of --- everything
works.

(Thinks for a bit...) Actually I guess there's one extra assumption in
there, which is that tar must issue its reads in multiples of our page
size.  But that doesn't seem like much of a stretch.
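
Anyone who wants to check that assumption on their own platform can watch
the read sizes tar actually issues; a Linux-only sketch (the file name is
a placeholder, and whether the sizes line up with the 8192-byte page size
will vary with the tar build):

# print the byte count of each read() tar performs while archiving
strace -e trace=read tar cf /dev/null /tmp/somefile 2>&1 | grep 'read('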

regards, tom lane



Re: [ADMIN] Backup of live database

2008-01-16 Thread Peter Eisentraut
Am Mittwoch, 16. Januar 2008 schrieb Tom Lane:
 (Thinks for a bit...) Actually I guess there's one extra assumption in
 there, which is that tar must issue its reads in multiples of our page
 size.  But that doesn't seem like much of a stretch.

There is something about that here: 
http://www.gnu.org/software/tar/manual/html_node/tar_149.html#SEC149

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [ADMIN] Backup of live database

2008-01-16 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 Am Mittwoch, 16. Januar 2008 schrieb Tom Lane:
 (Thinks for a bit...) Actually I guess there's one extra assumption in
 there, which is that tar must issue its reads in multiples of our page
 size.  But that doesn't seem like much of a stretch.

 There is something about that here: 
 http://www.gnu.org/software/tar/manual/html_node/tar_149.html#SEC149

AFAICT that's talking about the I/O chunk size *on the archive file*.
It doesn't say anything specific about the chunk size on the file side.

regards, tom lane



Re: [ADMIN] Backup of live database

2008-01-16 Thread David Wall


Brian Modra wrote:
Sorry to be hammering this point, but I want to be totally sure its 
OK, rather than 5 months down the line attempt to recover, and it fails...


Are you absolutely certain that the tar backup of the file that 
changed, is OK? (And that even if that file is huge, tar has managed 
to save the file as it was before it was changed - otherwise I'm 
afraid that the first part of the file is saved to tar, and then the 
file is modified, and the last part of the file is saved to tar from 
the point it was modified - and so therefore not consistent with the 
first part... And therefore the file has lost its integrity, so even a 
WAL restore won't help because the base files themselves are corrupt 
in the tar file?
Not sure if the answers you got answered your question or not.  Here's 
my take:


1) If the database is not running, tar works fine.

2) If the database is running, you can ONLY use tar if you also use WAL 
archiving since the database will not only need the tar files, which 
will be inconsistent, but also the WAL files (in your $PGDATA/pg_xlog) 
in order to recover from those inconsistencies.  I find this is best if 
you are creating a warm standby that is keeping a backup database in 
sync with a primary.


3) If the database is running, use pg_dump to create a consistent backup.

4) No matter what, as previously mentioned, you should test your backup 
procedures to ensure you can reliably restore.
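
On point 4, even a small scripted check is better than nothing; a sketch,
with the dump path and the table name as placeholders:

#!/bin/sh
# restore the latest dump into a scratch database and run a sanity query
createdb restore_test
psql -f /var/backups/pgsql/mydb-latest.sql restore_test
psql -c "SELECT count(*) FROM some_important_table;" restore_test
dropdb restore_test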


Good luck,
David



Re: [ADMIN] Backup of live database

2008-01-16 Thread Tom Arthurs

Hi, Brian

We have been doing PITR backups since the feature first became available 
in postgresql.  We first used tar; then, due to the dreadful warning 
being emitted by tar (which made us doubt that it was actually archiving 
that particular file), we decided to try CPIO, which actually emits much 
the same warnings, though not as verbose.  So I think that tar will work 
as well (we never bothered going back to tar, mostly through laziness, 
so I can't personally say that it works).  Actually I have reason to 
believe you can use any series of OS commands that create copies or 
archives of the files, as long as those commands don't exit prematurely 
on warnings.


The important thing is to start archiving the WAL files *prior* to the 
first OS backup, or you will end up with an unusable data base.
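
For reference, archiving is switched on in postgresql.conf before the
base backup is taken; a minimal sketch, with the archive directory as a
placeholder:

# copy each completed WAL segment into the archive directory
archive_command = 'cp %p /archive/pgsql/%f'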


We have actually tested and used recovered data bases with this scheme. 
 We use WAL archiving to replicate a warm standby data base which we 
have failed over to (and failed back from) many times, and I've had to 
do an actual PITR recovery to recover several tables that got 
accidentally deleted by bad procedures/code/brain burned DBA's :)


Brian Modra wrote:
Sorry to be hammering this point, but I want to be totally sure its OK, 
rather than 5 months down the line attempt to recover, and it fails...


Are you absolutely certain that the tar backup of the file that changed, 
is OK? (And that even if that file is huge, tar has managed to save the 
file as it was before it was changed - otherwise I'm afraid that the 
first part of the file is saved to tar, and then the file is modified, 
and the last part of the file is saved to tar from the point it was 
modified - and so therefore not consistent with the first part... And 
therefore the file has lost its integrity, so even a WAL restore won't 
help because the base files themselves are corrupt in the tar file?


On 16/01/2008, *Joshua D. Drake* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Brian Modra wrote:
  The documentation about WAL says that you can start a live backup, as
  long as you use WAL backup also.
  I'm concerned about the integrity of the tar file. Can someone
help me
  with that?

If you are using point in time recovery:

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html

You do not have to worry about it.

Joshua D. Drake






 
  On 16/01/2008, *Joshua D. Drake* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]
  mailto: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:
 
  Brian Modra wrote:
Hi,
If tar reports that a file was modified while it was being
archived,
does that mean that the file was archived correctly, or is it
  corrupted
in the archive?
Does tar take a snapshot of the file so that even if it is
  modified, at
least the archive is safe?
 
  You can not use tar to backup postgresql if it is running.
 
  http://www.postgresql.org/docs/8.2/static/backup.html
http://www.postgresql.org/docs/8.2/static/backup.html
  http://www.postgresql.org/docs/8.2/static/backup.html
 
  Sincerely,
 
  Joshua D. Drake
 
Thanks
   
--
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa
 
 
 
 
  --
  Brian Modra   Land line: +27 23 5411 462
  Mobile: +27 79 183 8059
  6 Jan Louw Str, Prince Albert, 6930
  Postal: P.O. Box 2, Prince Albert 6930
  South Africa




--
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa




Re: [ADMIN] Backup of live database

2008-01-16 Thread Tom Davies


On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to  
the first OS backup, or you will end up with an unusable data base.


Why does the recovery need WAL files from before the backup?

Tom



Re: [ADMIN] Backup of live database

2008-01-16 Thread Tom Lane
Tom Davies [EMAIL PROTECTED] writes:
 On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
 The important thing is to start archiving the WAL files *prior* to  
 the first OS backup, or you will end up with an unusable data base.

 Why does the recovery need WAL files from before the backup?

It doesn't, but there's no reasonable way to start both processes at
exactly the same instant, so the standard advice is to start archiving
first.

regards, tom lane



Re: [ADMIN] Backup of live database

2008-01-16 Thread Scott Marlowe
On Jan 16, 2008 4:56 PM, Tom Davies [EMAIL PROTECTED] wrote:

 On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
  The important thing is to start archiving the WAL files *prior* to
  the first OS backup, or you will end up with an unusable data base.

 Why does the recovery need WAL files from before the backup?

It's a timeline thing.  The database is coherent at time x1.  The WAL
file started at point x0 and, moving forward, at some point matches
up.  You run pg_start_backup(), which tells pgsql you're
starting your backup at point x1.  You start the backup.  You now have
a backup of the pgsql datastore that's a mix of what you had at x1
when you started and at x2 where you stopped.

You apply the WAL from x0 forward to, say, x3, and it conveniently
rewrites the datastore to be coherent.  If your WAL only started at some
point between x1 and x2, you might have some data in the database that
the WAL file wouldn't write over but that is incoherent with respect to
what you'd get at point x3.  So some pages would be out of date,
because your WAL file isn't old enough.



Re: [ADMIN] Backup of live database

2008-01-16 Thread Tom Arthurs
If you don't start archiving log files, your first backup won't be valid 
-- well I suppose you could do it the hard way and start the backup and 
the log archiving  at exactly the same time (can't picture how to time 
that), but the point is you need the current log when you kick off the 
backup.  If you kick off archiving first, you are assured of a valid 
backup (when the recovery is done.)  You may get some extra log files 
that way, but better too many than too few.  (been there, done that.)


Tom Davies wrote:


On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to the 
first OS backup, or you will end up with an unusable data base.


Why does the recovery need WAL files from before the backup?

Tom







[ADMIN] backup WAL files,

2008-01-15 Thread Sebastian Reitenbach
Hi,

I use a script like the example below to generate a list of the WAL files 
that have to be saved by the backup job. I take the names of the first 
and last WAL files from the backup HISTORYFILE generated by 
pg_start_backup() and pg_stop_backup(). The names of the WAL files between 
the first and the last I calculate the following way:

HISTORYFILE=`find ${ARCHIVEPATH} -name '*.backup' -exec \
    grep -l "backup-${DATABASE}-${NOW}" {} \;`
FIRSTWALFILE=`grep "START WAL LOCATION" ${HISTORYFILE} | \
    awk '{print $6}' | sed -e 's/)$//g'`
LASTWALFILE=`grep "STOP WAL LOCATION" ${HISTORYFILE} | \
    awk '{print $6}' | sed -e 's/)$//g'`
echo FIRSTWALFILE=$FIRSTWALFILE
echo LASTWALFILE=$LASTWALFILE

FILE_PREFIX=`echo $FIRSTWALFILE | cut -c 1-15`
FIRST_SUFFIX=`echo $FIRSTWALFILE | cut -c 16-`
LAST_SUFFIX=`echo $LASTWALFILE | cut -c 16-`

CNTA=`echo "obase=10; ibase=16; $FIRST_SUFFIX" | bc`
CNTE=`echo "obase=10; ibase=16; $LAST_SUFFIX" | bc`
echo $CNTA $CNTE

while [ $CNTA -le $CNTE ]; do
    echo ${FILE_PREFIX}${FIRST_SUFFIX}   # >> outfile
    # bc drops leading zeros, so repad the suffix to 9 hex digits
    FIRST_SUFFIX=`echo "obase=16; ibase=16; ${FIRST_SUFFIX} + 1" | bc`
    FIRST_SUFFIX=`printf '%9s' ${FIRST_SUFFIX} | tr ' ' '0'`
    CNTA=$(($CNTA+1))
done


The WAL files have names like this:
00000001000000010000003C

I wonder what the meaning of the two 1s in the filename is. Are the WAL 
file names counted up to F?
If so, I'll run into problems anyway, as such integers are too large to be 
handled by bash.

any insight is highly appreciated.

kind regards
Sebastian




Re: [ADMIN] backup WAL files,

2008-01-15 Thread Tom Lane
Sebastian Reitenbach [EMAIL PROTECTED] writes:
 The WAL files have names like this:
 00000001000000010000003C

 I wonder what the meaning of the two 1s in the filename is?

The first one (the first 8 hex digits, actually) is the current
timeline number.  The second one isn't very interesting; it's
an artifact of the way that WAL file locations are converted to
file names internally.
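
In other words, a name like the one above splits into three 8-hex-digit
fields; a bash sketch:

# split a WAL segment name into its 8-hex-digit fields
NAME=00000001000000010000003C
TLI=`echo $NAME | cut -c 1-8`    # timeline id: 00000001
LOG=`echo $NAME | cut -c 9-16`   # log id:      00000001
SEG=`echo $NAME | cut -c 17-24`  # segment id:  0000003C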

 Are the WAL 
 file names counted up to F?
 Then I'll run into problems anyway, as such integers are too large to be 
 handled by bash.

You definitely should not expect to convert the names to integers.

regards, tom lane



Re: [ADMIN] backup WAL files,

2008-01-15 Thread Sebastian Reitenbach
Hi,
Tom Lane [EMAIL PROTECTED] wrote: 
 Sebastian Reitenbach [EMAIL PROTECTED] writes:
  The WAL files have names like this:
  00000001000000010000003C
 
  I wonder what the meaning of the two 1s in the filename is?
 
  The first one (the first 8 hex digits, actually) is the current
  timeline number.  The second one isn't very interesting; it's
 an artifact of the way that WAL file locations are converted to
 file names internally.
thanks for this information.

 
  Are the WAL 
  file names counted up to F?
  Then I'll run into problems anyway, as such integers are too large to 
be 
  handled by bash.
 
 You definitely should not expect to convert the names to integers.

Then I do not understand why only the names of the first and the last WAL 
file are stored in the backup history file. I assumed that by counting from 
the first to the last I would catch all WAL files needed for a complete backup.
Otherwise I have no idea how to figure out which of the WAL files are needed for 
the backup job. Or do I have to handle this via the file modification 
timestamps?
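
One sketch that would sidestep the hex arithmetic entirely: within a
timeline the fixed-width segment names sort lexicographically in creation
order, so (reusing ARCHIVEPATH, FIRSTWALFILE, and LASTWALFILE from the
script above, and assuming both endpoint files are already archived):

# print the sorted archive listing from the first to the last WAL file
ls ${ARCHIVEPATH} | sort | \
    sed -n "/^${FIRSTWALFILE}\$/,/^${LASTWALFILE}\$/p"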

Does anybody have a pointer to documentation where I can read up on how 
the names of the WAL files are created/used in postgres?

thanks
Sebastian



