Re: [ADMIN] backup

2012-06-18 Thread Frederiko Costa
Hi,

I would recommend this:
http://www.postgresql.org/docs/9.1/static/backup.html

Very straightforward and easy reading ...
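
If it helps, the quickest way to get going is a logical backup with
pg_dump / pg_dumpall (just a sketch; run it as the postgres OS user and
adjust the database name for your setup):

  # dump a single database to a plain SQL file
  pg_dump mydb > mydb.sql

  # or dump the whole cluster, including roles and tablespaces
  pg_dumpall > cluster.sql

  # restore later with psql
  psql -d mydb -f mydb.sql

The chapter above also covers file-system-level backups and continuous
archiving (PITR) if you need those.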

-fred




On Mon, Jun 18, 2012 at 10:50 AM, lohita nama  wrote:

> Hi
>
> I am working as a SQL DBA. Recently our team got the opportunity to work
> on Postgres databases. My experience is with SQL Server on the Windows
> platform, and our company now has Postgres databases on the Solaris
> platform.
>
> Can you please suggest a step-by-step procedure for backing up Postgres
> databases?
>
> Awaiting your reply; thanks for your help.
>


Re: [ADMIN] Moving the data directory

2011-02-15 Thread Frederiko Costa
You might find guidance in the documentation about migrating the data
directory to another server; a rough outline of the file-level approach
is below.
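
Something like this, using the paths from your message below
(Debian/Ubuntu packaging assumed; stop the cluster first, and
double-check ownership and permissions, since that is the usual
culprit):

  sudo /etc/init.d/postgresql-8.4 stop   # or "postgresql", depending on packaging

  # copy preserving ownership and permissions
  sudo cp -a /var/lib/postgresql/8.4/main /media/extension/var/lib/postgresql/8.4/

  # the data directory must belong to postgres and be mode 0700
  sudo chown -R postgres:postgres /media/extension/var/lib/postgresql/8.4/main
  sudo chmod 700 /media/extension/var/lib/postgresql/8.4/main

  # point data_directory in /etc/postgresql/8.4/main/postgresql.conf at
  # the new location, then:
  sudo /etc/init.d/postgresql-8.4 start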

~Fred
Linkedin profile: http://www.linkedin.com/in/frederikocosta



On Tue, Feb 15, 2011 at 2:58 PM, Sairam Krishnamurthy
 wrote:
> Frederiko,
>
> I will try that. I was hoping to have a better solution than to reinstall
> postgres.
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
> On 02/15/2011 04:56 PM, Frederiko Costa wrote:
>
> What I meant was that when you install postgresql, you initialize the
> area where your data directory will be. There you have the database,
> config files, logs, etc. Before you restore the DB into the new pgsql,
> as far as I'm aware, you have to create/init this area.
>
> On Tue, Feb 15, 2011 at 2:27 PM, Sairam Krishnamurthy
>  wrote:
>
> Can you explain a bit? I didn't get what you mean by data area.
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
> On 02/15/2011 04:09 PM, Frederiko Costa wrote:
>
> I would recommend creating the data area on the destination pgsql
> server and dumping the source database into it, rather than copying the
> data directory.
>
> On Tue, Feb 15, 2011 at 1:41 PM, Sairam Krishnamurthy
>  wrote:
>
> All,
>
> I am using Ubuntu 10.04 and Postgres 8.4. I have been trying to move
> my data dir to another device for the past couple of days but have run
> into a lot of problems.
>
> I did the following:
>
> 1. Copied the contents of /var/lib/postgresql/8.4/main to my folder.
> 2. Changed the data_directory in postgresql.conf.
>
> When I start Postgres using the /etc/init.d script, I get the following error:
>
> * Starting PostgreSQL 8.4 database server
>  * Error: could not exec /usr/lib/postgresql/8.4/bin/pg_ctl
> /usr/lib/postgresql/8.4/bin/pg_ctl start -D
> /media/extension/var/lib/postgresql/8.4/main/ -l
> /var/log/postgresql/postgresql-8.4-main.log -s -o  -c
> config_file="/etc/postgresql/8.4/main/postgresql.conf" :
>
>
> I tried changing the ownership of the
> /media/extension/var/lib/postgresql dir to , postgres and
> root. None of them worked and I keep getting the same error message.
>
> I found a lot of similar problems online but none had a solution.
>
> Can someone help me with this?
>
> TIA
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>
>
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Moving the data directory

2011-02-15 Thread Frederiko Costa
What I meant was that when you install postgresql, you initialize the
area where your data directory will be. There you have the database,
config files, logs, etc. Before you restore the DB into the new pgsql,
as far as I'm aware, you have to create/init this area.
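
Roughly, it's something like this (a sketch only; on Debian/Ubuntu the
packaging normally does this for you when you install postgresql, via
pg_createcluster):

  # initialize a fresh data area, as the postgres user
  initdb -D /path/to/new/data/area

  # start a server on it
  pg_ctl -D /path/to/new/data/area -l logfile start

  # then load your dump into it (create the target database first if needed)
  psql -f mydump.sql postgres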

On Tue, Feb 15, 2011 at 2:27 PM, Sairam Krishnamurthy
 wrote:
> Can you explain a bit? I didn't get what you mean by data area.
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
> On 02/15/2011 04:09 PM, Frederiko Costa wrote:
>
> I would recommend creating the data area on the destination pgsql
> server and dumping the source database into it, rather than copying the
> data directory.
>
> On Tue, Feb 15, 2011 at 1:41 PM, Sairam Krishnamurthy
>  wrote:
>
> All,
>
> I am using Ubuntu 10.04 and Postgres 8.4. I have been trying to move
> my data dir to another device for the past couple of days but have run
> into a lot of problems.
>
> I did the following:
>
> 1. Copied the contents of /var/lib/postgresql/8.4/main to my folder.
> 2. Changed the data_directory in postgresql.conf.
>
> When I start Postgres using the /etc/init.d script, I get the following error:
>
> * Starting PostgreSQL 8.4 database server
>  * Error: could not exec /usr/lib/postgresql/8.4/bin/pg_ctl
> /usr/lib/postgresql/8.4/bin/pg_ctl start -D
> /media/extension/var/lib/postgresql/8.4/main/ -l
> /var/log/postgresql/postgresql-8.4-main.log -s -o  -c
> config_file="/etc/postgresql/8.4/main/postgresql.conf" :
>
>
> I tried changing the ownership of the
> /media/extension/var/lib/postgresql dir to , postgres and
> root. None of them worked and I keep getting the same error message.
>
> I found a lot of similar problems online but none had a solution.
>
> Can someone help me with this?
>
> TIA
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Moving the data directory

2011-02-15 Thread Frederiko Costa
I would recommend creating the data area on the destination pgsql
server and dumping the source database into it, rather than copying the
data directory.
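
In practice that can be as simple as this (a sketch; host and database
names are just examples, and the target database must exist on the new
server first):

  # dump straight from the old server into the new one
  pg_dump -h oldhost mydb | psql -h newhost mydb

  # or, to move everything including roles
  pg_dumpall -h oldhost | psql -h newhost postgres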

On Tue, Feb 15, 2011 at 1:41 PM, Sairam Krishnamurthy
 wrote:
> All,
>
> I am using Ubuntu 10.04 and Postgres 8.4. I have been trying to move
> my data dir to another device for the past couple of days but have run
> into a lot of problems.
>
> I did the following:
>
> 1. Copied the contents of /var/lib/postgresql/8.4/main to my folder.
> 2. Changed the data_directory in postgresql.conf.
>
> When I start Postgres using the /etc/init.d script, I get the following error:
>
> * Starting PostgreSQL 8.4 database server
>  * Error: could not exec /usr/lib/postgresql/8.4/bin/pg_ctl
> /usr/lib/postgresql/8.4/bin/pg_ctl start -D
> /media/extension/var/lib/postgresql/8.4/main/ -l
> /var/log/postgresql/postgresql-8.4-main.log -s -o  -c
> config_file="/etc/postgresql/8.4/main/postgresql.conf" :
>
>
> I tried changing the ownership of the
> /media/extension/var/lib/postgresql dir to , postgres and
> root. None of them worked and I keep getting the same error message.
>
> I found a lot of similar problems online but none had a solution.
>
> Can someone help me with this?
>
> TIA
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgres on NAS/NFS

2011-02-11 Thread Frederiko Costa
Hello,

I have run a database hosted on a dedicated NFS share and have not had
any problems. I would rather go with that solution than a SAN, because
of the flexibility it gives plus the lower management overhead.

I recommend skimming through a paper from Oracle/Netapp (from 2004, I
think) where they talk a little about running a database on NFS. I'm not
recommending either Oracle or Netapp, but it's worth mentioning because
it clarified a few things for me. The conclusion I drew is that the
performance gain of a SAN does not pay off the management overhead it
brings, especially in a scenario like the one you're describing, which
does not seem to involve heavy usage.

As long as you pay close attention to the details that can cause
corruption in a shared environment, I don't think going with NFS would
be a problem.
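
The main detail is the mount options: a hard mount over TCP is what is
usually recommended, e.g. something along these lines in /etc/fstab
(illustrative only; check your NAS vendor's guidance):

  nas:/export/pgdata  /var/lib/pgsql/data  nfs  rw,hard,nointr,tcp,bg  0 0

Soft mounts are generally discouraged for a database, because the client
can give up on a slow operation and return an error instead of retrying.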

Regards,
~Frederiko Costa
Linkedin profile: http://www.linkedin.com/in/frederikocosta



On Wed, Feb 9, 2011 at 11:59 AM, Bryan Keller  wrote:
> I am considering running Postgres with the database hosted on a NAS via
> NFS. I have read a few things on the Web saying this is not recommended, as 
> it will be slow and could potentially cause data corruption.
>
> My goal is to have the database on a shared filesystem so in case of server 
> failure, I can start up a standby Postgres server and point it to the same 
> database. I would rather not use a SAN as I have heard horror stories about 
> managing them. Also they are extremely expensive. A DAS would be another 
> option, but I'm not sure if a DAS can be connected to two servers for server 
> failover purposes.
>
> Currently I am considering not using a shared filesystem and instead using 
> replication between the two servers.
>
> I am wondering what solutions others have used for my active-passive Postgres
> failover scenario? Is a NAS still not a recommended approach? Will a DAS 
> work? Or is replication the best approach?
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] restore database files

2010-12-15 Thread Frederiko Costa
Yes, it is. Have a look at the pg_dump utility. The documentation
contains plenty of examples.
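
For example (a sketch; this assumes you can first bring the copied data
folder up under a Postgres server of the same major version, since
pg_dump needs a running server to read from):

  # against the old server, dump in custom format
  pg_dump -h oldserver -U postgres -Fc mydb > mydb.dump

  # on the new server, create the database and load the dump
  createdb -h newserver -U postgres mydb
  pg_restore -h newserver -U postgres -d mydb mydb.dump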

On Tue, Dec 14, 2010 at 12:14 PM, gosta100 wrote:

>
> Hello everyone,
>
>   I have a copy of the data folder of a Windows postgres installation. Is
> it possible to restore the databases contained in there to another postgres
> server?
>
> Thank you.
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/restore-database-files-tp3305210p3305210.html
> Sent from the PostgreSQL - admin mailing list archive at Nabble.com.
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>


Re: [ADMIN] Find all running postgres DB servers on a network

2010-11-18 Thread Frederiko Costa
I believe in this case you will have to talk with the sysadmin and agree
on this requirement for your application. I don't know about Spiceworks,
but nmap works at layers 3/4, so you may be subject to firewall rules or
whatever else is defined in the security policy.

~Fred

On Thu, Nov 18, 2010 at 7:26 AM, ojas dubey  wrote:

> Thank you Rich,Fred,Scott,Viktor and Gerard for your replies.
>
> Actually I am neither the system administrator nor the person who set up
> all the servers. I am developing an application which would provide the user
> with a list of running Postgres DB servers from which the user can select
> one. So I was wondering whether scanning ports using nmap or Spiceworks
> would get me into trouble with the system administrator for flooding the
> network with my requests?
>
> Regards,
> Ojas
>
>
> On Thu, Nov 18, 2010 at 2:51 AM, Rich  wrote:
>
>> Use nmap. Unless you deliberately changed the IP port you should have no
>> problem.  Are you the one who setup all the servers?
>>
>>
>> On Wed, Nov 17, 2010 at 4:03 PM, Frederiko Costa wrote:
>>
>>> True. However, I was just assuming that Postgres was running on the
>>> default port. If not, you could also probe port ranges or even sweep the
>>> network for open ports to narrow things down. It might be a faster
>>> option if software such as Spiceworks is not being used.
>>>
>>> Spiceworks looks a good option too.
>>>
>>> ~Fred
>>>
>>>
>>> On Wed, Nov 17, 2010 at 12:56 PM, Scott Whitney wrote:
>>>
>>>> That only works in the event that you have PG listening on port 5432.
>>>>
>>>> A product like Spiceworks will provide much more detail, presuming you
>>>> have the IT credentials to talk to the machines.
>>>>
>>>> --
>>>>
>>>> nmap is the way to go. Try scanning for port 5432 across a range of
>>>> IPs on your LAN.
>>>>
>>>> ~Fred
>>>> Linkedin profile: http://www.linkedin.com/in/frederikocosta
>>>>
>>>>
>>>> On Sun, Nov 14, 2010 at 8:52 AM, ojas dubey 
>>>> wrote:
>>>>
>>>> > Hi,
>>>> >
>>>> > I wanted to know if there is a way to get the hostnames of all the
>>>> systems
>>>> > running PostGres DB servers on a local network on Windows (XP/Vista/7)
>>>> using
>>>> > JDBC or any other Java API ?
>>>> >
>>>> >
>>>> > Regards,
>>>> > Ojas
>>>> >
>>>>
>>>>
>>>>
>>>
>>
>


Re: [ADMIN] Find all running postgres DB servers on a network

2010-11-17 Thread Frederiko Costa
True. However, I was just assuming that Postgres was running on the
default port. If not, you could also probe port ranges or even sweep the
network for open ports to narrow things down. It might be a faster
option if software such as Spiceworks is not being used.

Spiceworks looks a good option too.

~Fred

On Wed, Nov 17, 2010 at 12:56 PM, Scott Whitney  wrote:

> That only works in the event that you have PG listening on port 5432.
>
> A product like Spiceworks will provide much more detail, presuming you have
> the IT credentials to talk to the machines.
>
> --
>
> nmap is the way to go. Try scanning for port 5432 across a range of IPs
> on your LAN.
>
> ~Fred
> Linkedin profile: http://www.linkedin.com/in/frederikocosta
>
>
> On Sun, Nov 14, 2010 at 8:52 AM, ojas dubey  wrote:
>
> > Hi,
> >
> > I wanted to know if there is a way to get the hostnames of all the
> systems
> > running PostGres DB servers on a local network on Windows (XP/Vista/7)
> using
> > JDBC or any other Java API ?
> >
> >
> > Regards,
> > Ojas
> >
>
>
>


Re: [ADMIN] Find all running postgres DB servers on a network

2010-11-17 Thread Frederiko Costa
nmap is the way to go. Try scanning for port 5432 across a range of IPs
on your LAN.
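
For example (adjust the subnet to yours):

  nmap -p 5432 --open 192.168.0.0/24

That lists hosts with something listening on the default port; instances
configured on a non-default port won't show up, of course.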

~Fred
Linkedin profile: http://www.linkedin.com/in/frederikocosta


On Sun, Nov 14, 2010 at 8:52 AM, ojas dubey  wrote:

> Hi,
>
> I wanted to know if there is a way to get the hostnames of all the systems
> running PostGres DB servers on a local network on Windows (XP/Vista/7) using
> JDBC or any other Java API ?
>
>
> Regards,
> Ojas
>


[ADMIN] Data cluster initialization on NFS for different databases.

2010-05-05 Thread Frederiko Costa
Hello,

I've got a few questions about the way I'd like to design my database
area. I currently have two projects, each with its own dedicated
database, but their data would be stored on the same NFS server. My plan
is to run two separate instances of the pgsql server on two separate
machines and keep the data in two different directories - each acting as
a database area - exported via NFS. I believe this buys a little
flexibility later in case I decide to move one database to a different
place, but I don't think that alone should drive the decision.
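
For concreteness, what I have in mind is roughly this (hostnames and
paths are just placeholders):

  # on host A, for project A's database
  mount nfs-server:/export/pg_projectA /srv/pg_projectA
  initdb -D /srv/pg_projectA
  pg_ctl -D /srv/pg_projectA -l /var/log/pg_projectA.log start

  # on host B, for project B's database
  mount nfs-server:/export/pg_projectB /srv/pg_projectB
  initdb -D /srv/pg_projectB
  pg_ctl -D /srv/pg_projectB -l /var/log/pg_projectB.log start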

The question is whether this approach is what most people do. I was also
thinking that if the number of projects grows, this could become a real
headache to manage.

Would it be possible to have a single directory - containing the database
area - exported and shared by different hosts, each reading and writing
different databases? If so, is it recommended?

The last question: is NFS a good option here, or does anybody recommend
another network file system?

Thanks in advance.
Frederiko


Re: [ADMIN] archived WALL files question

2010-04-19 Thread Frederiko Costa



On 04/19/2010 10:26 AM, Kevin Grittner wrote:
> Frederiko Costa wrote:
>> I have seen new 16 MB segment files created in the pg_xlog directory
>> as time goes on. Honestly, I did not understand why they got created,
>> because I was running it on a VM and there was no activity. I got
>> several new segments.
>>
>> If I run the command you asked, this is the output. No segment is
>> copied, and no new segment file gets created:
>>
>>  select pg_switch_xlog();
>>   pg_switch_xlog
>>  ----------------
>>   0/8188
>>  (1 row)
>
> What version of PostgreSQL is this?

8.4.3

> What do you see if you run?:
>
> show archive_command;

cp -p %p /mnt/data/%f

> show archive_mode;

on

> show archive_timeout;

1min

I have just enabled that. Now, WAL files are being copied directly to
the /mnt/data dir. However, the same segments are not in the pg_xlog
dir. Is this the default behaviour?

Must I set archive_timeout? I don't think I want that, because,
especially for the next few months, activity will be very limited and I
would get several nearly empty segments just because the timeout was
reached. Is this approach recommended anyway, or is it better to use it
even with such limited activity?

> Check your log file from the time you ran pg_switch_xlog to look for
> any messages which might give a clue to what's happening.

The log files did not show anything about pg_switch_xlog.

> -Kevin


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] archived WALL files question

2010-04-19 Thread Frederiko Costa
I have seen new 16 MB segment files created in the pg_xlog directory as
time goes on. Honestly, I did not understand why they got created,
because I was running it on a VM and there was no activity. I got
several new segments.


If I run the command you asked, this is the output. No segment is
copied, and no new segment file gets created:

 select pg_switch_xlog();
  pg_switch_xlog
 ----------------
  0/8188
 (1 row)

Wasn't this command supposed to create a new segment file and copy the
last not-yet-archived segments - using the command specified in
archive_command - to the destination you defined?


I wanted to understand why the newly created WAL segment files (16 MB
each) were not copied automatically to the new place. What are the
criteria? They only get copied when I carry out a base backup.

Thanks for the help!

On 04/19/2010 09:41 AM, Kevin Grittner wrote:
> Frederiko Costa wrote:
>> I noticed the new log files don't get copied to the directory
>> specified on the archive_command. It's only copied when I do the
>> pg_start_backup()/pg_stop_backup() base backup. Is this behaviour
>> only achieved if I set archive_timeout? I did not want to do that,
>> because I thought as soon as the 16MB WAL segment file got created
>> it would be copied to the exported directory.
>
> Have you actually been filling up any 16MB WAL file segments?  How
> have you determined that?  What happens when you run (as a database
> superuser) this command?:
>
> SELECT pg_switch_xlog();
>
> If the reason you don't want to set archive_timeout is that you
> don't want 16MB files piling up when there is little or no activity,
> you might want to consider using pglesslog or pg_clearxlogtail.
> These can eliminate nearly all the cost of the more frequent logs.
>
> -Kevin


Re: [ADMIN] archived WALL files question

2010-04-19 Thread Frederiko Costa

Hi Renato,

I had the same question. As far as I understood, the point is that if
you take base backups regularly, not only will log replay be faster
during a recovery, but you also don't need to keep the WAL segments
archived from before the base backup.


I also have a question regarding the frequency of log shipping from the
primary server to a directory exported to the standby server. The
standby server is stopped, ready to be launched in recovery mode; the
issue is on the primary side. I noticed the new log files don't get
copied to the directory specified in archive_command. They are only
copied when I do the pg_start_backup()/pg_stop_backup() base backup. Is
this behaviour only achieved if I set archive_timeout? I did not want to
do that, because I thought that as soon as the 16MB WAL segment file got
created it would be copied to the exported directory. Besides, I don't
think I would need to perform base backups frequently.


Any advice?

Thanks

On 04/16/2010 12:00 AM, Renato Oliveira wrote:

I am sorry Kevin, I really appreciate your experience and your knowledge, and 
that's why I am asking; I thought the base backup was only necessary once. For 
example once you have done your first base backup, that is it, all you need is 
to replay the logs and backup the logs.

What would be the reason(s) for you to do weekly base backups?

Thank you very much

Best regards

Renato



Renato Oliveira
Systems Administrator
e-mail:renato.olive...@grant.co.uk

Tel: +44 (0)1763 260811
Fax: +44 (0)1763 262410
http://www.grant.co.uk/

Grant Instruments (Cambridge) Ltd

Company registered in England, registration number 658133

Registered office address:
29 Station Road,
Shepreth,
CAMBS SG8 6GB
UK

-Original Message-


From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: 15 April 2010 17:02
To: Renato Oliveira;pgsql-admin@postgresql.org
Subject: Re: [ADMIN] archived WALL files question

Renato Oliveira  wrote:

   

I was reading again the documentation... "The archive command
should generally be designed to refuse to overwrite any
pre-existing archive file."  This means it will keep writing logs
to the folder specified forever, and without an intervention, the
media will run out of space.
 

Overwriting an existing file wouldn't help with that, since the
filenames keep changing.  It might, for example, prevent
accidentally wiping out the WAL files from one database cluster with
WAL files from another by copying the postgresql.conf file and
neglecting to change the archive script.

   

What do you guys do with regards to this situation, for example:
How do you clean up the old archived logs?
 

We keep two weekly base backups and all the WAL files needed to
recover from the earlier of the two to present.  We also keep an
archival copy of the first base backup of each month with just the
WAL files needed to start it.  We delete WAL files when no longer
needed to support this retention policy.  It's all pretty automatic
based on bash scripts run from cron jobs.

Of course, you'll want to tailor your strategy to your business
needs.

-Kevin





Re: [ADMIN] archived WALL files question

2010-04-16 Thread Frederiko Costa

Hi Renato,

I had the same question. As far as I understood, the point is that if
you take base backups regularly, not only will log replay be faster
during a recovery, but you also don't need to keep the WAL segments
archived from before the base backup.


I also have a question regarding the frequency of log shipping from the
primary server to a directory exported to the standby server. The
standby server is stopped, ready to be launched in recovery mode; the
issue is on the primary side. I noticed the new log files don't get
copied to the directory specified in archive_command. They are only
copied when I do the pg_start_backup()/pg_stop_backup() base backup. Is
this behaviour only achieved if I set archive_timeout? I did not want to
do that, because I thought that as soon as the 16MB WAL segment file got
created it would be copied to the exported directory. Besides, I don't
think I would need to perform base backups frequently.


Any advice?

Thanks

On 04/16/2010 12:00 AM, Renato Oliveira wrote:

I am sorry Kevin, I really appreciate your experience and your knowledge, and 
that's why I am asking; I thought the base backup was only necessary once. For 
example once you have done your first base backup, that is it, all you need is 
to replay the logs and backup the logs.

What would be the reason(s) for you to do weekly base backups?

Thank you very much

Best regards

Renato



Renato Oliveira
Systems Administrator
e-mail: renato.olive...@grant.co.uk

Tel: +44 (0)1763 260811
Fax: +44 (0)1763 262410
http://www.grant.co.uk/

Grant Instruments (Cambridge) Ltd

Company registered in England, registration number 658133

Registered office address:
29 Station Road,
Shepreth,
CAMBS SG8 6GB
UK

-Original Message-


From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: 15 April 2010 17:02
To: Renato Oliveira; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] archived WALL files question

Renato Oliveira  wrote:

   

I was reading again the documentation... "The archive command
should generally be designed to refuse to overwrite any
pre-existing archive file."  This means it will keep writing logs
to the folder specified forever, and without an intervention, the
media will run out of space.
 

Overwriting an existing file wouldn't help with that, since the
filenames keep changing.  It might, for example, prevent
accidentally wiping out the WAL files from one database cluster with
WAL files from another by copying the postgresql.conf file and
neglecting to change the archive script.

   

What do you guys do with regards to this situation, for example:
How do you clean up the old archived logs?
 

We keep two weekly base backups and all the WAL files needed to
recover from the earlier of the two to present.  We also keep an
archival copy of the first base backup of each month with just the
WAL files needed to start it.  We delete WAL files when no longer
needed to support this retention policy.  It's all pretty automatic
based on bash scripts run from cron jobs.

Of course, you'll want to tailor your strategy to your business
needs.

-Kevin





[ADMIN] Wal Segment files Backup Strategy for archiving

2010-04-15 Thread Frederiko Costa
Hello folks,

I want to use the archiving feature with log shipping to a warm-standby
server. No problems setting it up so far. However, reading the
documentation brought up one question that I'd like to share with the
list.

In the documentation, as far as I understood, you need to make a base
backup and copy the data cluster area to the destination you choose:
run pg_start_backup(), copy the data/ directory (the cluster area), and
then run pg_stop_backup(). That creates a checkpoint (correct?) and the
WAL segments are copied to the area you specified. You can even delete
the files under pg_xlog, because you will rely on the ones shipped out.
The question is: once the base backup has been made and I later change a
setting on the primary server, in postgresql.conf for example, do I have
to redo the procedure and put the server into backup mode again to get
that change onto the standby, or is it just a matter of transferring the
file to the standby server?
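
In other words, the procedure I'm following is roughly this (paths and
the label are just placeholders):

  psql -U postgres -c "SELECT pg_start_backup('base_backup');"
  rsync -a --exclude=pg_xlog /var/lib/pgsql/data/ standby:/backups/base/
  psql -U postgres -c "SELECT pg_stop_backup();"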

Thanks,
Frederiko Costa