Re: PITR

2024-02-14 Thread Greg Sabino Mullane
On Wed, Feb 14, 2024 at 1:39 PM Yongye Serkfem  wrote:

>  I checked the Postgresql.conf file and can't find the
> appropriate parameter to set the target time.
>

It is set inside the postgresql.conf file. Unless you have modified it,
there is a section in there called "Recovery Target" which has a handful of
commented-out parameters starting with "recovery_".
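
For reference, that commented-out block in a stock postgresql.conf looks
roughly like this (a sketch; the timestamp value is illustrative, and you
uncomment only the one target you need):

```
# - Recovery Target -
#recovery_target = ''                   # 'immediate' to stop when consistent
#recovery_target_name = ''              # a named restore point
#recovery_target_time = ''              # e.g. '2024-02-14 12:00:00+00'
#recovery_target_xid = ''
#recovery_target_lsn = ''
#recovery_target_inclusive = on
#recovery_target_timeline = 'latest'
#recovery_target_action = 'pause'
```

Remember that on v12 you also need an empty recovery.signal file in the data
directory for these to take effect.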

Cheers,
Greg


Re: PITR

2024-02-14 Thread Ron Johnson
On Wed, Feb 14, 2024 at 1:39 PM Yongye Serkfem  wrote:

> Hi,
> I hope you are all doing well. I am trying to do PITR on Postgresql v12.
> Now that the recovery.conf file is not available in this version, where
> should I set the recovery_target_time? I checked the Postgresql.conf file
> and can't find the appropriate parameter to set the target time.
>

Consider using pgBackRest.  It's multithreaded, and you specify the
"restore-from start point" and "recover until" time on the command line.
It then handles everything for you apart from the final "pg_ctl start -D ...".
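
As a sketch (the stanza name, target time, and data directory are
illustrative, and the cluster must be stopped first), that looks something
like:

```
pgbackrest --stanza=main --delta --type=time \
    "--target=2024-02-14 12:00:00+00" --target-action=promote restore
pg_ctl start -D /var/lib/pgsql/12/data
```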


PITR

2024-02-14 Thread Yongye Serkfem
Hi,
I hope you are all doing well. I am trying to do PITR on PostgreSQL v12.
Now that the recovery.conf file is no longer used in this version, where
should I set recovery_target_time? I checked the postgresql.conf file
and can't find the appropriate parameter to set the target time.
I'm looking forward to reading your thoughts.

Best regards
Yong Serkfem


Re: PITR

2023-11-22 Thread Andreas Kretschmer




On 22.11.23 at 11:50, Ron Johnson wrote:
On Wed, Nov 22, 2023 at 3:12 AM Rajesh Kumar 
 wrote:

> How do I do PITR? Backup strategy is weekly full backup and daily
> differential backup. Using pgbackrest.
>
> Also, in future how do I monitor the time of drop commands?



https://blog.hagander.net/locating-the-recovery-point-just-before-a-dropped-table-230/


Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: PITR

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 3:12 AM Rajesh Kumar 
wrote:

> Hi
>
> A person dropped a table and we don't know the time of the drop.
>

Revoke his permission to drop tables?


> How do I do PITR? Backup strategy is weekly full backup and daily
> differential backup. Using pgbackrest.
>
> Also, in future how do I monitor the time of drop commands?
>

Set log_statement to 'all' or 'ddl', and then grep the log files for "DROP
TABLE ".
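
As a concrete illustration of that grep (the log lines below are fabricated;
real logs live under $PGDATA/log or wherever log_directory points, with
timestamps supplied by log_line_prefix):

```shell
# Build a tiny fabricated server log, then pull out the DROP TABLE statements.
cat > /tmp/pg-sample.log <<'EOF'
2023-11-22 09:14:01 UTC [4242] user=app,db=prod LOG:  statement: insert into t values (1);
2023-11-22 09:15:30 UTC [4242] user=app,db=prod LOG:  statement: DROP TABLE invoices;
EOF

# Case-insensitive, so both "drop table" and "DROP TABLE" match.
grep -i 'statement: drop table' /tmp/pg-sample.log
# prints the 09:15:30 DROP TABLE line, timestamp included
```

The timestamp in the matching line is exactly what you would feed to
recovery_target_time (minus a second or two) if you later need a PITR.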


Re: PITR partial timing

2023-10-17 Thread Ron

Don't write your own PITR.  Use PgBackRest, barman, etc.

On 10/16/23 12:08, Rama Krishnan wrote:

Hi team,

I am testing PITR.

1. When I do a full recovery, the tables created after the base backup do
not appear. How do I handle that?

2. From the logs below, which time should I choose as the target for a
partial (point-in-time) recovery?


2023-10-16 16:22:35 UTC [16561]: [13-1] user=postgres,db=postgres LOG:  statement: create database pumastore;
2023-10-16 16:22:35 UTC [16561]: [14-1] user=postgres,db=postgres LOG:  duration: 70.433 ms
2023-10-16 16:23:02 UTC [16561]: [15-1] user=postgres,db=postgres LOG:  statement: CREATE TABLE categories (
    category integer NOT NULL,
    categoryname character varying(50) NOT NULL
);
2023-10-16 16:23:02 UTC [16561]: [16-1] user=postgres,db=postgres LOG:  duration: 6.328 ms
2023-10-16 16:23:28 UTC [16561]: [17-1] user=postgres,db=postgres LOG:  statement: COPY categories (category, categoryname) FROM stdin;
2023-10-16 16:23:41 UTC [16561]: [18-1] user=postgres,db=postgres LOG:  duration: 12695.122 ms
2023-10-16 16:23:52 UTC [16561]: [19-1] user=postgres,db=postgres LOG:  statement: table categories;
2023-10-16 16:23:52 UTC [16561]: [20-1] user=postgres,db=postgres LOG:  duration: 0.573 ms
2023-10-16 16:24:20 UTC [16561]: [21-1] user=postgres,db=postgres LOG:  statement: select pg_switch_wal();
2023-10-16 16:24:20 UTC [16561]: [22-1] user=postgres,db=postgres LOG:  duration: 61.904 ms

2023-10-16 16:26:25 UTC [16561]: [23-1] user=postgres,db=postgres LOG:  statement: drop table categories;
2023-10-16 16:26:25 UTC [16561]: [24-1] user=postgres,db=postgres LOG:  duration: 5.165 ms




--
Born in Arizona, moved to Babylonia.

PITR partial timing

2023-10-17 Thread Rama Krishnan
Hi team,

I am testing PITR.

1. When I do a full recovery, the tables created after the base backup do
not appear. How do I handle that?

2. From the logs below, which time should I choose as the target for a
partial (point-in-time) recovery?


2023-10-16 16:22:35 UTC [16561]: [13-1] user=postgres,db=postgres LOG:  statement: create database pumastore;
2023-10-16 16:22:35 UTC [16561]: [14-1] user=postgres,db=postgres LOG:  duration: 70.433 ms
2023-10-16 16:23:02 UTC [16561]: [15-1] user=postgres,db=postgres LOG:  statement: CREATE TABLE categories (
    category integer NOT NULL,
    categoryname character varying(50) NOT NULL
);
2023-10-16 16:23:02 UTC [16561]: [16-1] user=postgres,db=postgres LOG:  duration: 6.328 ms
2023-10-16 16:23:28 UTC [16561]: [17-1] user=postgres,db=postgres LOG:  statement: COPY categories (category, categoryname) FROM stdin;
2023-10-16 16:23:41 UTC [16561]: [18-1] user=postgres,db=postgres LOG:  duration: 12695.122 ms
2023-10-16 16:23:52 UTC [16561]: [19-1] user=postgres,db=postgres LOG:  statement: table categories;
2023-10-16 16:23:52 UTC [16561]: [20-1] user=postgres,db=postgres LOG:  duration: 0.573 ms
2023-10-16 16:24:20 UTC [16561]: [21-1] user=postgres,db=postgres LOG:  statement: select pg_switch_wal();
2023-10-16 16:24:20 UTC [16561]: [22-1] user=postgres,db=postgres LOG:  duration: 61.904 ms

2023-10-16 16:26:25 UTC [16561]: [23-1] user=postgres,db=postgres LOG:  statement: drop table categories;
2023-10-16 16:26:25 UTC [16561]: [24-1] user=postgres,db=postgres LOG:  duration: 5.165 ms
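
Reading the log above: the data load is complete once the COPY finishes
(16:23:41) and the table is dropped at 16:26:25, so any target strictly
between those two instants recovers the table intact. A sketch of the
corresponding settings, assuming PG 12+ (on older versions the same lines
go in recovery.conf, and the restore_command path is illustrative):

```
# postgresql.conf, plus an empty recovery.signal file in the data directory
restore_command = 'cp /path/to/archive/%f "%p"'
recovery_target_time = '2023-10-16 16:26:00 UTC'   # after the COPY, before the DROP
recovery_target_action = 'promote'
```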


Re: PITR based recovery failing due to difference in max_connections

2023-08-04 Thread Kalit Inani
Hi,
Thanks for the quick response.
Regarding your question:
*Why did you do that?*
This is our requirement: we are restoring onto another (destination)
instance, where the 'max_connections' value must be lower than on the
source instance (150 in our case).

On Fri, Aug 4, 2023 at 1:13 PM Ron  wrote:

> On 8/3/23 23:47, Kalit Inani wrote:
>
> Hi all,
> During PITR based recovery of a postgres instance, we are getting the
> following error -
> '2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible
> because max_connections = 150 is a lower setting than on the master server
> (its value was 500)'
>
> Here are the steps we are following -
>
>1.
>
>We took a snapshot of the data disk from a Postgres leader node, let’s
>call this as source instance.
>2.
>
>Then, we modified the MAX_CONNECTIONS in that source instance to 500.
>
>
> Why did you do that?
>
>
>    3.
>
>    Due to the modification, the following wal_file entry gets generated - rmgr:
>    XLOG len (rec/tot): 54/54, tx: 0, lsn: 1/5EA0, prev 1/5E28, desc:
>    PARAMETER_CHANGE max_connections=500 max_worker_processes=8
>    max_wal_senders=10 max_prepared_xacts=0 max_locks_per_xact=64
>    wal_level=replica wal_log_hints=off track_commit_timestamp=off
>    4.
>
>    Next, we did a PITR based recovery in another instance. During the
>    recovery we used a config file with MAX_CONNECTIONS as 150.
>    5.
>
>    However, the recovery fails with the following error - '2023-06-21
>    23:52:52.232 PDT [24244] FATAL:  hot standby is not possible because
>    max_connections = 150 is a lower setting than on the master server (its
>    value was 500)'
>
> What are the probable solutions to fix this issue?
>
> Since it complains about a MAX_CONNECTIONS mismatch... *don't mismatch
> MAX_CONNECTIONS*.
>
> Take a snapshot *after* setting MAX_CONNECTIONS = 500.
>
> One of the approaches we tried was to set ‘hot_standby = off’ in
> postgresql.conf. By doing this, we are successfully able to restore the
> source’s content on the destination instance. However, is this the correct
> way to move forward?
>
> We also read the postgres documentation for hot_standby -
> https://www.postgresql.org/docs/current/hot-standby.html
>
> It mentions -
> “The settings of some parameters determine the size of shared memory for
> tracking transaction IDs, locks, and prepared transactions. These shared
> memory structures must be no smaller on a standby than on the primary in
> order to ensure that the standby does not run out of shared memory during
> recovery. For example, if the primary had used a prepared transaction but
> the standby had not allocated any shared memory for tracking prepared
> transactions, then recovery could not continue until the standby's
> configuration is changed.”
>
> Does this mean that turning off hot_standby and then performing a recovery
> operation may lead to some unintended consequences? Do we always have to
> keep these parameter values ('max_connections', etc.) greater than or equal
> to those of the source instance?
>
> Thank you,
> Kalit.
>
>
> --
> Born in Arizona, moved to Babylonia.
>


Re: PITR based recovery failing due to difference in max_connections

2023-08-03 Thread Ron

On 8/3/23 23:47, Kalit Inani wrote:


Hi all,
During PITR based recovery of a postgres instance, we are getting the 
following error -
'2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible 
because max_connections = 150 is a lower setting than on the master server 
(its value was 500)'



Here are the steps we are following -

1.

We took a snapshot of the data disk from a Postgres leader node, let’s
call this as source instance.

2.

Then, we modified the MAX_CONNECTIONS in that source instance to 500.



Why did you do that?


3.

Due to the modification, the following wal_file entry gets generated -
rmgr: XLOG len (rec/tot): 54/54, tx: 0, lsn: 1/5EA0, prev
1/5E28, desc: PARAMETER_CHANGE max_connections=500
max_worker_processes=8 max_wal_senders=10 max_prepared_xacts=0
max_locks_per_xact=64 wal_level=replica wal_log_hints=off
track_commit_timestamp=off

4.

Next, we did a PITR based recovery in another instance. During the
recovery we used a config file with MAX_CONNECTIONS as 150.

5.

However, the recovery fails with the following error - '2023-06-21
23:52:52.232 PDT [24244] FATAL:  hot standby is not possible because
max_connections = 150 is a lower setting than on the master server
(its value was 500)'

What are the probable solutions to fix this issue?

Since it complains about a MAX_CONNECTIONS mismatch... *don't mismatch 
MAX_CONNECTIONS*.


Take a snapshot *after* setting MAX_CONNECTIONS = 500.

One of the approaches we tried was to set ‘hot_standby = off’ in 
postgresql.conf. By doing this, we are successfully able to restore the 
source’s content on the destination instance. However, is this the correct 
way to move forward?


We also read the postgres documentation for hot_standby - 
https://www.postgresql.org/docs/current/hot-standby.html 
<https://www.postgresql.org/docs/current/hot-standby.html>


It mentions -
“The settings of some parameters determine the size of shared memory for 
tracking transaction IDs, locks, and prepared transactions. These shared 
memory structures must be no smaller on a standby than on the primary in 
order to ensure that the standby does not run out of shared memory during 
recovery. For example, if the primary had used a prepared transaction but 
the standby had not allocated any shared memory for tracking prepared 
transactions, then recovery could not continue until the standby's 
configuration is changed.”



Does this mean that turning off hot_standby and then performing a recovery 
operation may lead to some unintended consequences? Do we always have to 
keep these parameter values ('max_connections', etc.) greater than or equal 
to those of the source instance?



Thank you,
Kalit.


--
Born in Arizona, moved to Babylonia.

PITR based recovery failing due to difference in max_connections

2023-08-03 Thread Kalit Inani
Hi all,
During PITR based recovery of a postgres instance, we are getting the
following error -
'2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible
because max_connections = 150 is a lower setting than on the master server
(its value was 500)'

Here are the steps we are following -

   1.

   We took a snapshot of the data disk from a Postgres leader node; let’s
   call this the source instance.
   2.

   Then, we modified the MAX_CONNECTIONS in that source instance to 500.
   3.

   Due to the modification, the following wal_file entry gets generated -
   rmgr: XLOG len (rec/tot): 54/54, tx: 0, lsn: 1/5EA0, prev
   1/5E28, desc: PARAMETER_CHANGE max_connections=500
   max_worker_processes=8 max_wal_senders=10 max_prepared_xacts=0
   max_locks_per_xact=64 wal_level=replica wal_log_hints=off
   track_commit_timestamp=off
   4.

   Next, we did a PITR based recovery in another instance. During the
   recovery we have used a config file with MAX_CONNECTIONS as 150.
   5.

   However, the recovery fails with the following error -
   '2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible
   because max_connections = 150 is a lower setting than on the master server
   (its value was 500)'


What are the probable solutions to fix this issue? One of the approaches we
tried was to set ‘hot_standby = off’ in postgresql.conf. By doing this, we
are successfully able to restore the source’s content on the destination
instance. However, is this the correct way to move forward?

We also read the postgres documentation for hot_standby -
https://www.postgresql.org/docs/current/hot-standby.html

It mentions -
“The settings of some parameters determine the size of shared memory for
tracking transaction IDs, locks, and prepared transactions. These shared
memory structures must be no smaller on a standby than on the primary in
order to ensure that the standby does not run out of shared memory during
recovery. For example, if the primary had used a prepared transaction but
the standby had not allocated any shared memory for tracking prepared
transactions, then recovery could not continue until the standby's
configuration is changed.”

Does this mean that turning off hot_standby and then performing a recovery
operation may lead to some unintended consequences? Do we always have to
keep these parameter values (‘max_connections’, etc.) greater than or equal
to those of the source instance?
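
To the last question: yes — with hot_standby = on, the standby-side
settings that size shared memory must be at least as large as the
primary's, and the PARAMETER_CHANGE WAL record shown above lists exactly
which ones. A sketch of the destination instance's settings for this case
(values taken from that record):

```
# postgresql.conf on the recovery (destination) instance
max_connections = 500           # must be >= the source's value (500 here)
max_worker_processes = 8        # likewise >= source
max_prepared_transactions = 0   # likewise >= source
max_locks_per_transaction = 64  # likewise >= source
# Alternatively, hot_standby = off forgoes read-only queries during recovery.
```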

Thank you,
Kalit.


recovery_target_action = shutdown not working on PG11 during PITR restore

2023-06-28 Thread Maher Beg
Hello!

I'm attempting to restore a database (via pgbackrest) on PG11.20 and have
the database shutdown when all of the WAL archives have finished applying.
In my recovery.conf file I have recovery_target_action = 'shutdown' with an
appropriate recovery target time and restore command setup. hot_standby is
set to on.

When Postgres starts up it shows the correct recovery parameters in the
debug logs, but after restoring and applying all the WAL archives it ends
up promoting itself and exiting recovery mode (confirmed via psql: select
pg_is_in_recovery()). I also see recovery.conf renamed to recovery.done.

I've tried changing the target action to pause and that still results in
promotion, as well as setting it to something invalid which causes Postgres
to correctly crash. It appears to be parsing correctly, but maybe I'm
missing an additional configuration or startup parameter to have PG11
restore and shutdown correctly. I've also tried hot_standby on and off, as
well as messing with standby_mode.

This same process with slight tweaks works just fine for PG12-PG15. Is
there anything else I should be checking or testing?
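
One thing worth checking (a guess, since we can't see the config):
recovery_target_action only applies when the recovery target is actually
reached; if replay exhausts the archived WAL before reaching
recovery_target_time, pre-12 servers end recovery and promote without an
error. A minimal PG 11 recovery.conf for this scenario might look like
(stanza name and timestamp are illustrative):

```
# recovery.conf (PG 11)
restore_command = 'pgbackrest --stanza=main archive-get %f "%p"'
recovery_target_time = '2023-06-28 12:00:00 UTC'   # must lie within the archived WAL
recovery_target_action = 'shutdown'
```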

-Maher


Re: PITR and instance without any activity

2023-01-11 Thread Adrien Nayrat

On 1/9/23 11:23, Torsten Förtsch wrote:
On Mon, Jan 9, 2023 at 10:59 AM Adrien Nayrat 
wrote:



* We can't perform PITR on a cluster without any activity since 13
* It seems creating restore point doesn't record a timestamp in wal.

I have a cron job that runs this every 5 minutes:

SELECT txid_current() WHERE 
(pg_last_committed_xact()).timestamp+'5min'::INTERVAL < now()


  Combine that with a suitable value for archive_timeout.


Hello,

I don't see how that would be useful.
When we perform PITR, we don't know whether there was activity or not.

--
Adrien NAYRAT






Re: PITR and instance without any activity

2023-01-09 Thread Torsten Förtsch
On Mon, Jan 9, 2023 at 10:59 AM Adrien Nayrat 
wrote:

>
> * We can't perform PITR on a cluster without any activity since 13
> * It seems creating restore point doesn't record a timestamp in wal.
>

I have a cron job that runs this every 5 minutes:

SELECT txid_current() WHERE
(pg_last_committed_xact()).timestamp+'5min'::INTERVAL < now()

 Combine that with a suitable value for archive_timeout.
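
Wired into cron, that might look like the fragment below (connection
details are illustrative; note that pg_last_committed_xact() requires
track_commit_timestamp = on):

```
# m h dom mon dow  command
*/5 * * * *  psql -X -d postgres -c "SELECT txid_current() WHERE (pg_last_committed_xact()).timestamp + '5min'::interval < now()"
```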


Re: PITR and instance without any activity

2023-01-09 Thread Adrien Nayrat

Hello,

I'm bumping this thread in the hope of more reactions :)

TL;DR:

* We can't perform PITR on a cluster without any activity since 13
* It seems creating restore point doesn't record a timestamp in wal.

Thanks

--
Adrien NAYRAT






PITR and instance without any activity

2022-12-12 Thread Adrien Nayrat

Hello,

I came across a weird problem with PITR using a recovery_target_time on
a cluster without any activity. The issue is quite simple: we try to do
a PITR, specifying a recovery_target_time later than the backup.


The main issue is that there is no timestamp in the WAL, as there is no
activity. Postgres fetches the record timestamp via getRecordTimestamp:


/*
 * Extract timestamp from WAL record.
 *
 * If the record contains a timestamp, returns true, and saves the timestamp
 * in *recordXtime. If the record type has no timestamp, returns false.
 *
 * Currently, only transaction commit/abort records and restore points
 * contain timestamps.
 */



So, Postgres tries to replay all the WAL until the end, and we get this
message:

FATAL:  recovery ended before configured recovery target was reached

Before 13, promotion was done without any error:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=dc788668bb269b10a108e87d14fefd1b9301b793

I wonder if we could add a timestamp to other WAL records? Maybe when we
switch xlog?


I may have spotted a bug. I tried to create a restore point, expecting
Postgres to pick up its timestamp, per the comment in getRecordTimestamp.
But Postgres did not find any timestamp and kept replaying the WAL even
after the restore point.


FYI, It seems I am not alone with this issue:
https://github.com/pgbackrest/pgbackrest/issues/1934
https://github.com/pgbackrest/pgbackrest/issues/1526#issuecomment-937759691

Thanks

--
Adrien NAYRAT





Re: PITR and Temp Tables

2022-04-20 Thread Huan Ruan
Many thanks for the explanation, Tom and Greg. That all makes sense.

Cheers
Huan


Re: PITR and Temp Tables

2022-04-20 Thread Greg Stark
There actually is a third backstop if no other session ever connects to
that temp schema and cleans it out.

Eventually autovacuum notices that the temp tables would need a vacuum "to
prevent wraparound". It can't actually do the vacuum on temp tables, but if
there's no session attached to the temp schema it drops them.

This normally takes quite a long time to reach, so if you routinely have
sessions using temp schemas it's unlikely to happen. But if you only use
temp schemas manually, then eventually it would.

On Wed., Apr. 20, 2022, 09:37 Tom Lane,  wrote:

> Huan Ruan  writes:
> > Let's say at T0 a database has N session based temp tables. They would
> have
> > corresponding records in the catalog tables like pg_class and
> pg_attribute
> > that are visible to other sessions.
>
> > At T1, I do a PITR to T0. That recovered database should not have those
> > temp tables because the sessions they were created in are not present. My
> > question is what events trigger the deletion of those temp tables'
> catalog
> > records (e.g. pg_class and pg_attribute etc.) in the recovered database?
>
> Those records will still be there in the catalogs, yes.
>
> Cleaning out the contents of a temporary schema is not the responsibility
> of the WAL/recovery system.  It's done by live backends at two times:
>
> 1. A session that has used a temp schema will normally clean out the
> contained objects when it exits.
>
> 2. As a backstop in case #1 fails, a session that is about to begin using
> a temp schema will clean out any surviving contents.
>
> So if you rewound to a point where some temp objects exist, it'd be the
> responsibility of the first session that wants to use a given temp schema
> to clean out those objects.
>
> regards, tom lane
>
>
>


Re: PITR and Temp Tables

2022-04-20 Thread Tom Lane
Huan Ruan  writes:
> Let's say at T0 a database has N session based temp tables. They would have
> corresponding records in the catalog tables like pg_class and pg_attribute
> that are visible to other sessions.

> At T1, I do a PITR to T0. That recovered database should not have those
> temp tables because the sessions they were created in are not present. My
> question is what events trigger the deletion of those temp tables' catalog
> records (e.g. pg_class and pg_attribute etc.) in the recovered database?

Those records will still be there in the catalogs, yes.

Cleaning out the contents of a temporary schema is not the responsibility
of the WAL/recovery system.  It's done by live backends at two times:

1. A session that has used a temp schema will normally clean out the
contained objects when it exits.

2. As a backstop in case #1 fails, a session that is about to begin using
a temp schema will clean out any surviving contents.

So if you rewound to a point where some temp objects exist, it'd be the
responsibility of the first session that wants to use a given temp schema
to clean out those objects.
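
To spot such leftover entries after a PITR, a query along these lines (a
sketch, not from the thread) lists relations still cataloged under temp
schemas:

```sql
SELECT n.nspname AS temp_schema, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp_%';
```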

regards, tom lane




Re: PITR and Temp Tables

2022-04-20 Thread Huan Ruan
Hi Patrick

Thanks for your reply.

>
>
> My guess is that temp table entries will still be in your catalog until
> you do a VACUUM FULL of the pg_class / pg_attribute tables.
>
> But you should not care about these entries if these tables are vacuumed
> at regular intervals.
>
What I observed in one instance seems* to be like this - by the time I
connected to the recovered database (as soon as I could), autovacuum
had run on those catalog tables and they no longer had those temp table
entries. Normally, autovacuum removes dead tuples, but those temp table
records were live tuples at T0. So if it was autovacuum that removed them in
the recovered database, were they live or dead tuples? If they were dead,
what did the deletion that made them dead?

*I would like to confirm to be 100% sure but was wondering if I can get an
explanation here.

Regards
Huan


RE: PITR and Temp Tables

2022-04-20 Thread Patrick FICHE
From: Huan Ruan 
Sent: Wednesday, April 20, 2022 2:18 PM
To: pgsql-general@lists.postgresql.org
Subject: PITR and Temp Tables

Hi All

Let's say at T0 a database has N session based temp tables. They would have 
corresponding records in the catalog tables like pg_class and pg_attribute that 
are visible to other sessions.

At T1, I do a PITR to T0. That recovered database should not have those temp 
tables because the sessions they were created in are not present. My question 
is what events trigger the deletion of those temp tables' catalog records (e.g. 
pg_class and pg_attribute etc.) in the recovered database?

Thanks
Huan

Hi,

My guess is that temp table entries will still be in your catalog until you do 
a VACUUM FULL of the pg_class / pg_attribute tables.
But you should not care about these entries if these tables are vacuumed at 
regular intervals.

Regards,
Patrick


PITR and Temp Tables

2022-04-20 Thread Huan Ruan
Hi All

Let's say at T0 a database has N session based temp tables. They would have
corresponding records in the catalog tables like pg_class and pg_attribute
that are visible to other sessions.

At T1, I do a PITR to T0. That recovered database should not have those
temp tables because the sessions they were created in are not present. My
question is what events trigger the deletion of those temp tables' catalog
records (e.g. pg_class and pg_attribute etc.) in the recovered database?

Thanks
Huan


Re: PITR for an only object in postgres

2021-03-22 Thread Kyotaro Horiguchi
At Thu, 18 Mar 2021 17:12:49 -0400, aslı cokay  wrote in 
> Hi all,
> 
> Is there any way to get for example a table version of 5 minutes ago, or we
> should PITR to 5 minutes ago, dump the table and restore with this dump ?

If you want to revert recent changes on the table, I think there's
no way other than time-travelling using PITR.  Of course you can't do
that for only part of the database.

If you often want to access 5-minutes ago image of the database, you
could configure a delayed standby using recovery_min_apply_delay.
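
On PG 12+ the delayed standby amounts to a couple of lines on the replica
(a sketch; the connection string is illustrative):

```
# postgresql.conf on the standby, plus an empty standby.signal file
primary_conninfo = 'host=primary.example.com user=replicator'
recovery_min_apply_delay = '5min'   # commits apply 5 minutes behind the primary
```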

I'm not sure whether PostgreSQL has the same feature for logical
replication; I suspect it doesn't.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




PITR for an only object in postgres

2021-03-18 Thread aslı cokay
Hi all,

Is there any way to get for example a table version of 5 minutes ago, or we
should PITR to 5 minutes ago, dump the table and restore with this dump ?

Thanks.


Re: cannot promote after recovery for PITR

2021-02-23 Thread Luca Ferrari
On Mon, Feb 22, 2021 at 3:42 PM Luca Ferrari  wrote:
> If, instead, I do pg_wal_replay_resume(), the server is promoted (of
> course not at the PITR I want).
> Am I missing something?

Apparently I was missing caffeine and confused the usage of the functions.
pg_wal_replay_resume was what I was looking for.
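
For the archives, a sketch of the session once replay has paused at the
restore point (on PG 12, with the target reached and the pause action,
resuming is what ends recovery and promotes):

```sql
SELECT pg_is_wal_replay_paused();  -- t while paused at the target
SELECT pg_wal_replay_resume();     -- ends recovery and promotes
SELECT pg_is_in_recovery();        -- f once promotion completes
```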

Sorry for the noise.

Luca




cannot promote after recovery for PITR

2021-02-22 Thread Luca Ferrari
I'm running 12.5, I've restored a backup copy (from barman) to a
specific restore point.
The system has gone into pause, and I've checked that everything is as
I was expecting.
Now I'm unable to promote the cluster:

testdb=# SELECT pg_is_wal_replay_paused();
 pg_is_wal_replay_paused
-------------------------
 t
(1 row)

testdb=# select pg_promote( true, 120 );
WARNING:  server did not promote within 120 seconds
 pg_promote
------------
 f
(1 row)

and the only thing I've got into the logs is

WARNING:  server did not promote within 120 seconds

Similarly, I cannot promote via pg_ctl even if I've checked that
PGDATA/promote is there.


If, instead, I do pg_wal_replay_resume(), the server is promoted (of
course not at the PITR I want).
Am I missing something?

Thanks,
Luca




Re: Need help with PITR for PostgreSQL 9.4.5

2020-11-06 Thread David G. Johnston
On Fri, Nov 6, 2020 at 2:34 PM Sri Linux  wrote:

> Could you please provide me the link to start new questions?
>
Assuming you are using Gmail...

https://business.tutsplus.com/tutorials/how-to-compose-and-send-your-first-email-with-gmail--cms-27678

David J.


Re: Need help with PITR for PostgreSQL 9.4.5

2020-11-06 Thread Sri Linux
David

Could you please provide me the link to start new questions?

Thanks and Regards
Sri

On Fri, Nov 6, 2020 at 3:27 PM Sri Linux  wrote:

> Thank you very much David
>
>
>
> On Fri, Nov 6, 2020 at 2:11 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Friday, November 6, 2020, Sri Linux  wrote:
>>
>>> Hi All
>>>
>>> I have to start Postgres 9.4.5 vacuum for our production environment.
>>> Got interrupted with the Linux session, is there a way I can monitor if the
>>> vacuum is progressing while I reconnect to the Linux box?
>>>

>
>> Please start new email threads when you have new questions.
>>
>> If you run vacuum manually in a session, and then disconnect that
>> session, the vacuum stops just like any other command.
>>
>> David J.
>>
>>


Re: Need help with PITR for PostgreSQL 9.4.5

2020-11-06 Thread Sri Linux
Thank you very much David



On Fri, Nov 6, 2020 at 2:11 PM David G. Johnston 
wrote:

> On Friday, November 6, 2020, Sri Linux  wrote:
>
>> Hi All
>>
>> I have to start Postgres 9.4.5 vacuum for our production environment. Got
>> interrupted with the Linux session, is there a way I can monitor if the
>> vacuum is progressing while I reconnect to the Linux box?
>>

> Please start new email threads when you have new questions.
>
> If you run vacuum manually in a session, and then disconnect that session,
> the vacuum stops just like any other command.
>
> David J.
>
>


Re: Need help with PITR for PostgreSQL 9.4.5

2020-11-06 Thread David G. Johnston
On Friday, November 6, 2020, Sri Linux  wrote:

> Hi All
>
> I have to start Postgres 9.4.5 vacuum for our production environment. Got
> interrupted with the Linux session, is there a way I can monitor if the
> vacuum is progressing while I reconnect to the Linux box?
>
Please start new email threads when you have new questions.

If you run vacuum manually in a session, and then disconnect that session,
the vacuum stops just like any other command.

David J.


Re: Need help with PITR for PostgreSQL 9.4.5

2020-11-06 Thread Sri Linux
Hi All

I have to start Postgres 9.4.5 vacuum for our production environment. Got
interrupted with the Linux session, is there a way I can monitor if the
vacuum is progressing while I reconnect to the Linux box?

Thanks
Sri

On Thu, Jun 25, 2020 at 1:16 PM Sri Linux  wrote:

> Thanks for your support.
>
> I will try to restore and provide results shortly without restoring
> pg_xlog file
>
> Regards,
> Sri
>
> On Thu, Jun 25, 2020 at 1:09 PM Stephen Frost  wrote:
>
>> Greetings,
>>
>> * Sri Linux (srilinu...@gmail.com) wrote:
>> > Please find the method used. Please recommend me if I have done
>> > something wrong...
>>
>> Yes, you are using 'cp' which is *not* recommended for an archive
>> command.
>>
>> > Performing a hot backup using pg_basebackup:
>> > Create a new folder as the postgres user
>> > pg_basebackup --xlog --format=t -D /server01/pgbackup/`date +%Y%m%d`
>>
>> Note that your pg_basebackup is going to be copying WAL also, in
>> addition to the archive_command you've configured.
>>
>> > Restoring from Backup:
>> > Extract the contents of base.tar from the backed up folder on top of
>> the PostgreSQL installation folder:
>> > tar -xf base.tar -C /var/lib/pgsql/9.4/data (RedHat/CentOS)
>> > Assuming that there is a single database tar file (named with a number)
>> in the backup, extract the contents of this folder to the /server01 folder:
>> > tar -xf .tar -C /server01
>> > Copy any unarchived WAL log files saved from the first step back into
>> the pg_xlog folder appropriate for the OS
>>
>> Not sure what "first step" means here, but you are configuring PostgreSQL
>> with a recovery.conf later with a restore command to fetch the WAL it
>> needs
>> from your archive, so you shouldn't be needing to copy files from one
>> pg_xlog to another (which is just generally a bad idea..).
>>
>> Further, the error you're getting, as mentioned, is actually that you've
>> somehow ended up with WAL for some other cluster in your archive and
>> when this instance tries to restore it, it complains (quite
>> understandably).  A sensible tool would prevent this from being able to
>> happen by checking that the WAL that's being archived to a given
>> location matches the database that the WAL is for.
>>
>> As mentioned, you should really be considering using a purpose-built
>> tool which manages this for you, such as pgbackrest, which has such
>> checks and provides you with backup/restore commands.
>>
>> Thanks,
>>
>> Stephen
>>
>


Re: Need help with PITR for PostgreSQL 9.4.5

2020-06-25 Thread Sri Linux
Thanks for your support.

I will try to restore and provide results shortly without restoring pg_xlog
file

Regards,
Sri

On Thu, Jun 25, 2020 at 1:09 PM Stephen Frost  wrote:

> Greetings,
>
> * Sri Linux (srilinu...@gmail.com) wrote:
> > Please find the method used. Please recommend me if I have done
> > something wrong...
>
> Yes, you are using 'cp' which is *not* recommended for an archive
> command.
>
> > Performing a hot backup using pg_basebackup:
> > Create a new folder as the postgres user
> > pg_basebackup --xlog --format=t -D /server01/pgbackup/`date +%Y%m%d`
>
> Note that your pg_basebackup is going to be copying WAL also, in
> addition to the archive_command you've configured.
>
> > Restoring from Backup:
> > Extract the contents of base.tar from the backed up folder on top of the
> PostgreSQL installation folder:
> > tar -xf base.tar -C /var/lib/pgsql/9.4/data (RedHat/CentOS)
> > Assuming that there is a single database tar file (named with a number)
> in the backup, extract the contents of this folder to the /server01 folder:
> > tar -xf .tar -C /server01
> > Copy any unarchived WAL log files saved from the first step back into
> the pg_xlog folder appropriate for the OS
>
> Not sure what "first step" means here, but you are configuring PostgreSQL
> with a recovery.conf later with a restore command to fetch the WAL it needs
> from your archive, so you shouldn't be needing to copy files from one
> pg_xlog to another (which is just generally a bad idea..).
>
> Further, the error you're getting, as mentioned, is actually that you've
> somehow ended up with WAL for some other cluster in your archive and
> when this instance tries to restore it, it complains (quite
> understandably).  A sensible tool would prevent this from being able to
> happen by checking that the WAL that's being archived to a given
> location matches the database that the WAL is for.
>
> As mentioned, you should really be considering using a purpose-built
> tool which manages this for you, such as pgbackrest, which has such
> checks and provides you with backup/restore commands.
>
> Thanks,
>
> Stephen
>


Re: Need help with PITR for PostgreSQL 9.4.5

2020-06-25 Thread Stephen Frost
Greetings,

* Sri Linux (srilinu...@gmail.com) wrote:
> Please find the method used. Please let me know if I have done
> something wrong...

Yes, you are using 'cp' which is *not* recommended for an archive
command.

> Performing a hot backup using pg_basebackup:
> Create a new folder as the postgres user
> pg_basebackup --xlog --format=t -D /server01/pgbackup/`date +%Y%m%d`

Note that your pg_basebackup is going to be copying WAL also, in
addition to the archive_command you've configured.

> Restoring from Backup:
> Extract the contents of base.tar from the backed up folder on top of the 
> PostgreSQL installation folder:
> tar -xf base.tar -C /var/lib/pgsql/9.4/data (RedHat/CentOS)
> Assuming that there is a single database tar file (named with a number) in 
> the backup, extract the contents of this folder to the /server01 folder:
> tar -xf .tar -C /server01
> Copy any unarchived WAL log files saved from the first step back into the 
> pg_xlog folder appropriate for the OS

Not sure what "first step" means here, but you are configuring PostgreSQL
with a recovery.conf later with a restore command to fetch the WAL it needs
from your archive, so you shouldn't be needing to copy files from one
pg_xlog to another (which is just generally a bad idea..).

Further, the error you're getting, as mentioned, is actually that you've
somehow ended up with WAL for some other cluster in your archive and
when this instance tries to restore it, it complains (quite
understandably).  A sensible tool would prevent this from being able to
happen by checking that the WAL that's being archived to a given
location matches the database that the WAL is for.

As mentioned, you should really be considering using a purpose-built
tool which manages this for you, such as pgbackrest, which has such
checks and provides you with backup/restore commands.

Thanks,

Stephen


signature.asc
Description: PGP signature
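To make Stephen's pgbackrest suggestion concrete, a minimal configuration sketch might look like the following. The stanza name "demo" and the repository path are illustrative assumptions, not taken from this thread:

```ini
; /etc/pgbackrest.conf -- illustrative sketch, names/paths are assumptions
[global]
repo1-path=/server01/pgbackrest

[demo]
pg1-path=/var/lib/pgsql/9.4/data
```

In postgresql.conf, archiving is then handed to pgbackrest with archive_command = 'pgbackrest --stanza=demo archive-push %p'. After a one-time pgbackrest --stanza=demo stanza-create, running pgbackrest --stanza=demo check verifies that a WAL segment actually reaches the repository; that is exactly the kind of database-identifier mismatch check that would have caught this thread's error.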


Re: Need help with PITR for PostgreSQL 9.4.5

2020-06-25 Thread Sri Linux
Thank you all for the response,

Please find the method used. Please let me know if I have done
something wrong...


Thanks and Regards,
sree

On Wed, Jun 24, 2020 at 11:36 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

The version you are running is neither up-to-date within its major version
(9.4), nor is that major version still supported.
>
> https://www.postgresql.org/support/versioning/
>
Though a functioning backup is good to have before upgrading, especially
across major versions.
>
> On Wednesday, June 24, 2020, Sri Linux  wrote:
>>
>> I am trying to do PITR backup and recovery for the standalone
>> database which is not in a cluster.
>>
>
> That is not possible.  It's unclear what exactly you meant/did, though.
> Consider sharing actual commands/scripts.
>
> < 2020-06-22 16:34:08.487 CDT >LOG:  WAL file is from different database
>> system: WAL file database system identifier is 6840038030519879233,
>> pg_control database system identifier is 6841285371464745049.
>>
>
> You cannot PITR if the WAL doesn’t match the base backup for the cluster
> you are trying to restore.
>
> https://www.postgresql.org/docs/9.4/continuous-archiving.html
>
> Or, probably better, consider using a third-party system.
>
> David J.
>
>
# mkdir /server01/pgarchives
# chmod 700 /server01/pgarchives 
# chown postgres:postgres /server01/pgarchives

Enable archiving by modifying postgresql.conf:
File is generally located at /var/lib/pgsql/9.4/data/postgresql.conf
# WRITE AHEAD LOG
# - Settings - 
wal_level = hot_standby
# - Archiving
archive_mode = on
archive_command = 'cp %p /server01/pgarchives/%f'
# REPLICATION
# - Sending Server(s) -
max_wal_senders = 3
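As several replies in this thread note, a bare 'cp' archive_command can silently overwrite already-archived segments. The PostgreSQL docs' own example adds a 'test ! -f' guard; that copy-if-absent behaviour can be sketched as a small shell function (the function name and paths are illustrative, not part of the original post):

```shell
# Equivalent guarded archive_command in postgresql.conf:
#   archive_command = 'test ! -f /server01/pgarchives/%f && cp %p /server01/pgarchives/%f'
#
# The same copy-if-absent behaviour, modeled as a function for local testing:
archive_wal() {
  src="$1"          # %p: path of the WAL segment to archive
  archive_dir="$2"  # archive directory, e.g. /server01/pgarchives
  dest="$archive_dir/$(basename "$src")"
  # Refuse to overwrite an already-archived segment; the non-zero exit
  # status makes PostgreSQL retry later instead of clobbering the archive.
  test ! -f "$dest" && cp "$src" "$dest"
}
```

The important property is that a second attempt to archive a segment with the same name fails instead of replacing the first copy.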

Add the postgres user to the replication role and grant permissions in 
pg_hba.conf by adding the following line to the end of the file:
local   replication   postgres   trust
 
Restart the postgresql server
Verify that Write Ahead Logging is enabled by checking the contents of the 
pg_xlog folder for files with names similar to 000000010000000000000001

This folder is located at /var/lib/pgsql/9.4/data/pg_xlog

Performing a hot backup using pg_basebackup:
Create a new folder as the postgres user
pg_basebackup --xlog --format=t -D /server01/pgbackup/`date +%Y%m%d`

Restoring from Backup:
Extract the contents of base.tar from the backed up folder on top of the 
PostgreSQL installation folder:
tar -xf base.tar -C /var/lib/pgsql/9.4/data (RedHat/CentOS)
Assuming that there is a single database tar file (named with a number) in the 
backup, extract the contents of this folder to the /server01 folder:
tar -xf .tar -C /server01
Copy any unarchived WAL log files saved from the first step back into the 
pg_xlog folder appropriate for the OS
Ensure that both the PostgreSQL installation folder and the /server01 folder 
are owned by the postgres user:
# chown -R postgres:postgres /var/lib/pgsql/9.4/data (RedHat/CentOS)
# chown -R postgres:postgres /server01 (All)
Create a recovery.conf file in the installation folder 
(/var/lib/postgresql/9.4/main/recovery.conf) with the following content and 
make sure it is owned by the postgres user:
restore_command = 'cp /server01/pgarchives/%f %p'
The restore_command value should be the functional opposite of the command used to 
archive the WAL files as configured in postgresql.conf
In the backup configuration above, WAL files are copied from "%p" to 
"/server01/pgarchives/%f"
The restore_command, therefore, needs to copy from "/server01/pgarchives/%f" to 
"%p"
In addition, it is possible to specify a specific date/time to restore to in 
this file for a point in time recovery by adding the following line to 
recovery.conf:
recovery_target_time = '2020-06-25 15:32:24 EST'
If the recovery_target_time parameter is omitted, the database will be 
recovered to the most recent transaction
Start the PostgreSQL database
If all goes well, the recovery.conf file will be renamed to recovery.done upon 
completion
Check the syslog (/var/log/syslog) for information on any errors that might 
have occurred during startup
Confirm that the backup was successfully restored


   

Re: Need help with PITR for PostgreSQL 9.4.5

2020-06-24 Thread David G. Johnston
The version you are running is neither up-to-date within its major version
(9.4), nor is that major version still supported.

https://www.postgresql.org/support/versioning/

Though a functioning backup is good to have before upgrading, especially
across major versions.

On Wednesday, June 24, 2020, Sri Linux  wrote:
>
> I am trying to do PITR backup and recovery for the standalone
> database which is not in a cluster.
>

That is not possible.  It's unclear what exactly you meant/did, though.
Consider sharing actual commands/scripts.

< 2020-06-22 16:34:08.487 CDT >LOG:  WAL file is from different database
> system: WAL file database system identifier is 6840038030519879233,
> pg_control database system identifier is 6841285371464745049.
>

You cannot PITR if the WAL doesn’t match the base backup for the cluster
you are trying to restore.

https://www.postgresql.org/docs/9.4/continuous-archiving.html

Or, probably better, consider using a third-party system.

David J.


Need help with PITR for PostgreSQL 9.4.5

2020-06-24 Thread Sri Linux
Hi All,

I am trying to do PITR backup and recovery for the standalone
database which is not in a cluster.

recovery.done is created; after restart, the database has errors on startup:

< 2020-06-22 16:34:08.280 CDT >LOG:  starting archive recovery
< 2020-06-22 16:34:08.456 CDT >LOG:  restored log file
"000000010000000000000001" from archive
< 2020-06-22 16:34:08.487 CDT >LOG:  WAL file is from different database
system: WAL file database system identifier is 6840038030519879233,
pg_control database system identifier is 6841285371464745049.
< 2020-06-22 16:34:08.487 CDT >LOG:  invalid primary checkpoint record


Please suggest the right method to back up and restore on RHEL.


Thanks and Regards
Deepthi


Re: Hot and PITR backups in same server

2020-05-19 Thread Andrus

Hi!

Thank you.


As Laurenz said, even if you set it to 5TB, no WAL files older than
those needed by the last checkpoint remain. If you don't
need a hot-standby, you can use pg_receivewal to save WAL files on the
"standby" server.  If you need the hot-standby, WAL files should be
archived.


So pg_receivewal should run in parallel with the hot standby server, receiving 
WAL files into a separate directory.

Will each WAL file be transferred twice in this case, once by the hot standby 
server and a second time by pg_receivewal?
The main server is priced by the amount of bandwidth used.
How can each WAL file be received only once? That would halve the network 
bandwidth and thus the cost.

pg_receivewal has a compression option. Will this compress WAL files before 
they are transferred over the network?


If so, what you need seems to be pg_receivewal, not a full-fledged
server.


For hot standby and PITR on the same server, the following steps are required:

1. Create base backup using pg_basebackup
2. Create copy of base backup for PITR
3. Start hot standby server using data directory created in p.1
4. Run pg_receivewal as a background process to save WAL files into the backup 
created in p.2

If PITR is required, specify the recovery time in postgresql.conf of the base 
backup created in p.2.
Then start a second PostgreSQL server instance on port 5433 which uses this 
backup for recovery.

Is this OK, or should something be changed?

How do I run pg_receivewal in the background? Is there some option which creates such a service so it will automatically restart if the server is 
restarted?


How can the main server be allowed to keep a sufficient number of WAL segments?
A replication slot cannot be used: if the backup server stops, the replication slot causes the main server to fill its disk space with untransferred WAL 
files.

After that, the main server will also stop with a "no space left on device" error.

Or is there some option to reserve disk space or limit WAL size so that the 
main server can continue if the backup server crashes?

Andrus. 
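On the question of running pg_receivewal in the background with automatic restart: the usual answer is a service unit. A hedged sketch follows, in which the unit name, binary path, host, user, slot name, and archive directory are all illustrative assumptions, not details from this thread:

```ini
# /etc/systemd/system/pg_receivewal.service -- illustrative sketch
[Unit]
Description=Stream WAL from the primary into a local archive
After=network-online.target

[Service]
User=postgres
ExecStart=/usr/lib/postgresql/12/bin/pg_receivewal -h primary.example \
    -U replicator -D /server01/pgarchives --slot=archive_slot --compress=5
Restart=always

[Install]
WantedBy=multi-user.target
```

Two caveats: as far as I know, --compress gzips the segment files as they are written to disk, so it saves archive space rather than network bandwidth (the replication stream itself is not compressed); and using --slot carries exactly the disk-filling risk described above if the receiver stays down for a long time.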






Re: Hot and PITR backups in same server

2020-05-19 Thread Kyotaro Horiguchi
At Mon, 18 May 2020 11:11:44 +0300, "Andrus"  wrote in 
> Hi!
> 
> >This is confused or at least confusing.
> >- "max_wal_size" of 5TB is clearly insane.

As Laurenz said, even if you set it to 5TB, no WAL files older than
those needed by the last checkpoint remain. If you don't
need a hot-standby, you can use pg_receivewal to save WAL files on the
"standby" server.  If you need the hot-standby, WAL files should be
archived.

> >- I don't understand what you mean by "Start backup server for hot
> >- standby backups".
> >Do I get it right that you want to copy a streaming replication
> >standby server's data
> directory to perform PITR?
> 
> I want to copy only pg_wal directory contents from this.

If so, what you need seems to be pg_receivewal, not a full-fledged
server.

> After pg_basebackup has finished, copy of its data directory is saved
> for possible PITR.
> Its pg_wal contents will be replaced with current pg_wal directory
> contents.
> 
> After that, the recovery time will be set in postgresql.conf and a separate
> server on 5433 is used to perform PITR using this data directory.
> 
> >That doesn't seem like a good plan, because the standby
> >usually won't be much behind the primary server, and you can only
> >recover to a later
> >point in time.
> 
> I will get data from copy taken when hot backup server was created and
> replace its pg_wal directory contents from pg_wal in current backup
> server pg_wal directory.
> Since max pg wal size is big it should contain all WAL segments from
> time where base backup was taken.

As mentioned above, that assumption is wrong.  You need to archive WAL
files out of the pg_wal directory, or use pg_receivewal.

> >If you care to elaborate, perhaps the question can be answered.
> 
> Currently the binary async hot standby backup server is working OK and
> replicates the main server database almost immediately.
> 
> However, if important data is deleted the same day, it cannot be recovered,
> since the hot standby server database has replicated the same data as in
> the master server.
> The master server's disk space is limited. It cannot keep a large number of
> WAL segments.
> The standby server has a lot of free disk space.

recovery_min_apply_delay might be useful for you.  With that setting,
the standby follows the primary delayed by the configured minutes or hours.

https://www.postgresql.org/docs/12/runtime-config-replication.html#GUC-RECOVERY-MIN-APPLY-DELAY
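Concretely, the delayed-standby setting is a single line on the standby; the one-hour value below is only an example:

```ini
# postgresql.conf on the delayed standby (recovery.conf before v12)
recovery_min_apply_delay = '1h'
```

With this, the standby stays one hour behind the primary, leaving a window in which accidentally deleted data still exists on the standby.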

I haven't even tried by myself, but the following steps would perhaps
work.

0. If data is found to be broken on the primary.

1. Stop the standby immediately and take a cold backup including
 pg_wal directory.

2. You may find the just dropped data in the standby.

3. If you need to advance the standby, you can recover up to an
  arbitrary LSN or time using recovery_target_lsn/recovery_target_time.  If
  you go too far, start again from the backup taken in step 1.
  
https://www.postgresql.org/docs/12/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET


> I'm looking for a way to collect data for PITR recovery (WAL segments)
> on the standby server.
> 
> I have an initial base backup of the data directory created using
> pg_basebackup.
> All WAL segments from the master server after pg_basebackup should be
> saved on the backup server and used for PITR recovery when needed.
> 
> How can the hot standby server be used for this?
> 
> Or should a separate cluster on port 5433 be created, and pg_rewind or
> something else be used for PITR?

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Hot and PITR backups in same server

2020-05-18 Thread Andrus

Hi!


This is confused or at least confusing.
- "max_wal_size" of 5TB is clearly insane.
- I don't understand what you mean by "Start backup server for hot standby 
backups".
Do I get it right that you want to copy a streaming replication standby 
server's data

directory to perform PITR?

I want to copy only pg_wal directory contents from this.

After pg_basebackup has finished, a copy of its data directory is saved for 
possible PITR.
Its pg_wal contents will be replaced with the current pg_wal directory contents.

After that, the recovery time will be set in postgresql.conf and a separate server on 5433 
is used to perform PITR using this data directory.


That doesn't seem like a good plan, because the standby
usually won't be much behind the primary server, and you can only recover to a 
later
point in time.


I will take the data from the copy made when the hot backup server was created and replace its pg_wal directory contents with the 
current backup server's pg_wal directory contents.

Since max_wal_size is big, it should contain all WAL segments from the time when the 
base backup was taken.


If you care to elaborate, perhaps the question can be answered.


Currently the binary async hot standby backup server is working OK and replicates 
the main server database almost immediately.

However, if important data is deleted the same day, it cannot be recovered, since the hot standby server database has replicated the 
same data as the master server.

The master server's disk space is limited. It cannot keep a large number of WAL 
segments.
The standby server has a lot of free disk space.

I'm looking for a way to collect data for PITR recovery (WAL segments) on the 
standby server.

I have an initial base backup of the data directory created using pg_basebackup.
All WAL segments from the master server after pg_basebackup should be saved on the backup server and used for PITR recovery when 
needed.


How can the hot standby server be used for this?

Or should a separate cluster on port 5433 be created, and pg_rewind or 
something else be used for PITR?

Andrus. 






Re: Hot and PITR backups in same server

2020-05-18 Thread Laurenz Albe
On Sun, 2020-05-17 at 17:41 +0300, Andrus wrote:
> How to implement hot standby and PITR recovery possibility in same backup 
> server.
> 
> Plan is:
> 
> 1. Create base backup using
> 
> pg_basebackup --checkpoint=fast --verbose --progress --write-recovery-conf -D 
> /var/lib/postgresql/12/standby
> 
> 2. Create backup copy of   /var/lib/postgresql/12/standby  directory for PITR
> 
> 3. set max_wal_size in postgresql.conf   to   5 TB
> 
> 4. Start backup server for hot standby backups.
> 
> If data from earlier point of time is required:
> 
> 1. Stop backup server
> 2. Replace its data directory with the initial data directory contents created 
> in previous p.2
> 4. Copy pg_wal contents from hot standby pg_wal directory to initial pg_wal 
> directory in base backup
> 5. Specify recovery time in postgresql.conf and start backup server to 
> recover to this point of time.
> 
> The most suspicious point is p.4: manually copying pg_wal contents from hot 
> data to base backup data.
> 
> Is this OK? Or is there some better way to implement hot standby and PITR 
> possibility on the same computer?
> Postgres 12 in Debian is used.

This is confused or at least confusing.

- "max_wal_size" of 5TB is clearly insane.

- I don't understand what you mean by "Start backup server for hot standby 
backups".

Do I get it right that you want to copy a streaming replication standby 
server's data
directory to perform PITR?  That doesn't seem like a good plan, because the 
standby
usually won't be much behind the primary server, and you can only recover to a 
later
point in time.

If you care to elaborate, perhaps the question can be answered.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Hot and PITR backups in same server

2020-05-17 Thread Andrus

Hi!

How can hot standby and PITR recovery possibility be implemented on the same backup 
server?

Plan is:

1. Create base backup using

pg_basebackup --checkpoint=fast --verbose --progress --write-recovery-conf -D 
/var/lib/postgresql/12/standby

2. Create backup copy of   /var/lib/postgresql/12/standby  directory for PITR

3. set max_wal_size in postgresql.conf   to   5 TB

4. Start backup server for hot standby backups.

If data from earlier point of time is required:

1. Stop backup server
2. Replace its data directory with the initial data directory contents created in 
previous p.2
4. Copy pg_wal contents from hot standby pg_wal directory to initial pg_wal 
directory in base backup
5. Specify recovery time in postgresql.conf and start backup server to recover 
to this point of time.

The most suspicious point is p.4: manually copying pg_wal contents from hot 
data to base backup data.

Is this OK? Or is there some better way to implement hot standby and PITR possibility 
on the same computer?
Postgres 12 in Debian is used.

Andrus.




Re: Is there any tool that provides Physical Backup plus PITR for a single database ( Not the whole PG instance ) ?

2019-12-16 Thread Grigory Smolkin

Hello!

pg_probackup can do that:
https://postgrespro.github.io/pg_probackup/#pbk-partial-restore


On 12/15/19 5:32 PM, Abraham, Danny wrote:


I assume that this magic does not exist.

Am I right ?


--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: Is there any tool that provides Physical Backup plus PITR for a single database ( Not the whole PG instance ) ?

2019-12-15 Thread Ron

On 12/15/19 10:32 AM, Adrian Klaver wrote:

On 12/15/19 6:33 AM, Abraham, Danny wrote:

I assume that this magic does not exist.
Am I right ?


Take a look at:

https://pgbackrest.org/user-guide.html#restore/option-db-include

and see if that meets your needs. I have never actually done it so I 
cannot add anything to the above.


Restoring one db using pgbackrest zeroes out all the other databases.

(The inability to do PITR of a single database is a serious flaw in 
Postgres, but I don't think it can be solved.)


--
Angular momentum makes the world go 'round.




Re: Is there any tool that provides Physical Backup plus PITR for a single database ( Not the whole PG instance ) ?

2019-12-15 Thread Adrian Klaver

On 12/15/19 6:33 AM, Abraham, Danny wrote:

I assume that this magic does not exist.
Am I right ?


Take a look at:

https://pgbackrest.org/user-guide.html#restore/option-db-include

and see if that meets your needs. I have never actually done it so I 
cannot add anything to the above.
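For reference, the selective restore described on that page is a single option on the restore command. A hedged sketch, in which the stanza and database names are made up:

```shell
pgbackrest --stanza=main --delta --db-include=appdb restore
```

Note that, as Ron points out elsewhere in this thread, the databases not included are restored as zeroed-out sparse files, so this is a selective restore of a whole cluster, not PITR of a single database inside a live cluster.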



Thanks
Danny







--
Adrian Klaver
adrian.kla...@aklaver.com




Is there any tool that provides Physical Backup plus PITR for a single database ( Not the whole PG instance ) ?

2019-12-15 Thread Abraham, Danny
I assume that this magic does not exist.
Am I right ?
Thanks
Danny





Is there any tool that provides Physical Backup plus PITR for a single database ( Not the whole PG instance ) ?

2019-12-15 Thread Abraham, Danny
I assume that this magic does not exist.
Am I right ?



RE: Postgres Point in time Recovery (PITR),

2019-11-11 Thread Daulat Ram
Hello, 

I am trying to set up Barman for my test environment with the following steps, 
but I'm not able to start the backup. Please advise!

Postgres 11.2 streaming replication on docker container , Centos
Server A : pg1 (primary)
Server B : pg2 (Standby)

I am using pg2 as a Barman server.  Barman 2.8 is installed on pg2
Also I have installed " yum install barman-cli-2.8-1.el6.noarch.rpm"  on pg1 
(postgres server) for 'barman-wal-archive.

@pg1 

1 . Connect on the server pgsql and log into the postgres account:
sudo -i -u postgres
2 . createuser --interactive -P barman
3 . createuser  barman
4 . Edit postgresql.conf , for 
listen_addresses = '*'   and sudo service postgresql restart
wal_level = replica
archive_mode = on   
archive_command = 'barman-wal-archive pg2 pgsql %p'
5. Add to pg_hba.conf:  host    all    all    pg2/32    trust


@pg2 (Barman Server)
1 .  switch to user barman and generate the keys:   ssh-keygen -t rsa
2 . Copy the key to the user account postgres on pgsql: ssh-copy-id 
postgres@pg1
3 . Barman also requires access to the postgres account on the server pg2. Copy 
the key into the directory of the postgres user and test the connection:

ssh-copy-id postgres@localhost
ssh postgres@localhost -C true

4 . Once this is done, log in as the postgres user on pg1 and generate an SSH key: 
ssh-keygen -t rsa

5 . Copy the generated key to the list of authorized keys of the user barman on 
pg2: ssh-copy-id barman@pg2

6 . Test the connection to the server: 

@barman server /etc/barman.d/pgsql.conf

[test]
; Human readable description
description =  "Example of PostgreSQL Database (via SSH)"

ssh_command = ssh postgres@pg1
conninfo = host=pg1 user=barman dbname=postgres
backup_method = rsync
;reuse_backup = link
; Identify the standard behavior for backup operations: possible values are
; exclusive_backup (default), concurrent_backup
; concurrent_backup is the preferred method with PostgreSQL >= 9.6
backup_options = exclusive_backup

; Number of parallel workers to perform file copy during backup and recover
;parallel_jobs = 1
archiver = on
;archiver_batch_size = 50

barman@vipostgres-db-test:/etc/barman.d$ barman check pgsql
WARNING: No backup strategy set for server 'pgsql' (using default 
'exclusive_backup').
WARNING: The default backup strategy will change to 'concurrent_backup' in the 
future. Explicitly set 'backup_options' to silence this warning.
Server pgsql:
WAL archive: FAILED (please make sure WAL shipping is setup)
PostgreSQL: FAILED
directories: OK
retention policy settings: OK
backup maximum age: OK (no last_backup_maximum_age provided)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 0 backups, expected at least 
0)
ssh: FAILED (Connection failed using 'ssh postgres@vipostgres-cn-test 
-o BatchMode=yes -o StrictHostKeyChecking=no' return code 255)
not in recovery: OK
pg_receivexlog: FAILED
pg_receivexlog compatible: FAILED (PostgreSQL version: None, 
pg_receivexlog version: None)
receive-wal running: FAILED (See the Barman log file for more details)
archiver errors: OK

Note: if I try to log in manually to postgres from barman, then it asks for a 
password:

 ssh postgres@pg1
Failed to add the host to the list of known hosts (/tmp/.ssh/known_hosts).
== Authorized Access Notice 
==
This computer system is the property of Exponential Interactive Inc. Activities 
are actively monitored
and unauthorized access or use of this computer system is prohibited.
==
Password:
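The failed ssh check in the barman output and the interactive password prompt point at the same cause: the barman user's key is not accepted for the postgres account on the database host. A way to reproduce exactly what barman attempts (the host name is taken from the check output above) would be:

```shell
ssh postgres@vipostgres-cn-test -o BatchMode=yes -o StrictHostKeyChecking=no true \
  && echo "ssh OK"
```

If this prompts for a password or fails, repeat ssh-copy-id as the barman user and verify the ownership and permissions of ~postgres/.ssh/authorized_keys on the target host.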


Thanks,


-Original Message-
From: Andreas Kretschmer  
Sent: Friday, October 18, 2019 12:38 PM
To: pgsql-general@lists.postgresql.org; Daulat Ram 
; pgsql-general@lists.postgresql.org
Subject: Re: Postgres Point in time Recovery (PITR),

On 18 October 2019 07:59:21 CEST, Daulat Ram  wrote:
>Hello All,
>Can you please share some ideas and scenarios how we can do the PITR in 
>case of disaster.
>
>
>Thanks,


Consider Barman.


--
2ndQuadrant - The PostgreSQL Support Company
2019-11-11 02:05:02,204 [10832] barman.wal_archiver INFO: No xlog segments 
found from file archival for pgsql.
2019-11-11 02:05:02,204 [10833] barman.postgres WARNING: Error retrieving 
PostgreSQL status: could not connect to server: Connection refused
2019-11-11 02:05:02,205 [10832] barman.wal_archiver INFO: No xlog segments 
found from streaming for pgsql.
2019-11-11 02:05:02,205 [10833] barman.server ERROR: ArchiverFailure:failed 
opening the PostgreSQL streaming connection for server pgsql
2019-11-11 02:05:02,206 [10834] barman.wal_archiver INFO: No xlog segments 
found from file archival for test.
2019-11-11 02

Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Horacio Miranda
Hi

> On 22/10/2019, at 4:14 AM, Adrian Klaver  wrote:
> 
> On 10/21/19 8:10 AM, Avinash Kumar wrote:
>> Hi,
>> On Mon, Oct 21, 2019 at 8:16 PM Alan Hodgson > > wrote:
>>On Mon, 2019-10-21 at 16:40 +0530, Avinash Kumar wrote:
>>> 
>>>We need to ensure that we have safe backup locations, for example,
>>>push them to AWS S3 and forget about redundancy.
>>>Why do you think only Offline Backups are reliable today ? 

The only way to ensure you have a secondary protocol is to use some type of 
pull approach, where the backup system pulls from the online system.

>>There have been examples of hackers gaining control of an
>>organization's servers or cloud accounts and not only destroying
>>their online systems but also methodically deleting all their backups.
>>  There are fewer things that can go catastrophically wrong if one has
>>actual offline backups. You have to be a lot more careful about
>>protecting anything attached to the Internet.
>> I do not agree with this. If a hacker is gaining control of your 
>> organizational servers to destroy your Online backups, can't he destroy the 
>> offline backups and your database ?

The only way to be safe is having an external company, or passwords isolated 
from your organisation. My personal approach is having public certs installed 
from the secondary backup system to pull the backups from the online platforms, 
with generated passwords kept in a KeePass-encrypted database isolated from the 
organisation.

> 
> Well to me off-line means you have introduced an air gap between your on-line 
> presence and your off-line backups. This would prevent an intruder from 
> accessing the off-line backups.

The only way is not having the access, or perhaps requiring 2FA to log in to AWS 
platforms so that you know when someone is trying to log in to your AWS 
accounts; Linux servers support 2FA too.

> 
>> This is not a right justification to encouraging Offline Backups over Online 
>> Backups.
>> If you are worried about storing your online backups through internet on 
>> cloud (i do not agree as you can still secure your data on cloud), store it 
>> in on a server in your Organizational network and do not push them through 
>> internet.
>> Taking Offline Backups is not the only right way to ensure Reliable Backups.
>> We are way ahead of the days where you need to face downtime to take backups.
>> Online Backups are reliable in PostgreSQL.

I think offline backups are useful as a complement to the online backups. My 
current policy for offline backups is weekly, monthly, and yearly, with a 
four-backup retention; only once was a yearly backup used to recover from a 
human mistake.

PS: If you are really worried about hackers, perhaps you need to chat with 
your security officer to ensure alerts, accountability, and 2FA plus other 
techniques are implemented in your company. Nothing will stop a good hacker, 
and one may well be working inside your company; 80% of attacks come from 
people who work inside the company (Orion Security Chile shared this figure in 
2001), though I have no clue what today's stats are.

PS2: don't use passwords like "secret" or "s3cr3t", and don't forget that 
security is just a feeling.

> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 





Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Avinash Kumar
On Mon, Oct 21, 2019 at 8:47 PM Alan Hodgson 
wrote:

> On Mon, 2019-10-21 at 20:40 +0530, Avinash Kumar wrote:
>
> can't he destroy the offline backups and your database ?
> This is not a right justification to encouraging Offline Backups over
> Online Backups.
> If you are worried about storing your online backups through internet on
> cloud (i do not agree as you can still secure your data on cloud), store it
> in on a server in your Organizational network and do not push them through
> internet.
> Taking Offline Backups is not the only right way to ensure Reliable
> Backups.
> We are way ahead of the days where you need to face downtime to take
> backups.
> Online Backups are reliable in PostgreSQL.
>
>
> I apologize, I think we have a difference in terminology here. Obviously
> you don't need to take PostgreSQL out of service to take a backup. I don't
> know that you ever did; pg_dump even has always worked fine while the
> database is available.
>
> When I say offline backup I mean a backup that is stored in a way that it
> cannot be accessed via the Internet.
>
Okay, this gives clarity, thanks. It was a mix-up of "offline" and "online" in 
terms of backup versus network.


Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Alan Hodgson
On Mon, 2019-10-21 at 20:40 +0530, Avinash Kumar wrote:
> can't he destroy the offline backups and your database ?  
> This is not a right justification to encouraging Offline Backups over
> Online Backups. 
> If you are worried about storing your online backups through internet
> on cloud (i do not agree as you can still secure your data on cloud),
> store it in on a server in your Organizational network and do not push
> them through internet.
> Taking Offline Backups is not the only right way to ensure Reliable
> Backups. 
> We are way ahead of the days where you need to face downtime to take
> backups. 
> Online Backups are reliable in PostgreSQL. 

I apologize, I think we have a difference in terminology here. Obviously
you don't need to take PostgreSQL out of service to take a backup. I
don't know that you ever did; even pg_dump has always worked fine while
the database is available.

When I say offline backup I mean a backup that is stored in a way that
it cannot be accessed via the Internet.


Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Adrian Klaver

On 10/21/19 8:10 AM, Avinash Kumar wrote:

Hi,

On Mon, Oct 21, 2019 at 8:16 PM Alan Hodgson  wrote:


On Mon, 2019-10-21 at 16:40 +0530, Avinash Kumar wrote:


We need to ensure that we have safe backup locations, for example,
push them to AWS S3 and forget about redundancy.
Why do you think only Offline Backups are reliable today ? 



There have been examples of hackers gaining control of an
organization's servers or cloud accounts and not only destroying
their online systems but also methodically deleting all their backups. 



There are fewer things that can go catastrophically wrong if one has
actual offline backups. You have to be a lot more careful about
protecting anything attached to the Internet.

I do not agree with this. If a hacker is gaining control of your 
organizational servers to destroy your Online backups, can't he destroy 
the offline backups and your database ?


Well to me off-line means you have introduced an air gap between your 
on-line presence and your off-line backups. This would prevent an 
intruder from accessing the off-line backups.


This is not the right justification for encouraging Offline Backups over 
Online Backups.
If you are worried about storing your online backups through the internet on 
the cloud (I do not agree, as you can still secure your data in the cloud), store 
it on a server in your Organizational network and do not push them 
through the internet.
Taking Offline Backups is not the only right way to ensure Reliable 
Backups.
We are way ahead of the days where you need to face downtime to take 
backups.

Online Backups are reliable in PostgreSQL.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Avinash Kumar
Hi,

On Mon, Oct 21, 2019 at 8:16 PM Alan Hodgson 
wrote:

> On Mon, 2019-10-21 at 16:40 +0530, Avinash Kumar wrote:
>
>
> We need to ensure that we have safe backup locations, for example, push
> them to AWS S3 and forget about redundancy.
> Why do you think only Offline Backups are reliable today ?
>
>
>
> There have been examples of hackers gaining control of an organization's
> servers or cloud accounts and not only destroying their online systems but
> also methodically deleting all their backups.
>

> There are fewer things that can go catastrophically wrong if one has
> actual offline backups. You have to be a lot more careful about protecting
> anything attached to the Internet.
>
I do not agree with this. If a hacker is gaining control of your
organizational servers to destroy your Online backups, can't he destroy the
offline backups and your database ?
This is not the right justification for encouraging Offline Backups over
Online Backups.
If you are worried about storing your online backups through the internet on
the cloud (I do not agree, as you can still secure your data in the cloud), store it
on a server in your Organizational network and do not push them through
the internet.
Taking Offline Backups is not the only right way to ensure Reliable
Backups.
We are way ahead of the days where you need to face downtime to take
backups.
Online Backups are reliable in PostgreSQL.


Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Alan Hodgson
On Mon, 2019-10-21 at 16:40 +0530, Avinash Kumar wrote:
> We need to ensure that we have safe backup locations, for example,
> push them to AWS S3 and forget about redundancy. 
> Why do you think only Offline Backups are reliable today ? 

There have been examples of hackers gaining control of an organization's
servers or cloud accounts and not only destroying their online systems
but also methodically deleting all their backups.

There are fewer things that can go catastrophically wrong if one has
actual offline backups. You have to be a lot more careful about
protecting anything attached to the Internet.



Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Fabio Ugo Venchiarutti

On 21/10/2019 12:10, Avinash Kumar wrote:



On Mon, Oct 21, 2019 at 4:19 PM Fabio Ugo Venchiarutti 
<f.venchiaru...@ocado.com> wrote:


On 21/10/2019 09:52, Luca Ferrari wrote:
 > On Sat, Oct 19, 2019 at 7:46 PM Daulat Ram
<daulat@exponential.com> wrote:
 >> One more questions is, how backups are useful if we have
streaming replication . As I know, we can promote the standby as
primary in case of disaster at primary side. Do we need to schedule
backups if we have streaming replication?
 >
 > Let's speculate a little on that: do you need backups if you have a
 > RAID-1 configuration?
 > Replication helps you reduce almost to zero the time to handle a
> disaster; backups allow you to recover in a larger time window.
 >
 > Luca
 >
 >


TBH I hear this argument more often than I wish.


Offline backups and data replication are nearly entirely orthogonal. 




Any form of instantaneous redundancy (RAID, instantaneous replication
and so on) is primarily a mitigation measure to protect data &
availability against loss of infrastructure.


Backups (preferably with PITR) also do that, but that's not their
primary purpose unless you can't afford live redundancy on top of them.


Offline backups address many failure scenarios that any form of live
replication is defenseless against (eg: logical damage to the data as a
result of human errors/bugs/vandalism would hit all your replicas, but
you always can perform DR from a backup).

I think we are way ahead of the days where we need to live on Offline 
backups involving downtimes.
Today, we have several Online Consistent Backup solutions such as 
pg_basebackup (built-in with PG), pgBackRest, BARMAN, WAL-G, etc.

Online Consistent Backups + Archiving of WALs are perfectly reliable.
We need to ensure that we have safe backup locations, for example, push 
them to AWS S3 and forget about redundancy.

Why do you think only Offline Backups are reliable today ?



Delayed replicas, or any online rollback capability (such as pg_rewind
off the server's own pg_wal or Oracle's flashback), somewhat live in a
grey area in between, and their effectiveness varies depending on which
level was compromised.

What if you delay your replica by a day before making a change 
tomorrow (one that may need a rollback)? A Delayed Replica is for emergencies 
IMO and of course does not satisfy every possible scenario.





Trade-offs come down to individual implementers, so fair enough.


Our biases must come from different use cases/experience: a good 
fraction of recovery/sanitisation operations I had to carry out were 
very selective/application specific and involved buildup of anomalies 
over time, sometimes months.



Wouldn't have been possible without an old frozen reference for users to 
compare with, and that had to come from very cold storage.





--
Regards

Fabio Ugo Venchiarutti
OSPCFC Network Engineering Dpt.
Ocado Technology

--


Notice: 
This email is confidential and may contain copyright material of 
members of the Ocado Group. Opinions and views expressed in this message 
may not necessarily reflect the opinions and views of the members of the 
Ocado Group.


If you are not the intended recipient, please notify us 
immediately and delete all copies of this message. Please note that it is 
your responsibility to scan this message for viruses.


References to the 
"Ocado Group" are to Ocado Group plc (registered in England and Wales with 
number 7098618) and its subsidiary undertakings (as that expression is 
defined in the Companies Act 2006) from time to time. The registered office 
of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, 
Hatfield, Hertfordshire, AL10 9UL.





Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Avinash Kumar
On Mon, Oct 21, 2019 at 4:19 PM Fabio Ugo Venchiarutti <
f.venchiaru...@ocado.com> wrote:

> On 21/10/2019 09:52, Luca Ferrari wrote:
> > On Sat, Oct 19, 2019 at 7:46 PM Daulat Ram 
> wrote:
> >> One more questions is, how backups are useful if we have streaming
> replication . As I know, we can promote the standby as primary in case of
> disaster at primary side. Do we need to schedule backups if we have
> streaming replication?
> >
> > Let's speculate a little on that: do you need backups if you have a
> > RAID-1 configuration?
> > Replication helps you reduce almost to zero the time to handle a
> > disaster; backups allow you to recover in a larger time window.
> >
> > Luca
> >
> >
>
>
> TBH I hear this argument more often than I wish.
>
>
> Offline backups and data replication are nearly entirely orthogonal.


>
> Any form of instantaneous redundancy (RAID, instantaneous replication
> and so on) is primarily a mitigation measure to protect data &
> availability against loss of infrastructure.
>
>
> Backups (preferably with PITR) also do that, but that's not their
> primary purpose unless you can't afford live redundancy on top of them.
>
>
> Offline backups address many failure scenarios that any form of live
> replication is defenseless against (eg: logical damage to the data as a
> result of human errors/bugs/vandalism would hit all your replicas, but
> you always can perform DR from a backup).
>
I think we are way ahead of the days where we need to live on Offline
backups involving downtimes.
Today, we have several Online Consistent Backup solutions such as
pg_basebackup (built-in with PG), pgBackRest, BARMAN, WAL-G, etc.
Online Consistent Backups + Archiving of WALs are perfectly reliable.
We need to ensure that we have safe backup locations, for example, push
them to AWS S3 and forget about redundancy.
Why do you think only Offline Backups are reliable today ?

>
>
> Delayed replicas, or any online rollback capability (such as pg_rewind
> off the server's own pg_wal or Oracle's flashback), somewhat live in a
> grey area in between, and their effectiveness varies depending on which
> level was compromised.
>
What if you delay your replica by a day before making a change
tomorrow (one that may need a rollback)? A Delayed Replica is for emergencies IMO
and of course does not satisfy every possible scenario.

>
>
>
>
>
> --
> Regards
>
> Fabio Ugo Venchiarutti
> OSPCFC Network Engineering Dpt.
> Ocado Technology
>
> --
>
>


Thanks,
Avinash Vallarapu.


Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Fabio Ugo Venchiarutti

On 21/10/2019 09:52, Luca Ferrari wrote:

On Sat, Oct 19, 2019 at 7:46 PM Daulat Ram  wrote:

One more questions is, how backups are useful if we have streaming replication 
. As I know, we can promote the standby as primary in case of disaster at 
primary side. Do we need to schedule backups if we have streaming replication?


Let's speculate a little on that: do you need backups if you have a
RAID-1 configuration?
Replication helps you reduce almost to zero the time to handle a
disaster; backups allow you to recover in a larger time window.

Luca





TBH I hear this argument more often than I wish.


Offline backups and data replication are nearly entirely orthogonal.


Any form of instantaneous redundancy (RAID, instantaneous replication 
and so on) is primarily a mitigation measure to protect data & 
availability against loss of infrastructure.



Backups (preferably with PITR) also do that, but that's not their 
primary purpose unless you can't afford live redundancy on top of them.



Offline backups address many failure scenarios that any form of live 
replication is defenseless against (eg: logical damage to the data as a 
result of human errors/bugs/vandalism would hit all your replicas, but 
you always can perform DR from a backup).



Delayed replicas, or any online rollback capability (such as pg_rewind 
off the server's own pg_wal or Oracle's flashback), somewhat live in a 
grey area in between, and their effectiveness varies depending on which 
level was compromised.






--
Regards

Fabio Ugo Venchiarutti
OSPCFC Network Engineering Dpt.
Ocado Technology

--







Re: Postgres Point in time Recovery (PITR),

2019-10-21 Thread Luca Ferrari
On Sat, Oct 19, 2019 at 7:46 PM Daulat Ram  wrote:
> One more questions is, how backups are useful if we have streaming 
> replication . As I know, we can promote the standby as primary in case of 
> disaster at primary side. Do we need to schedule backups if we have streaming 
> replication?

Let's speculate a little on that: do you need backups if you have a
RAID-1 configuration?
Replication helps you reduce almost to zero the time to handle a
disaster; backups allow you to recover in a larger time window.

Luca




Re: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Avinash Kumar
Hi,

On Sat, Oct 19, 2019 at 11:16 PM Daulat Ram 
wrote:

> Hi All,
>
>
>
> Thanks for your suggestions.
>
> One more questions is, how backups are useful if we have streaming
> replication . As I know, we can promote the standby as primary in case of
> disaster at primary side. Do we need to schedule backups if we have
> streaming replication?
>
1. What if you realize that someone has dropped a table or accidentally
made a change that requires you to recover some table/database from old
backups?
2. Some organizations require you to store backups for a few days, months,
or even years, because you should be able to perform recovery from any
given point of time in the past.

Streaming Replication (unless delayed explicitly) applies the changes
immediately. So, it may be very late by the time you realize that some
accidental change has done some damage, because the damage has happened on
both Master & Standby.

Regards,
Avinash Vallarapu.
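
The delayed-standby idea mentioned above needs only one extra setting on the
replica. A minimal sketch of a 9.6-era recovery.conf (connection details and
the delay value are illustrative; from v12 these settings live in
postgresql.conf):

```
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
# Hold WAL replay 4 hours behind the primary, leaving a window in which
# an accidental DROP/TRUNCATE has not yet been applied on this standby.
recovery_min_apply_delay = '4h'
```

Pausing replay (pg_xlog_replay_pause() on 9.6) before the bad change reaches
the standby then lets you recover from it quickly.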

>
>
> Thanks
>
>
>
> *From:* Avinash Kumar 
> *Sent:* Friday, October 18, 2019 5:28 PM
> *To:* David Steele 
> *Cc:* Luca Ferrari ; Andreas Joseph Krogh <
> andr...@visena.com>; Daulat Ram ;
> pgsql-general@lists.postgresql.org
> *Subject:* Re: Postgres Point in time Recovery (PITR),
>
>
>
> Hi Daulat,
>
>
>
> PITR entirely depends on what type of backups you choose.
> Sometimes, to reduce the amount of downtime involved while restoring and
> recovering a backup, you may also use an additional delayed standby.
>
> You could use the PG built-in feature to delay the replication and
> fast-forward it to the safest point to achieve PITR. But this requires you
> to have an additional standby.
>
>
> https://www.percona.com/blog/2018/06/28/faster-point-in-time-recovery-pitr-postgresql-using-delayed-standby/
>
>
>
> If you have several TBs of database, pgBackRest is of course a way to go
> for backups (there are a few more open source solutions), but also consider
> the amount of time it takes for recovery. Keeping all of this in mind, your
> approach to PITR changes.
>
>
>
> So I would ask you this question: what is the backup tool you use and what
> is your backup strategy ? Are you taking a physical backup and performing
> continuous archiving of WALs ? The answer to your question entirely depends
> on this. :)
>
>
>
> Regards,
> Avinash Vallarapu.
>
>
>
>
>
>
>
> On Fri, Oct 18, 2019 at 5:17 PM David Steele  wrote:
>
> On 10/18/19 11:29 AM, Luca Ferrari wrote:
> > On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> >  wrote:
> >> We use barman (https://www.pgbarman.org/) for continuous streaming
> backup and I had to restore from it once, and it went like this:
> >
> > Just for the records, here's an example of restore with pgbackrest:
> >
> > % sudo -u postgres pgbackrest --stanza=miguel \
> > --log-level-console=info --delta restore
> > ...
> > INFO: restore backup set 20190916-125652F
> > INFO: remove invalid files/paths/links from /postgres/pgdata/11
> > INFO: cleanup removed 148 files, 3 paths
> > ...
> > INFO: write /postgres/pgdata/11/recovery.conf
> > INFO: restore global/pg_control (performed last
> >  to ensure aborted restores cannot be started)
> > INFO: restore command end: completed successfully (5113ms)
>
> pgBackRest also has a tutorial on PITR:
> https://pgbackrest.org/user-guide.html#pitr
>
> --
> -David
> da...@pgmasters.net
>
>
>
>
> --
>
> 9000799060
>


-- 
9000799060


Re: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Jeff Janes
On Fri, Oct 18, 2019 at 1:59 AM Daulat Ram 
wrote:

> Hello All,
>
> Can you please share some ideas and scenarios how we can do the PITR in
> case of disaster.
>

It depends on what you mean by "disaster".  Usually I think that would mean
your server (or entire data center) was destroyed.  In this case, you would
want to restore to the latest time available.  I would say that this is not
PITR at all, that is just regular recovery.

If someone truncated a table 3 weeks ago, and you didn't realize it until
today, that is a scenario for PITR.  Are you using "disaster" to cover this
scenario?

Cheers,

Jeff

>


RE: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Daulat Ram
Hi All,

Thanks for your suggestions.
One more question is: how are backups useful if we have streaming replication?
As I know, we can promote the standby to primary in case of disaster on the
primary side. Do we need to schedule backups if we have streaming replication?

Thanks

From: Avinash Kumar 
Sent: Friday, October 18, 2019 5:28 PM
To: David Steele 
Cc: Luca Ferrari ; Andreas Joseph Krogh 
; Daulat Ram ; 
pgsql-general@lists.postgresql.org
Subject: Re: Postgres Point in time Recovery (PITR),

Hi Daulat,

PITR entirely depends on what type of backups you choose.
Sometimes, to reduce the amount of downtime involved while restoring and 
recovering a backup, you may also use an additional delayed standby.
You could use the PG built-in feature to delay the replication and fast-forward 
it to the safest point to achieve PITR. But this requires you to have an 
additional standby.
https://www.percona.com/blog/2018/06/28/faster-point-in-time-recovery-pitr-postgresql-using-delayed-standby/

If you have several TBs of database, pgBackRest is of course a way to go for 
backups (there are a few more open source solutions), but also consider the 
amount of time it takes for recovery. Keeping all of this in mind, your 
approach to PITR changes.

So I would ask you this question: what is the backup tool you use and what is 
your backup strategy ? Are you taking a physical backup and performing 
continuous archiving of WALs ? The answer to your question entirely depends on 
this. :)

Regards,
Avinash Vallarapu.



On Fri, Oct 18, 2019 at 5:17 PM David Steele <da...@pgmasters.net> wrote:
On 10/18/19 11:29 AM, Luca Ferrari wrote:
> On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> <andr...@visena.com> wrote:
>> We use barman (https://www.pgbarman.org/) for continuous streaming backup 
>> and I had to restore from it once, and it went like this:
>
> Just for the records, here's an example of restore with pgbackrest:
>
> % sudo -u postgres pgbackrest --stanza=miguel \
> --log-level-console=info --delta restore
> ...
> INFO: restore backup set 20190916-125652F
> INFO: remove invalid files/paths/links from /postgres/pgdata/11
> INFO: cleanup removed 148 files, 3 paths
> ...
> INFO: write /postgres/pgdata/11/recovery.conf
> INFO: restore global/pg_control (performed last
>  to ensure aborted restores cannot be started)
> INFO: restore command end: completed successfully (5113ms)

pgBackRest also has a tutorial on PITR:
https://pgbackrest.org/user-guide.html#pitr

--
-David
da...@pgmasters.net



--
9000799060


Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Avinash Kumar
Hi Daulat,

PITR entirely depends on what type of backups you choose.
Sometimes, to reduce the amount of downtime involved while restoring and
recovering a backup, you may also use an additional delayed standby.
You could use the PG built-in feature to delay the replication and
fast-forward it to the safest point to achieve PITR. But this requires you
to have an additional standby.
https://www.percona.com/blog/2018/06/28/faster-point-in-time-recovery-pitr-postgresql-using-delayed-standby/

If you have several TBs of database, pgBackRest is of course a way to go
for backups (there are a few more open source solutions), but also consider
the amount of time it takes for recovery. Keeping all of this in mind, your
approach to PITR changes.

So I would ask you this question: what is the backup tool you use and what
is your backup strategy ? Are you taking a physical backup and performing
continuous archiving of WALs ? The answer to your question entirely depends
on this. :)

Regards,
Avinash Vallarapu.
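
For the physical-backup-plus-continuous-WAL-archiving strategy asked about
here, the built-in starting point is pg_basebackup; a sketch (the target
directory is illustrative):

```
$ pg_basebackup -D /backups/base -X stream -c fast -P
```

-X stream also copies the WAL generated during the backup, so the copy is
consistent on its own; archiving the subsequent WALs is what then makes PITR
possible.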



On Fri, Oct 18, 2019 at 5:17 PM David Steele  wrote:

> On 10/18/19 11:29 AM, Luca Ferrari wrote:
> > On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> >  wrote:
> >> We use barman (https://www.pgbarman.org/) for continuous streaming
> backup and I had to restore from it once, and it went like this:
> >
> > Just for the records, here's an example of restore with pgbackrest:
> >
> > % sudo -u postgres pgbackrest --stanza=miguel \
> > --log-level-console=info --delta restore
> > ...
> > INFO: restore backup set 20190916-125652F
> > INFO: remove invalid files/paths/links from /postgres/pgdata/11
> > INFO: cleanup removed 148 files, 3 paths
> > ...
> > INFO: write /postgres/pgdata/11/recovery.conf
> > INFO: restore global/pg_control (performed last
> >  to ensure aborted restores cannot be started)
> > INFO: restore command end: completed successfully (5113ms)
>
> pgBackRest also has a tutorial on PITR:
> https://pgbackrest.org/user-guide.html#pitr
>
> --
> -David
> da...@pgmasters.net
>
>
>

-- 
9000799060


Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread David Steele

On 10/18/19 11:29 AM, Luca Ferrari wrote:

On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
 wrote:

We use barman (https://www.pgbarman.org/) for continuous streaming backup and I 
had to restore from it once, and it went like this:


Just for the records, here's an example of restore with pgbackrest:

% sudo -u postgres pgbackrest --stanza=miguel \
--log-level-console=info --delta restore
...
INFO: restore backup set 20190916-125652F
INFO: remove invalid files/paths/links from /postgres/pgdata/11
INFO: cleanup removed 148 files, 3 paths
...
INFO: write /postgres/pgdata/11/recovery.conf
INFO: restore global/pg_control (performed last
 to ensure aborted restores cannot be started)
INFO: restore command end: completed successfully (5113ms)


pgBackRest also has a tutorial on PITR:
https://pgbackrest.org/user-guide.html#pitr

--
-David
da...@pgmasters.net
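
Following that tutorial, a point-in-time restore is essentially the same
invocation as the example above with a time target added; a sketch (stanza
name and timestamp are illustrative):

```
% sudo -u postgres pgbackrest --stanza=miguel --delta \
      --type=time "--target=2019-09-20 12:00:00" restore
```

pgBackRest generates the corresponding recovery settings for the restored
cluster, so replay stops once it reaches the given time.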




Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Luca Ferrari
On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
 wrote:
> We use barman (https://www.pgbarman.org/) for continuous streaming backup and 
> I had to restore from it once, and it went like this:

Just for the record, here's an example of restore with pgbackrest:

% sudo -u postgres pgbackrest --stanza=miguel \
   --log-level-console=info --delta restore
...
INFO: restore backup set 20190916-125652F
INFO: remove invalid files/paths/links from /postgres/pgdata/11
INFO: cleanup removed 148 files, 3 paths
...
INFO: write /postgres/pgdata/11/recovery.conf
INFO: restore global/pg_control (performed last
to ensure aborted restores cannot be started)
INFO: restore command end: completed successfully (5113ms)




Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Andreas Kretschmer
On 18 October 2019 07:59:21 CEST, Daulat Ram  wrote:
>Hello All,
>Can you please share some ideas and scenarios how we can do the PITR in
>case of disaster.
>
>
>Thanks,


Consider Barman.


-- 
2ndQuadrant - The PostgreSQL Support Company




Sv: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Andreas Joseph Krogh

On Friday, 18 October 2019 at 07:59:21, Daulat Ram <
daulat@exponential.com> wrote: 

Hello All,

Can you please share some ideas and scenarios how we can do the PITR in case 
of disaster.

We use barman (https://www.pgbarman.org/) for continuous streaming backup 
and I had to restore from it once, and it went like this: 

==8<=== 

$ barman recover --target-time "2018-12-06 12:20:00" --remote-ssh-command "ssh 
andreak@192.168.0.184" db01_11 20181130T190002 
"/home/andreak/barman-restore"
 Processing xlog segments from streaming for db01_11
 00010174002E
 00010174002F
 000101740030
 Starting remote restore for server db01_11 using backup 20181130T190002
 Destination directory: /home/andreak/barman-restore
 Doing PITR. Recovery target time: '2018-12-06 12:20:00+01:00'
 17445, dbname1, /storage/fast_ssd/11/tablespaces/dbname1
 29218, dbname2, /storage/fast_ssd/11/tablespaces/dbname2
 ... 
 29235503, dbnameX, /storage/fast_ssd/11/tablespaces/dbnameX
Copying the base backup.
 Copying required WAL segments.
 Generating recovery.conf
 Identify dangerous settings in destination directory.

 WARNING
 The following configuration files have not been saved during backup, hence 
they have not been restored.
 You need to manually restore them in order to start the recovered PostgreSQL 
instance:

 postgresql.conf
 pg_hba.conf
 pg_ident.conf

 Recovery completed (start time: 2018-12-06 13:14:53.220043, elapsed time: 4 
hours, 52 minutes, 47 seconds)

 Your PostgreSQL server has been successfully prepared for recovery! 
==8<=== 


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com 
www.visena.com 


Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Luca Ferrari
On Fri, Oct 18, 2019 at 7:59 AM Daulat Ram  wrote:
> Can you please share some ideas and scenarios how we can do the PITR in case 
> of disaster.

In order to be able to do PITR you need:
- a base backup of your database
- WALs from the backup going on

See <https://www.postgresql.org/docs/12/continuous-archiving.html>.
Then, to keep it simple, if the disaster happens in the time window
between the backup and the last WAL archiving, you can restore an
instance at any point in time prior to the disaster itself.

Luca
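
In configuration terms, the two ingredients above look roughly like this; a
minimal pre-v12 sketch (paths and the target timestamp are placeholders, and a
plain cp is not a robust archive_command for production use):

```
# postgresql.conf on the primary: continuous WAL archiving
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

# recovery.conf placed in the restored base backup: replay to a point in time
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2019-10-17 08:00:00'
```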




Postgres Point in time Recovery (PITR),

2019-10-17 Thread Daulat Ram
Hello All,
Can you please share some ideas and scenarios how we can do the PITR in case of 
disaster.


Thanks,


Re: PITR based recovery in a primary/standby cluster setup

2019-05-21 Thread Frank Alberto Rodriguez
Hello. I'm not sure which replication issues you have, and I have never used
WAL-E before, but I got some similar issues with PostgreSQL 10 and Barman.
Try starting the primary server first; when it finishes recovery it
should start as primary. If not, go to the postgresql data
directory, rename the recovery.conf to recovery.done, and start the
server as primary. 
Then start the standby server; when the recovery target is reached,
the standby server should not leave the recovery status and should
keep receiving WAL through the archive_command, and should not rename
the recovery.conf file.
Regards
On Tue, 2019-05-21 at 14:27 +0530, Abhijit Gharami wrote:
> Hi,
> 
> We have primary and standby PostgreSQL cluster setup and also we have
> PITR enabled on it. To improve the recovery time we are thinking of
> recovering the database to both primary and standby at the same time.
> 
> These are the steps we are following:
>   1. Restore the base backup to the both primary and standby server 
>   2. Replay the WAL files on both primary and standby and once the
> recovery target is reached stop the servers 
>   3. Start one of the servers as primary and the other one as standby
> 
> 
> We have followed the above steps but when we are trying to start the
> servers as primary and standby we are having replication issues.
> 
> Could you please suggest what should be done here so that we can
> recover the database in both primary as well as in  standby server?
> 
> We are using PostgreSQL version: 9.6.12 and for PITR we are using
> WAL-E.
> 
> Regards,
> Abhijit


PITR based recovery in a primary/standby cluster setup

2019-05-21 Thread Abhijit Gharami
Hi,

We have primary and standby PostgreSQL cluster setup and also we have PITR
enabled on it. To improve the recovery time we are thinking of recovering
the database to both primary and standby at the same time.

These are the steps we are following:
  1. Restore the base backup to both the primary and standby servers
  2. Replay the WAL files on both primary and standby and once the recovery
target is reached stop the servers
  3. Start one of the servers as primary and the other one as standby


We have followed the above steps but when we are trying to start the
servers as primary and standby we are having replication issues.

Could you please suggest what should be done here so that we can recover
the database on both the primary as well as the standby server?

We are using PostgreSQL version: 9.6.12 and for PITR we are using WAL-E.

Regards,
Abhijit


Re: Monitoring PITR recovery progress

2019-01-23 Thread Alan Hodgson
On Wed, 2019-01-23 at 18:58 +0100, Ivan Voras wrote:
> And, the actual question: how to monitor the WAL replay process?
> Currently, the recovery.conf file is sitting there, with the database
> running, but pg processes are idle, and pg_stat_activity doesn't list
> anything which appears to be related to the recovery process.
> 
> 
> 

The server logs each WAL segment that gets processed during recovery.
And you would definitely see a busy, high-I/O process applying the
recovery.
It also logs when the recovery is complete. And I'm pretty sure it
renames recovery.conf to recovery.done or something when it's done.
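
Those logged segment names follow a fixed layout, which helps when eyeballing
how far replay has progressed; a small sketch of how a name is composed (the
field values below are illustrative):

```shell
# A WAL segment name is 24 hex digits: three 8-digit fields holding the
# timeline ID, the "log" number, and the segment number within that log.
wal_name() {
  printf '%08X%08X%08X\n' "$1" "$2" "$3"
}

# Timeline 1, log 0x174, segment 0x2E:
wal_name 1 372 46   # -> 00000001000001740000002E
```

The leading field is the timeline ID, which is why a recovery that branches
onto a new timeline starts producing names with a different prefix.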

Monitoring PITR recovery progress

2019-01-23 Thread Ivan Voras
Hello,

I got a sort of POLA violation moment today - a colleague has restored a
PITR archive up to a point in time, and when the developers looked at the
data, it looked wrong - as if it wasn't from that particular time. Later,
he told me he got an error trying to use pg_dump to extract the desired
tables to restore:

Dumping the contents of table "xxx" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: canceling statement due to
conflict with recovery
DETAIL: User query might have needed to see row versions that must be
removed.

This could be the source of data problems the devs are seeing. Offhand,
this looks like it's indicating a conflict between the PITR recovery
process and pg_dump. But, something's puzzling to me: why did he manage to
connect to the db at all?

And, the actual question: how to monitor the WAL replay process? Currently,
the recovery.conf file is sitting there, with the database running, but pg
processes are idle, and pg_stat_activity doesn't list anything which
appears to be related to the recovery process.


Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-12 Thread magodo
On Thu, 2018-10-11 at 19:53 +0200, Laurenz Albe wrote:

> Indeed, your sentence
> 
> > if , for example, it recovered to "A" at "o1", then
> > the switched WAL(in case of stop-then-recover) or .partial
> > corresponding WAL(in case of promote) is the last WAL of the
> > timeline1
> 
> seems to contradict your drawing, which has B after A on timeline 1.

Err... I mean "o1" is the end of timeline 1, and the last WAL is the one
"o1" was on just before recovering to "A".


> Example:
> Assume that timeline 1 reaches to 000100150030.
> We recover to point A, which is in the middle of
> 000100150020,
> and there branch to timeline 2.
> After some time, we decide to recover again, starting from a
> checkpoint in 000100150010.
> We want to recover to 2018-10-11 12:00:00.
> 
> How can you know how many WAL segments there are on timeline 1, and
> if
> there is one that extends past 2018-10-11 12:00:00 or not?

This is the exact problem I want to figure out. My approach is as you
said: I will parse each archived WAL segment via `pg_xlogdump -r
Transaction` and try to find the last WAL segment that still precedes the
specified time. This is a linear search, which has complexity of O(n).
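The linear scan described above can be sketched as follows. The sample lines are condensed from the pg_xlogdump output quoted elsewhere in this digest; a real scan would stream `pg_xlogdump -r Transaction` output segment by segment.

```python
import re
from datetime import datetime, timezone

# Keep the latest COMMIT timestamp that is still at or before the
# recovery target time -- the O(n) scan over Transaction records.
COMMIT = re.compile(r'desc: COMMIT (\d{4}-\d\d-\d\d \d\d:\d\d:\d\d\.\d+) UTC')

def last_commit_at_or_before(xlogdump_output, target):
    best = None
    for match in COMMIT.finditer(xlogdump_output):
        ts = datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S.%f")
        ts = ts.replace(tzinfo=timezone.utc)
        if ts <= target and (best is None or ts > best):
            best = ts
    return best

# Condensed, hypothetical pg_xlogdump output.
sample = '''
rmgr: Transaction tx: 633 lsn: 0/05016120 desc: COMMIT 2018-10-10 03:04:58.459754 UTC
rmgr: Transaction tx: 634 lsn: 0/05016188 desc: COMMIT 2018-10-10 03:05:07.010478 UTC
'''
target = datetime(2018, 10, 10, 3, 5, 4, tzinfo=timezone.utc)
found = last_commit_at_or_before(sample, target)
```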


So if you want to recover to that point of time, how do you choose the
timeline?

---
Magodo


Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-11 Thread Laurenz Albe
magodo wrote:
> > How can you know how many WAL segments there are on timeline 1, and if
> > there is one that extends past 2018-10-11 12:00:00 or not?
> 
> This is the exact problem I want to figure out. My approach is as you said:
> I will parse each archived WAL segment via `pg_xlogdump -r Transaction`
> and try to find the last WAL segment that still precedes the specified time.
> This is a linear search, which has complexity of O(n).

That means that the time spent grows linearly, but it is still a lot of time
if there are a lot of WAL archives.

> So if you want to recover to that point of time, how do you choose the 
> timeline?

PostgreSQL doesn't provide such a functionality, and I can think of three 
reasons:
1) the high cost as mentioned above
2) during archive recovery, it does not happen very often that you have to
   choose between timelines at all.  It is more typical for streaming 
replication.
3) nobody saw a use case for such a functionality

If you really need this functionality, you'll have to write it yourself.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-11 Thread Laurenz Albe
magodo wrote:
> On Thu, 2018-10-11 at 06:35 +0200, Laurenz Albe wrote:
> > magodo wrote:
> > > OK... Just take another example:
> > > 
> > >  A B
> > > BASE-+-+--o1 (recover to A)  1
> > >  | |   C
> > >  +.|...+---o2 (regret, recover to B) 2
> > >    |   |
> > >    +...|..--o3 (regret again, recover to C)  3
> > >    | 
> > >    + 4
> > > 
> > > 
> > > Suppose I'm at "o3" and want to recover to point "C". Because I want to
> > > recover to the first timeline which covers this time point, it means I
> > > wish to recover to timeline 2.
> > 
> > Ah, I finally understand your question.
> > 
> > You assume that timeline 1 and 2 have ended (that's how you drew it),
> > and that consequently timeline 3 is the "earliest existing" timeline,
> > so why doesn't PostgreSQL choose that one automatically?
> > 
> > Even though you drew it that way, timeline 1 and 2 have not ended, in
> > a way.  There may be more on them.  How should PostgreSQL know what is
> > the last WAL entry on a certain timeline?  For that, it would have to
> > restore and examine *all* WAL segments on that timeline until that fails.
> > 
> > But that is unreasonable because of the potential amount of time
> > and work involved.  Rather, PostgreSQL has to decide at the point where
> > timeline 2 branches off which one it should follow.
> 
> Thank God I finally conveyed my meaning :)
> 
> For one cluster, if, for example, it recovered to "A" at "o1", then
> the switched WAL (in case of stop-then-recover) or the corresponding
> .partial WAL (in case of promote) is the last WAL of timeline 1,
> and it makes no sense to assume timeline 1 will continue to grow after
> "o1", because it has ended.
> 
> You mentioned: "There may be more on them."
> 
> Would you please give me an example? I must have missed something
> out...

The timeline is determined before any WAL segments are restored, because
the timeline history (as persisted in the *.history files) determines
which WAL segments will be restored.
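For illustration, a minimal parser for a timeline history file, the small file recovery reads before restoring any data WAL; its lines are tab-separated `parentTLI  switchpoint-LSN  reason`. The sample content below is hypothetical, modeled on the timestamps in this thread.

```python
# Sketch: parse *.history lines into (parent_timeline, switch_lsn, reason).
def parse_history(text):
    branches = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        tli, lsn, reason = line.split("\t", 2)
        branches.append((int(tli), lsn, reason))
    return branches

# Hypothetical contents of a 00000002.history file.
sample_history = "1\t0/05016148\tbefore 2018-10-10 03:05:04+00\n"
branches = parse_history(sample_history)
```

Note the switch points are LSNs, not timestamps, which is why the history file alone cannot answer "which timeline covers time T".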

You seem to assume that the point A at which recovery ended is the end of the
WAL stream of timeline 1, but you cannot know that:
- it could have been a point-in-time-recovery
- it could be that during the first recovery attempt some WAL archives
  were missing, which caused a premature end of recovery,
  but they are there at the second attempt.

Indeed, your sentence

> if , for example, it recovered to "A" at "o1", then
> the switched WAL(in case of stop-then-recover) or .partial
> corresponding WAL(in case of promote) is the last WAL of the timeline1

seems to contradict your drawing, which has B after A on timeline 1.

Example:
Assume that timeline 1 reaches to 000100150030.
We recover to point A, which is in the middle of 000100150020,
and there branch to timeline 2.
After some time, we decide to recover again, starting from a
checkpoint in 000100150010.
We want to recover to 2018-10-11 12:00:00.

How can you know how many WAL segments there are on timeline 1, and if
there is one that extends past 2018-10-11 12:00:00 or not?

The only way would be to restore and read them all *before* you even
decide which timeline you want to choose for recovery.

That would be forbiddingly expensive, wouldn't it?
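To make that cost concrete, one can decode segment names into absolute segment numbers; a sketch assuming the default 16 MB segment size of 9.x (256 segments per 8-hex-digit "log" id), with the thread's example names spelled out to the full 24 characters:

```python
# Sketch: decode a 24-character WAL file name into (timeline, absolute
# segment number), assuming 16 MB segments.
SEGMENTS_PER_LOG = 256

def decode_wal_name(name):
    tli = int(name[0:8], 16)
    log = int(name[8:16], 16)
    seg = int(name[16:24], 16)
    return tli, log * SEGMENTS_PER_LOG + seg

tli_a, seg_a = decode_wal_name("000000010000001500000010")
tli_b, seg_b = decode_wal_name("000000010000001500000030")
gap = seg_b - seg_a  # segments that would have to be restored and read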

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




parallel PITR for HA setup

2018-10-11 Thread magodo


Hello,

I have a HA setup, a standby following a primary.

At the beginning, when I do PITR for it, I just recover the primary
first, then remake the standby by taking a basebackup against the
primary and restarting the standby with a proper recovery.conf.

This works well; however, it takes a long time when I have tons of data
to recover. So I started to look into doing a parallel PITR on both
clusters and re-establishing the relationship between them. Then I found
an issue: after recovery, both clusters are at the same LSN. On the
standby, I then need to create a recovery.conf file and restart it,
which ends up with the standby's LSN newer than the one currently
flushed on the primary.

My solution to this problem is to also restart the primary. I
wonder if this is a proper way.

Thank you in advance!
---
Magodo





Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-11 Thread magodo


On Thu, 2018-10-11 at 06:35 +0200, Laurenz Albe wrote:
> magodo wrote:
> > OK... Just take another example:
> > 
> >  A B
> > BASE-+-+--o1 (recover to A)  1
> >  | |   C
> >  +.|...+---o2 (regret, recover to B) 2
> >    |   |
> >    +...|..--o3 (regret again, recover to C)  3
> >    | 
> >    + 4
> > 
> > 
> > Suppose I'm at "o3" and want to recover to point "C". Because I
> > want to
> > recover to the first timeline which covers this time point, it
> > means I
> > wish to recover to timeline 2.
> 
> Ah, I finally understand your question.
> 
> You assume that timeline 1 and 2 have ended (that's how you drew it),
> and that consequently timeline 3 is the "earliest existing" timeline,
> so why doesn't PostgreSQL choose that one automatically?
> 
> Even though you drew it that way, timeline 1 and 2 have not ended, in
> a way.  There may be more on them.  How should PostgreSQL know what
> is
> the last WAL entry on a certain timeline?  For that, it would have to
> restore and examine *all* WAL segments on that timeline until that
> fails.
> 
> But that is unreasonable because of the potential amount of time
> and work involved.  Rather, PostgreSQL has to decide at the point
> where
> timeline 2 branches off which one it should follow.
> 
> Yours,
> Laurenz Albe

Thank God I finally conveyed my meaning :)

For one cluster, if, for example, it recovered to "A" at "o1", then
the switched WAL (in case of stop-then-recover) or the corresponding
.partial WAL (in case of promote) is the last WAL of timeline 1,
and it makes no sense to assume timeline 1 will continue to grow after
"o1", because it has ended.

You mentioned: "There may be more on them."

Would you please give me an example? I must have missed something
out...

---
Magodo





Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-10 Thread Laurenz Albe
magodo wrote:
> OK... Just take another example:
> 
>  A B
> BASE-+-+--o1 (recover to A)  1
>  | |   C
>  +.|...+---o2 (regret, recover to B) 2
>|   |
>+...|..--o3 (regret again, recover to C)  3
>| 
>+ 4
> 
> 
> Suppose I'm at "o3" and want to recover to point "C". Because I want to
> recover to the first timeline which covers this time point, it means I
> wish to recover to timeline 2.

Ah, I finally understand your question.

You assume that timeline 1 and 2 have ended (that's how you drew it),
and that consequently timeline 3 is the "earliest existing" timeline,
so why doesn't PostgreSQL choose that one automatically?

Even though you drew it that way, timeline 1 and 2 have not ended, in
a way.  There may be more on them.  How should PostgreSQL know what is
the last WAL entry on a certain timeline?  For that, it would have to
restore and examine *all* WAL segments on that timeline until that fails.

But that is unreasonable because of the potential amount of time
and work involved.  Rather, PostgreSQL has to decide at the point where
timeline 2 branches off which one it should follow.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-10 Thread magodo


On Wed, 2018-10-10 at 18:21 +0200, Laurenz Albe wrote:
> magodo wrote:
> > > By default, recovery will stay on the timeline where it started.
> > > If you want to go to timeline 2 or 3, you have to specify
> > > recovery_target_timeline.
> > > 
> > For me, the specified point corresponds to timeline 1, because at
> > that
> > time, the timeline is 1 indeed (when there is no timeline 2
> > branched
> > out yet). 
> > 
> > So in other word, my expectation is like I want to get the first
> > timeline which covers the specified time.
> > 
> > As you mentioned, there is no active timeline concept in postgres,
> > then
> > what is the best practice to meet my expectation? Do I have to
> > somehow
> > record the timestamp when archived wal's timeline has been changed,
> > then compare the specified recovery target time with those records
> > to
> > find the first timeline which covers that time?
> 
> As I wrote, that is the default behavior; PostgreSQL will stay on
> the timeline that was active when recovery was started.
> 
> Yours,
> Laurenz Albe

OK... Just take another example:

 A B
BASE-+-+--o1 (recover to A)  1
 | |   C
 +.|...+---o2 (regret, recover to B) 2
   |   |
   +...|..--o3 (regret again, recover to C)  3
   | 
   + 4


Suppose I'm at "o3" and want to recover to point "C". Because I want to
recover to the first timeline which covers this time point, it means I
wish to recover to timeline 2. Then the same question:

> > what is the best practice to meet my expectation? Do I have to
> > somehow
> > record the timestamp when archived wal's timeline has been changed,
> > then compare the specified recovery target time with those records
> > to
> > find the first timeline which covers that time?

---
Magodo
 





Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-10 Thread Laurenz Albe
magodo wrote:
> > By default, recovery will stay on the timeline where it started.
> > If you want to go to timeline 2 or 3, you have to specify
> > recovery_target_timeline.
> > 
> For me, the specified point corresponds to timeline 1, because at that
> time, the timeline is 1 indeed (when there is no timeline 2 branched
> out yet). 
> 
> So in other word, my expectation is like I want to get the first
> timeline which covers the specified time.
> 
> As you mentioned, there is no active timeline concept in postgres, then
> what is the best practice to meet my expectation? Do I have to somehow
> record the timestamp when archived wal's timeline has been changed,
> then compare the specified recovery target time with those records to
> find the first timeline which covers that time?

As I wrote, that is the default behavior; PostgreSQL will stay on
the timeline that was active when recovery was started.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-10 Thread magodo


On Wed, 2018-10-03 at 08:06 +0200, Laurenz Albe wrote:
> magodo wrote:
> > I read the document about recovery configuration, it says:
> 
> [...]
> 
> > Therefore, suppose I am going to recover to a point of some child
> > timeline, to identify the point, I have to specify either
> > recovery_target_name or recovery_target_time, and also specify the
> > recovery_target_timeline.
> > 
> > It is more like a tuple like (recovery_target_time,
> > recovery_target_timeline), that specify a real point among all
> > history
> > branches. Am I understand this correctly?
> 
> I think you understood the concept well enough.
> 
> > If yes, what I want to ask is that, though the timeline is
> > increasing
> > between different recovery, but each timestamp corresponds to a
> > timeline ID, one by one. So if I get a recovery_target_time, why
> > should
> > I still specify the recovery_target_timeline? 
> > 
> > Suppose following illustration:
> > 
> >  A B
> > BASE-+-+--o1 (recover to A)  1
> >  | |   C
> >  +.|...+---o2 (regret, recover to B) 2
> >    |   |
> >    +...|..--o3 (regret again, recover to C)  3
> >    | 
> >    + 4
> 
> Consider this   ^   point in time.
> |
> 
> Suppose you specify this point in time as recovery_target_time.
> 
> Then it is not clear which of the timelines you want to follow.
> The point of time exists in timeline 1, 2 and 3.
> 
> In other words, should recovery use the WAL from
> 00010ABC0012, 00020ABC0012 or
> 00030ABC0012?
> 
> By default, recovery will stay on the timeline where it started.
> If you want to go to timeline 2 or 3, you have to specify
> recovery_target_timeline.
> 
> Yours,
> Laurenz Albe

For me, the specified point corresponds to timeline 1, because at that
time, the timeline is 1 indeed (when there is no timeline 2 branched
out yet). 

So in other words, my expectation is that I want to get the first
timeline which covers the specified time.

As you mentioned, there is no active-timeline concept in PostgreSQL, so
what is the best practice to meet my expectation? Do I have to somehow
record the timestamp at which an archived WAL's timeline changed, and
then compare the specified recovery target time with those records to
find the first timeline which covers that time?

Thank you in anticipation!

---
Magodo





Re: pg9.6: PITR paused just after failover, need manual resume

2018-10-10 Thread magodo


On Wed, 2018-10-10 at 11:24 +0800, magodo wrote:
> Hello,
> 
> I'm figuring out how to do PITR on a HA setup for postgres 9.6. I
> almost manage to do it, while I faced following issue:
> 
> 
> [1. create basebackup]
> 
> [2. insert 2 rows]
> 
> on both primary and standby's pg_xlog
> 
> 00010005
> 
> rmgr: Transaction len (rec/tot): 34/34, tx:633, lsn:
> 0/05016120, prev 0/050160E0, desc: COMMIT 2018-10-10 03:04:58.459754
> UTC  - insert 1 (before: 2018-10-10 11:05:04 +08:00)
> 
> rmgr: Transaction len (rec/tot): 34/34, tx:634, lsn:
> 0/05016188, prev 0/05016148, desc: COMMIT 2018-10-10 03:05:07.010478
> UTC  - insert 2 (before: 2018-10-10 11:05:09 +08:00)
> 
> [3. failover and promote the standby]
> 
> 00010005 been archived as
> 00010005.partial, and a new wal is switched out
> named:
> 00020005, which contains above two commits.
> 
> [4. recover to "after insert 1st row"]
> 
> Stop new standby(the rewinded primary), and on the new
> primary(promoted
> standby), do following:
> 
> 1. stop DB
> 2. sync $PGDATA against basebackup
> 3. create a recovery.conf with following content:
> 
>restore_command = 'cp /var/lib/pgsql/9.6/data/archive/%f %p'
>recovery_target_timeline = 2
>recovery_target_time = '2018-10-10 11:05:04 +08:00'
> 
> 4. start DB
> 
> Then I expect server is recovered to the time before insert 2nd row
> but
> after insert 1st row. The result is as I expected, but it is in a
> "read-only transaction". The log shows following:
> 
> < 2018-10-10 03:09:56.790 UTC > LOG:  database system was
> interrupted;
> last known up at 2018-10-10 02:57:59 UTC
> < 2018-10-10 03:09:56.931 UTC > DEBUG:  restore_command = 'cp
> /var/lib/pgsql/9.6/data/archive/%f %p'
> < 2018-10-10 03:09:56.931 UTC > DEBUG:  recovery_target_timeline = 2
> < 2018-10-10 03:09:56.931 UTC > DEBUG:  recovery_target_time = '2018-
> 10-10 03:05:04+00'
> < 2018-10-10 03:09:56.934 UTC > LOG:  restored log file
> "0002.history" from archive
> < 2018-10-10 03:09:56.934 UTC > LOG:  starting point-in-time recovery
> to 2018-10-10 03:05:04+00
> < 2018-10-10 03:09:56.936 UTC > LOG:  restored log file
> "0002.history" from archive
> cp: cannot stat
> '/var/lib/pgsql/9.6/data/archive/00020004': No such
> file or directory
> < 2018-10-10 03:09:56.941 UTC > DEBUG:  could not restore file
> "00020004" from archive: child process exited with
> exit
> code
> 1
>   
> 
> < 2018-10-10 03:09:56.952 UTC > LOG:  restored log file
> "00010004" from archive
> < 2018-10-10 03:09:57.012 UTC > DEBUG:  got WAL segment from archive
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  checkpoint record is at
> 0/460
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  redo record is at 0/428;
> shutdown FALSE
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  next transaction ID: 0:632;
> next OID: 16385
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  next MultiXactId: 1; next
> MultiXactOffset: 0
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  oldest unfrozen transaction
> ID:
> 623, in database 1
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  oldest MultiXactId: 1, in
> database 1
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  commit timestamp Xid
> oldest/newest: 0/0
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  transaction ID wrap limit is
> 2147484270, limited by database with OID 1
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  MultiXactId wrap limit is
> 2147483648, limited by database with OID 1
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  starting up replication slots
> < 2018-10-10 03:09:57.013 UTC > DEBUG:  restoring replication slot
> from
> "pg_replslot/repl_slot/state"
> < 2018-10-10 03:09:57.015 UTC > DEBUG:  starting up replication
> origin
> progress state
> < 2018-10-10 03:09:57.020 UTC > DEBUG:  resetting unlogged relations:
> cleanup 1 init 0
> < 2018-10-10 03:09:57.021 UTC > DEBUG:  initializing for hot standby
> < 2018-10-10 03:09:57.021 UTC > LOG:  redo starts at 0/428
> < 2018-10-10 03:09:57.021 UTC > DEBUG:  recovery snapshots are now
> enabled
> < 2018-10-10 03:09:57.021 UTC > CONTEXT:  xlog redo at 0/428 for
> Standby/RUNNING_XACTS: nextXid 632 latestCompletedXid 631
> oldestRunningXid
> 632 

pg9.6: PITR paused just after failover, need manual resume

2018-10-10 Thread magodo


Hello,

I'm figuring out how to do PITR on a HA setup for postgres 9.6. I
almost manage to do it, while I faced following issue:


[1. create basebackup]

[2. insert 2 rows]

on both primary and standby's pg_xlog

00010005

rmgr: Transaction len (rec/tot): 34/34, tx:633, lsn:
0/05016120, prev 0/050160E0, desc: COMMIT 2018-10-10 03:04:58.459754
UTC  - insert 1 (before: 2018-10-10 11:05:04 +08:00)

rmgr: Transaction len (rec/tot): 34/34, tx:634, lsn:
0/05016188, prev 0/05016148, desc: COMMIT 2018-10-10 03:05:07.010478
UTC  - insert 2 (before: 2018-10-10 11:05:09 +08:00)

[3. failover and promote the standby]

00010005 been archived as
00010005.partial, and a new wal is switched out named:
00020005, which contains above two commits.

[4. recover to "after insert 1st row"]

Stop new standby(the rewinded primary), and on the new primary(promoted
standby), do following:

1. stop DB
2. sync $PGDATA against basebackup
3. create a recovery.conf with following content:

   restore_command = 'cp /var/lib/pgsql/9.6/data/archive/%f %p'
   recovery_target_timeline = 2
   recovery_target_time = '2018-10-10 11:05:04 +08:00'

4. start DB
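The read-only pause described in this message is the 9.5+ default: when a recovery target is set and hot_standby is enabled, recovery_target_action defaults to 'pause'. A sketch of the setting usually involved (stated from the PostgreSQL documentation, not from this thread):

```ini
# recovery.conf -- end recovery and promote automatically once the
# target is reached, instead of pausing in a read-only state
recovery_target_action = 'promote'
```

Alternatively, a server already paused at the target can be resumed with `SELECT pg_xlog_replay_resume();` (renamed `pg_wal_replay_resume()` in version 10).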

Then I expect the server to be recovered to the time after inserting the
1st row but before inserting the 2nd row. The result is as I expected, but
the server is stuck in a "read-only transaction" state. The log shows the
following:

< 2018-10-10 03:09:56.790 UTC > LOG:  database system was interrupted;
last known up at 2018-10-10 02:57:59 UTC
< 2018-10-10 03:09:56.931 UTC > DEBUG:  restore_command = 'cp
/var/lib/pgsql/9.6/data/archive/%f %p'
< 2018-10-10 03:09:56.931 UTC > DEBUG:  recovery_target_timeline = 2
< 2018-10-10 03:09:56.931 UTC > DEBUG:  recovery_target_time = '2018-
10-10 03:05:04+00'
< 2018-10-10 03:09:56.934 UTC > LOG:  restored log file
"0002.history" from archive
< 2018-10-10 03:09:56.934 UTC > LOG:  starting point-in-time recovery
to 2018-10-10 03:05:04+00
< 2018-10-10 03:09:56.936 UTC > LOG:  restored log file
"0002.history" from archive
cp: cannot stat
'/var/lib/pgsql/9.6/data/archive/00020004': No such
file or directory
< 2018-10-10 03:09:56.941 UTC > DEBUG:  could not restore file
"00020004" from archive: child process exited with exit code 1

< 2018-10-10 03:09:56.952 UTC > LOG:  restored log file
"00010004" from archive
< 2018-10-10 03:09:57.012 UTC > DEBUG:  got WAL segment from archive
< 2018-10-10 03:09:57.013 UTC > DEBUG:  checkpoint record is at
0/460
< 2018-10-10 03:09:57.013 UTC > DEBUG:  redo record is at 0/428;
shutdown FALSE
< 2018-10-10 03:09:57.013 UTC > DEBUG:  next transaction ID: 0:632;
next OID: 16385
< 2018-10-10 03:09:57.013 UTC > DEBUG:  next MultiXactId: 1; next
MultiXactOffset: 0
< 2018-10-10 03:09:57.013 UTC > DEBUG:  oldest unfrozen transaction ID:
623, in database 1
< 2018-10-10 03:09:57.013 UTC > DEBUG:  oldest MultiXactId: 1, in
database 1
< 2018-10-10 03:09:57.013 UTC > DEBUG:  commit timestamp Xid
oldest/newest: 0/0
< 2018-10-10 03:09:57.013 UTC > DEBUG:  transaction ID wrap limit is
2147484270, limited by database with OID 1
< 2018-10-10 03:09:57.013 UTC > DEBUG:  MultiXactId wrap limit is
2147483648, limited by database with OID 1
< 2018-10-10 03:09:57.013 UTC > DEBUG:  starting up replication slots
< 2018-10-10 03:09:57.013 UTC > DEBUG:  restoring replication slot from
"pg_replslot/repl_slot/state"
< 2018-10-10 03:09:57.015 UTC > DEBUG:  starting up replication origin
progress state
< 2018-10-10 03:09:57.020 UTC > DEBUG:  resetting unlogged relations:
cleanup 1 init 0
< 2018-10-10 03:09:57.021 UTC > DEBUG:  initializing for hot standby
< 2018-10-10 03:09:57.021 UTC > LOG:  redo starts at 0/428
< 2018-10-10 03:09:57.021 UTC > DEBUG:  recovery snapshots are now
enabled
< 2018-10-10 03:09:57.021 UTC > CONTEXT:  xlog redo at 0/428 for
Standby/RUNNING_XACTS: nextXid 632 latestCompletedXid 631
oldestRunningXid
632

< 2018-10-10 03:09:57.021 UTC > DEBUG:  end of backup reached
< 2018-10-10 03:09:57.021 UTC > CONTEXT:  xlog redo at 0/4D0 for
XLOG/BACKUP_END: 0/428
< 2018-10-10 03:09:57.022 UTC > LOG:  consistent recovery state reached
at 0/4F8
< 2018-10-10 03:09:57.023 UTC > DEBUG:  checkpointer updated shared
memory configuration values
< 2018-10-10 03:09:57.024 UTC > LOG:  database system is ready to
accept read only connections
< 2018-10-10 03:09:57.028 UTC > DEBUG:  archived transaction 

Re: how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-03 Thread Laurenz Albe
magodo wrote:
> I read the document about recovery configuration, it says:

[...]

> Therefore, suppose I am going to recover to a point of some child
> timeline, to identify the point, I have to specify either
> recovery_target_name or recovery_target_time, and also specify the
> recovery_target_timeline.
> 
> It is more like a tuple like (recovery_target_time,
> recovery_target_timeline), that specify a real point among all history
> branches. Am I understand this correctly?

I think you understood the concept well enough.

> If yes, what I want to ask is that, though the timeline is increasing
> between different recovery, but each timestamp corresponds to a
> timeline ID, one by one. So if I get a recovery_target_time, why should
> I still specify the recovery_target_timeline? 
> 
> Suppose following illustration:
> 
>  A B
> BASE-+-+--o1 (recover to A)  1
>  | |   C
>  +.|...+---o2 (regret, recover to B) 2
>|   |
>+...|..--o3 (regret again, recover to C)  3
>| 
>+ 4

Consider this   ^   point in time.
|

Suppose you specify this point in time as recovery_target_time.

Then it is not clear which of the timelines you want to follow.
The point of time exists in timeline 1, 2 and 3.

In other words, should recovery use the WAL from
00010ABC0012, 00020ABC0012 or 00030ABC0012?

By default, recovery will stay on the timeline where it started.
If you want to go to timeline 2 or 3, you have to specify
recovery_target_timeline.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




how to identify the timeline of specified recovery_target_time when do multiple PITR

2018-10-02 Thread magodo


I read the document about recovery configuration, it says:

   recovery_target_timeline (string)

   ...

   Other than that you only need to set this parameter in complex re-
   recovery situations, where you need to return to a state that itself
   was reached after a point-in-time recovery. See Section 25.3.5 for
   discussion.

   ...

And in section 25.3.5 it says:

   ...

   If you wish to recover into some child timeline (that is, you want
   to return to some state that was itself generated after a recovery
   attempt), you need to specify the target timeline ID in
   recovery.conf.

   ...

Therefore, suppose I am going to recover to a point on some child
timeline; to identify that point, I have to specify either
recovery_target_name or recovery_target_time, and also specify
recovery_target_timeline.

It is more like a tuple (recovery_target_time,
recovery_target_timeline) that specifies a real point among all history
branches. Am I understanding this correctly?

If yes, what I want to ask is: although the timeline ID increases
between different recoveries, each timestamp corresponds to exactly one
timeline ID. So if I have a recovery_target_time, why should I still
specify recovery_target_timeline?

Suppose following illustration:

 A B
BASE-+-+--o1 (recover to A)  1
 | |   C
 +.|...+---o2 (regret, recover to B) 2
   |   |
   +...|..--o3 (regret again, recover to C)  3
   | 
   + 4


Legend:

   BASE: basebackup
   A-Z: recovery point
   ---: active wal history (continuous among branches)
   ...: inactive wal history
   oN: point to do PITR

If I am at "o3" and I want to recover to "C", then if I don't specify a
timeline ID, I will end up in the same state as at "o1". Only if I
specify the timeline ID as "2" can I get the state I expect (as
illustrated).

Why not just recover along the history from timeline 1 to the timeline
specified by the recovery target? The only reason I can imagine is that
there is a chance of two timelines being active at the same point, but
what is the use case of that (if it exists)?





Re: Issue with PostgreSQL replication and PITR

2018-03-28 Thread Laurenz Albe
Amee Sankhesara - Quipment India wrote:
> We have setup replication and PITR on this PostgreSQL server and wal files
> for replication and PITR we are storing on the another server which is
> shared with primary and secondary server.
>  
> For doing some OS migration activity for the PostgreSQL database servers
> We created two new replication server and promoted to new master server for 
> the same
>  
> During that activity one .partial file is created on master server and after
> that wal files are not copied in shared folder from which secondary server
> take it and restore it. All Wal files after .partial file are stored in 
> pg_xlog
> folder of master data so it is increasing size of data directory of master 
> server.
>  
> Wal files are not copying in shared location of master and secondary server 
> but
> still replication is running and on secondary server data is up to date.
>  
> In logs getting only below warning :
> Warning : The failed archive command was:
>   copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial"
>   "\\10.0.0.35\Archive_Replication\00010A8800F8.partial"
>   | copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial"
>   "\\10.0.0.35\Archive_PITR\00010A8800F8.partial"
>  
> Can’t figure out that in shared folder wal files are not copied but still
> secondary server is running in sync with master server.
>  
> What could be reason behind this?

If archiving is not working and the standby can still replicate, I would assume
that streaming replication is configured.

Is "primary_conninfo" defined in "recovery.conf"?
Are there WAL sender and WAL receiver processes?
 
> And how to remove this .partial file from pg_xlog safely and how to start
> copying wal files from pg_xlog of master server to shared location of master
> and secondary server ?

You should try to figure out why your "archive_command" fails; perhaps
try it by hand.  Once you have got that fixed, archiving will continue normally.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Issue with PostgreSQL replication and PITR

2018-03-27 Thread Amee Sankhesara - Quipment India

Hello,

We have a windows(2016) setup for PostgreSQL(Version 9.6.5).

We have setup replication and PITR on this PostgreSQL server and wal files for 
replication and PITR we are storing on the another server which is shared with 
primary and secondary server.

As part of an OS migration activity for the PostgreSQL database servers,
we created two new replica servers and promoted one of them to be the new
master.

During that activity, one .partial file was created on the master server, and 
since then WAL files are no longer copied to the shared folder from which the 
secondary server takes and restores them. All WAL files after the .partial file 
remain in the master's pg_xlog folder, so the size of the master's data 
directory keeps growing.

WAL files are not being copied to the shared location, but replication is still 
running and the data on the secondary server is up to date.

In logs getting only below warning :
Warning : The failed archive command was: copy 
"D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial" 
"\\10.0.0.35\Archive_Replication\00010A8800F8.partial"
 | copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial" 
"\\10.0.0.35\Archive_PITR\00010A8800F8.partial"

We can't figure out why the secondary server stays in sync with the master even 
though WAL files are not being copied to the shared folder.

What could be the reason behind this?

And how can we safely remove this .partial file from pg_xlog and resume copying 
WAL files from the master's pg_xlog to the shared location for the master and 
secondary servers?

Thanks,
Amee Sankhesara
Database Developer





Re: Warning of .partial wal file in PITR and Replication Environment

2018-03-26 Thread Michael Paquier
On Mon, Mar 26, 2018 at 11:52:53AM +, Amee Sankhesara - Quipment India 
wrote:
> Warning : The failed archive command was: copy
> "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial"
> "\\10.0.0.35\Archive_Replication\00010A8800F8.partial" |
> copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial"
> "\\10.0.0.35\Archive_PITR\00010A8800F8.partial"

Am I reading that correctly, and you are trying to copy the same file twice?
Why?  Using a plain copy may also cause corruption if you are not careful:
unless write caching is disabled on Windows, the data goes to the OS cache
rather than directly to disk.
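A sketch of such a duplicate-tolerant archive command, in POSIX shell for
illustration (the setup in this thread is Windows, and the two destination
directories stand in for the Archive_Replication and Archive_PITR shares):

```shell
# archive_wal: duplicate-tolerant archive step, usable from
#   archive_command = 'archive_wal.sh %p %f'
# It succeeds if the segment is already archived with identical contents,
# and fails if a different file of the same name is already there.
archive_wal() {
    src="$1"; name="$2"
    for dest in "$ARCHIVE_REPL" "$ARCHIVE_PITR"; do
        mkdir -p "$dest" || return 1
        if [ -f "$dest/$name" ]; then
            # Already archived: succeed only if the contents match exactly.
            cmp -s "$src" "$dest/$name" || return 1
        else
            # Copy to a temp name, then rename, so readers never see a
            # half-written segment.
            cp "$src" "$dest/.$name.tmp" && mv "$dest/.$name.tmp" "$dest/$name" || return 1
        fi
    done
}

# Demo with throwaway directories instead of the real network shares:
ARCHIVE_REPL=$(mktemp -d); ARCHIVE_PITR=$(mktemp -d)
seg=$(mktemp); printf 'wal' > "$seg"
archive_wal "$seg" 000000010000000000000001 && echo "first archive ok"
archive_wal "$seg" 000000010000000000000001 && echo "duplicate tolerated"
```

The atomic copy-then-rename also addresses the write-caching concern above:
the final name only appears once the file is fully written.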

> Because of this, PITR backups have stopped.
> 
> So we would like to know how to clean up the ".partial" file from pg_xlog
> and run PITR backups smoothly.
> We would also like to know whether there is any database consistency issue.

This is the last, partial WAL segment from the previous timeline.
Normally such files archive correctly, so you may want to
consider a more advanced archive command that can handle duplicates.
It is impossible to give a strict answer before knowing what you are
looking for in terms of WAL archive redundancy.

You may need to set recovery_target_timeline = 'latest' in your recovery.conf
settings as well.
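For reference, the 9.6 spelling of the parameter is recovery_target_timeline; a
minimal recovery.conf fragment might look like:

```
# recovery.conf -- follow timeline switches after a promotion instead of
# stopping at the boundary of the old timeline
recovery_target_timeline = 'latest'
```

With the default (recover along the timeline of the base backup), recovery can
stop right where the old timeline ends, which is where the .partial segment sits.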
--
Michael




Re: How to perform PITR when all of the logs won't fit on the drive

2018-03-01 Thread Alan Hodgson
On Thu, 2018-03-01 at 17:28 -0500, Tony Sullivan wrote:
Hello,

I have a situation where something was deleted from a database that
shouldn't have been so I am having to take a base backup and perform a
point-in-time-recovery. The problem I have is that the decompressed WAL
files will not fit on the drive of the machine I am trying to do the
restore on.

Your restore_command can be a complex command or shell script that can
transfer WAL files as needed from a network source.
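As a sketch of that idea (the host name, paths, and target time below are
placeholders, not values from this thread): restore_command is invoked once per
segment the server asks for, so each WAL file can be fetched over the network
on demand, and the full decompressed archive never has to fit on the local disk.

```
# recovery.conf -- fetch WAL on demand instead of staging the whole archive
restore_command = 'scp -q archive-host:/wal_archive/%f "%p"'
recovery_target_time = '2018-03-01 12:00:00'
```

PostgreSQL replaces %f with the segment file name it needs and %p with the
local path to copy it to; segments are consumed one at a time during replay.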

How to perform PITR when all of the logs won't fit on the drive

2018-03-01 Thread Tony Sullivan
Hello,

I have a situation where something was deleted from a database that
shouldn't have been so I am having to take a base backup and perform a
point-in-time-recovery. The problem I have is that the decompressed WAL
files will not fit on the drive of the machine I am trying to do the
restore on.

I am wondering whether I can sort the WAL files by date, copy a batch of them
into the directory where they belong, copy another batch once those have been
restored, and so on, or whether I will need to find some other way of
performing the recovery.






Re: PITR Multiple recoveries

2018-02-08 Thread Sébastien Boutté
Hi Jeff,

You mean this? Is the following correct?
hot_standby = on, to be sure it replays WAL and stays in standby mode?

1. Stop Postgres
2. Restore Local basebackup (untar, un gzip)
3. Create recovery.conf with standby_mode = 'on', and set hot_standby = on in
postgresql.conf
4. Start Postgres
5. Wait to see in log "database system is ready to accept read only
connections"
6. To Check
7. Stop Postgres
8. ... some time passed
9. resync wals with latest
10. recreate recovery.conf without standby_mode + hot_standby off
11. Start Postgres
12. Wait recovery.done
13. To Check
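A sketch of the recovery.conf for steps 3-5 (the archive path is a placeholder):
standby_mode keeps the server in recovery instead of promoting it, so WAL
synced later can still be replayed.

```
# recovery.conf (steps 3-5): stay in recovery instead of promoting
standby_mode = 'on'
restore_command = 'cp /path/to/wal_archive/%f "%p"'

# postgresql.conf: allow read-only connections while in recovery
hot_standby = on
```

For the final run, removing standby_mode lets recovery finish and the server
open as a primary, which is the one-way step Jeff describes below.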

Thanks

Sebastien




On Thu, Feb 8, 2018 at 5:11 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Thu, Feb 8, 2018 at 5:07 AM, Sébastien Boutté <
> sebastien.bou...@gmail.com> wrote:
>
>> Hi all,
>>
>> I'm trying to make my server doing PITR backup, i follow the rules on
>> https://www.postgresql.org/docs/9.5/static/continuous-archiving.html.
>>
>> On my local server, i would like to resync multiple times my local
>> database:
>>
>> What i'm doing :
>> 1. Stop Postgres
>> 2. Restore Local basebackup (untar, un gzip)
>> 3. Create recovery.conf
>> 4. Start Postgres
>> 5. Wait recovery.done
>> 6. At this point, it's OK.
>>
>
> Once recovery is done and the database is open, it is now a primary.  It
> is no longer eligible for further WAL replay.
(Unless it was only open in standby mode.)
>
> Cheers,
>
> Jeff
>


Re: PITR Multiple recoveries

2018-02-08 Thread Jeff Janes
On Thu, Feb 8, 2018 at 5:07 AM, Sébastien Boutté <sebastien.bou...@gmail.com
> wrote:

> Hi all,
>
> I'm trying to make my server doing PITR backup, i follow the rules on
> https://www.postgresql.org/docs/9.5/static/continuous-archiving.html.
>
> On my local server, i would like to resync multiple times my local
> database:
>
> What i'm doing :
> 1. Stop Postgres
> 2. Restore Local basebackup (untar, un gzip)
> 3. Create recovery.conf
> 4. Start Postgres
> 5. Wait recovery.done
> 6. At this point, it's OK.
>

Once recovery is done and the database is open, it is now a primary.  It is
no longer eligible for further WAL replay.
(Unless it was only open in standby mode.)

Cheers,

Jeff


PITR Multiple recoveries

2018-02-08 Thread Sébastien Boutté
Hi all,

I'm trying to set up PITR backups for my server, following
https://www.postgresql.org/docs/9.5/static/continuous-archiving.html.

On my local server, I would like to resync my local database multiple times:

What i'm doing :
1. Stop Postgres
2. Restore Local basebackup (untar, un gzip)
3. Create recovery.conf
4. Start Postgres
5. Wait recovery.done
6. At this point, it's OK.
7. Stop Postgres
8. ... some time passed
9. resync wals with latest
10. recreate recovery.conf
11. Start Postgres
12. Wait recovery.done
13. At this point, it's not OK: the latest WALs have not been replayed on the
local server.

Do you know what I'm missing? Thanks

Regards,

Sebastien Boutte