Re: Idempotent DDL Updates

2021-08-27 Thread Julien Rouhaud
On Sat, Aug 28, 2021 at 2:19 AM Miles Elam  wrote:
>
> What is the general consensus within the community on idempotent DDL scripts, 
> ie. consistent usage of IF EXISTS/IF NOT EXISTS/OR REPLACE for SQL init files 
> that get checked into source control?
> [...]
> The drawbacks I've run across are those areas where the EXISTS/REPLACE 
> constructs aren't implemented like roles management, domains, constraints, 
> etc. However those cases seem to be handled with only minor increases in 
> complexity with judicious use of inline plpgsql.
>
> In others' opinions, has DDL idempotency been viable for maintenance of PG 
> databases for you in production?

Note that the IF EXISTS / IF NOT EXISTS are *not* idempotent.  If you
need to write idempotent schema update scripts, you need to query the
catalogs to check if the specific change you want to apply has already
been applied or not.
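
For example, a sketch of that approach (the table and column names are only
illustrative):

DO $$
BEGIN
    -- Apply the change only if it is not already present; a plain
    -- IF NOT EXISTS would not tell you whether an existing column
    -- actually has the definition you expect.
    IF NOT EXISTS (
        SELECT 1
        FROM   information_schema.columns
        WHERE  table_schema = 'public'
          AND  table_name   = 'customer'
          AND  column_name  = 'archived_at'
    ) THEN
        ALTER TABLE public.customer ADD COLUMN archived_at timestamptz;
    END IF;
END
$$;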




Re:

2021-08-27 Thread obi reddy
Dear Gogala

Pg_RMAN is not supported on Windows machines. I have a small question: can I
set archive_command and collect the WAL files, like an incremental backup?
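
For example, would something like this in postgresql.conf be enough? (A rough
sketch; the Windows target directory is only illustrative, and WAL archiving is
not the same thing as a true incremental backup.)

wal_level = replica
archive_mode = on
archive_command = 'copy "%p" "C:\\wal_archive\\%f"'   # Windows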

On Sat, 28 Aug 2021, 1:15 am Mladen Gogala,  wrote:

> On 8/27/21 8:59 AM, Ravi Krishna wrote:
> >>
> >> how to take an incremental backup on a PostgreSQL Windows machine?
> >>
> > AFAIK PG has no concept of incremental backup.  pgbackrest has, but not
> sure whether it runs on Windows.
>
> PG_RMAN can do incremental backups. I think it does the same thing as
> Oracle without block change tracking enabled: it backs up all the files
> changed since the last full backup. However, that doesn't help you much
> because of vacuum, which can touch a file well after it has been
> referenced in a transaction. I like pg_rman because it can automatically
> delete old backups and keep only the 4 latest backups.
>
> --
>
> Mladen Gogala
> Database Consultant
> Tel: (347) 321-1217
> https://dbwhisperer.wordpress.com
>
>
>
>


Re: Unexpected block ID found when reading data

2021-08-27 Thread Gilar Ginanjar
Hi, sorry it has taken me so long to reply.

It threw the same errors. I can't find any solution. It frustrated me, and I've
been on a break since. ;)


> On 5 Aug 2021, at 00.50, Vijaykumar Jain  
> wrote:
> 
> On Tue, 3 Aug 2021 at 20:37, Gilar Ginanjar wrote:
> I’m not sure which patch version i used to dump, but i was using postgre 12.5 
> for pg_dump back then.
> 
> I’m running pg_restore -f dbdump.backup right now, I think it will take some 
> times because it has a large size (around 9 GB). There are no issues yet.
> 
> Did this complete without issues ? or did it throw the same errors ? 
>  



Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Tom Lane
Adrian Klaver  writes:
> On 8/27/21 2:23 PM, Tom Lane wrote:
>> Those queries are coming from getFormattedTypeName(), which is used
>> for function arguments and the like.  I'm not quite sure why Hubert
>> is seeing 5000 such calls in a database with only ~100 functions;
>> surely they don't all have an average of 50 arguments?

> Could be.

Maybe.  I'm disturbed by the discrepancy between my result (about
10% of pg_dump's queries are these) and Hubert's (over 50% are).
I'd like to know the reason for that before we push forward.

regards, tom lane




Re: use fopen unknown resource

2021-08-27 Thread ourdiaspora


‐‐‐ Original Message ‐‐‐

On Friday, August 27th, 2021 at 11:10 PM, Adrian Klaver 
 wrote:

> https://www.php.net/manual/en/pdo.pgsqlcopyfromfile.php
>

"
public PDO::pgsqlCopyFromFile(
string $table_name,
string $filename,
"

Sorry, but I do not understand; those lines do not explain what to write in the
PHP file.

So far have written something based on the fgetcsv example
(https://www.php.net/manual/en/function.fgetcsv.php).

Instead of:
"
...
fopen("test.csv", "r"))
...
"

it would _not_ be possible to write the following, correct?:
"
...
fopen("", "r"))




Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Stephen Frost
Greetings,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> I experimented with the attached, very quick-n-dirty patch to collect
> >> format_type results during the initial scan of pg_type, instead.  On the
> >> regression database in HEAD, it reduces the number of queries pg_dump
> >> issues from 3260 to 2905; but I'm having a hard time detecting any net
> >> performance change.
> 
> > Seems like the issue here is mainly just the latency of each query being
> > rather high compared to most use-cases, so local testing where there's
> > basically zero latency wouldn't see any change in timing, but throw a
> > trans-atlantic or worse amount of latency between the system running
> > pg_dump and the PG server and you'd see notable wall-clock savings in
> > time.
> 
> Yeah.  What I was more concerned about was the potential downside
> of running format_type() for each pg_type row, even though we might
> use only a few of those results.  The fact that I'm *not* seeing
> a performance hit with a local server is encouraging from that
> standpoint.

Ah, yes, agreed.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Tom Lane
Stephen Frost  writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> I experimented with the attached, very quick-n-dirty patch to collect
>> format_type results during the initial scan of pg_type, instead.  On the
>> regression database in HEAD, it reduces the number of queries pg_dump
>> issues from 3260 to 2905; but I'm having a hard time detecting any net
>> performance change.

> Seems like the issue here is mainly just the latency of each query being
> rather high compared to most use-cases, so local testing where there's
> basically zero latency wouldn't see any change in timing, but throw a
> trans-atlantic or worse amount of latency between the system running
> pg_dump and the PG server and you'd see notable wall-clock savings in
> time.

Yeah.  What I was more concerned about was the potential downside
of running format_type() for each pg_type row, even though we might
use only a few of those results.  The fact that I'm *not* seeing
a performance hit with a local server is encouraging from that
standpoint.

regards, tom lane




Re: use fopen unknown resource

2021-08-27 Thread Adrian Klaver

On 8/27/21 2:59 PM, ourdiaspora wrote:

Readers,

Objective: to import a CSV file into a PostgreSQL database. If I understood
correctly, the manual section for 'fopen'
(https://www.php.net/manual/en/function.fopen.php) shows that the file has a
known ("resource") name that already exists.

Please, what is the syntax to pass a file name that is not known in advance to
the 'fopen' function?



Why not use?:

https://www.php.net/manual/en/pdo.pgsqlcopyfromfile.php
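
For example, a rough sketch (the DSN, credentials, table name and delimiter are
placeholders for whatever your setup uses; the point is that the file name is an
ordinary variable, not a hard-coded string):

<?php
// Connect via PDO; adjust host, database, user and password to your setup.
$pdo = new PDO('pgsql:host=localhost;dbname=mydb', 'myuser', 'mypassword');

// The file name can come from user input, a CLI argument, etc.
$filename = $argv[1] ?? 'people.csv';

// Load the whole CSV into the table with one COPY; ',' is the delimiter.
if (!$pdo->pgsqlCopyFromFile('people', $filename, ',')) {
    fwrite(STDERR, "COPY failed\n");
}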


--
Adrian Klaver
adrian.kla...@aklaver.com




use fopen unknown resource

2021-08-27 Thread ourdiaspora
Readers,

Objective: to import a CSV file into a PostgreSQL database. If I understood
correctly, the manual section for 'fopen'
(https://www.php.net/manual/en/function.fopen.php) shows that the file has a
known ("resource") name that already exists.

Please, what is the syntax to pass a file name that is not known in advance to
the 'fopen' function?





Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Stephen Frost
Greetings,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Laurenz Albe  writes:
> > On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:
> >> In total, there were 5000 queries:
> >> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)
> >> But there were only 83 separate oids that were scanned.
> 
> > That is a strong argument for using a hash table to cache the types.
> 
> Those queries are coming from getFormattedTypeName(), which is used
> for function arguments and the like.  I'm not quite sure why Hubert
> is seeing 5000 such calls in a database with only ~100 functions;
> surely they don't all have an average of 50 arguments?
> 
> I experimented with the attached, very quick-n-dirty patch to collect
> format_type results during the initial scan of pg_type, instead.  On the
> regression database in HEAD, it reduces the number of queries pg_dump
> issues from 3260 to 2905; but I'm having a hard time detecting any net
> performance change.

Seems like the issue here is mainly just the latency of each query being
rather high compared to most use-cases, so local testing where there's
basically zero latency wouldn't see any change in timing, but throw a
trans-atlantic or worse amount of latency between the system running
pg_dump and the PG server and you'd see notable wall-clock savings in
time.

Only took a quick look but generally +1 on reducing the number of
queries that pg_dump is doing and the changes suggested looked good to
me.

Thanks,

Stephen


signature.asc
Description: PGP signature


Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Adrian Klaver

On 8/27/21 2:23 PM, Tom Lane wrote:

Laurenz Albe  writes:

On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:

In total, there were 5000 queries:
SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)
But there were only 83 separate oids that were scanned.



That is a strong argument for using a hash table to cache the types.


Those queries are coming from getFormattedTypeName(), which is used
for function arguments and the like.  I'm not quite sure why Hubert
is seeing 5000 such calls in a database with only ~100 functions;
surely they don't all have an average of 50 arguments?


Could be.

From the stats post:

"Based on my reading of queries in order it seems to follow the pattern of:

One call for:

SELECT  proretset,  prosrc,  probin,  provolatile,  proisstrict, 
prosecdef,  lanname,  proconfig,  procost,  prorows, 
pg_catalog.pg_get_function_arguments(p.oid) AS funcargs, 
pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs, 
pg_catalog.pg_get_function_result(p.oid) AS funcresult,  proleakproof,  array_to_string(protrftypes, 
' ') AS protrftypes,  proparallel,  prokind,  prosupport,  NULL AS 
prosqlbody  FROM pg_catalog.pg_proc p, pg_catalog.pg_language l  WHERE 
p.oid = 'SOME_NUMBER'::pg_catalog.oid AND l.oid = p.prolang


and then one or more:

SELECT pg_catalog.format_type('SOME_NUMBER'::pg_catalog.oid, NULL)


In one case, after the proc query, there were 94 consecutive
pg_catalog.format_type queries.
"




I experimented with the attached, very quick-n-dirty patch to collect
format_type results during the initial scan of pg_type, instead.  On the
regression database in HEAD, it reduces the number of queries pg_dump
issues from 3260 to 2905; but I'm having a hard time detecting any net
performance change.

(This is not meant for commit as-is; notably, I didn't bother to fix
getTypes' code paths for pre-9.6 servers.  It should be fine for
performance testing though.)

regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Idempotent DDL Updates

2021-08-27 Thread Rob Sargent



> On Aug 27, 2021, at 1:32 PM, Adrian Klaver  wrote:
> 
> On 8/27/21 11:19 AM, Miles Elam wrote:
>> What is the general consensus within the community on idempotent DDL 
>> scripts, ie. consistent usage of IF EXISTS/IF NOT EXISTS/OR REPLACE for SQL 
>> init files that get checked into source control?
>> My experience has been that it's much easier to manage than an ever-growing 
>> set of migration files, but I'm just a data point of one. Obviously for 
>> other engines that don't support transactional DDL, it's a non-starter, 
>> which leads me toward the notion that its lack of obvious popularity is due 
>> to its limited cross-engine viability. But PG does have transactional DDL, 
>> preventing incomplete DDL updates. However this may just be my personal bias 
>> talking. Yet I cannot seem to discount the advantages over ever-increasing 
>> volumes of Flyway-style migration files & directories:
For production or dev environments?
The latter is made easier if the baseline moves forward regularly. For the
former, the sticking point possibly has as much to do with data migration as DDL
evolution, especially if there are multiple instances (many customers).
>>  * being able to do diffs of changes in source control
>>  * limiting the size of the total SQL over time relative to what's actually 
>> in the DB structure
>>  * much more easily determining the most current version of a 
>> function/procedure (this bit me hard in the past with dozens of migration 
>> files)
>>  * the ability to prune old changes that no longer apply to any deployed 
>> version of the database
>>  * treating database structure as code
>> The drawbacks I've run across are those areas where the EXISTS/REPLACE 
>> constructs aren't implemented like roles management, domains, constraints, 
>> etc. However those cases seem to be handled with only minor increases in 
>> complexity with judicious use of inline plpgsql.
>> In others' opinions, has DDL idempotency been viable for maintenance of PG 
>> databases for you in production?
> 
> For me at least, you will need to show examples of what you are trying to achieve.
> I'm not seeing how a migration (change) can happen without a change of some
> sort. More to the point, how does *EXISTS/OR REPLACE help?
> 
>> - Miles Elam
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 




Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Tom Lane
Laurenz Albe  writes:
> On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:
>> In total, there were 5000 queries:
>> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)
>> But there were only 83 separate oids that were scanned.

> That is a strong argument for using a hash table to cache the types.

Those queries are coming from getFormattedTypeName(), which is used
for function arguments and the like.  I'm not quite sure why Hubert
is seeing 5000 such calls in a database with only ~100 functions;
surely they don't all have an average of 50 arguments?

I experimented with the attached, very quick-n-dirty patch to collect
format_type results during the initial scan of pg_type, instead.  On the
regression database in HEAD, it reduces the number of queries pg_dump
issues from 3260 to 2905; but I'm having a hard time detecting any net
performance change.

(This is not meant for commit as-is; notably, I didn't bother to fix
getTypes' code paths for pre-9.6 servers.  It should be fine for
performance testing though.)

regards, tom lane

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6adbd20778..efe1be533f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5088,6 +5088,7 @@ getTypes(Archive *fout, int *numTypes)
 	int			i_oid;
 	int			i_typname;
 	int			i_typnamespace;
+	int			i_ftypname;
 	int			i_typacl;
 	int			i_rtypacl;
 	int			i_inittypacl;
@@ -5129,6 +5130,7 @@ getTypes(Archive *fout, int *numTypes)
 
 		appendPQExpBuffer(query, "SELECT t.tableoid, t.oid, t.typname, "
 		  "t.typnamespace, "
+		  "pg_catalog.format_type(t.oid, NULL) AS ftypname, "
 		  "%s AS typacl, "
 		  "%s AS rtypacl, "
 		  "%s AS inittypacl, "
@@ -5211,6 +5213,7 @@ getTypes(Archive *fout, int *numTypes)
 	i_oid = PQfnumber(res, "oid");
 	i_typname = PQfnumber(res, "typname");
 	i_typnamespace = PQfnumber(res, "typnamespace");
+	i_ftypname = PQfnumber(res, "ftypname");
 	i_typacl = PQfnumber(res, "typacl");
 	i_rtypacl = PQfnumber(res, "rtypacl");
 	i_inittypacl = PQfnumber(res, "inittypacl");
@@ -5232,6 +5235,7 @@ getTypes(Archive *fout, int *numTypes)
 		tyinfo[i].dobj.name = pg_strdup(PQgetvalue(res, i, i_typname));
 		tyinfo[i].dobj.namespace =
 			findNamespace(atooid(PQgetvalue(res, i, i_typnamespace)));
+		tyinfo[i].ftypname = pg_strdup(PQgetvalue(res, i, i_ftypname));
 		tyinfo[i].rolname = pg_strdup(PQgetvalue(res, i, i_rolname));
 		tyinfo[i].typacl = pg_strdup(PQgetvalue(res, i, i_typacl));
 		tyinfo[i].rtypacl = pg_strdup(PQgetvalue(res, i, i_rtypacl));
@@ -18892,12 +18896,11 @@ findDumpableDependencies(ArchiveHandle *AH, const DumpableObject *dobj,
  *
  * This does not guarantee to schema-qualify the output, so it should not
  * be used to create the target object name for CREATE or ALTER commands.
- *
- * TODO: there might be some value in caching the results.
  */
 static char *
 getFormattedTypeName(Archive *fout, Oid oid, OidOptions opts)
 {
+	TypeInfo   *typeInfo;
 	char	   *result;
 	PQExpBuffer query;
 	PGresult   *res;
@@ -18910,6 +18913,16 @@ getFormattedTypeName(Archive *fout, Oid oid, OidOptions opts)
 			return pg_strdup("NONE");
 	}
 
+	typeInfo = findTypeByOid(oid);
+	if (typeInfo)
+		return pg_strdup(typeInfo->ftypname);
+
+	/*
+	 * Fall back to asking the server.  XXX we could only reach this code if
+	 * getTypes() didn't see the type, which should never happen.  If it did
+	 * happen, it'd imply that we don't have proper dependencies for the
+	 * object we're trying to describe.  Maybe just throw an error instead?
+	 */
 	query = createPQExpBuffer();
 	appendPQExpBuffer(query, "SELECT pg_catalog.format_type('%u'::pg_catalog.oid, NULL)",
 	  oid);
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f5e170e0db..29af845ece 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -166,9 +166,11 @@ typedef struct _typeInfo
 	DumpableObject dobj;
 
 	/*
-	 * Note: dobj.name is the pg_type.typname entry.  format_type() might
-	 * produce something different than typname
+	 * Note: dobj.name is the raw pg_type.typname entry.  ftypname is the
+	 * result of format_type(), which will be quoted if needed, and might be
+	 * schema-qualified too.
 	 */
+	char	   *ftypname;
 	char	   *rolname;		/* name of owner, or empty string */
 	char	   *typacl;
 	char	   *rtypacl;


Re: Idempotent DDL Updates

2021-08-27 Thread Adrian Klaver

On 8/27/21 11:19 AM, Miles Elam wrote:
What is the general consensus within the community on idempotent DDL 
scripts, ie. consistent usage of IF EXISTS/IF NOT EXISTS/OR REPLACE for 
SQL init files that get checked into source control?


My experience has been that it's much easier to manage than an 
ever-growing set of migration files, but I'm just a data point of one. 
Obviously for other engines that don't support transactional DDL, it's a 
non-starter, which leads me toward the notion that its lack of obvious 
popularity is due to its limited cross-engine viability. But PG does 
have transactional DDL, preventing incomplete DDL updates. However this 
may just be my personal bias talking. Yet I cannot seem to discount the 
advantages over ever-increasing volumes of Flyway-style migration files 
& directories:

  * being able to do diffs of changes in source control
  * limiting the size of the total SQL over time relative to what's 
actually in the DB structure
  * much more easily determining the most current version of a 
function/procedure (this bit me hard in the past with dozens of 
migration files)
  * the ability to prune old changes that no longer apply to any 
deployed version of the database

  * treating database structure as code

The drawbacks I've run across are those areas where the EXISTS/REPLACE 
constructs aren't implemented like roles management, domains, 
constraints, etc. However those cases seem to be handled with only minor 
increases in complexity with judicious use of inline plpgsql.


In others' opinions, has DDL idempotency been viable for maintenance of 
PG databases for you in production?


For me at least, you will need to show examples of what you are trying to
achieve. I'm not seeing how a migration (change) can happen without a
change of some sort. More to the point, how does *EXISTS/OR REPLACE help?




- Miles Elam




--
Adrian Klaver
adrian.kla...@aklaver.com




Re:

2021-08-27 Thread Mladen Gogala

On 8/27/21 8:59 AM, Ravi Krishna wrote:


how to take an incremental backup on a PostgreSQL Windows machine?


AFAIK PG has no concept of incremental backup.  pgbackrest has, but not sure 
whether it runs on Windows.


PG_RMAN can do incremental backups. I think it does the same thing as
Oracle without block change tracking enabled: it backs up all the files
changed since the last full backup. However, that doesn't help you much
because of vacuum, which can touch a file well after it has been
referenced in a transaction. I like pg_rman because it can automatically
delete old backups and keep only the 4 latest backups.
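
For example, roughly (the option names are from memory and the backup catalog
path is just an example, so check pg_rman --help for your version):

pg_rman backup --backup-mode=full -B /var/lib/pg_rman
pg_rman backup --backup-mode=incremental --keep-data-generations=4 -B /var/lib/pg_rman
pg_rman validate -B /var/lib/pg_rman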


--

Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com





Re:

2021-08-27 Thread Atul Kumar
Hi,

You can try to make a batch script on Windows for taking an incremental backup
and schedule it in the Windows Task Scheduler.




Regards
Atul






On Friday, August 27, 2021, obi reddy  wrote:

> Hi everyone.
>
> how to take an incremental backup on a PostgreSQL Windows machine?
>
>
> Thank you And Regards
> Obireddy.G
>


Idempotent DDL Updates

2021-08-27 Thread Miles Elam
What is the general consensus within the community on idempotent DDL
scripts, ie. consistent usage of IF EXISTS/IF NOT EXISTS/OR REPLACE for SQL
init files that get checked into source control?

My experience has been that it's much easier to manage than an ever-growing
set of migration files, but I'm just a data point of one. Obviously for
other engines that don't support transactional DDL, it's a non-starter,
which leads me toward the notion that its lack of obvious popularity is due
to its limited cross-engine viability. But PG does have transactional DDL,
preventing incomplete DDL updates. However this may just be my personal
bias talking. Yet I cannot seem to discount the advantages over
ever-increasing volumes of Flyway-style migration files & directories:
 * being able to do diffs of changes in source control
 * limiting the size of the total SQL over time relative to what's actually
in the DB structure
 * much more easily determining the most current version of a
function/procedure (this bit me hard in the past with dozens of migration
files)
 * the ability to prune old changes that no longer apply to any deployed
version of the database
 * treating database structure as code

The drawbacks I've run across are those areas where the EXISTS/REPLACE
constructs aren't implemented, such as role management, domains, constraints,
etc. However, those cases seem to be handled with only minor increases in
complexity with judicious use of inline plpgsql, as in the sketch below.
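
For example, a minimal sketch for a role (the role name is only illustrative):

DO $$
BEGIN
    -- CREATE ROLE has no IF NOT EXISTS, so check the catalog first.
    IF NOT EXISTS (SELECT 1 FROM pg_catalog.pg_roles
                   WHERE rolname = 'app_readonly') THEN
        CREATE ROLE app_readonly NOLOGIN;
    END IF;
END
$$;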

In others' opinions, has DDL idempotency been viable for maintenance of PG
databases for you in production?

- Miles Elam


Re: pg_dump - increase in backup time - log

2021-08-27 Thread Tom Lane
Adrian Klaver  writes:
> On 8/27/21 8:27 AM, Marc wrote:
>> We want to dig into it, but we lack a detailed pg_dump log. We used 
>> verbose mode, but unfortunately this lacks a date/time stamp.
>> No changes on the server either.

> The Postgres server log should have timestamped log entries though.

Yeah, turn on log_statement = all on the server and watch what pg_dump
is doing that way.
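
For instance (needs superuser, and it affects all sessions, so remember to put
it back afterwards):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- run pg_dump, look at the server log, then:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();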

regards, tom lane




Re: Problem with connecting with named host parameter

2021-08-27 Thread Mike Martin
On Fri, 27 Aug 2021 at 15:00, Tom Lane  wrote:

> Mike Martin  writes:
> > I am getting the following error when I connect with hostname on
> localhost
>
> > psql -h capture -U  -p 5432 
> > psql: error: FATAL:  no pg_hba.conf entry for host
> > "fe80::7285:c2ff:feb0:cd4%eth0", user "usevideo", database "usevideos",
> SSL
> > off
>
> Evidently, your DNS setup is resolving the machine's name as a IPv6
> address, whereupon PG quite legitimately doesn't find a match in
> its pg_hba table.  (I suppose you are using listen_addresses = "*",
> else the server wouldn't even be listening on this address.)
>
> > This is with pg 11 and 12 on fedora 34
>
> Looking at the "hosts" entry in /etc/nsswitch.conf might help figure out
> exactly what's going on, but I'll bet a nickel that this is some
> unhelpful systemd behavior.
>
> regards, tom lane
>

Very possibly.

I seem to have resolved the issue with just the following active lines in
pg_hba.conf

host    all             all             samehost                password
host    all             all             samenet                 password


Re: pg_dump - increase in backup time - log

2021-08-27 Thread Adrian Klaver

On 8/27/21 8:27 AM, Marc wrote:

Hello,

Suddenly the time required to back up with pg_dump increased drastically
(+20 min on a backup time of 2 hours), with no comparable increase in data
which could explain the behaviour.


What is the pg_dump command you are using?



We want to dig into it, but we lack a detailed pg_dump log. We used
verbose mode, but unfortunately this lacks a date/time stamp.

No changes on the server either.


The Postgres server log should have timestamped log entries though.
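
If they are not timestamped yet, something like this in postgresql.conf
(followed by a reload) adds a prefix; the exact format is just an example:

log_line_prefix = '%m [%p] '   # %m = timestamp with milliseconds, %p = process ID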



Postgres: version 11.13
pg_dump is running on the same machine as Postgres and no changes have 
been made to the configuration


Any ideas/help most welcome,
Thanks in advance,



Marc



*ARC - your Apple Authorised Service partner*   H.D. Saviolaan 8
B-1700 Dilbeek
Belgium
i...@arcict.com  	www.arcict.com 


tel. : +32 (0)2 466 50 00   fax. : +32 (0)2 466 88 33





--
Adrian Klaver
adrian.kla...@aklaver.com




Please help: pgAdmin 4 on Amazon Linux 2

2021-08-27 Thread Blake McBride
Greetings,

I am trying to install pgAdmin 4 on Amazon Linux 2.  PostgreSQL is already
installed and working fine.  I believe Amazon Linux 2 is based on RedHat.

I am doing the following:

[root@a-1lxumlkkw4mu4 ~]# rpm -i
https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/pgadmin4-redhat-repo-2-1.noarch.rpm
warning: /var/tmp/rpm-tmp.ZEygli: Header V3 RSA/SHA256 Signature, key ID
210976f2: NOKEY
[root@a-1lxumlkkw4mu4 ~]# yum install pgadmin4
Loaded plugins: amzn_workspaces_filter_updates, halt_os_update_check,
priorities, update-motd
amzn2-core
  | 3.7 kB  00:00:00

amzn2extra-GraphicsMagick1.3
  | 3.0 kB  00:00:00

amzn2extra-docker
 | 3.0 kB  00:00:00
amzn2extra-epel
 | 3.0 kB  00:00:00
amzn2extra-gimp
 | 1.3 kB  00:00:00
amzn2extra-libreoffice
  | 3.0 kB  00:00:00

amzn2extra-mate-desktop1.x
  | 3.0 kB  00:00:00

epel/x86_64/metalink
  |  15 kB  00:00:00

firefox
 | 2.2 kB  00:00:00
google-chrome
 | 1.3 kB  00:00:00
https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/redhat/rhel-2-x86_64/repodata/repomd.xml:
[Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.


 One of the configured repositories failed (pgadmin4),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

 1. Contact the upstream for the repository and get them to fix the
problem.

 2. Reconfigure the baseurl/etc. for the repository, to point to a
working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).

 3. Run the command with the repository temporarily disabled
yum --disablerepo=pgAdmin4 ...

 4. Disable the repository permanently, so yum won't use it by default.
Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:

yum-config-manager --disable pgAdmin4
or
subscription-manager repos --disable=pgAdmin4

 5. Configure the failing repository to be skipped, if it is
unavailable.
Note that yum will try to contact the repo. when it runs most
commands,
so will have to try and fail each time (and thus. yum will be be
much
slower). If it is a very temporary problem though, this is often a
nice
compromise:

yum-config-manager --save
--setopt=pgAdmin4.skip_if_unavailable=true

failure: repodata/repomd.xml from pgAdmin4: [Errno 256] No more mirrors to
try.
https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/redhat/rhel-2-x86_64/repodata/repomd.xml:
[Errno 14] HTTPS Error 404 - Not Found
[root@a-1lxumlkkw4mu4 ~]#


I have no idea how to fix this.  Any help would sure be appreciated.

Blake McBride


pg_dump - increase in backup time - log

2021-08-27 Thread Marc

Hello,

Suddenly the time required to back up with pg_dump increased drastically
(+20 min on a backup time of 2 hours), with no comparable increase in data
which could explain the behaviour.


We want to dig into it, but we lack a detailed pg_dump log. We used
verbose mode, but unfortunately this lacks a date/time stamp.

No changes on the server either.

Postgres: version 11.13
pg_dump is running on the same machine as Postgres and no changes have 
been made to the configuration


Any ideas/help most welcome,
Thanks in advance,



Marc

ARC - your Apple service partner


Re: Manual failover cluster

2021-08-27 Thread Saul Perdomo
Are you sure that it is *mandatory*? Because from my recollection I've only
needed to set one manually when (for one reason or another) my recovery
attempt fails and then I'm in what the docs call a "complex re-recovery
situation" -- not a fun time:

recovery_target_timeline (string)

Specifies recovering into a particular timeline. The default is to recover
along the same timeline that was current when the base backup was taken.
Setting this to latest recovers to the latest timeline found in the
archive, which is useful in a standby server. Other than that you only need
to set this parameter in complex re-recovery situations, where you need to
return to a state that itself was reached after a point-in-time recovery.
See Section 25.3.5 for discussion.
From: https://www.postgresql.org/docs/10/recovery-target-settings.html


On Fri, Aug 27, 2021 at 2:36 AM Ninad Shah  wrote:

> Hi Saul,
>
> Hope you are doing well. My apologies for not responding for such a long time.
>
> Pgbackrest helps build streaming replication. While performing role
> reversal (switchover), it is mandatory to set recovery_target_timeline to
> latest in recovery.conf (in the data directory). Steps to perform a switchover
> are as below.
>
> 1) Stop master database
> 2) Promote the slave database
> 3) Prepare the recovery.conf file on the stopped master node, and set
> recovery_target_timeline to latest in that file
> 4) Start the stopped master database; it will automatically come up as a
> slave
> 5) Check status of the database.
>
> Additionally, entries in pg_hba.conf are required.
>
> Hope this helps.
>
>
> Regards,
> Ninad Shah
>
>
> On Mon, 23 Aug 2021 at 23:12, Saul Perdomo  wrote:
>
>> Sorry, I misspoke there - I meant to say that since one should not count
>> on the standby-failover process to always run smoothly (whether it's due to
>> hardware, operator, automated scripts, or software issues), DB backups
>> should also be in place if at all possible.
>>
>> On Mon, Aug 23, 2021 at 1:37 PM Saul Perdomo 
>> wrote:
>>
>>> Hi Moishe,
>>>
>>> Since we use pgbackrest ourselves, this is the process I followed to set
>>> up something similar on PG 10:
>>>
>>>
>>> https://pgstef.github.io/2018/11/28/combining_pgbackrest_and_streaming_replication.html
>>>
>>> (Not knowing much [if at all] about the reason for your requirements, I
>>> would recommend looking into enabling PITR backups in addition to the hot
>>> standby[s], which as you've seen are not bulletproof.)
>>>
>>> Anyway, just wanted to add that your experience roughly matches ours --
>>> it seems that an attempt to promote the standby isn't always smooth. My
>>> sample size is almost certainly smaller than yours, about half a dozen
>>> attempts. In the past I've attributed this to a poor understanding on my
>>> part of the behind-the-scenes of the process, and chalked it up to having
>>> made a mistake or other in the manual steps (mainly in the order they are
>>> executed). That being said, if you find a way to faithfully reproduce the
>>> issue, I'm sure the community will want to know, there is certainly an
>>> expectation that the failover is reliable from the PG software side of
>>> things, as long as there are no hardware nor operator issues! Again, not
>>> knowing a lot about your setup, my first instinct would be to troubleshoot
>>> your automated scripts, you might find that you need to change the order
>>> things are run when on server B vs server A, for example..
>>>
>>> On Fri, Aug 20, 2021 at 9:23 AM Hispaniola Sol 
>>> wrote:
>>>
 Team,

 I have a pg 10 cluster with a master and two hot-standby nodes. There
 is a requirement for a manual failover (nodes switching the roles) at will.
 This is a vanilla 3 node PG cluster that was built with WAL archiving
 (central location) and streaming replication to two hot standby nodes.  The
 failover is scripted in Ansible. Ansible massages and moves around the
 archive/restore scripts, the conf files and the trigger and calls `
 pg_ctlcluster` to start/stop. This part _seems_ to be doing the job fine.

 The issue I am struggling with is the apparent fragility of the process
 - all 3 nodes will end up in a "good" state after the switch only every
 other time. Other times I have to rebase the hot-standby from the new
 master with pg_basebackup. It seems the issues are mostly with those nodes,
 ending up as slaves after the roles switch runs.
 They get errors like mismatch in timelines, recovering from the same
 WAL over and over again, invalid resource manager ID in primary checkpoint
 record, etc.

 In this light, I am wondering - using what's offered by PostgreSQL
 itself, i.e. streaming WAL replication with log shipping - can I expect to
 have this kind of failover 100% reliable on PG side ? Anyone is doing this
 reliably on PostgreSQL 10.1x ?

 Thanks !

 

Re: Problem with connecting with named host parameter

2021-08-27 Thread Tom Lane
Mike Martin  writes:
> I am getting the following error when I connect with hostname on localhost

> psql -h capture -U  -p 5432 
> psql: error: FATAL:  no pg_hba.conf entry for host
> "fe80::7285:c2ff:feb0:cd4%eth0", user "usevideo", database "usevideos", SSL
> off

Evidently, your DNS setup is resolving the machine's name as a IPv6
address, whereupon PG quite legitimately doesn't find a match in
its pg_hba table.  (I suppose you are using listen_addresses = "*",
else the server wouldn't even be listening on this address.)

> This is with pg 11 and 12 on fedora 34

Looking at the "hosts" entry in /etc/nsswitch.conf might help figure out
exactly what's going on, but I'll bet a nickel that this is some
unhelpful systemd behavior.

regards, tom lane




Problem with connecting with named host parameter

2021-08-27 Thread Mike Martin
Hi
I am getting the following error when I connect with hostname on localhost

psql -h capture -U  -p 5432 
psql: error: FATAL:  no pg_hba.conf entry for host
"fe80::7285:c2ff:feb0:cd4%eth0", user "usevideo", database "usevideos", SSL
off

The hba.conf is

local   all             all                                     password
# IPv4 local connections:
host    all             all             127.0.0.1/24            password
host    all             all             192.0.0.0/0             password
# IPv6 local connections:
host    all             all             ::1/128                 trust

If I change -h to ip address then it works, just not with hostname.
Connecting from another host also works.

The other thing I can't understand is the
"fe80::7285:c2ff:feb0:cd4%eth0"

This looks like an IPv6 host or block ID, which I can't understand.
This is with pg 11 and 12 on fedora 34


Re:

2021-08-27 Thread Ravi Krishna
> 
> 
> how to take an incremental backup on a PostgreSQL Windows machine?
> 

AFAIK PG has no concept of incremental backup.  pgbackrest has, but not sure 
whether it runs on Windows.





[no subject]

2021-08-27 Thread obi reddy
Hi everyone.

how to take an incremental backup on a PostgreSQL Windows machine?


Thank you And Regards
Obireddy.G


Re: Can we get rid of repeated queries from pg_dump?

2021-08-27 Thread Laurenz Albe
On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:
> Now for query stats.
> 
> To dump it all, pg_dump needed 9173 queries (logged by
> log_min_duration_statement = 0 for this user).
> 
> I extracted all queries to separate files, and made stats. In total there were
> only 4257 unique queries.
> 
> Then I checked for repeated queries. Top 10 most repeated offenders were:
> 
> 615 times : SELECT pg_catalog.format_type('25'::pg_catalog.oid, NULL)
> 599 times : SELECT pg_catalog.format_type('23'::pg_catalog.oid, NULL)
> 579 times : SELECT pg_catalog.format_type('2281'::pg_catalog.oid, NULL)
> 578 times : SELECT pg_catalog.format_type('41946'::pg_catalog.oid, NULL)
> 523 times : SELECT pg_catalog.format_type('701'::pg_catalog.oid, NULL)
> 459 times : SELECT pg_catalog.format_type('42923'::pg_catalog.oid, NULL)
> 258 times : SELECT pg_catalog.format_type('16'::pg_catalog.oid, NULL)
> 176 times : SELECT pg_catalog.format_type('19'::pg_catalog.oid, NULL)
> 110 times : SELECT pg_catalog.format_type('21'::pg_catalog.oid, NULL)
> 106 times : SELECT pg_catalog.format_type('42604'::pg_catalog.oid, NULL)
> 
> In total, there were 5000 queries:
> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)
> 
> But there were only 83 separate oids that were scanned.

That is a strong argument for using a hash table to cache the types.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Issue with a query while running on a remote host

2021-08-27 Thread Karsten Hilbert
Deep packet inspection naively scanning for potential
fragments of bash scripts being transferred ?

Karsten

Am Fri, Aug 27, 2021 at 12:32:09PM +0530 schrieb Ninad Shah:
> Date: Fri, 27 Aug 2021 12:32:09 +0530
> From: Ninad Shah 
> To: pgsql-general 
> Subject: Issue with a query while running on a remote host
>
> Hello All,
>
> While working with a PostgreSQL database, I came across an issue where data
> is not being fetched over the network.
>
> Version : PostgreSQL 11.10
> Operating system : RHEL 8.4
>
> *Issue description:*
>
> We tried to execute the below query on the database host using psql prompt,
> it works without any issue.
>
> select regexp_replace(state_cd, E'[\\n\\r]+', '', 'g' ) as  state_cd,
> off_cd, user_cd, regexp_replace(user_name, E'[\\n\\r]+', '', 'g' ) as
> user_name, regexp_replace(desig_cd, E'[\\n\\r]+', '', 'g' ) as desig_cd,
> regexp_replace(user_id, E'[\\n\\r]+', '', 'g' ) as user_id,
> regexp_replace(user_pwd, E'[\\n\\r]+', '', 'g' ) as user_pwd,
> regexp_replace(phone_off, E'[\\n\\r]+', '', 'g' ) as phone_off, mobile_no,
> regexp_replace(email_id, E'[\\n\\r]+', '', 'g' ) as email_id,
> regexp_replace(user_catg, E'[\\n\\r]+', '', 'g' ) as user_catg,
> regexp_replace(status, E'[\\n\\r]+', '', 'g' ) as status, created_by,
> created_dt, aadhaar, op_dt, regexp_replace(login_ipaddress, E'[\\n\\r]+',
> '', 'g' ) as login_ipaddress, regexp_replace(forget_password, E'[\\n\\r]+',
> '', 'g' ) as forget_password, regexp_replace(newuser_change_password,
> E'[\\n\\r]+', '', 'g' ) as newuser_change_password from  tm_user_info
> where  ( user_cd >= 1068540014 ) AND ( user_cd <= 2137079303 ) ;
>
> While trying to execute the same query over the network using psql prompt,
> the execution doesn't finish.
>
> *My Analysis:*
>
> By digging further, we came to see that a specific record was causing the
> issue, and by further analysis, we saw that the records that contain a
> specific string("*bash@*") in the column user_id are not being fetched over
> the network.
>
> To confirm that, we also changed some records manually by creating a test
> table. And, we were able to reproduce the issue.
>
> vow4_xdb=# select * from vahan4.test_tbl where user_cd =2106011301;
> .
> .
>
> But, this issue doesn't occur if we try to fetch on the database host or
> via PgAdmin4. In such cases, we get the record in a few milliseconds.
>
> *Surprisingly, this table has only one record.*
>
> There is no table/row-level lock found here.
>
>
> *Table definition:-*
>                           Table "test_tbl"
>           Column          |            Type             | Collation | Nullable | Default | Storage
> --------------------------+-----------------------------+-----------+----------+---------+----------
>  state_cd                 | character varying(2)        |           | not null |         | extended
>  off_cd                   | numeric(5,0)                |           | not null |         | main
>  user_cd                  | numeric(10,0)               |           | not null |         | main
>  user_name                | character varying(99)       |           | not null |         | extended
>  desig_cd                 | character varying(10)       |           | not null |         | extended
>  user_id                  | character varying(20)       |           | not null |         | extended
>  user_pwd                 | character varying(100)      |           | not null |         | extended
>  phone_off                | character varying(20)       |           |          |         | extended
>  mobile_no                | numeric(10,0)               |           | not null |         | main
>  email_id                 | character varying(50)       |           |          |         | extended
>  user_catg                | character varying(1)        |           | not null |         | extended
>  status                   | character varying(1)        |           | not null |         | extended
>  created_by               | numeric(10,0)               |           | not null |         | main
>  created_dt               | date                        |           | not null |         | plain
>  aadhaar                  | numeric(12,0)               |           |          |         | main
>  op_dt                    | timestamp without time zone |           | not null | now()   | plain
>  login_ipaddress          | character varying(20)       |           |          |         | extended
>  forget_password          | character varying(1)        |           |          |         | extended
>  newuser_change_password  | character varying(1)        |           |          |         | extended
> Indexes:
> "tm_user_info_pkey" PRIMARY KEY, btree (user_cd)
> 

Issue with a query while running on a remote host

2021-08-27 Thread Ninad Shah
Hello All,

While working with a PostgreSQL database, I came across an issue where data
is not being fetched over the network.

Version : PostgreSQL 11.10
Operating system : RHEL 8.4

*Issue description:*

We tried to execute the below query on the database host using psql prompt,
it works without any issue.

select regexp_replace(state_cd, E'[\\n\\r]+', '', 'g' ) as  state_cd,
off_cd, user_cd, regexp_replace(user_name, E'[\\n\\r]+', '', 'g' ) as
user_name, regexp_replace(desig_cd, E'[\\n\\r]+', '', 'g' ) as desig_cd,
regexp_replace(user_id, E'[\\n\\r]+', '', 'g' ) as user_id,
regexp_replace(user_pwd, E'[\\n\\r]+', '', 'g' ) as user_pwd,
regexp_replace(phone_off, E'[\\n\\r]+', '', 'g' ) as phone_off, mobile_no,
regexp_replace(email_id, E'[\\n\\r]+', '', 'g' ) as email_id,
regexp_replace(user_catg, E'[\\n\\r]+', '', 'g' ) as user_catg,
regexp_replace(status, E'[\\n\\r]+', '', 'g' ) as status, created_by,
created_dt, aadhaar, op_dt, regexp_replace(login_ipaddress, E'[\\n\\r]+',
'', 'g' ) as login_ipaddress, regexp_replace(forget_password, E'[\\n\\r]+',
'', 'g' ) as forget_password, regexp_replace(newuser_change_password,
E'[\\n\\r]+', '', 'g' ) as newuser_change_password from  tm_user_info
where  ( user_cd >= 1068540014 ) AND ( user_cd <= 2137079303 ) ;

While trying to execute the same query over the network using psql prompt,
the execution doesn't finish.

*My Analysis:*

By digging further, we came to see that a specific record was causing the
issue, and by further analysis, we saw that the records that contain a
specific string("*bash@*") in the column user_id are not being fetched over
the network.

To confirm that, we also changed some records manually by creating a test
table. And, we were able to reproduce the issue.

vow4_xdb=# select * from vahan4.test_tbl where user_cd =2106011301;
.
.

But, this issue doesn't occur if we try to fetch on the database host or
via PgAdmin4. In such cases, we get the record in a few milliseconds.

*Surprisingly, this table has only one record.*

There is no table/row-level lock found here.


*Table definition:-*
                          Table "test_tbl"
          Column          |            Type             | Collation | Nullable | Default | Storage
--------------------------+-----------------------------+-----------+----------+---------+----------
 state_cd                 | character varying(2)        |           | not null |         | extended
 off_cd                   | numeric(5,0)                |           | not null |         | main
 user_cd                  | numeric(10,0)               |           | not null |         | main
 user_name                | character varying(99)       |           | not null |         | extended
 desig_cd                 | character varying(10)       |           | not null |         | extended
 user_id                  | character varying(20)       |           | not null |         | extended
 user_pwd                 | character varying(100)      |           | not null |         | extended
 phone_off                | character varying(20)       |           |          |         | extended
 mobile_no                | numeric(10,0)               |           | not null |         | main
 email_id                 | character varying(50)       |           |          |         | extended
 user_catg                | character varying(1)        |           | not null |         | extended
 status                   | character varying(1)        |           | not null |         | extended
 created_by               | numeric(10,0)               |           | not null |         | main
 created_dt               | date                        |           | not null |         | plain
 aadhaar                  | numeric(12,0)               |           |          |         | main
 op_dt                    | timestamp without time zone |           | not null | now()   | plain
 login_ipaddress          | character varying(20)       |           |          |         | extended
 forget_password          | character varying(1)        |           |          |         | extended
 newuser_change_password  | character varying(1)        |           |          |         | extended
Indexes:
"tm_user_info_pkey" PRIMARY KEY, btree (user_cd)
"idx_tm_user_info_user_id" UNIQUE, btree (user_id)
Replica Identity: FULL


*Record with an issue:-*
state_cd|off_cd|user_cd|user_name|desig_cd|user_id|user_pwd|phone_off|mobile_no|email_id|user_catg|status|created_by|created_dt|aadhaar|op_dt|login_ipaddress|forget_password|newuser_change_password
HR|31|2106011301|SUBASH AUTO HOUSE|DS|subash@123
|c22d1779437117852ffcdeb7e9c689f41c502a1dae2687b8845a33bb892f3b66||9255561872|
skpanwar2...@gmail.com|B|D|2006078673|2021-01-04|0|2021-01-04
14:30:27.715728||N|F
(1 row)


Can anyone help me out here?


Regards,
Ninad 

Re: Manual failover cluster

2021-08-27 Thread Ninad Shah
Hi Saul,

Hope you are doing well. My apologies for not responding for such a long time.

Pgbackrest helps build streaming replication. While performing role
reversal (switchover), it is mandatory to set recovery_target_timeline to
latest in recovery.conf (in the data directory). Steps to perform a switchover
are as below.

1) Stop master database
2) Promote the slave database
3) Prepare the recovery.conf file on the stopped master node, and set
recovery_target_timeline to latest in that file
4) Start the stopped master database; it will automatically come up as a
slave
5) Check status of the database.
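
For step 3, a minimal recovery.conf looks roughly like this (the connection
string and restore_command are placeholders for your environment):

standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replicator'
restore_command = 'cp /path/to/wal_archive/%f %p'
recovery_target_timeline = 'latest'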

Additionally, entries in pg_hba.conf are required.

Hope this helps.


Regards,
Ninad Shah


On Mon, 23 Aug 2021 at 23:12, Saul Perdomo  wrote:

> Sorry, I misspoke there - I meant to say that since one should not count
> on the standby-failover process to always run smoothly (whether it's due to
> hardware, operator, automated scripts, or software issues), DB backups
> should also be in place if at all possible.
>
> On Mon, Aug 23, 2021 at 1:37 PM Saul Perdomo 
> wrote:
>
>> Hi Moishe,
>>
>> Since we use pgbackrest ourselves, this is the process I followed to set
>> up something similar on PG 10:
>>
>>
>> https://pgstef.github.io/2018/11/28/combining_pgbackrest_and_streaming_replication.html
>>
>> (Not knowing much [if at all] about the reason for your requirements, I
>> would recommend looking into enabling PITR backups in addition to the hot
>> standby[s], which as you've seen are not bulletproof.)
>>
>> Anyway, just wanted to add that your experience roughly matches ours --
>> it seems that an attempt to promote the standby isn't always smooth. My
>> sample size is almost certainly smaller than yours, about half a dozen
>> attempts. In the past I've attributed this to a poor understanding on my
>> part of the behind-the-scenes of the process, and chalked it up to having
>> made a mistake or other in the manual steps (mainly in the order they are
>> executed). That being said, if you find a way to faithfully reproduce the
>> issue, I'm sure the community will want to know, there is certainly an
>> expectation that the failover is reliable from the PG software side of
>> things, as long as there are no hardware nor operator issues! Again, not
>> knowing a lot about your setup, my first instinct would be to troubleshoot
>> your automated scripts, you might find that you need to change the order
>> things are run when on server B vs server A, for example..
>>
>> On Fri, Aug 20, 2021 at 9:23 AM Hispaniola Sol 
>> wrote:
>>
>>> Team,
>>>
>>> I have a pg 10 cluster with a master and two hot-standby nodes. There is
>>> a requirement for a manual failover (nodes switching the roles) at will.
>>> This is a vanilla 3 node PG cluster that was built with WAL archiving
>>> (central location) and streaming replication to two hot standby nodes.  The
>>> failover is scripted in Ansible. Ansible massages and moves around the
>>> archive/restore scripts, the conf files and the trigger and calls `
>>> pg_ctlcluster` to start/stop. This part _seems_ to be doing the job fine.
>>>
>>> The issue I am struggling with is the apparent fragility of the process
>>> - all 3 nodes will end up in a "good" state after the switch only every
>>> other time. Other times I have to rebase the hot-standby from the new
>>> master with pg_basebackup. It seems the issues are mostly with those nodes,
>>> ending up as slaves after the roles switch runs.
>>> They get errors like mismatch in timelines, recovering from the same WAL
>>> over and over again, invalid resource manager ID in primary checkpoint
>>> record, etc.
>>>
>>> In this light, I am wondering - using what's offered by PostgreSQL
>>> itself, i.e. streaming WAL replication with log shipping - can I expect to
>>> have this kind of failover 100% reliable on PG side ? Anyone is doing this
>>> reliably on PostgreSQL 10.1x ?
>>>
>>> Thanks !
>>>
>>> Moishe
>>>
>>