Re: Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-03 Thread Adrian Klaver

On 8/3/22 18:13, Ron wrote:

On 8/3/22 20:02, Adrian Klaver wrote:

On 8/3/22 17:30, Ron wrote:

AWS RDS Postgresql 12.10




And https://www.postgresql.org/docs/12/sql-set-constraints.html seems to 
say that SET CONSTRAINTS can make DEFERRABLE FKs temporarily not 
deferrable.


What's the point?


The way I think of it is:

1) CONSTRAINTS are something that is supposed to constrain an action.

2) Setting DEFERRABLE is a cheat code that, combined with INITIALLY 
DEFERRED, pushes the check in 1) out to the end of a transaction.


3) If the constraint was created/altered as DEFERRABLE INITIALLY 
DEFERRED, then SET CONSTRAINTS ... IMMEDIATE unsets the cheat code in a 
given transaction. Maybe for most cases you want the check done at the end of 
the transaction, but in a specific case you need that check to happen 
before another statement runs, prior to the end of the transaction.
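
As an illustration (not from the original thread; the tables are hypothetical), a minimal sketch of 2) and 3):

```sql
-- Hypothetical tables, just to illustrate points 2) and 3).
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (
    id        int PRIMARY KEY,
    parent_id int REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
INSERT INTO child VALUES (1, 100);  -- no parent row yet: allowed, check deferred
INSERT INTO parent VALUES (100);    -- FK now satisfied
SET CONSTRAINTS ALL IMMEDIATE;      -- "unset the cheat code": check runs here
COMMIT;
```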



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Is Client connections via ca.crt only possible?

2022-08-03 Thread Thomas Guyot

On 2022-08-03 21:37, Rejo Oommen wrote:
Thank you for the reply Thomas. I agree with you on the mutual TLS 
that you mentioned.


Here is what I was looking at.

The configurations at the server end will be with auth-method as md5 
and auth-option as clientcert=verify-ca.




There's your issue. If you tell the server to validate the client cert, 
then it will require the client to provide a valid cert to identify itself.


In this way, the user's password along with the valid ca should allow 
connections to pass.




The CA in your setup is only useful for the client to ensure the server 
is the correct one and to prevent MITM attacks. This is a client-side 
check, not a server-side one.


The only authentication security here is the password/md5, though it is 
protected from eavesdropping (passive and MITM) and connection hijacking 
by encryption, with some of these protections only effective when the 
client uses the verify-ca option. The server cannot ensure the client is 
actually validating the CA, nor even that it is talking to the actual 
client and not a MITM, simply because the client itself is not 
authenticated by mutual TLS.


Regards

--
Thomas




Re: Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-03 Thread Julien Rouhaud
Hi,

On Wed, Aug 03, 2022 at 07:30:31PM -0500, Ron wrote:
>
> [quote]
> |DEFERRABLE|
> |NOT DEFERRABLE|
>
>This controls whether the constraint can be deferred. A constraint that
>is not deferrable will be checked immediately after every command.
>*Checking of constraints that are deferrable can be postponed until the
>end of the transaction*
>
> [/quote]
>
> But yet a |DEFERRABLE| FK constraint in a transaction immediately failed on
> a FK constraint violation.
>
> [quote]
> |INITIALLY IMMEDIATE|
> |INITIALLY DEFERRED|
>
>If a constraint is deferrable, this clause specifies the default time to
>check the constraint. If the constraint is|INITIALLY IMMEDIATE|, it is
>checked after each statement. This is the default. *If the constraint
>is|INITIALLY DEFERRED|, it is checked only at the end of the
>transaction.*
>
> [/quote]
>
> INITIALLY DEFERRED solved my problem.  Why do both clauses exist?

Because a deferred constraint needs to keep references around in memory until
the constraint is evaluated.  The sooner it's done, the sooner you release that
memory and therefore can avoid, or minimize, memory-related problems.

The typical naive example for an INITIALLY IMMEDIATE deferrable constraint is
a primary key on which you want to do something like UPDATE ... SET pk = pk + 1.

Postponing the evaluation until the end of the UPDATE command is enough; no
need to wait for the end of the transaction.
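
A sketch of that example, on a small hypothetical table:

```sql
-- Hypothetical table. With a plain (NOT DEFERRABLE) primary key this
-- UPDATE can fail on a transient duplicate, depending on row order;
-- a DEFERRABLE key is checked at the end of the statement instead.
CREATE TABLE t (pk int PRIMARY KEY DEFERRABLE);  -- INITIALLY IMMEDIATE default
INSERT INTO t SELECT generate_series(1, 5);
UPDATE t SET pk = pk + 1;  -- transient duplicates never trigger an error
```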




Re: Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-03 Thread Ron

On 8/3/22 20:02, Adrian Klaver wrote:

On 8/3/22 17:30, Ron wrote:

AWS RDS Postgresql 12.10

https://www.postgresql.org/docs/12/sql-createtable.html

[quote]
|DEFERRABLE|
|NOT DEFERRABLE|

    This controls whether the constraint can be deferred. A constraint
    that is not deferrable will be checked immediately after every
    command. *Checking of constraints that are deferrable can be
    postponed until the end of the transaction*

[/quote]

But yet a |DEFERRABLE| FK constraint in a transaction immediately failed 
on a FK constraint violation.


[quote]
|INITIALLY IMMEDIATE|
|INITIALLY DEFERRED|

    If a constraint is deferrable, this clause specifies the default
    time to check the constraint. If the constraint is|INITIALLY
    IMMEDIATE|, it is checked after each statement. This is the default.
    *If the constraint is|INITIALLY DEFERRED|, it is checked only
    at the end of the transaction.*

[/quote]

INITIALLY DEFERRED solved my problem.  Why do both clauses exist?


Because from the same page:

[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

and

DEFERRABLE
NOT DEFERRABLE

    This controls whether the constraint can be deferred. A constraint 
that is not deferrable will be checked immediately after every command. 
Checking of constraints that are deferrable can be postponed until the end 
of the transaction (using the SET CONSTRAINTS command). NOT DEFERRABLE is 
the default. Currently, only UNIQUE, PRIMARY KEY, EXCLUDE, and REFERENCES 
(foreign key) constraints accept this clause. NOT NULL and CHECK 
constraints are not deferrable. Note that deferrable constraints cannot be 
used as conflict arbitrators in an INSERT statement that includes an ON 
CONFLICT DO UPDATE clause.



INITIALLY IMMEDIATE
INITIALLY DEFERRED

    If a constraint is deferrable, this clause specifies the default time 
to check the constraint. If the constraint is INITIALLY IMMEDIATE, it is 
checked after each statement. This is the default. If the constraint is 
INITIALLY DEFERRED, it is checked only at the end of the transaction. The 
constraint check time can be altered with the SET CONSTRAINTS command.



So the default

NOT DEFERRABLE

and:

"A constraint that is not deferrable will be checked immediately after 
every command."


When you do

DEFERRABLE

the default is

INITIALLY IMMEDIATE

You have to explicitly set:

INITIALLY DEFERRED.


And https://www.postgresql.org/docs/12/sql-set-constraints.html seems to say 
that SET CONSTRAINTS can make DEFERRABLE FKs temporarily not deferrable.


What's the point?


--
Angular momentum makes the world go 'round.




Re: Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-03 Thread Adrian Klaver

On 8/3/22 17:30, Ron wrote:

AWS RDS Postgresql 12.10

https://www.postgresql.org/docs/12/sql-createtable.html

[quote]
|DEFERRABLE|
|NOT DEFERRABLE|

This controls whether the constraint can be deferred. A constraint
that is not deferrable will be checked immediately after every
command. *Checking of constraints that are deferrable can be
postponed until the end of the transaction*

[/quote]

But yet a |DEFERRABLE| FK constraint in a transaction immediately failed 
on a FK constraint violation.


[quote]
|INITIALLY IMMEDIATE|
|INITIALLY DEFERRED|

If a constraint is deferrable, this clause specifies the default
time to check the constraint. If the constraint is|INITIALLY
IMMEDIATE|, it is checked after each statement. This is the default.
*If the constraint is|INITIALLY DEFERRED|, it is checked only
at the end of the transaction.*

[/quote]

INITIALLY DEFERRED solved my problem.  Why do both clauses exist?


Because from the same page:

[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

and

DEFERRABLE
NOT DEFERRABLE

This controls whether the constraint can be deferred. A constraint 
that is not deferrable will be checked immediately after every command. 
Checking of constraints that are deferrable can be postponed until the 
end of the transaction (using the SET CONSTRAINTS command). NOT 
DEFERRABLE is the default. Currently, only UNIQUE, PRIMARY KEY, EXCLUDE, 
and REFERENCES (foreign key) constraints accept this clause. NOT NULL 
and CHECK constraints are not deferrable. Note that deferrable 
constraints cannot be used as conflict arbitrators in an INSERT 
statement that includes an ON CONFLICT DO UPDATE clause.



INITIALLY IMMEDIATE
INITIALLY DEFERRED

If a constraint is deferrable, this clause specifies the default 
time to check the constraint. If the constraint is INITIALLY IMMEDIATE, 
it is checked after each statement. This is the default. If the 
constraint is INITIALLY DEFERRED, it is checked only at the end of the 
transaction. The constraint check time can be altered with the SET 
CONSTRAINTS command.



So the default

NOT DEFERRABLE

and:

"A constraint that is not deferrable will be checked immediately after 
every command."


When you do

DEFERRABLE

the default is

INITIALLY IMMEDIATE

You have to explicitly set:

INITIALLY DEFERRED.
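
Put together as DDL (hypothetical tables, one per combination of the clauses):

```sql
CREATE TABLE parent (id int PRIMARY KEY);

-- Default (NOT DEFERRABLE): checked after every command, cannot be deferred.
CREATE TABLE child_a (parent_id int REFERENCES parent (id));

-- DEFERRABLE alone means INITIALLY IMMEDIATE: still checked after every
-- command, but SET CONSTRAINTS ... DEFERRED can postpone it to COMMIT.
CREATE TABLE child_b (parent_id int REFERENCES parent (id) DEFERRABLE);

-- Deferred by default: checked only at COMMIT, unless SET CONSTRAINTS
-- ... IMMEDIATE is issued inside the transaction.
CREATE TABLE child_c (
    parent_id int REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED
);
```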




(A naive interpretation just by looking at the clause words led me to 
think that INITIALLY DEFERRED would not check record validity when a 
constraint is *added* to a table, but obviously that's wrong too.)


--
Angular momentum makes the world go 'round.



--
Adrian Klaver
adrian.kla...@aklaver.com




Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-03 Thread Ron

AWS RDS Postgresql 12.10

https://www.postgresql.org/docs/12/sql-createtable.html

[quote]
|DEFERRABLE|
|NOT DEFERRABLE|

   This controls whether the constraint can be deferred. A constraint that
   is not deferrable will be checked immediately after every command.
   *Checking of constraints that are deferrable can be postponed until the
   end of the transaction*

[/quote]

But yet a |DEFERRABLE| FK constraint in a transaction immediately failed on 
a FK constraint violation.


[quote]
|INITIALLY IMMEDIATE|
|INITIALLY DEFERRED|

   If a constraint is deferrable, this clause specifies the default time to
   check the constraint. If the constraint is|INITIALLY IMMEDIATE|, it is
   checked after each statement. This is the default. *If the constraint
   is|INITIALLY DEFERRED|, it is checked only at the end of the
   transaction.*

[/quote]

INITIALLY DEFERRED solved my problem.  Why do both clauses exist?

(A naive interpretation just by looking at the clause words led me to think 
that INITIALLY DEFERRED would not check record validity when a constraint is 
*added* to a table, but obviously that's wrong too.)


--
Angular momentum makes the world go 'round.

Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-08-03 Thread Tom Lane
Aleš Zelený writes:
> CREATE OR REPLACE FUNCTION tescase (_id_data_provider integer, _external_id
> bigint DEFAULT NULL::bigint, _external_complete_id character varying
> DEFAULT NULL::character varying)
>  RETURNS TABLE(id_gm bigint, id_opp bigint, id_opp_state integer)
>  LANGUAGE sql
> AS $function$
> SELECT ... simple join of two tables...
>  WHERE opd.id_data_provider = _id_data_provider
>AND CASE WHEN _external_id IS NULL
> THEN external_id IS NULL
> ELSE external_id = _external_id
>END
>AND CASE WHEN _external_complete_id IS NULL
> THEN _external_complete_id IS NULL
> ELSE external_complete_id = _external_complete_id
>END;
> $function$
> ;

> It is a kind of creative construct for me, but it works. The key here is
> that if I replace at least one of the "CASEd" where conditions, it seems
> not to suffer from the memory leak issue.

> Finally, I've found that even with the function as is, if I disable JIT
> before the test (SET jit = off;) and call the function 100k times, the
> RssAnon memory for the given process is stable at only 3612 kB, while when
> JIT is enabled (the default setting on the server suffering from the
> memory leak), the RssAnon memory for the given process grows in a linear
> manner over time (canceled when it reached 5GB).

Ah.  I bet this is another instance of the known memory leakage problems
with JIT inlining [1].  Per Andres' comments in that thread, it seems
hard to solve properly.  For now all I can recommend is to disable that.

regards, tom lane

[1] 
https://www.postgresql.org/message-id/flat/20201001021609.GC8476%40telsasoft.com
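
Not part of Tom's reply, but for reference, JIT can be disabled at several scopes; these are standard settings (the database name is a placeholder):

```sql
SET jit = off;                       -- this session only
ALTER SYSTEM SET jit = off;          -- whole server; reload config after
SELECT pg_reload_conf();
ALTER DATABASE mydb SET jit = off;   -- one database; mydb is a placeholder
```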




Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-08-03 Thread Aleš Zelený
Hello,

I did some testing and the result is that on pg12 there are no such
problems. Pg 13 and Pg 14.3 tests will follow, but based on monitoring the
processes' RssAnon memory, I've found a correlation: only processes for a
single DB user are suffering from the memory allocation growth (it looks
like a memory leak to me).
For the given DB user, I've configured log_min_duration_statement to 0 and
afterward analyzed the Postgres server log with all the statements in the
time window when I spotted significant memory growth for a single process
in a short time (a few minutes).
This correlation pointed my attention to a SQL function called frequently,
and I've tried to do some testing with it.



CREATE OR REPLACE FUNCTION tescase (_id_data_provider integer, _external_id
bigint DEFAULT NULL::bigint, _external_complete_id character varying
DEFAULT NULL::character varying)
 RETURNS TABLE(id_gm bigint, id_opp bigint, id_opp_state integer)
 LANGUAGE sql
AS $function$
SELECT ... simple join of two tables...
 WHERE opd.id_data_provider = _id_data_provider
   AND CASE WHEN _external_id IS NULL
THEN external_id IS NULL
ELSE external_id = _external_id
   END
   AND CASE WHEN _external_complete_id IS NULL
THEN _external_complete_id IS NULL
ELSE external_complete_id = _external_complete_id
   END;
$function$
;

It is a kind of creative construct for me, but it works. The key here is
that if I replace at least one of the "CASEd" WHERE conditions, it seems
not to suffer from the memory leak issue.

Finally, I've found that even with the function as is, if I disable JIT
before the test (SET jit = off;) and call the function 100k times, the
RssAnon memory for the given process is stable at only 3612 kB, while when
JIT is enabled (the default setting on the server suffering from the
memory leak), the RssAnon memory for the given process grows in a linear
manner over time (I canceled the test when it reached 5GB).
This was tested on real application data that I cannot share, but if I get
a chance to prepare a synthetic test case, I'll submit it as a bug.

In the smaps for the given process, it is possible to identify the address
of a heap-allocated memory region which constantly grows with every
execution (while the sizes of the other mapped heap segments are static),
but I have no clue whether that helps; it looks like:

while :; do date; sed -n '/^02d93000.*/,/^VmFlags/p' /proc/31600/smaps;
sleep 2; done

Út srp  2 17:10:18 CEST 2022
02d93000-20fe1000 rw-p  00:00 0
 [heap]
Size: 493880 kB
Rss:  492784 kB
Pss:  492784 kB
Shared_Clean:  0 kB
Shared_Dirty:  0 kB
Private_Clean: 0 kB
Private_Dirty:492784 kB
Referenced:   492560 kB
Anonymous:492784 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
MMUPageSize:   4 kB
Locked:0 kB
VmFlags: rd wr mr mp me ac sd
Út srp  2 17:10:20 CEST 2022
02d93000-21003000 rw-p  00:00 0
 [heap]
Size: 494016 kB
Rss:  493660 kB
Pss:  493660 kB
Shared_Clean:  0 kB
Shared_Dirty:  0 kB
Private_Clean: 0 kB
Private_Dirty:493660 kB
Referenced:   493436 kB
Anonymous:493660 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
MMUPageSize:   4 kB
Locked:0 kB
VmFlags: rd wr mr mp me ac sd

Thanks for any hints or comments.
Ales


Re: Is Client connections via ca.crt only possible?

2022-08-03 Thread Thomas Guyot

On 2022-08-01 04:12, Rejo Oommen wrote:

Requirement is to use only ca.crt and connect to postgres

Server.crt, Server.key and ca.crt are configured at the postgres 
server for tls connection.


Connection successful while using
psql ‘host=172.29.21.222 dbname=test user=postgres sslmode=verify-ca 
sslcert=/tmp/server.crt sslkey=/tmp/server.key sslrootcert=/tmp/ca.crt 
port=5432’


For clients to connect, can they use only ca.crt and connect to the 
DB. Tried and got the below error


psql ‘host=172.29.21.222 dbname=test user=postgres sslmode=verify-ca 
sslrootcert=/tmp/ca.crt port=5432’
psql: error: connection to server at “172.29.21.222”, port 50001 
failed: FATAL:  connection requires a valid client certificate




Hi Rejo,

I don't think you fully understand how mutual TLS auth works. For the 
client to authenticate using a certificate, it needs a valid certificate 
and key too, where the certificate is signed by a CA your server trusts 
(usually the same CA that signed your server cert) and has a proper 
subject (one bearing the certificate owner's user name, i.e. the user you 
will grant privileges to in the database). You shouldn't even need to 
pass a username; it will be in the certificate.


I'm talking purely from a generic view (I'm not familiar with the 
specifics of PostgreSQL configuration), but TLS authentication requires a 
secret, and a CA certificate isn't secret. Your server certificate 
authenticates the server, but nothing authenticates the client.
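
As an illustration (addresses and file names are placeholders, not from the original thread), the server-side configuration for the two approaches would look roughly like this in pg_hba.conf:

```
# pg_hba.conf sketch (addresses and layout are illustrative):

# Certificate-only auth: the client cert's CN must match the DB user.
hostssl  all  all  0.0.0.0/0  cert

# Password auth plus a required, CA-validated client certificate
# (the setup discussed in this thread).
hostssl  all  all  0.0.0.0/0  md5  clientcert=verify-ca

# Client side: a cert/key pair and the root CA are then both needed, e.g.
#   psql "host=db.example.com dbname=test user=postgres sslmode=verify-full \
#         sslcert=client.crt sslkey=client.key sslrootcert=ca.crt"
```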


Regards,

--
Thomas




Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread zaphod61
I am pretty sure it was a free version of the install, however, I will
check the version tomorrow when I am back in the office.

Thanks!

Sent from my Verizon, Samsung Galaxy smartphone

 Original message 
From: Adrian Klaver
Date: 8/3/22 2:51 PM (GMT-05:00)
To: zaphod61, pgsql-general@lists.postgresql.org
Subject: Re: Upgrading from 12.3 to 12.11

On 8/3/22 11:44 AM, zaphod61 wrote:
> The server version is 2016. The installer is the one listed on the
> postgresql support page.  I think version 12.3 was installed the same
> way but that was before I came into contact with this application

You need to make sure, mixing installation methods can cause a mess.

Also how much downtime can you tolerate?

> Sent from my Verizon, Samsung Galaxy smartphone
>
>  Original message 
> From: Adrian Klaver
> Date: 8/3/22 1:57 PM (GMT-05:00)
> To: zaphod61, pgsql-general@lists.postgresql.org
> Subject: Re: Upgrading from 12.3 to 12.11
>
> On 8/3/22 10:16 AM, zaphod61 wrote:
> > I've inherited a postgresql 12.3 installation. It has 1 database in it.
> > I need to upgrade it to the newest version, which appears to be 12.11.
> > Can I just download the installer for 12.11 and run that to upgrade the
> > product and still maintain access to the existing connections?
>
> What OS are you using?
>
> How was the Postgres 12.3 instance installed?
>
> What installer are you referring to?
>
> 12.3 -> 12.11 is a minor upgrade so you can do the update without a
> dump/restore or using pg_upgrade. That being said, at some point in the
> process the existing instance (12.3) will need to be stopped and then the
> new install for the 12.11 done and then start it. That means there will
> be a point where the connections will be lost.  How big an issue would
> that be?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com

--
Adrian Klaver
adrian.kla...@aklaver.com

Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread Adrian Klaver

On 8/3/22 11:44, zaphod61 wrote:
The server version is 2016. The installer is the one listed on the 
postgresql support page.  I think version 12.3 was installed the same 
way but that was before I came into contact with this application




Would be helpful to do:

select version();

to get some clue  of where the install originated from.

I add this because it occurred to me that maybe it was not installed 
with this:


https://www.postgresql.org/download/windows/

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads

Instead it may be one of EDB's not free versions of Postgres. Mixing 
that up would be a mess.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread Adrian Klaver

On 8/3/22 11:44 AM, zaphod61 wrote:
The server version is 2016. The installer is the one listed on the 
postgresql support page.  I think version 12.3 was installed the same 
way but that was before I came into contact with this application


You need to make sure, mixing installation methods can cause a mess.

Also how much downtime can you tolerate?







Sent from my Verizon, Samsung Galaxy smartphone


 Original message 
From: Adrian Klaver 
Date: 8/3/22 1:57 PM (GMT-05:00)
To: zaphod61 , pgsql-general@lists.postgresql.org
Subject: Re: Upgrading from 12.3 to 12.11

On 8/3/22 10:16 AM, zaphod61 wrote:
 >
 > I've inherited a postgresql 12.3 installation. It has 1 database in it.
 > I need to upgrade it to the newest version, which appears to be 12.11.
 > Can I just download the installer for 12.11 and run that to upgrade the
 > product and still maintain access to the
 > existing connections?

What OS are you using?

How was the Postgres 12.3 instance installed?

What installer are you referring to?


12.3 -> 12.11 is a minor upgrade so you can do the update without a
dump/restore or using pg_upgrade. That being said, at some point in the
process the existing instance (12.3) will need to be stopped and then the
new install for the 12.11 done and then start it. That means there will
be a point where the connections will be lost.  How big an issue would
that be?


 >
 >
 >
 >
 > Sent from my Verizon, Samsung Galaxy smartphone
 >


--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread zaphod61
The server version is 2016. The installer is the one listed on the
postgresql support page.  I think version 12.3 was installed the same way
but that was before I came into contact with this application

Sent from my Verizon, Samsung Galaxy smartphone

 Original message 
From: Adrian Klaver
Date: 8/3/22 1:57 PM (GMT-05:00)
To: zaphod61, pgsql-general@lists.postgresql.org
Subject: Re: Upgrading from 12.3 to 12.11

On 8/3/22 10:16 AM, zaphod61 wrote:
> I've inherited a postgresql 12.3 installation. It has 1 database in it.
> I need to upgrade it to the newest version, which appears to be 12.11.
> Can I just download the installer for 12.11 and run that to upgrade the
> product and still maintain access to the existing connections?

What OS are you using?

How was the Postgres 12.3 instance installed?

What installer are you referring to?

12.3 -> 12.11 is a minor upgrade so you can do the update without a
dump/restore or using pg_upgrade. That being said, at some point in the
process the existing instance (12.3) will need to be stopped and then the
new install for the 12.11 done and then start it. That means there will
be a point where the connections will be lost.  How big an issue would
that be?

--
Adrian Klaver
adrian.kla...@aklaver.com

Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread Adrian Klaver

On 8/3/22 10:16 AM, zaphod61 wrote:


I've inherited a postgresql 12.3 installation. It has 1 database in it.  
I need to upgrade it to the newest version, which appears to be 12.11. 
Can I just download the installer for 12.11 and run that to upgrade the 
product and still maintain access to the 
existing connections?


What OS are you using?

How was the Postgres 12.3 instance installed?

What installer are you referring to?


12.3 -> 12.11 is a minor upgrade so you can do the update without a 
dump/restore or using pg_upgrade. That being said, at some point in the 
process the existing instance (12.3) will need to be stopped and then the 
new install for the 12.11 done and then start it. That means there will 
be a point where the connections will be lost.  How big an issue would 
that be?








Sent from my Verizon, Samsung Galaxy smartphone




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread Christophe Pettus



> On Aug 3, 2022, at 10:16, zaphod61  wrote:
> 
> 
> I've inherited a postgresql 12.3 installation. It has 1 database in it.  I 
> need to upgrade it to the newest version, which appears to be 12.11. Can I 
> just download the installer for 12.11 and run that to upgrade the product 
> and still maintain access to the 
> existing connections?

Any upgrade will require a server restart.  If it's a minor version upgrade, 
you just need to replace the binaries, but *always* read the upgrade notes for 
post-upgrade housekeeping.



Re: Upgrading from 12.3 to 12.11

2022-08-03 Thread David G. Johnston
On Wednesday, August 3, 2022, zaphod61  wrote:

>
> I've inherited a postgresql 12.3 installation. It has 1 database in it.  I
> need to upgrade it to the newest version, which appears to be 12.11. Can I
> just download the installer for 12.11 and run that to upgrade the product
> and still maintain access to the
> existing connections?
>
>
https://www.postgresql.org/support/versioning/

David J.


Upgrading from 12.3 to 12.11

2022-08-03 Thread zaphod61
I've inherited a postgresql 12.3 installation. It has 1 database in it.  I need
to upgrade it to the newest version, which appears to be 12.11. Can I just
download the installer for 12.11 and run that to upgrade the product and still
maintain access to the existing connections?

Sent from my Verizon, Samsung Galaxy smartphone

Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Kevin Martin
Thanks for the response, Tom.  I think that's exactly what's going on here.

On Wed, Aug 3, 2022 at 11:28 AM Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 8/3/22 08:09, Kevin Martin wrote:
> >> I thought that any UPDATE message in the replication logs is designed
> to
> >> include all values for all columns.  Is that correct?
>
> Buried deep in the wire protocol specification is this bit:
>
> 
> The following message parts are shared by the above messages.
>
> TupleData
>
> Int16
> Number of columns.
>
> Next, one of the following submessages appears for each column (except
> generated columns):
>
> ...
>
> Byte1('u')
> Identifies unchanged TOASTed value (the actual value is not sent).
> 
>
> So that right there is two cases where we omit data for a column.
> I suspect the "unchanged toasted data" case is what matters for
> your purposes.  Maybe you've found some code that fails to implement
> that correctly?
>
> regards, tom lane
>


Re: ICU is not supported in this build. install from source code.

2022-08-03 Thread Daniel Verite
jian he wrote:

> ./configure --with-perl --with-python  --with-icu
> ICU_CFLAGS='-I/usr/include/unicode'  ICU_LIBS='-L/usr/lib/icu'
> --enable-debug --with-pgport=5440

The -L location alone is not sufficient in ICU_LIBS.
The list of libraries to link with needs to be included as well.
For instance:
ICU_LIBS="-L/usr/lib/icu -licui18n -licuuc -licudata"

When using the version of ICU that ships with the distribution (libicu-dev
package on Ubuntu), adding only "--with-icu" should be enough.
The locations of includes and libraries will be automatically found.
I'm not sure that's your case though. On one hand, /usr/include/unicode is
where the package puts the ICU includes, but on the other hand
it doesn't create any /usr/lib/icu directory. 
And Ubuntu 22.04 LTS brings ICU 70, not ICU 71.

If you need a self-built version of ICU, it might be simpler to use pkgconfig
as an alternative to ICU_CFLAGS/ICU_LIBS, for instance:

./configure --with-icu PKG_CONFIG_PATH=/path/to/icu/lib/pkgconfig

where /path/to/icu was passed as the prefix to ICU4C configure
invocation. It's recommended to install custom builds to specific
locations instead of overwriting the libraries that are part of
the distribution (otherwise it becomes unmaintainable).


Best regards,
-- 
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite




Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Tom Lane
Adrian Klaver  writes:
> On 8/3/22 08:09, Kevin Martin wrote:
>> I thought that any UPDATE message in the replication logs is designed to 
>> include all values for all columns.  Is that correct?

Buried deep in the wire protocol specification is this bit:


The following message parts are shared by the above messages.

TupleData

Int16
Number of columns.

Next, one of the following submessages appears for each column (except 
generated columns):

...

Byte1('u')
Identifies unchanged TOASTed value (the actual value is not sent).


So that right there is two cases where we omit data for a column.
I suspect the "unchanged toasted data" case is what matters for
your purposes.  Maybe you've found some code that fails to implement
that correctly?

regards, tom lane
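
One workaround often used for the unchanged-TOAST case (not mentioned in this thread; the table name is a placeholder) is to log the full old row on the publisher, at the cost of extra WAL, so downstream consumers can recover the value from the old-tuple part of the message:

```sql
-- Log complete old rows for this table in the WAL; consumers such as
-- CDC tools can then read unchanged TOASTed values from the old tuple.
-- Increases WAL volume for updates/deletes on this table.
ALTER TABLE mytable REPLICA IDENTITY FULL;
```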




Why is my table continuousely written? -> XID issue?

2022-08-03 Thread Peter


I had a look into what actually changed in the table. At the first
write that I grabbed, four rows in that segment had such a change:

117ee000  77 00 00 00 00 df b8 82  8e a4 00 00 64 00 a0 00  |w...d...|
117ee000  77 00 00 00 f0 22 b4 f3  68 d3 00 00 64 00 a0 00  |w"..h...d...|

117ee0a0  f8 5c 00 00 f8 5c 00 00  09 00 00 00 06 00 f7 8b  |.\...\..|
117ee0b0  13 00 04 00 93 00 18 0d  d8 27 fe 01 00 00 00 00  |.'..|
117ee0b0  13 00 04 00 93 01 18 0d  d8 27 fe 01 00 00 00 00  |.'..|

1aad8000  77 00 00 00 30 fc 8c a5  80 8a 00 00 58 00 a8 00  |w...0...X...|
1aad8000  77 00 00 00 e8 12 8a f3  6e 5b 00 00 58 00 a8 00  |w...n[..X...|

1aad80a0   f6 5d 00 00 00 00 00 00  |.]..|
1aad80b0  09 00 00 00 06 00 6c d5  10 00 04 00 03 08 18 0d  |..l.|
1aad80b0  09 00 00 00 06 00 6c d5  10 00 04 00 03 09 18 0d  |..l.|

If I get that right, then it is the HEAP_XMIN_COMMITTED from
t_infomask that got set for the rows.

The next write is a bit different:

a000  76 00 00 00 60 9d 84 d0  23 4c 04 00 70 00 50 01  |v...`...#L..p.P.|
a000  78 00 00 00 80 81 e9 05  78 0d 04 00 70 00 50 01  |x...x...p.P.|

a690   cf 51 00 00 00 00 00 00  |504c54...Q..|
a690   cf 51 00 00 13 5e 00 00  |504c54...Q...^..|
a6a0  00 00 00 00 06 00 05 00  12 00 04 00 03 2b 18 0d  |.+..|
a6a0  00 00 00 00 06 00 05 00  12 00 04 00 93 23 18 0d  |.#..|

003ec000  76 00 00 00 30 98 85 d0  59 e9 04 00 60 00 48 01  |v...0...Y...`.H.|
003ec000  78 00 00 00 08 33 1b 03  bd 97 04 00 60 00 48 01  |x3..`.H.|

003eddf0  cf 51 00 00 00 00 00 00  00 00 00 00 06 00 f6 01  |.Q..|
003eddf0  cf 51 00 00 13 5e 00 00  00 00 00 00 06 00 f6 01  |.Q...^..|
003ede00  02 00 04 00 03 2b 18 0d  f3 56 2a 00 00 00 00 00  |.+...V*.|
003ede00  02 00 04 00 93 23 18 0d  f3 56 2a 00 00 00 00 00  |.#...V*.|

It seems that here the t_xmax for the row is updated, and flags in
t_infomask are changed as well; there are about 1000 rows (of some
3 million contained in this segment) where this is done.

So if I get this right, it looks like some out-of-band housekeeping is
being done here.
Now, while I would understand this being done opportunistically when a
record is read anyway, and I do not claim to grok the XID housekeeping in
full, I also do not think that I have any open transactions that would
persist across a server restart. So I was under the impression that a
server restart plus a VACUUM FULL, without any further update/delete
operations, should resolve all the needs for such housekeeping for the
time being, and quiesce the situation.

Apparently it does not. So, then, what does?
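
As a side note (not from the original post), the pageinspect contrib extension can show the same t_infomask bits without hex-dumping segment files; the table name below is a placeholder, and HEAP_XMIN_COMMITTED is bit 0x0100 of t_infomask:

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Inspect tuple headers on block 0 of a hypothetical table.
SELECT lp, t_xmin, t_xmax,
       to_hex(t_infomask) AS infomask,
       (t_infomask::int & 256) <> 0 AS xmin_committed  -- HEAP_XMIN_COMMITTED
FROM heap_page_items(get_raw_page('mytable', 0));
```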





Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Kevin Martin
My apologies.  Read that in the mailing list instructions, and still
forgot.  Thanks for the reminder.

On Wed, Aug 3, 2022 at 11:12 AM Adrian Klaver 
wrote:

> On 8/3/22 08:09, Kevin Martin wrote:
>
> Please reply to list also
> Ccing list
>
> I don't have answers to below at the moment, just getting thread back to
> list so others who might have answers can see it.
>
> > Thanks for the reply, Adrian.
> >
> > We're looking at the messages in the replication slot using
> > pg_logical_slot_peek_changes in the source db.
> >
> > In those messages, we see some UPDATEs that do not include one of the
> > columns in the table.
> >
> > I'm not sure what statements are producing the updates to the table
> > from the application, if that is what you are asking.  Does the update
> > against the database have to include all columns in order for the
> > replication log to have them all?
> >
> > I thought that any UPDATE message in the replication logs is designed to
> > include all values for all columns.  Is that correct?
> >
> > The data is not showing up in the replica table.  In this case, though,
> > the replication slot is being queried by Stitch to produce a copy in
> > Snowflake.  That is probably somewhat irrelevant to the current
> > question, though, since we appear to be seeing missing data in the
> > replication slot messages on the source.
> >
> > I'm on the receiving side of this issue and am working with my DBA on
> > trying to figure it out, so I'm not fully versed in how all of this
> > works.  I can try to get more information if it helps.  I have seen the
> > output from the peek function, and there are clearly some UPDATE
> > messages that have the column / values in question and some that do not.
> >
> > On Wed, Aug 3, 2022 at 10:47 AM Adrian Klaver  > > wrote:
> >
> > On 8/3/22 06:50, Kevin Martin wrote:
> >  > We have a replication slot set up on a database in Postgres 12.8.
> > For
> >  > one of the tables, when a row is created, we see a series of
> records
> >  > come through - an INSERT followed by a handful of UPDATEs. All of
> > these
> >  > messages in the WAL files show all columns, except for the last
> > UPDATE
> >  > message, which is missing one of the columns. (The column in
> > question is
> >  > a JSONB field, and the data is not overly large - less than 1000
> >  > chars.)  We think this is causing the data to come into our data
> > lake
> >  > (via Stitch) with that column as NULL.
> >
> > See the messages where and/or how?
> >
> > What is the UPDATE command that is being given on primary?
> >
> > Is the data showing up in the replica table?
> >
> >  >
> >  > My understanding is that all INSERT and UPDATE messages written
> > to the
> >  > replication logs are supposed to include all columns. But I can't
> > find a
> >  > definitive answer on that.
> >  >
> >  > So, my first question is: Is it normal / expected for UPDATE
> > messages in
> >  > the replication logs to exclude any columns in the table?
> >  >
> >  > And, of course, if that is unexpected behavior, I'd love to hear
> any
> >  > thoughts on what may cause it.
> >  >
> >  > Thanks.
> >  >
> >  > -Kevin
> >  >
> >  >
> >  > FYI.  I have this question posted also on StackOverflow:
> >  >
> >
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> > <
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> >
> >
> >  >
> > <
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> > <
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> >>
> >  >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Adrian Klaver

On 8/3/22 08:09, Kevin Martin wrote:

Please reply to list also
Ccing list

I don't have answers to below at the moment, just getting thread back to 
list so others who might have answers can see it.



Thanks for the reply, Adrian.

We're looking at the messages in the replication slot using 
pg_logical_slot_peek_changes in the source db.


In those messages, we see some UPDATEs that do not include one of the 
columns in the table.


I'm not sure what statements are producing the updates to the table 
from the application, if that is what you are asking.  Does the update 
against the database have to include all columns in order for the 
replication log to have them all?


I thought that any UPDATE message in the replication logs is designed to 
include all values for all columns.  Is that correct?


The data is not showing up in the replica table.  In this case, though, 
the replication slot is being queried by Stitch to produce a copy in 
Snowflake.  That is probably somewhat irrelevant to the current 
question, though, since we appear to be seeing missing data in the 
replication slot messages on the source.


I'm on the receiving side of this issue and am working with my DBA on 
trying to figure it out, so I'm not fully versed in how all of this 
works.  I can try to get more information if it helps.  I have seen the 
output from the peek function, and there are clearly some UPDATE 
messages that have the column / values in question and some that do not.


On Wed, Aug 3, 2022 at 10:47 AM Adrian Klaver > wrote:


On 8/3/22 06:50, Kevin Martin wrote:
 > We have a replication slot set up on a database in Postgres 12.8.
For
 > one of the tables, when a row is created, we see a series of records
 > come through - an INSERT followed by a handful of UPDATEs. All of
these
 > messages in the WAL files show all columns, except for the last
UPDATE
 > message, which is missing one of the columns. (The column in
question is
 > a JSONB field, and the data is not overly large - less than 1000
 > chars.)  We think this is causing the data to come into our data
lake
 > (via Stitch) with that column as NULL.

See the messages where and/or how?

What is the UPDATE command that is being given on primary?

Is the data showing up in the replica table?

 >
 > My understanding is that all INSERT and UPDATE messages written
to the
 > replication logs are supposed to include all columns. But I can't
find a
 > definitive answer on that.
 >
 > So, my first question is: Is it normal / expected for UPDATE
messages in
 > the replication logs to exclude any columns in the table?
 >
 > And, of course, if that is unexpected behavior, I'd love to hear any
 > thoughts on what may cause it.
 >
 > Thanks.
 >
 > -Kevin
 >
 >
 > FYI.  I have this question posted also on StackOverflow:
 >

https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message



 >

>
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Adrian Klaver

On 8/3/22 06:50, Kevin Martin wrote:
We have a replication slot set up on a database in Postgres 12.8. For 
one of the tables, when a row is created, we see a series of records 
come through - an INSERT followed by a handful of UPDATEs. All of these 
messages in the WAL files show all columns, except for the last UPDATE 
message, which is missing one of the columns. (The column in question is 
a JSONB field, and the data is not overly large - less than 1000 
chars.)  We think this is causing the data to come into our data lake 
(via Stitch) with that column as NULL.


See the messages where and/or how?

What is the UPDATE command that is being given on primary?

Is the data showing up in the replica table?



My understanding is that all INSERT and UPDATE messages written to the 
replication logs are supposed to include all columns. But I can't find a 
definitive answer on that.


So, my first question is: Is it normal / expected for UPDATE messages in 
the replication logs to exclude any columns in the table?


And, of course, if that is unexpected behavior, I'd love to hear any 
thoughts on what may cause it.


Thanks.

-Kevin


FYI.  I have this question posted also on StackOverflow: 
https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message 






--
Adrian Klaver
adrian.kla...@aklaver.com




"Missing" column in Postgres logical replication update message

2022-08-03 Thread Kevin Martin
We have a replication slot set up on a database in Postgres 12.8. For one
of the tables, when a row is created, we see a series of records come
through - an INSERT followed by a handful of UPDATEs. All of these messages
in the WAL files show all columns, except for the last UPDATE message,
which is missing one of the columns. (The column in question is a JSONB
field, and the data is not overly large - less than 1000 chars.)  We
think this is causing the data to come into our data lake (via Stitch) with
that column as NULL.

My understanding is that all INSERT and UPDATE messages written to the
replication logs are supposed to include all columns. But I can't find a
definitive answer on that.

So, my first question is: Is it normal / expected for UPDATE messages in
the replication logs to exclude any columns in the table?

And, of course, if that is unexpected behavior, I'd love to hear any
thoughts on what may cause it.

Thanks.

-Kevin


FYI.  I have this question posted also on StackOverflow:
https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
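
[Editorial note: one pattern worth checking when a decoded UPDATE appears to drop a column is TOAST - logical decoding does not re-emit a large, out-of-line (TOASTed) value that was not changed by the UPDATE, and the test_decoding plugin renders such columns as unchanged-toast-datum rather than the real value. A minimal sketch that flags such messages; the sample lines are illustrative, not real output from this slot:]

```python
# Scan decoded changes (as returned in the 'data' column of
# pg_logical_slot_peek_changes with the test_decoding plugin) for UPDATEs
# whose column value was replaced by the unchanged-TOAST marker.
SAMPLE_CHANGES = [
    "table public.demo: INSERT: id[integer]:1 payload[jsonb]:'{\"a\": 1}'",
    "table public.demo: UPDATE: id[integer]:1 payload[jsonb]:unchanged-toast-datum",
]

def suspicious_updates(changes):
    """Return decoded UPDATE lines where a TOASTed column was not re-emitted."""
    return [c for c in changes
            if "UPDATE" in c and "unchanged-toast-datum" in c]
```

If such lines show up, the downstream consumer (Stitch, here) must carry the previous value forward instead of treating the column as NULL.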


Why is my table continuously written?

2022-08-03 Thread Peter


Hello,

 this is PostgreSQL version 12.11_1 on FreeBSD 13.1

I have a table "mess", filename "6309215", that behaves strangely.
Data is currently only inserted/appended; there are no updates and no
(successful) deletes, and autovacuum is NOT enabled.

This is the definition:

CREATE TABLE IF NOT EXISTS dnstap.mess
(
    id bigint NOT NULL DEFAULT nextval('dnstap.mess_id_seq'::regclass),
    mess text COLLATE pg_catalog."default",
    hash text COLLATE pg_catalog."default",
    CONSTRAINT mess_pkey PRIMARY KEY (id),
    CONSTRAINT mess_hash_key UNIQUE (hash)
)
TABLESPACE l1only;


And this is the table:

-rw---  1 770  770  1073741824 Aug  2 04:38 6309212
-rw---  1 770  770  1073741824 Aug  2 04:38 6309212.1
-rw---  1 770  770  1073741824 Aug  2 04:38 6309212.2
-rw---  1 770  770  1073741824 Aug  2 04:39 6309212.3
-rw---  1 770  770  1073741824 Aug  2 04:39 6309212.4
-rw---  1 770  770   405872640 Aug  3 04:28 6309212.5
-rw---  1 770  770 1441792 Aug  3 03:25 6309212_fsm
-rw---  1 770  770  180224 Aug  2 05:13 6309212_vm
-rw---  1 770  770   0 Aug  2 00:21 6309213
-rw---  1 770  770        8192 Aug  2 04:40 6309214
-rw---  1 770  770  1073741824 Aug  3 04:29 6309215
-rw---  1 770  770  1073741824 Aug  3 04:30 6309215.1
-rw---  1 770  770  1073741824 Aug  3 04:13 6309215.10
-rw---  1 770  770   995811328 Aug  3 04:16 6309215.11
-rw---  1 770  770  1073741824 Aug  3 04:32 6309215.2
-rw---  1 770  770  1073741824 Aug  3 04:34 6309215.3
-rw---  1 770  770  1073741824 Aug  3 04:35 6309215.4
-rw---  1 770  770  1073741824 Aug  3 04:37 6309215.5
-rw---  1 770  770  1073741824 Aug  3 04:39 6309215.6
-rw---  1 770  770  1073741824 Aug  3 04:41 6309215.7
-rw---  1 770  770  1073741824 Aug  3 04:42 6309215.8
-rw---  1 770  770  1073741824 Aug  3 04:12 6309215.9
-rw---  1 770  770 3170304 Aug  3 04:38 6309215_fsm
-rw---  1 770  770  393216 Aug  3 04:38 6309215_vm
-rw---  1 770  770  1073741824 Aug  3 03:17 6309216
-rw---  1 770  770  1073741824 Aug  2 04:33 6309216.1
-rw---  1 770  770   241360896 Aug  3 03:18 6309216.2
-rw---  1 770  770  606208 Aug  3 03:18 6309216_fsm
-rw---  1 770  770   73728 Aug  3 03:17 6309216_vm
-rw---  1 770  770    43139072 Aug  3 03:18 6309217


There is another table, "6309212", that is also only appended data,
and that behaves as expected: only the final segment is written, the
other segments are a day old.
But in "6309215" all the segments were recently written. ("6309216" is
the TOAST for "6309215", and "6309217" seems to be an index on that.)
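
[Editorial sketch: the eyeball check above - which segments of a relation were written recently - can be automated. Directory and relfilenode below are hypothetical placeholders, not the poster's actual paths:]

```python
# List segments of a relation (base file plus ".1", ".2", ... extents)
# whose mtime falls within the given window. Fork files such as
# "<node>_fsm" and "<node>_vm" are deliberately excluded.
import os
import time

def recently_written(datadir, relfilenode, within_seconds=3600):
    now = time.time()
    hits = []
    for name in os.listdir(datadir):
        if name == relfilenode or name.startswith(relfilenode + "."):
            path = os.path.join(datadir, name)
            if now - os.path.getmtime(path) < within_seconds:
                hits.append(name)
    return sorted(hits)
```

Running this against the table's directory before and after a query makes it easy to see whether a supposedly read-only statement dirtied old segments.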


This is what happens:

2022-08-03 03:09:03 CEST
-- --- Check for hash value clash with old entries
SELECT i1.id
  FROM dnstap.incoming AS i1,
   dnstap.mess AS mess
 WHERE mess.hash = md5(i1.mess)
   AND mess.mess <> i1.mess

I don't think a SELECT like this has reasons to write the
referenced table - but it does:

-rw---  1 770  770  1073741824 Aug  3 03:09:25 2022 6309215
-rw---  1 770  770  1073741824 Aug  3 03:09:25 2022 6309215.1
-rw---  1 770  770  1073741824 Aug  3 03:09:47 2022 6309215.10
-rw---  1 770  770   965394432 Aug  3 03:10:17 2022 6309215.11
-rw---  1 770  770  1073741824 Aug  3 03:09:25 2022 6309215.2
-rw---  1 770  770  1073741824 Aug  3 03:09:20 2022 6309215.3
-rw---  1 770  770  1073741824 Aug  3 03:09:25 2022 6309215.4
-rw---  1 770  770  1073741824 Aug  3 03:09:25 2022 6309215.5
-rw---  1 770  770  1073741824 Aug  3 03:10:18 2022 6309215.6
-rw---  1 770  770  1073741824 Aug  3 03:10:21 2022 6309215.7
-rw---  1 770  770  1073741824 Aug  3 03:09:25 2022 6309215.8
-rw---  1 770  770  1073741824 Aug  3 03:09:21 2022 6309215.9

And this goes on in that way with all the queries.
And when the VACUUM ANALYZE finally runs, it is the same, except that it
writes the segments in sequence (only inserts were appended at the end;
there are no updates or deletes):

2022-08-03 04:28:16 CEST
VACUUM (ANALYZE) dnstap.mess;

-rw---  1 770  770  1073741824 Aug  3 04:29 6309215
-rw---  1 770  770  1073741824 Aug  3 04:30 6309215.1
-rw---  1 770  770  1073741824 Aug  3 04:13 6309215.10
-rw---  1 770  770   995811328 Aug  3 04:16 6309215.11
-rw---  1 770  770  1073741824 Aug  3 04:32 6309215.2
-rw---  1 770  770  1073741824 Aug  3 04:33 6309215.3
-rw---  1 770  770  1073741824 Aug  3 04:07 6309215.4
-rw---  1 770  770  1073741824 Aug  3 04:08 6309215.5
-rw---  1 770  770  1073741824 Aug  3 04:09 6309215.6
-rw---  1 770  770  1073741824 Aug  3 04:10 6309215.7
-rw---  1 770  770  1073741824 Aug  3 04:11 6309215.8
-rw---  1 770  770  1073741824 Aug  3 04:12 6309215.9
-rw---  1 770  770 3170304 Aug  3 03:18 6309215_fsm
-rw---  1 770  770  393216 Aug  3 04:28 6309215_vm

What is worse is this:

-rw---  1 770   770  16777216 Aug  3 04:28 000100780067.ok
-rw---  1 770   

Re: ICU is not supported in this build. install from source code.

2022-08-03 Thread jian he
Then I tried to install from the latest source code.

./configure --with-perl --with-python  --with-icu
ICU_CFLAGS='-I/usr/include/unicode'  ICU_LIBS='-L/usr/lib/icu'
--enable-debug --with-pgport=5440

config.status reports:

config.status: creating GNUmakefile
> config.status: creating src/Makefile.global
> config.status: creating src/include/pg_config.h
> config.status: src/include/pg_config.h is unchanged
> config.status: creating src/include/pg_config_ext.h
> config.status: src/include/pg_config_ext.h is unchanged
> config.status: creating src/interfaces/ecpg/include/ecpg_config.h
> config.status: src/interfaces/ecpg/include/ecpg_config.h is unchanged
> config.status: linking src/backend/port/tas/dummy.s to
> src/backend/port/tas.s
> config.status: linking src/backend/port/posix_sema.c to
> src/backend/port/pg_sema.c
> config.status: linking src/backend/port/sysv_shmem.c to
> src/backend/port/pg_shmem.c
> config.status: linking src/include/port/linux.h to
> src/include/pg_config_os.h
> config.status: linking src/makefiles/Makefile.linux to src/Makefile.port
>

Then I ran make world.
The build failed with the following errors:

/usr/bin/ld: access/hash/hashfunc.o: in function `hashtext':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/access/hash/hashfunc.c:301:
> undefined reference to `ucol_getSortKey_70'
> /usr/bin/ld:
> /home/jian/Desktop/pg16_source/postgresql/src/backend/access/hash/hashfunc.c:304:
> undefined reference to `ucol_getSortKey_70'
> /usr/bin/ld: access/hash/hashfunc.o: in function `hashtextextended':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/access/hash/hashfunc.c:358:
> undefined reference to `ucol_getSortKey_70'
> /usr/bin/ld:
> /home/jian/Desktop/pg16_source/postgresql/src/backend/access/hash/hashfunc.c:361:
> undefined reference to `ucol_getSortKey_70'
> /usr/bin/ld: commands/collationcmds.o: in function
> `pg_import_system_collations':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/commands/collationcmds.c:786:
> undefined reference to `uloc_getAvailable_70'
> /usr/bin/ld: commands/collationcmds.o: in function `get_icu_language_tag':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/commands/collationcmds.c:529:
> undefined reference to `uloc_toLanguageTag_70'
> /usr/bin/ld: commands/collationcmds.o: in function
> `pg_import_system_collations':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/commands/collationcmds.c:775:
> undefined reference to `uloc_countAvailable_70'
> /usr/bin/ld: commands/collationcmds.o: in function
> `get_icu_locale_comment':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/commands/collationcmds.c:554:
> undefined reference to `uloc_getDisplayName_70'
> /usr/bin/ld: commands/collationcmds.o: in function `get_icu_language_tag':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/commands/collationcmds.c:531:
> undefined reference to `u_errorName_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isalpha':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:356:
> undefined reference to `u_isalpha_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isspace':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:617:
> undefined reference to `u_isspace_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isdigit':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:320:
> undefined reference to `u_isdigit_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isalnum':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:392:
> undefined reference to `u_isalnum_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isgraph':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:509:
> undefined reference to `u_isgraph_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isupper':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:437:
> undefined reference to `u_isupper_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_islower':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:473:
> undefined reference to `u_islower_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_ispunct':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:581:
> undefined reference to `u_ispunct_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_isprint':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:545:
> undefined reference to `u_isprint_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_tolower':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:705:
> undefined reference to `u_tolower_70'
> /usr/bin/ld: regex/regcomp.o: in function `pg_wc_toupper':
> /home/jian/Desktop/pg16_source/postgresql/src/backend/regex/regc_pg_locale.c:661:
> undefined reference to `u_toupper_70'
> /usr/bin/ld: utils/adt/formatting.o: in function `u_strToTi