Re: sequence id overflow ERROR using timescaledb

2022-08-04 Thread Julien Rouhaud
Hi,

On Thu, Aug 04, 2022 at 08:47:16PM +, abrahim abrahao wrote:
>
>  I am using timescaledb version  2.7.2, and PostgreSQL 12.11
> [...]
> I tried to compress a chunk using the compress_chunk function and running a
> job as well, and I got a "sequence id overflow" ERROR message. Any idea how to
> fix it or why I got this error? Note: I compressed other chunks without
> problem.
> [...]
>    SELECT  'set temp_file_limit =-1; SELECT compress_chunk(''' || 
> chunk_schema|| '.' || chunk_name || ''');'  
> [...]
> ERROR:  sequence id overflow
> CONTEXT:  SQL statement "SELECT public.compress_chunk( chunk_rec.oid )"
> PL/pgSQL function 
> _timescaledb_internal.policy_compression_execute(integer,integer,anyelement,integer,boolean,boolean)
>  line 35 at PERFORM
> SQL statement "CALL _timescaledb_internal.policy_compression_execute(
>         job_id, htid, lag_value::INTERVAL,
>         maxchunks, verbose_log, recompress_enabled
>       )"
> PL/pgSQL function _timescaledb_internal.policy_compression(integer,jsonb) 
> line 51 at CALL
> Time: 1113429.153 ms (18:33.429)

This error is coming from timescale, not postgres, so you should open an issue
on their repository instead.




Re: Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-04 Thread Jan Wieck

On 8/3/22 20:30, Ron wrote:

AWS RDS Postgresql 12.10

https://www.postgresql.org/docs/12/sql-createtable.html

[quote]
|DEFERRABLE|
|NOT DEFERRABLE|

This controls whether the constraint can be deferred. A constraint
that is not deferrable will be checked immediately after every
command. *Checking of constraints that are deferrable can be
postponed until the end of the transaction.*

[/quote]

But yet a |DEFERRABLE| FK constraint in a transaction immediately failed 
on a FK constraint violation.


[quote]
|INITIALLY IMMEDIATE|
|INITIALLY DEFERRED|

If a constraint is deferrable, this clause specifies the default
time to check the constraint. If the constraint is |INITIALLY
IMMEDIATE|, it is checked after each statement. This is the default.
*If the constraint is |INITIALLY DEFERRED|, it is checked only
at the end of the transaction.*

[/quote]

INITIALLY DEFERRED solved my problem.  Why do both clauses exist?


This is as per the Standard.

The default is NOT DEFERRABLE, and when DEFERRABLE is specified the default
check time is INITIALLY IMMEDIATE. This can then be overridden inside a
transaction with SET CONSTRAINTS, so that one or more (or all) DEFERRABLE
constraints are deferred until the end of the transaction OR until they
are explicitly set to IMMEDIATE again. Setting a previously DEFERRED
constraint to IMMEDIATE immediately runs all the queued-up checks.


This gives the application fine-grained control over when constraints
are checked.


The purpose of deferrable constraints is to do things that are normally
impossible, like for example a circular constraint, because you want
table A and table B to have guaranteed 1:1 content on their primary
keys. For every row in A there must be a corresponding row in B and vice
versa. This is implemented with two constraints where A and B point at
each other. Without deferring those constraints it would be impossible
to ever get a single row into either of them.
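
A minimal sketch of that pattern (table and constraint names invented for
illustration):

CREATE TABLE a (id integer PRIMARY KEY);
CREATE TABLE b (id integer PRIMARY KEY);

ALTER TABLE a ADD CONSTRAINT a_needs_b
    FOREIGN KEY (id) REFERENCES b (id) DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE b ADD CONSTRAINT b_needs_a
    FOREIGN KEY (id) REFERENCES a (id) DEFERRABLE INITIALLY DEFERRED;

BEGIN;
INSERT INTO a VALUES (1);   -- would fail right here if the FK were checked per statement
INSERT INTO b VALUES (1);
COMMIT;                     -- both deferred FK checks run here, and pass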



Regards, Jan




Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-08-04 Thread Joe Conway

On 8/4/22 08:34, Aleš Zelený wrote:

SELECT ... simple join of two tables...
      WHERE opd.id_data_provider = _id_data_provider
        AND CASE WHEN _external_id IS NULL
                 THEN external_id IS NULL
                 ELSE external_id = _external_id
            END
        AND CASE WHEN _external_complete_id IS NULL
                 THEN _external_complete_id IS NULL


Unrelated to your question, but shouldn't that actually read:

        AND CASE WHEN _external_complete_id IS NULL
                 THEN external_complete_id IS NULL
                      ^^^
                 ELSE external_complete_id = _external_complete_id
            END;
$function$


--
Joe Conway
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




sequence id overflow ERROR using timescaledb

2022-08-04 Thread abrahim abrahao

 
I am using timescaledb version 2.7.2, and PostgreSQL 12.11 (Ubuntu
12.11-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit.

I tried to compress a chunk using the compress_chunk function and running a job
as well, and I got a "sequence id overflow" ERROR message. Any idea how to fix it
or why I got this error? Note: I compressed other chunks without problem.
SELECT 'set temp_file_limit =-1; SELECT compress_chunk(''' ||
       chunk_schema || '.' || chunk_name || ''');'
  FROM timescaledb_information.chunks
 WHERE is_compressed = false;
                                         ?column?
------------------------------------------------------------------------------------------
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_3_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_4_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_5_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_8_chunk');


SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
DEBUG:  building index "pg_toast_29929263_index" on table "pg_toast_29929263" 
serially
DEBUG:  building index 
"compress_hyper_3_12_chunk__compressed_hypertable_3_ivehicleid__" on table 
"compress_hyper_3_12_chunk" serially
DEBUG:  building index 
"compress_hyper_3_12_chunk__compressed_hypertable_3_gid__ts_meta" on table 
"compress_hyper_3_12_chunk" serially
DEBUG:  building index 
"compress_hyper_3_12_chunk__compressed_hypertable_3_irowversion_" on table 
"compress_hyper_3_12_chunk" serially
DEBUG:  building index 
"compress_hyper_3_12_chunk__compressed_hypertable_3_gdiagnostici" on table 
"compress_hyper_3_12_chunk" serially
DEBUG:  building index 
"compress_hyper_3_12_chunk__compressed_hypertable_3_gcontrolleri" on table 
"compress_hyper_3_12_chunk" serially
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.58", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.86", size 103432192
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.85", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.84", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.83", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.82", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.81", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.80", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.79", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.78", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.77", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.76", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.75", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.74", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.73", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.72", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.71", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.70", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.69", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.68", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.67", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.66", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.65", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.64", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.63", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.62", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.61", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.60", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.59", size 1073741824
ERROR:  sequence id overflow
Time: 1105092.862 ms (18:25.093)



SET client_min_messages TO DEBUG1; CALL run_job(1000);
SET
Time: 1.120 ms
DEBUG:  Executing policy_compression with parameters {"hypertable_id": 1, 
"compress_after": "28 days"}
DEBUG:  building index "pg_toast_29929276_index" on table "pg_toast_29929276" 
serially
DEBUG:  building index 
"compress_hyper_3_13_chunk__compressed_hypertable_3_ivehicleid__" on table 
"compress_hyper_3_13_chunk" serially
DEBUG:  building index 
"compress_hyper_3_13_chunk__compressed_hypertable_3_gid__ts_meta" on table 
"compress_hyper_3_13_chunk" serially
DEBUG:  building index 

follower never to become master

2022-08-04 Thread Marco Lechner
Hi,

using Patroni, is there a configuration option that makes a follower never
become master if the master fails (while other followers still can, of
course)? I want to add a follower to a Patroni cluster that can be used as a
separate read-only endpoint to the DB without affecting the rest of the
cluster. So it could be something like the least relevant follower.

Can anyone point me to the documentation, if that configuration use case
exists?

Thanks in advance

i.A. Dr. Marco Lechner
Leiter Fachgebiet RN 1 │ Head RN 1

--
Bundesamt für Strahlenschutz │ Federal Office for Radiation Protection
Koordination Notfallschutzsysteme │ Coordination Emergency Systems │ RN 1
Rosastr. 9
D-79098 Freiburg

Tel.: +49 30 18333-6724
E-Mail: mlech...@bfs.de
www.bfs.de



Re: Index only scans for expressional indices when querying for the expression

2022-08-04 Thread Tom Lane
Danny Shemesh  writes:
> Do you think there'd be room to accept a contribution for such
> functionality with a disabled-by-default pg setting,
> or are you skeptical it would ever be worth the trade-off ?

If you can show me a matching algorithm with non-exponential runtime,
I'd be interested.

regards, tom lane




Re: Index only scans for expressional indices when querying for the expression

2022-08-04 Thread Danny Shemesh
A-ha, interesting !

I think we have some specific use cases where it'd be worth the overhead,
I'd need to measure it, though;

Do you think there'd be room to accept a contribution for such
functionality with a disabled-by-default pg setting,
or are you skeptical it would ever be worth the trade-off ?

Thanks again,
Danny

On Thu, Aug 4, 2022 at 4:38 PM Tom Lane  wrote:

> Danny Shemesh  writes:
> > That is of course correct, but what I mean is that, I think that if one
> > would explicitly query f(x), and never for x directly, it would've been
> > theoretically possible to say that the index is covering for every f(x),
> > wouldn't it ?
>
> Theoretically, yeah, but we don't support that: an index-only scan
> will only be considered if x itself is available from the index.
> There are a couple of reasons for that, but the main one is that
> detecting whether an index matches the query would be far more expensive
> if it had to consider expression subtrees not just the base Vars.
>
> regards, tom lane
>


Re: Allow user to connect to replicas only

2022-08-04 Thread hubert depesz lubaczewski
On Thu, Aug 04, 2022 at 03:42:00PM +0200, Wiwwo Staff wrote:
> Is there any way to create a user allowed to connect to a/any read replica
> only, as in "not allowed to connect to primary"?

Sure. Modify pg_hba.conf on the primary to disallow connections as this
user.
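
For example, a minimal sketch (assuming the role is called readonly_user; the
reject lines must come before any other rule that would match it):

# pg_hba.conf on the primary only -- refuse this role from anywhere
host    all    readonly_user    0.0.0.0/0    reject
host    all    readonly_user    ::0/0        reject

The replicas keep their own pg_hba.conf without those lines, so the same role
can still connect there.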

Best regards,

depesz





Allow user to connect to replicas only

2022-08-04 Thread Wiwwo Staff
Hi!
Is there any way to create a user allowed to connect to a/any read replica
only, as in "not allowed to connect to primary"?


Re: Index only scans for expressional indices when querying for the expression

2022-08-04 Thread Tom Lane
Danny Shemesh  writes:
> That is of course correct, but what I mean is that, I think that if one
> would explicitly query f(x), and never for x directly, it would've been
> theoretically possible to say that the index is covering for every f(x),
> wouldn't it ?

Theoretically, yeah, but we don't support that: an index-only scan
will only be considered if x itself is available from the index.
There are a couple of reasons for that, but the main one is that
detecting whether an index matches the query would be far more expensive
if it had to consider expression subtrees not just the base Vars.

regards, tom lane




Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-08-04 Thread Aleš Zelený
Hello,

thanks for the information and the link!

Ales

On Thu, 4 Aug 2022 at 1:05, Tom Lane  wrote:

> Aleš Zelený  writes:
> > CREATE OR REPLACE FUNCTION tescase (_id_data_provider integer,
> _external_id
> > bigint DEFAULT NULL::bigint, _external_complete_id character varying
> > DEFAULT NULL::character varying)
> >  RETURNS TABLE(id_gm bigint, id_opp bigint, id_opp_state integer)
> >  LANGUAGE sql
> > AS $function$
> > SELECT ... simple join of two tables...
> >  WHERE opd.id_data_provider = _id_data_provider
> >AND CASE WHEN _external_id IS NULL
> > THEN external_id IS NULL
> > ELSE external_id = _external_id
> >END
> >AND CASE WHEN _external_complete_id IS NULL
> > THEN _external_complete_id IS NULL
> > ELSE external_complete_id = _external_complete_id
> >END;
> > $function$
> > ;
>
> > It is a kind of creative construct for me, but it works. The key here is
> > that if I replace at least one of the "CASEd" where conditions, it seems
> > not to suffer from the memory leak issue.
>
> > Finally, I've found that, even having the function as is, if before the
> > test I disable JIT (SET jit = off;) and call the function 100k times,
> > RssAnon memory for the given process is stable at only 3612 kB, while when
> > JIT is enabled (the default setting on the server suffering from the memory
> > leak), RssAnon memory for the given process grows in a linear manner over
> > time (canceled when it reached 5GB).
>
> Ah.  I bet this is another instance of the known memory leakage problems
> with JIT inlining [1].  Per Andres' comments in that thread, it seems
> hard to solve properly.  For now all I can recommend is to disable that.
>
> regards, tom lane
>
> [1]
> https://www.postgresql.org/message-id/flat/20201001021609.GC8476%40telsasoft.com
>


Re: Index only scans for expressional indices when querying for the expression

2022-08-04 Thread Danny Shemesh
Hey David - thanks for the prompt response !

That is of course correct, but what I mean is that, I think that if one
would explicitly query f(x), and never for x directly, it would've been
theoretically possible to say that the index is covering for every f(x),
wouldn't it ?

Or in other words, if one only ever queries f(x), and the only available
index is on f(x), then the index will hold all f(x) values
and would never need to reverse engineer the value of x to answer such a
specific query.

Danny

On Thu, Aug 4, 2022 at 3:28 PM David G. Johnston 
wrote:

> On Thursday, August 4, 2022, Danny Shemesh  wrote:
>>
>> I believe the expressional index in itself could've been considered as
>> covering, when querying for the expression explicitly.
>>
>
> This belief is wrong.  When storing f(x) there is no way to recover the
> value of x.
>
> David J.
>
>


Re: Index only scans for expressional indices when querying for the expression

2022-08-04 Thread David G. Johnston
On Thursday, August 4, 2022, Danny Shemesh  wrote:
>
> I believe the expressional index in itself could've been considered as
> covering, when querying for the expression explicitly.
>

This belief is wrong.  When storing f(x) there is no way to recover the
value of x.

David J.


Re: Behavior of identity columns

2022-08-04 Thread Francisco Olarte
On Thu, 4 Aug 2022 at 10:52, sivapostg...@yahoo.com
 wrote:
> I see 'identity' column values increment even when some error(s) occurs while 
> inserting data.  Is that a known bug or known behavior?

Known behaviour, explained thousands of times over the years.  Identity
is not a way to generate consecutive values, nor even guaranteed
ascending values; it is for generating unique values. The
implementation burns the values before commit/rollback for better
concurrency; search the archives or docs for details. In your example, the id
is allocated and burnt to generate the complete failing row, and when the row
fails it is gone. Search the lists, Google, the docs; it's been explained
ad nauseam.
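
A minimal illustration with the users table from this thread (the identity
value is consumed even though the failing statement is rolled back):

insert into users (username) values ('ONE');  -- succeeds, id = 1
insert into users (username) values ('ONE');  -- unique violation; id 2 is burnt anyway
insert into users (username) values ('TWO');  -- succeeds, id = 3, leaving a gap at 2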

Francisco Olarte.




Re: How to choose new master from slaves.?

2022-08-04 Thread Ron

On 8/4/22 02:43, Sacheen Birhade wrote:


Hi All,

I have a very basic question about streaming replication feature of Postgres.

Let’s assume I have servers A, B, C, D, & E with postgres installed with
streaming replication as follows:

A --> B ( sync replication )
A --> C ( async replication )
A --> D ( async replication )
A --> E ( async replication )

A is master & rest are slaves.

Now my question is:

Assume the replication link between A & B may or may not be stable.  If A goes down,
how will I choose my new master from the slaves? What are the criteria to choose the
new master?




By "link between A & B may/may not stable", do you mean that B might not 
actually be synchronized?


--
Angular momentum makes the world go 'round.

Index only scans for expressional indices when querying for the expression

2022-08-04 Thread Danny Shemesh
Hello everyone,

Quick question here about index-only scans and expressional indices, tested
on postgres <= 14,
Say I have a column, and an expressional index on said column (fiddle
) - e.g.

create table t1 (x text);
create index idx_upper on t1 (upper(x));

I see that even if I query for upper(x) in itself, I won't achieve an
index-only scan without creating a covering index that includes the
manipulated field:
create index idx_upper_covering on t1 (upper(x)) include (x);

I wonder if it would've been possible to have an index-only scan for
similar cases without having to include the base column ?
I believe the expressional index in itself could've been considered as
covering, when querying for the expression explicitly.

The thing is, indices with include clauses do not support page
deduplication, which causes the size of the index to bloat in our case,
over 20x in size at times.
Also, it could've been beneficial when creating indices on complex types,
say - indexing the first element on an array, or a specific nested element
of a jsonb column, et-al.

Appreciate your time !
Danny


Re: Behavior of identity columns

2022-08-04 Thread sivapostg...@yahoo.com
Hello,
I see 'identity' column values increment even when some error(s) occurs while 
inserting data.  Is that a known bug or known behavior?
Create script of table:

CREATE TABLE public.users
(
    id integer NOT NULL GENERATED ALWAYS AS IDENTITY
        ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ),
    username character varying(255) COLLATE pg_catalog."default",
    email character varying(255) COLLATE pg_catalog."default",
    first_name character varying(255) COLLATE pg_catalog."default",
    last_name character varying(255) COLLATE pg_catalog."default",
    CONSTRAINT users_pkey PRIMARY KEY (id),
    CONSTRAINT "test_UX" UNIQUE (username)
)
WITH (OIDS = FALSE)
TABLESPACE pg_default;

ALTER TABLE public.users OWNER to postgres;

Column id is int and an identity column; column username is unique.

When I insert data that violates the unique constraint, I see that the id value
gets incremented. And I see missing values in between when I add the correct
data next time.

My insert SQL:

insert into users (username, email, first_name, last_name) values ('ONE', 'o...@gmail.com', 'one', '1');    -- id = 1
insert into users (username, email, first_name, last_name) values ('ONE', 'o...@gmail.com', 'one', '1');    -- insert fails
insert into users (username, email, first_name, last_name) values ('TWO', 'o...@gmail.com', 'one', '1');    -- id = 3
insert into users0 (username, email, first_name, last_name) values ('TWO', 'o...@gmail.com', 'one', '1');   -- insert fails as there is no users0 table
insert into users (username, email, first_name, last_name) values ('THREE', 'o...@gmail.com', 'one', '1');  -- id = 5

ID gets incremented even when a wrong table name is mentioned in the query.
Check the 4th query.
Is it a known behavior or a known bug?

Happiness Always
BKR Sivaprakash


Re: BUG #17567: Unable to Set Max_Connection in Postgresql which has replicas

2022-08-04 Thread Mateusz Henicz
Hey,
It is not a bug.
If you are using Patroni, you should change your parameters in the Patroni
configuration yaml file, so the changes are reflected in your Distributed
Configuration Store. If you look at the Patroni documentation (
https://patroni.readthedocs.io/en/latest/SETTINGS.html) you will find a
note that parameters like max_connections are stored in the DCS and applied on
all Patroni nodes, overwriting settings from postgresql.conf. Also, if
Postgres was bootstrapped by Patroni, postgresql.conf itself should have a
note:

# Do not edit this file manually!
# It will be overwritten by Patroni!
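
For example, a rough sketch of the change (assuming /etc/patroni.yml is your
Patroni configuration path, which may differ):

patronictl -c /etc/patroni.yml edit-config

and, in the dynamic (DCS) configuration, set:

postgresql:
  parameters:
    max_connections: 300

followed by a restart of the members (max_connections only takes effect after
a restart), e.g. patronictl -c /etc/patroni.yml restart <cluster-name>.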

Hope it helps.

Cheers,
Mateusz




On Thu, 4 Aug 2022 at 10:21, Poornima Venkatesan  wrote:

>
> Hi Team,
>
> PostgreSQL in our Production environment is configured using primary and
> secondary DB servers via Patroni. We have a requirement to explicitly set
> max_connections to 300, overriding the default value. If we set max_connections
> in postgresql.conf, it gets reset after a restart of Patroni.
>
> Please advise us on the steps on how to configure and set Max_Connections
> in
> Replica DB servers.
>
> Thanks & Regards,
> Poornima V
>
>
> On Wed, Aug 3, 2022 at 4:43 PM Juan José Santamaría Flecha <
> juanjo.santama...@gmail.com> wrote:
>
>>
>> On Wed, Aug 3, 2022 at 12:48 PM PG Bug reporting form <
>> nore...@postgresql.org> wrote:
>>
>>>
>>> Please advise us on the steps on how to configure and set
>>> Max_Connections in
>>> Replica DB servers.
>>>
>>
>> This list is meant for reporting bugs only, please send questions like so
>> to pgsql-general@lists.postgresql.org.
>>
>> Regards,
>>
>> Juan José Santamaría Flecha
>>
>


Re: Unable to start replica after failover

2022-08-04 Thread Kyotaro Horiguchi
At Fri, 29 Jul 2022 15:01:44 +, "Lahnov, Igor"  
wrote in 
> * "could not find previous WAL record at E6F/C2436F50: invalid 
> resource manager ID 139 at E6F/C2436F50"; or
> * "could not find previous WAL record at 54E/FB348118: unexpected 
> pageaddr 54E/7B34A000 in log segment 0005054E00FB, offset 
> 3448832".
..
> To avoid the problem, we decided to stop using restore_command. Could you 
> please let us know if there is a better solution to the problem we've 
> described?

Perhaps almost none of us can see what is happening there, since you didn't
give us sufficient information on the configuration and the exact steps.

But roughly it looks like shuffling/mixing of WAL files among several
systems (or WAL archives) with different histories.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Is Client connections via ca.crt only possible?

2022-08-04 Thread Rejo Oommen
Thank you for the reply Thomas. I agree with you on the mutual TLS that you
mentioned.

Here is what I was looking at.

The configurations at the server end will be with auth-method as md5 and
auth-option as clientcert=verify-ca.

In this way, the user's password along with a client certificate that
validates against the CA should allow connections to pass.
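
For example, a minimal pg_hba.conf sketch (the database, user and address
columns are placeholders for your own values):

# require TLS, validate the client certificate against the CA, then ask for the md5 password
hostssl    all    all    0.0.0.0/0    md5    clientcert=verify-ca

The client would then supply both its password and a certificate/key pair
(sslcert/sslkey) signed by that CA.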

Regards,
Rejo



On Thu, 4 Aug 2022, 03:01 Thomas Guyot,  wrote:

> On 2022-08-01 04:12, Rejo Oommen wrote:
> > Requirement is to use only ca.crt and connect to postgres
> >
> > Server.crt, Server.key and ca.crt are configured at the postgres
> > server for tls connection.
> >
> > Connection successful while using
> > psql ‘host=172.29.21.222 dbname=test user=postgres sslmode=verify-ca
> > sslcert=/tmp/server.crt sslkey=/tmp/server.key sslrootcert=/tmp/ca.crt
> > port=5432’
> >
> > For clients to connect, can they use only ca.crt and connect to the
> > DB. Tried and got the below error
> >
> > psql ‘host=172.29.21.222 dbname=test user=postgres sslmode=verify-ca
> > sslrootcert=/tmp/ca.crt port=5432’
> > psql: error: connection to server at “172.29.21.222”, port 50001
> > failed: FATAL:  connection requires a valid client certificate
> >
>
> Hi Rejo,
>
> I don't think you understand fully how mutual TLS auth works. For the
> client to authenticate using a certificate, it needs a valid certificate
> and key too, where the certificate is signed by a CA your server trusts
> (usually the same CA that signed your server cert) and with a proper
> subject (that bears the certificate owner's user name, the user you will
> use to grant privileges in the database). You shouldn't even need to
> pass a username, it will be in the certificate.
>
> I'm talking purely from a generic view, I'm not familiar with any of the
> specifics of PostgreSQL configuration but TLS authentication requires a
> secret and a CA certificate isn't secret. Your server certificate
> authenticates the server, but nothing authenticates the client.
>
> Regards,
>
> --
> Thomas
>


Re: BUG #17567: Unable to Set Max_Connection in Postgresql which has replicas

2022-08-04 Thread Poornima Venkatesan
Hi Team,

PostgreSQL in our Production environment is configured using primary and
secondary DB servers via Patroni. We have a requirement to explicitly set
max_connections to 300, overriding the default value. If we set max_connections in
postgresql.conf, it gets reset after a restart of Patroni.

Please advise us on the steps on how to configure and set Max_Connections in
Replica DB servers.

Thanks & Regards,
Poornima V


On Wed, Aug 3, 2022 at 4:43 PM Juan José Santamaría Flecha <
juanjo.santama...@gmail.com> wrote:

>
> On Wed, Aug 3, 2022 at 12:48 PM PG Bug reporting form <
> nore...@postgresql.org> wrote:
>
>>
>> Please advise us on the steps on how to configure and set Max_Connections
>> in
>> Replica DB servers.
>>
>
> This list is meant for reporting bugs only, please send questions like so
> to pgsql-general@lists.postgresql.org.
>
> Regards,
>
> Juan José Santamaría Flecha
>


Re: Purpose of DEFERRABLE _and_ INITIALLY DEFERRED foreign key constraint checking?

2022-08-04 Thread Francisco Olarte
Ron:

On Thu, 4 Aug 2022 at 02:30, Ron  wrote:
> DEFERRABLE
> NOT DEFERRABLE
> This controls whether the constraint can be deferred. A constraint that is 
> not deferrable will be checked immediately after every command. Checking of 
> constraints that are deferrable can be postponed until the end of the 
> transaction [/quote]

> But yet a DEFERRABLE FK constraint in a transaction immediately failed on a 
> FK constraint violation.

Because, as the name hints, it is DEFERRABLE not DEFERRED.

> INITIALLY IMMEDIATE
> INITIALLY DEFERRED
> If a constraint is deferrable, this clause specifies the default time to 
> check the constraint. If the constraint is INITIALLY IMMEDIATE, it is checked 
> after each statement. This is the default. If the constraint is INITIALLY 
> DEFERRED, it is checked only at the end of the transaction.
> INITIALLY DEFERRED solved my problem.  Why do both clauses exist?

Note DEFAULT TIME. Both clauses exist because you can change that
with SET CONSTRAINTS.

Not knowing your case I cannot comment on it, but one use case I've
found is when your system NORMALLY holds constraints valid across
statements, so you have DEFERRABLE INITIALLY IMMEDIATE to check them and
fail early and loudly on any bug. But you have some SPECIAL_OP (tm),
which only holds constraints at transaction end, because it does some
carefully controlled manipulations. You do this one with SET
CONSTRAINTS ... DEFERRED (and triple testing, as you have weakened your
safety net). You can even toggle it according to the docs, i.e. toggle
a constraint to deferred, do several statements which end up in a
valid state, then toggle it to immediate to check it is really valid. It's all
in the SET CONSTRAINTS docs.
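
A minimal sketch of that toggling (my_fk stands in for a DEFERRABLE constraint
of yours):

BEGIN;
SET CONSTRAINTS my_fk DEFERRED;    -- queue this constraint's checks instead of running them per statement
-- ... carefully controlled statements that are only consistent as a whole ...
SET CONSTRAINTS my_fk IMMEDIATE;   -- run all the queued checks right now
-- ... further statements, checked per statement again ...
COMMIT;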

> (A naive interpretation just by looking at the clause words led me to think 
> that INITIALLY DEFERRED would not check record validity when a constraint is 
> added to a table, but obviously that's wrong too.)

Not that obvious, but true. In the effort to make it read nicely, like a
natural language, SQL is difficult to interpret precisely, also like a
natural language.

FOS.




How to choose new master from slaves.?

2022-08-04 Thread Sacheen Birhade
Hi All,

I have a very basic question about streaming replication feature of Postgres.
Let's assume I have servers A, B, C, D, & E with postgres installed with
streaming replication as follows:
A  -->  B ( sync replication )
A  -->  C ( async replication )
A  -->  D ( async replication )
A  -->  E ( async replication )
A is master & rest are slaves.

Now my question is:
Assume the replication link between A & B may or may not be stable.  If A goes down, how
will I choose my new master from the slaves? What are the criteria to choose the new master?

Thanks,
Sacheen Birhade