Re: pb with join plan

2023-06-20 Thread David Rowley
On Wed, 21 Jun 2023 at 10:26, Marc Millas  wrote:
> link to the plan with both clauses ORed (the one not finishing) 
> https://explain.depesz.com/s/jHO2

I'd go with the UNION or UNION ALL idea I mentioned earlier.

David




Re: strange behavior of .pgpass file

2023-06-20 Thread Adrian Klaver

On 6/20/23 13:32, Atul Kumar wrote:

Please reply to list also.
Ccing list.


Both pgpass files contain the details below:

*:5432:*:postgres:


There are no other lines in the file?

More information below.



I couldn't find anything wrong there in pgpass.

And if the issue were with the .pgpass file only, then I would not 
have got the same error with the -W option.


But I am getting the same issue when I use the -W option as well, and only 
on the standby side.


While on the master node I don't get any error when using the -W option 
with the standby host name.



What could be the reason for this, since the same passwords replicate to 
the standby?


From here:

https://www.postgresql.org/docs/12/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-AUTHENTICATION

password_encryption (enum)

When a password is specified in CREATE ROLE or ALTER ROLE, this 
parameter determines the algorithm to use to encrypt the password. The 
default value is md5, which stores the password as an MD5 hash (on is 
also accepted, as alias for md5). Setting this parameter to 
scram-sha-256 will encrypt the password with SCRAM-SHA-256.


Note that older clients might lack support for the SCRAM 
authentication mechanism, and hence not work with passwords encrypted 
with SCRAM-SHA-256. See Section 20.5 for more details.



1) Verify what the password method is on both servers.

2) Check the version of psql you are using in each case.
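For step 1, a quick way to check is sketched below (run as a superuser on both master and standby; this is an illustration, not from the thread):

```sql
-- Which algorithm newly set passwords would be stored with:
SHOW password_encryption;

-- How the existing password is actually stored: the verifier starts
-- with 'md5' for an MD5 hash or 'SCRAM-SHA-256$' for SCRAM.
SELECT rolname, left(rolpassword, 14) AS verifier_prefix
FROM pg_authid
WHERE rolname = 'postgres';
```

If the two servers report different verifier prefixes, or an old client is involved, authentication can fail on one server only even though the cleartext password is the same.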





Regards
Atul



On Wed, 21 Jun 2023, 01:38 Adrian Klaver wrote:


On 6/20/23 11:59, Atul Kumar wrote:
 > Hi,
 >
 > I found some strange behaviour of postgres superuser password in my
 > existing cluster, below is the basic outline of the setup.
 > 1. version - postgres 12
 > 2. replication - streaming replication async
 > 3. OS- centos7
 > 4. One Master, One Standby
 >
 > I have identical pgpass files on both server postgres home directory.
 >
 > So when I execute below command on slave node:
 > psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select
 > pg_is_in_recovery"
 >
 > I get error of password authentication:
 >
 > psql: error: FATAL:  password authentication failed for user
"postgres"
 >
 > password retrieved from file "/homedirectorypath/.pgpass"

I'm going to say this is failing because per:

https://www.postgresql.org/docs/15/libpq-pgpass.html


hostname:port:database:username:password

and when you are running it the hostname is not matching what you think
it is and the wrong password is being returned. Whereas the example
below is matching correctly.

 >
 >
 > But when I run the same command on master node:
 >
 > psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select
 > pg_is_in_recovery"
 >
 >
 > I don't get any errors and get the expected output as "t".
 >
 >
 > *_Note: the passwords in and path of both pgpass files are
identical._*
 >
 >
 > then why am I getting errors on the slave node for the same
command even
 > after having everything the same ?
 >
 >
 > Also, I tried the -W to enforce the password of postgres user but
got
 > the same issue on slave and no issue on master although the
password is
 > the same.
 >
 >
 > What am I missing here ? Please suggest.
 >
 >
 >
 >
 > Regards,
 > Atul
 >
 >
 >
 >

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: strange behavior of .pgpass file

2023-06-20 Thread Atul Kumar
Please suggest.

On Wed, 21 Jun 2023, 02:02 Atul Kumar,  wrote:

> Both pgpass files contain the details below:
>
> *:5432:*:postgres:
>
> I couldn't find anything wrong there in pgpass.
>
> And if the issue were with the .pgpass file only, then I would not
> have got the same error with the -W option.
>
> But I am getting the same issue when I use the -W option as well, and only
> on the standby side.
>
> While on the master node I don't get any error when using the -W option
> with the standby host name.
>
> What could be the reason for this, since the same passwords replicate to
> the standby?
>
>
>
> Regards
> Atul
>
>
>
> On Wed, 21 Jun 2023, 01:38 Adrian Klaver wrote:
>
>> On 6/20/23 11:59, Atul Kumar wrote:
>> > Hi,
>> >
>> > I found some strange behaviour of postgres superuser password in my
>> > existing cluster, below is the basic outline of the setup.
>> > 1. version - postgres 12
>> > 2. replication - streaming replication async
>> > 3. OS- centos7
>> > 4. One Master, One Standby
>> >
>> > I have identical pgpass files on both server postgres home directory.
>> >
>> > So when I execute below command on slave node:
>> > psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select
>> > pg_is_in_recovery"
>> >
>> > I get error of password authentication:
>> >
>> > psql: error: FATAL:  password authentication failed for user "postgres"
>> >
>> > password retrieved from file "/homedirectorypath/.pgpass"
>>
>> I'm going to say this is failing because per:
>>
>> https://www.postgresql.org/docs/15/libpq-pgpass.html
>>
>> hostname:port:database:username:password
>>
>> and when you are running it the hostname is not matching what you think
>> it is and the wrong password is being returned. Whereas the example
>> below is matching correctly.
>>
>> >
>> >
>> > But when I run the same command on master node:
>> >
>> > psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select
>> > pg_is_in_recovery"
>> >
>> >
>> > I don't get any errors and get the expected output as "t".
>> >
>> >
>> > *_Note: the passwords in and path of both pgpass files are identical._*
>> >
>> >
>> > then why am I getting errors on the slave node for the same command
>> even
>> > after having everything the same ?
>> >
>> >
>> > Also, I tried the -W to enforce the password of postgres user but got
>> > the same issue on slave and no issue on master although the password is
>> > the same.
>> >
>> >
>> > What am I missing here ? Please suggest.
>> >
>> >
>> >
>> >
>> > Regards,
>> > Atul
>> >
>> >
>> >
>> >
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>


Re: pb with join plan

2023-06-20 Thread Marc Millas
On Tue, Jun 20, 2023 at 11:19 PM David Rowley  wrote:

> On Wed, 21 Jun 2023 at 08:34, Marc Millas  wrote:
> >
> > On Tue, Jun 20, 2023 at 10:14 PM David Rowley 
> wrote:
> >>
> >> On Wed, 21 Jun 2023 at 07:42, Marc Millas 
> wrote:
> >> > But if I do the same with clause one OR clause 2, I have to  kill the
> request after an hour, seeing the filesystem showing more than 140 Mb of
> increased usage.
> >>
> >>
> > link to the anonymized plan of the req with one clause :
> https://explain.depesz.com/s/TWp4

link to the plan with the second clause alone:
https://explain.depesz.com/s/byW5
link to the plan with both clauses ORed (the one not finishing)
https://explain.depesz.com/s/jHO2

>
>
> It's quite difficult to know what the problem is you want to fix here.
> Your initial post indicated it was the query with the OR condition
> that was causing you the problems, but the plan you've posted has no
> OR condition?!
>
> You're more likely to get help here if you take time to properly
> explain the situation and post the information that's actually
> relevant to the problem you're having, or state the problem more
> clearly, as there's a mismatch somewhere.
>
> It might also be worth having a look at
> https://wiki.postgresql.org/wiki/Slow_Query_Questions . EXPLAIN is not
> going to tell us what part of the query is slow. I'll let the wiki
> page guide you into what to do instead.
>

I know that page. Obviously, since I have to kill the request, I cannot
provide an EXPLAIN ANALYZE...

>
> David
>


Re: pb with join plan

2023-06-20 Thread David Rowley
On Wed, 21 Jun 2023 at 08:34, Marc Millas  wrote:
>
> On Tue, Jun 20, 2023 at 10:14 PM David Rowley  wrote:
>>
>> On Wed, 21 Jun 2023 at 07:42, Marc Millas  wrote:
>> > But if I do the same with clause one OR clause 2, I have to  kill the 
>> > request after an hour, seeing the filesystem showing more than 140 Mb of 
>> > increased usage.
>>
>>
> link to the anonymized plan of the req with one clause : 
> https://explain.depesz.com/s/TWp4

It's quite difficult to know what the problem is you want to fix here.
Your initial post indicated it was the query with the OR condition
that was causing you the problems, but the plan you've posted has no
OR condition?!

You're more likely to get help here if you take time to properly
explain the situation and post the information that's actually
relevant to the problem you're having, or state the problem more
clearly, as there's a mismatch somewhere.

It might also be worth having a look at
https://wiki.postgresql.org/wiki/Slow_Query_Questions . EXPLAIN is not
going to tell us what part of the query is slow. I'll let the wiki
page guide you into what to do instead.

David




Re: foreign keys on multiple parent table

2023-06-20 Thread Les


> From a programming point of view, and also to reduce the number of objects
> in the DB, it could be convenient to create just one audit table with a
> structure like:
>
>- audit id
>- reference_uuid (the key of the main table)
>- table_name
>- list of audit data
>
> Could work, but is there a way to set a reference key over the uuid of all
> the tables?
>

For an existing solution, check out
https://github.com/2ndQuadrant/audit-trigger

Regarding fk constraints, a single fk constraint can only reference the
primary key of a single table.

But, if you want to be serious about audit logs, then you need to keep logs
of deletions too, and for those, foreign key constraints would not work
anyway.

You may also want to consider bulk insert speed. Foreign key constraint
checking can reduce speed.

  Laszlo








Re: foreign keys on multiple parent table

2023-06-20 Thread David G. Johnston
On Tuesday, June 20, 2023, Lorusso Domenico  wrote:

>
> Could work, but is there a way to set a reference key over the uuid of all
> the tables?
>

A foreign key in PostgreSQL is between two, and only two, tables.  The PK
side of which must be uniquely constrained.

You can write custom triggers if you need something other than this.
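For illustration, such a trigger could look roughly like this. It is a sketch, not from this thread; the table name `audit` and the assumption that every parent table has a uuid column named `id` are both hypothetical:

```sql
create or replace function audit_fk_check() returns trigger
language plpgsql as
$$
declare
    ok boolean;
begin
    -- Dynamically check that reference_uuid exists in the table named
    -- by table_name, emulating a "polymorphic" foreign key.
    execute format('select exists (select 1 from %I where id = $1)',
                   new.table_name)
    into ok
    using new.reference_uuid;
    if not ok then
        raise exception 'no row with id % in table %',
            new.reference_uuid, new.table_name;
    end if;
    return new;
end;
$$;

create trigger audit_fk_check
before insert or update on audit
for each row execute function audit_fk_check();
```

Unlike a real foreign key, this only checks at insert/update time: it does not stop the referenced row from being deleted later, so deletions would need a companion trigger on each parent table.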

David J.


foreign keys on multiple parent table

2023-06-20 Thread Lorusso Domenico
Hello guys,
I've many tables representing as many concepts.

For each record of each table I need to store extra information (think of
audit information, but more complex than simple text).

The relation is 1:N, for each record there could be many audit records.

From a programming point of view, and also to reduce the number of objects in
the DB, it could be convenient to create just one audit table with a structure
like:

   - audit id
   - reference_uuid (the key of the main table)
   - table_name
   - list of audit data

Could work, but is there a way to set a reference key over the uuid of all
the tables?


-- 
Domenico L.

to impress for half an hour, a history book is enough;
I tried to learn the Treccani by heart... [F.d.A.]


Re: pb with join plan

2023-06-20 Thread Marc Millas
On Tue, Jun 20, 2023 at 10:14 PM David Rowley  wrote:

> On Wed, 21 Jun 2023 at 07:42, Marc Millas  wrote:
> > But if I do the same with clause one OR clause 2, I have to  kill the
> request after an hour, seeing the filesystem showing more than 140 Mb of
> increased usage.
>
> > So, before providing the 3 explain plans (I must anonymize everything,
> so somewhat boring)  I would like to know if there is some obvious thing I
> am missing.
> > all tables have been vacuum analyzed.
>
> I believe you can anonymise the explain with https://explain.depesz.com/
>
> link to the anonymized plan of the req with one clause :
https://explain.depesz.com/s/TWp4

> It's pretty hard to say until we see the query, but having an OR in
> the join condition makes it impossible to Hash or Merge join, so
> perhaps it's slow due to Nested Loop join.
>
> You could consider rewriting the query to use a UNION or a UNION ALL
> separating out each branch of the OR into a UNION of its own.  That
> would allow Hash and Merge join to work again. However, that's all
> speculation until you provide more details.
>
> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems
>
> David
>


  Marc MILLAS


how to return data from insert into ... on conflict ... returning ... into

2023-06-20 Thread Les
Consider this example


drop table if exists tbl;
drop sequence if exists seq;

create sequence seq;

create table tbl(
    id int8 not null primary key,
    d bytea not null,
    h bytea not null, -- hash of the data, calculated automatically
    dummy int2 default 0 -- dummy value, see below...
);

alter table tbl add constraint uidx_tbl_h unique(h);

create or replace function trg() returns trigger language plpgsql as
$function$
begin
    new.h = sha256(new.d); -- auto-hash
    if new.id is null then
        new.id = nextval('seq');
    end if;
    return new;
end;
$function$;

create trigger trg before insert or update on tbl
for each row execute procedure trg();

The  hash "h" is calculated automatically for each data value "d", and it
is a unique value. Let's suppose that for technical reasons, we want the
primary key to be an int8 value, and foreign keys in other tables will be
referencing tbl records using the sequentially generated tbl.id values.

The basic idea is that for already existing "d" values, we do not insert a
new record, but use the existing record and its identifier.

This code actually works:

do
$body$
declare
    aid int8;
begin
    insert into tbl(d) values ('1') returning id into aid; -- ok
    raise notice '1->%', aid; -- ok
    insert into tbl(d) values ('2') returning id into aid; -- ok
    raise notice '2->%', aid; -- ok

    insert into tbl(d) values ('1') on conflict(h) do update set dummy=0
        returning id into aid; -- ok
    raise notice '1->%', aid; -- ok
    insert into tbl(d) values ('2') on conflict(h) do update set dummy=0
        returning id into aid; -- ok
    raise notice '2->%', aid; -- ok
end;
$body$;

It will display the same id value for the same data values. But it updates
the record even when it does not need to be updated.

This code below does not work:

delete from tbl;

do
$body$
declare
    aid int8;
begin
    insert into tbl(d) values ('1') returning id into aid; -- ok
    raise notice '1->%', aid; -- ok
    insert into tbl(d) values ('2') returning id into aid; -- ok
    raise notice '2->%', aid; -- ok

    insert into tbl(d) values ('1') on conflict(h) do nothing
        returning id into aid;
    raise notice '1->%', aid; -- null ??
    insert into tbl(d) values ('2') on conflict(h) do nothing
        returning id into aid;
    raise notice '2->%', aid; -- null ??
end;
$body$;

First it displays two non-null identifiers, then it displays two NULL
values.

The alternative would be something like this:


delete from tbl;

do
$body$
declare
    aid int8;
begin
    insert into tbl(d) values ('1') returning id into aid; -- ok
    raise notice '1->%', aid; -- ok
    insert into tbl(d) values ('2') returning id into aid; -- ok
    raise notice '2->%', aid; -- ok

    select id from tbl into aid where h = sha256('1');
    if not found then
        insert into tbl(d) values ('1') on conflict(h) do update set id=id+0
            returning id into aid;
    end if;
    raise notice '1->%', aid; -- ok

    select id from tbl into aid where h = sha256('2');
    if not found then
        insert into tbl(d) values ('2') on conflict(h) do update set id=id+0
            returning id into aid;
    end if;
    raise notice '2->%', aid; -- ok
end;
$body$;


But there are several problems with this "solution":

1. Running select to check for existence introduces a race condition. Of
course it also depends on the transaction isolation, but in general it is
not guaranteed that the insert won't fail with the unique constraint on h,
even if the select did not find a matching record. (It might also introduce
a deadlock?) I might be wrong on this, I don't really know how plpgsql
procedures are executed, but I suspect that they can run in parallel.
2. Notice how the code changed. The calculation of h is done in the trigger
and also at two other places. This was a very trivial example, but in a
real world scenario, the calculation can be costly, and even if the above
solution works, it must calculate the possibly conflicting values twice. It
is inefficient, and it also requires factoring the calculations out into
separate functions (or even worse, duplicating the code for the
calculations). Even if calculations are not costly, this "solution" may
introduce a dependency hell, because the trg() trigger can access field
values that are calculated by other triggers that depend on each other. One
would have to pre-calculate everything at every place where "insert into"
is needed for the table, duplicating code and/or factoring the calculations
out to a function with many parameters.
3. This trivial example only had a single unique constraint, but there
could be more. When you have many unique constraints, then you have to
write multiple SELECT statements to check for existence before doing the
INSERT. Then the above "solution" becomes ugly and questionable.

Please note that adding a dummy byte does not solve the problem, because it
will always update the record, even if it does not need to be updated.
Degrades performance, possibly executes other triggers that do unwanted
modifications to the database.
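For completeness, one widely used workaround, sketched against the tbl schema above (it is not taken from this thread), is a writable CTE that attempts the insert and falls back to selecting the existing row, avoiding the dummy UPDATE entirely:

```sql
with ins as (
    insert into tbl(d) values ('1')
    on conflict (h) do nothing
    returning id          -- only returns a row if the insert happened
)
select id from ins        -- branch 1: row was newly inserted
union all
select id from tbl        -- branch 2: row already existed
where h = sha256('1')
limit 1;
```

Caveat: in a rare race, a concurrent transaction can insert the conflicting row after the `ins` branch ran but before its commit is visible, so both branches can come back empty; the usual pattern is to retry on an empty result.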

I have 

Re: pb with join plan

2023-06-20 Thread David Rowley
On Wed, 21 Jun 2023 at 07:42, Marc Millas  wrote:
> But if I do the same with clause one OR clause 2, I have to  kill the request 
> after an hour, seeing the filesystem showing more than 140 Mb of increased 
> usage.

> So, before providing the 3 explain plans (I must anonymize everything, so 
> somewhat boring)  I would like to know if there is some obvious thing I am 
> missing.
> all tables have been vacuum analyzed.

I believe you can anonymise the explain with https://explain.depesz.com/

It's pretty hard to say until we see the query, but having an OR in
the join condition makes it impossible to Hash or Merge join, so
perhaps it's slow due to Nested Loop join.

You could consider rewriting the query to use a UNION or a UNION ALL
separating out each branch of the OR into a UNION of its own.  That
would allow Hash and Merge join to work again. However, that's all
speculation until you provide more details.
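To illustrate the suggested rewrite (table and column names here are invented, since the actual query was never posted):

```sql
-- An OR in the join condition forces a Nested Loop join:
--   select ... from t1 join t2 on t1.a = t2.a or t1.b = t2.b;

-- Splitting each branch of the OR into its own join lets the planner
-- use Hash or Merge joins again. UNION (not UNION ALL) also removes
-- the duplicate a row matching both branches would otherwise produce:
select t1.*, t2.*
from t1 join t2 on t1.a = t2.a
union
select t1.*, t2.*
from t1 join t2 on t1.b = t2.b;
```

UNION ALL is cheaper when the branches cannot overlap, or when duplicates are acceptable.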

https://wiki.postgresql.org/wiki/Guide_to_reporting_problems

David




Re: strange behavior of .pgpass file

2023-06-20 Thread Adrian Klaver

On 6/20/23 11:59, Atul Kumar wrote:

Hi,

I found some strange behaviour of postgres superuser password in my 
existing cluster, below is the basic outline of the setup.

1. version - postgres 12
2. replication - streaming replication async
3. OS- centos7
4. One Master, One Standby

I have identical pgpass files on both server postgres home directory.

So when I execute below command on slave node:
psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select 
pg_is_in_recovery"


I get error of password authentication:

psql: error: FATAL:  password authentication failed for user "postgres"

password retrieved from file "/homedirectorypath/.pgpass"


I'm going to say this is failing because per:

https://www.postgresql.org/docs/15/libpq-pgpass.html

hostname:port:database:username:password

and when you run it, the hostname does not match what you think it does, 
so the wrong password is being returned, whereas the example below 
matches correctly.
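For illustration, a .pgpass with one entry per host (the hostnames and passwords here are made up) makes it obvious which line is expected to match:

```
# first matching line wins; put the most specific entries first
master.example.com:5432:*:postgres:master_secret
standby.example.com:5432:*:postgres:standby_secret
*:5432:*:postgres:fallback_secret
```

With a single wildcard line, any host resolves to the same password, so a wrong password on one server points to a difference on the server side rather than in the file.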





But when I run the same command on master node:

psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select 
pg_is_in_recovery"



I don't get any errors and get the expected output as "t".


*_Note: the passwords in and path of both pgpass files are identical._*


then why am I getting errors on the slave node for the same command even 
after having everything the same ?



Also, I tried the -W to enforce the password of postgres user but got 
the same issue on slave and no issue on master although the password is 
the same.



What am I missing here ? Please suggest.




Regards,
Atul






--
Adrian Klaver
adrian.kla...@aklaver.com





pb with join plan

2023-06-20 Thread Marc Millas
Hi,

I have a Postgres 14 on Linux with a 15 TB db, 20 cores and 150 GB
RAM, all NVMe SSD. Currently one user :-)

A join between 2 big tables and then another join with  a smaller third one
takes less than 1 minute and provides a result of 15 M lines. Fine.

If I add a third join, with a 30k-line table, with a simple equality as the
join clause, it works almost as fast: EXPLAIN ANALYZE indicates 45 sec.
Same if I do the very same with another equality clause: EXPLAIN ANALYZE
indicates 140 sec.

But if I do the same with clause one OR clause 2, I have to  kill the
request after an hour, seeing the filesystem showing more than 140 Mb of
increased usage.

Looking at the explain plan with one clause or with the 2 ORed, there are
changes in the plan (of course): with the faster clause the estimated cost
is 3,700,000, and with the slightly slower one 3,900,000. With both ORed,
the estimated cost is 16,000,000. That sounds a bit strange to me, as ORing
the join clauses should add their times, but not more than that.

So, before providing the 3 explain plans (I must anonymize everything, so
somewhat boring)  I would like to know if there is some obvious thing I am
missing.
all tables have been vacuum analyzed.

thanks



Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


strange behavior of .pgpass file

2023-06-20 Thread Atul Kumar
Hi,

I found some strange behaviour of postgres superuser password in my
existing cluster, below is the basic outline of the setup.
1. version - postgres 12
2. replication - streaming replication async
3. OS- centos7
4. One Master, One Standby

I have identical .pgpass files in the postgres home directory on both servers.

So when I execute the command below on the slave node:
psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select
pg_is_in_recovery()"

I get a password authentication error:

psql: error: FATAL:  password authentication failed for user "postgres"

password retrieved from file "/homedirectorypath/.pgpass"


But when I run the same command on the master node:

psql -h slave_hostname/ip -U postgres -d postgres -p 5432 -c "select
pg_is_in_recovery()"


I don't get any errors and get the expected output "t".


*Note: the passwords in, and the paths of, both pgpass files are identical.*


Then why am I getting errors on the slave node for the same command even
though everything is the same?


Also, I tried the -W option to force the password prompt for the postgres
user, but got the same issue on the slave and no issue on the master,
although the password is the same.


What am I missing here? Please suggest.




Regards,
Atul


Re: Question: Multiple pg clusters on one server can be reached with the standard port.

2023-06-20 Thread Peter J. Holzer
On 2023-06-20 10:10:47 -0500, Ron wrote:
> On 6/20/23 09:54, Peter J. Holzer wrote:
> > On 2023-06-19 16:09:34 -0500, Ron wrote:
> > > On 6/19/23 12:15, Peter J. Holzer wrote:
> > >  On 2023-06-19 07:49:49 -0500, Ron wrote:
> > >  On 6/19/23 05:33, Peter J. Holzer wrote:
> > >  So (again, as Francisco already wrote) the best way is probably to
> > >  write a simple proxy which uses the database (not DNS) name for
> > >  routing. I seem to remember that nginx has a plugin architecture for
> > >  protocols so it might make sense to write that as an nginx plugin
> > >  instead of a standalone server, but that's really a judgement call
> > >  the programmer has to make. Another possibility would of course be
> > >  to extend pgbouncer to do what the OP needs.
> > > 
> > >  How would this work with JDBC clients?
> > > 
> > >  Same as with any other client, I guess. Any reason why it should be
> > >  different?
> > > 
> > > 
> > > That goes to my ultimate point: why would this work, when the point of a
> > > database client is to connect to a database instance on a specific port 
> > > like
> > > 5432, not connect to a web server.
> > Consider this scenario:
> > 
> > You have several databases scattered across several hosts and ports:
> > 
> > db1  host1.example.com:5432
> > db2  host1.example.com:5433
> > db3  host2.example.com:5432
> > db4  host3.example.com:5432
> > 
> > Then you have your proxy/gateway/bouncer (whatever you want to call it)
> > listening on proxy.example.com:5432.
> 
> Proxies/gateways are great. My question is about why you mentioned nginx.

Somebody else mentioned nginx as a possible solution. I wrote that I
don't think nginx can do that out of the box, but it might be possible
to write a plugin/module. Personally, I wouldn't (learning how to write
nginx modules almost certainly takes longer than writing a simple proxy
from scratch), but if somebody is already familiar with nginx modules
and/or has other reasons to use nginx ...

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Question: Multiple pg clusters on one server can be reached with the standard port.

2023-06-20 Thread Ron

On 6/20/23 09:54, Peter J. Holzer wrote:

On 2023-06-19 16:09:34 -0500, Ron wrote:

On 6/19/23 12:15, Peter J. Holzer wrote:
 On 2023-06-19 07:49:49 -0500, Ron wrote:
 On 6/19/23 05:33, Peter J. Holzer wrote:
 So (again, as Francisco already wrote) the best way is probably to write
 a simple proxy which uses the database (not DNS) name for routing. I
 seem to remember that nginx has a plugin architecture for protocols so
 it might make sense to write that as an nginx plugin instead of a
 standalone server, but that's really a judgement call the programmer has
 to make. Another possibility would of course be to extend pgbouncer to
 do what the OP needs.

 How would this work with JDBC clients?

 Same as with any other client, I guess. Any reason why it should be
 different?


That goes to my ultimate point: why would this work, when the point of a
database client is to connect to a database instance on a specific port like
5432, not connect to a web server.

Consider this scenario:

You have several databases scattered across several hosts and ports:

db1  host1.example.com:5432
db2  host1.example.com:5433
db3  host2.example.com:5432
db4  host3.example.com:5432

Then you have your proxy/gateway/bouncer (whatever you want to call it)
listening on proxy.example.com:5432.


Proxies/gateways are great. My question is about why you mentioned nginx.

--
Born in Arizona, moved to Babylonia.




Re: Question: Multiple pg clusters on one server can be reached with the standard port.

2023-06-20 Thread Peter J. Holzer
On 2023-06-19 16:09:34 -0500, Ron wrote:
> On 6/19/23 12:15, Peter J. Holzer wrote:
> On 2023-06-19 07:49:49 -0500, Ron wrote:
> On 6/19/23 05:33, Peter J. Holzer wrote:
> So (again, as Francisco already wrote) the best way is probably to write
> a simple proxy which uses the database (not DNS) name for routing. I
> seem to remember that nginx has a plugin architecture for protocols so
> it might make sense to write that as an nginx plugin instead of a
> standalone server, but that's really a judgement call the programmer has
> to make. Another possibility would of course be to extend pgbouncer to
> do what the OP needs.
> 
> How would this work with JDBC clients?
> 
> Same as with any other client, I guess. Any reason why it should be
> different?
> 
> 
> That goes to my ultimate point: why would this work, when the point of a
> database client is to connect to a database instance on a specific port like
> 5432, not connect to a web server.

Consider this scenario:

You have several databases scattered across several hosts and ports:

db1  host1.example.com:5432
db2  host1.example.com:5433
db3  host2.example.com:5432
db4  host3.example.com:5432

Then you have your proxy/gateway/bouncer (whatever you want to call it)
listening on proxy.example.com:5432.

The clients all connect to proxy.example.com:5432.

The proxy does the TLS handshake (if necessary) and reads the first
packet. This contains the database name. The proxy then uses the
database name to look up where that database resides (e.g. for db3 it
gets host2.example.com:5432) opens a connection to that port (plus TLS
handshake, if necessary) and forwards the packet. After that it just has
to forward packets in both directions until the connection is closed.

The client never knows that the databases are actually on different
hosts and/or ports. As far as it is concerned, all the databases are on
proxy.example.com:5432.

There is one caveat: All the database names need to be unique.

Such a proxy should be straightforward to write. It only needs to
understand two requests of the PostgreSQL protocol (initiate TLS and
connect). It would be much simpler than e.g. pg_bouncer which has to
know about authentication, transactions, etc.[1]. Depending on the
expected number of parallel connections and throughput you might want to
consider what programming language and concurrency model
(multi-threaded, async, state machine, ...) to use.

hp

[1] But of course, if pgbouncer already does what you want, don't
reinvent the wheel.

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: pg_service file questions

2023-06-20 Thread Erik Wienhold
> On 20/06/2023 08:11 CEST JUN ZHI  wrote:
>
> I was scanning through the PostgreSQL documentation when I came across this
> webpage: PostgreSQL Documentation 15, 34.17. The Connection Service File
> (https://www.postgresql.org/docs/current/libpq-pgservice.html). I am fairly
> new to databases and I have a few questions regarding this:
>
> 1. Is pg_service.conf and .pg_service.conf (with a dot at the front)
> different files?

Yes, they're different files.  pg_service.conf is the global service file and
.pg_service.conf the user-specific file in your home directory.

> 2. The documentation stated that the .pg_service.conf is named
> %APPDATA%\postgresql.pg_service.conf on windows which is a directory i can
> not find.

%APPDATA% should resolve to C:/Users//AppData/Roaming

You can run echo %APPDATA% in cmd.exe or echo $env:APPDATA in PowerShell to
show the actual path.  Or enter %APPDATA% in the File Explorer address bar.

You have to create directory %APPDATA%/postgresql.

> 3. The documentation also stated that we can check for the sysconfigdir
> environment variable and point it to somewhere else, but when i checked for
> the sysconfigdir path, it is pointing to C:/PROGRA~1/POSTGR~1/15/etc which
> again, is a directory i can not find : to be specific, i can not find the etc
> file stated in the pathing.

You have to create directory etc if you want to put config files there.

> So where should i put this pg_service file and does it link with the database
> itself?

I would go with the user service file because it takes precedence over the
system-wide file.  The default path of the system-wide file only works on the
database server where Postgres is running.

What do you mean with "link with the database"?  The service file is read by
libpq before opening a database connection.
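For illustration, a minimal user service file and how it is used (the service name and settings below are hypothetical):

```
# ~/.pg_service.conf (Unix) or %APPDATA%\postgresql\.pg_service.conf (Windows)
[mydb]
host=localhost
port=5432
dbname=postgres
user=postgres
```

You would then connect with `psql "service=mydb"`, and libpq fills in the connection parameters from the matching section.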

--
Erik




Re: pg_service file questions

2023-06-20 Thread Ron

On 6/20/23 01:11, JUN ZHI wrote:

Hi,

I was scanning through the PostgreSQL documentation when I came across 
this webpage: PostgreSQL Documentation 15, 34.17. The Connection Service 
File. I am fairly new to databases and I have a few questions regarding this:


 1. Is pg_service.conf and .pg_service.conf (with a dot at the front)
different files?
 2. The documentation stated that the .pg_service.conf is named
%APPDATA%\postgresql.pg_service.conf on windows which is a directory i
can not find.
 3. The documentation also stated that we can check for the sysconfigdir
environment variable and point it to somewhere else, but when i
checked for the sysconfigdir path, it is pointing to
C:/PROGRA~1/POSTGR~1/15/etc which again, is a directory i can not find
: to be specific, i can not find the etc file stated in the pathing.

So where should i put this pg_service file and does it link with the 
database itself?


%APPDATA% is normally hidden from users.  Here's how to show it:

https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5

--
Born in Arizona, moved to Babylonia.

pg_service file questions

2023-06-20 Thread JUN ZHI
Hi,

I was scanning through the PostgreSQL documentation when I came across this 
webpage: PostgreSQL Documentation 15, 34.17. The Connection Service 
File. I am fairly new to databases and I have a few questions regarding this:


  1.  Is pg_service.conf and .pg_service.conf (with a dot at the front) 
different files?
  2.  The documentation stated that the .pg_service.conf is named 
%APPDATA%\postgresql.pg_service.conf on Windows, which is a directory I can not 
find.
  3.  The documentation also stated that we can check the sysconfigdir 
environment variable and point it somewhere else, but when I checked the 
sysconfigdir path, it points to C:/PROGRA~1/POSTGR~1/15/etc which, again, 
is a directory I can not find; to be specific, I can not find the etc 
directory stated in the path.

So where should I put this pg_service file, and does it link with the database 
itself?

Thank you
Best Regards
Jun


Re: OpenSSL version 3

2023-06-20 Thread Daniel Gustafsson
> On 19 Jun 2023, at 21:00, Jeffrey Walton  wrote:
> On Mon, Jun 19, 2023 at 11:39 AM Daniel Gustafsson  wrote:
>> 
>>> On 19 Jun 2023, at 17:13, Sebastien Flaesch  
>>> wrote:

>>> OpenSSL V1 goes out of support soon (see 
>>> https://www.openssl.org/policies/releasestrat.html), with the replacement 
>>> being OpenSSL V3.
>>> 
>>> What is the strategy with PostgreSQL (regarding sources, and binary 
>>> packages)?
>>> 
>>> On the various supported platforms?
>> 
>> PostgreSQL supports OpenSSL 3 and has done so for some time; each individual
>> packager and platform is, however, free to choose which OpenSSL version they
>> ship.
> 
> This is a recurring question. I've seen it 3 or 4 times in the last 6
> months or so.
> 
> The information should probably be added to the FAQ at
> https://wiki.postgresql.org/wiki/FAQ .

Sure, feel free to add whatever you think is relevant to these questions to
the Wiki.

--
Daniel Gustafsson