Re: Need Help On Upgrade

2023-07-10 Thread Laurenz Albe
On Mon, 2023-07-10 at 20:38 -0400, Johnathan Tiamoh wrote:
> Below is the full error message.
> 
> 2023-06-27 05:01:27.385 CDT| XXX (60930)| 
> APPNAME=[unknown]| DB=postgres| USER=grafana| PID=9061| 1| authentication| 
> PGE-28P01: FATAL:  password authentication failed for user
> "grafana"
> 2023-06-27 05:01:27.385 CDT| XXX (60930)| 
> APPNAME=[unknown]| DB=postgres| USER=grafana| PID=9061| 2| authentication| 
> PGE-28P01: DETAIL:  Connection matched pg_hba.conf line 113:
> "host    all all 0.0.0.0/0 md5"

Then you must have entered the wrong password.

If in doubt, change the password.
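
For example, a minimal sketch (assuming a superuser session on the upgraded
cluster) to check how the role's password is stored and to set a fresh one:

  -- SCRAM hashes start with 'SCRAM-SHA-256$', MD5 hashes with 'md5'
  SELECT rolname, substr(rolpassword, 1, 14) AS password_prefix
    FROM pg_authid
   WHERE rolname = 'grafana';

  -- the new password is hashed according to the current password_encryption
  ALTER ROLE grafana PASSWORD 'new-password-here';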

Yours,
Laurenz Albe




Re: Problematic enforcement of "ERROR: functions in index predicate must be marked IMMUTABLE"

2023-07-10 Thread David G. Johnston
On Mon, Jul 10, 2023 at 6:28 PM Tom Lane  wrote:

> Kyotaro Horiguchi  writes:
> > Even so, assuming users keep the volatility marks in line with the
> > actual behavior of their functions, it might be beneficial to prohibit
> > changes to the volatility category while it's being used for indices.
>
> In the end, adding such restrictions would just give a false sense
> of security, because there would always be gaps in whatever we did.
> As you quote from the documentation, volatility markings are a promise
> by the user to the system, not vice versa.  If you break your promise,
> you get to keep both pieces of whatever trouble ensues.
>
>
I'd accept this more readily if we didn't have user-unfriendly behavior for
CREATE OR REPLACE FUNCTION.

postgres=# \df+ immut
                                                                          List of functions
 Schema | Name  | Result data type | Argument data types | Type | Volatility | Parallel |  Owner  | Security | Access privileges | Language | Internal name | Description 
--------+-------+------------------+---------------------+------+------------+----------+---------+----------+-------------------+----------+---------------+-------------
 public | immut | text             |                     | func | immutable  | unsafe   | vagrant | invoker  |                   | plpgsql  |               | 
(1 row)

postgres=# create or replace function immut() returns text as $$begin
select 'one'; end; $$ language plpgsql;
CREATE FUNCTION
postgres=# \df+ immut
                                                                          List of functions
 Schema | Name  | Result data type | Argument data types | Type | Volatility | Parallel |  Owner  | Security | Access privileges | Language | Internal name | Description 
--------+-------+------------------+---------------------+------+------------+----------+---------+----------+-------------------+----------+---------------+-------------
 public | immut | text             |                     | func | volatile   | unsafe   | vagrant | invoker  |                   | plpgsql  |               | 
(1 row)


I find it quite reasonable to tell the user (warning) that their default
choice of volatile violates the immutable marking of the existing function,
and I would even go so far as to require them to drop and recreate the
function if their goal really is to change it from immutable to volatile
(error).  Taken to the extreme, I'd simply add changing the volatility
marker to the list of prohibited changes, just like we prohibit changing
the return type.

David J.


Re: Problematic enforcement of "ERROR: functions in index predicate must be marked IMMUTABLE"

2023-07-10 Thread Kyotaro Horiguchi
At Tue, 11 Jul 2023 10:14:29 +0900 (JST), Kyotaro Horiguchi 
 wrote in 
> At Sun, 9 Jul 2023 14:22:37 +, Avi Weinberg  wrote in 
> > Hi,
> > 
> > If you attempt to create an index based on function that is not IMMUTABLE 
> > you will get an exception "ERROR:  functions in index predicate must be 
> > marked IMMUTABLE".  However, if you created the index when the function was 
> > IMMUTABLE, but later on you updated the function and mistakenly removed the 
> > IMMUTABLE key, you will not get any error to alert you that there is an 
> > index based on this function and it should remain IMMUTABLE.
> > 
> > I suggest triggering error message also when updating a function that is 
> > used by index if it is no longer IMMUTABLE
> 
> There's no way to truly verify a function is really immutable or
> not. So, as mentioned in the documentation, the function volatility
> categories are essentially a promise to the optimizer regarding the
> function's behavior.
> 
> Even so, assuming users keep the volatility marks in line with the
> actual behavior of their functions, it might be beneficial to prohibit
> changes to the volatility category while it's being used for indices.
> At the very least, that would protect indices from entering an
> inconsistent state, at least on the surface.

Even after doing that, any functions used indirectly are still allowed
to be, or to become, non-immutable. Therefore, we might want to reject
this idea because we cannot enforce it perfectly.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Problematic enforcement of "ERROR: functions in index predicate must be marked IMMUTABLE"

2023-07-10 Thread Tom Lane
Kyotaro Horiguchi  writes:
> Even so, assuming users keep the volatility marks in line with the
> actual behavior of their functions, it might be beneficial to prohibit
> changes to the volatility category while it's being used for indices.

Are you going to prohibit changes to the function's behavior, which is
what actually matters?  And if so, how will you enforce that?  Even if
we had an understanding of the function body --- which we generally
don't for PL functions --- determining that would be equivalent to
solving the halting problem.  "Refuse *all* updates to the pg_proc
entry" might sound like a solution; but it is not, because the
function might call another one whose behavior could get changed.
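
For illustration, a minimal sketch (hypothetical functions, not from this
thread) of such an indirect dependency:

  CREATE FUNCTION g(x int) RETURNS int
      LANGUAGE sql IMMUTABLE AS 'SELECT x * 2';

  CREATE FUNCTION f(x int) RETURNS int
      LANGUAGE sql IMMUTABLE AS 'SELECT g(x)';

  CREATE TABLE t (x int);
  CREATE INDEX ON t (f(x));

  -- Later, g() is redefined; its marking is unchanged, but its behavior is
  -- no longer immutable, and no check on f() or the index would notice.
  CREATE OR REPLACE FUNCTION g(x int) RETURNS int
      LANGUAGE sql IMMUTABLE AS 'SELECT x + (random() * 10)::int';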

Even granting that we had a useful way to enforce such a restriction,
figuring out whether to apply it would be subject to race conditions;
maybe somebody else is in process of creating an index using the
function that's being altered.

In the end, adding such restrictions would just give a false sense
of security, because there would always be gaps in whatever we did.
As you quote from the documentation, volatility markings are a promise
by the user to the system, not vice versa.  If you break your promise,
you get to keep both pieces of whatever trouble ensues.

regards, tom lane




Re: Is anyone using db_user_namespace?

2023-07-10 Thread Ian Lawrence Barwick
2023年7月11日(火) 2:45 Nathan Bossart :
>
> On Fri, Jun 30, 2023 at 02:56:08PM -0700, Nathan Bossart wrote:
> > Over in pgsql-hackers, I've proposed removing the db_user_namespace
> > parameter in v17 [0].  I am personally not aware of anyone using this
> > parameter, but I wanted to give folks an opportunity to object in case they
> > are using it (or are aware of someone who is).
> >
> > [0] https://postgr.es/m/20230630200509.GA2830328%40nathanxps13
>
> Bumping this thread.  As long as no objections materialize, I plan to
> remove this parameter shortly.

Never used it myself; funnily enough, I stumbled on it a few weeks back
and wondered if it was some nifty feature I'd overlooked, then realized
I'd stumbled across it before several times and each time promptly
forgotten about it.

Moreover, a quick Google search (with the term in quotation marks) brings
up a few pages, mainly documentation copies, configuration file snippets
that happen to contain "db_user_namespace", and a few mostly dated
mailing list/forum questions, but nothing which screams "this feature
is used widely".

Regards

Ian Barwick




Re: Problematic enforcement of "ERROR: functions in index predicate must be marked IMMUTABLE"

2023-07-10 Thread Kyotaro Horiguchi
At Sun, 9 Jul 2023 14:22:37 +, Avi Weinberg  wrote in 
> Hi,
> 
> If you attempt to create an index based on function that is not IMMUTABLE you 
> will get an exception "ERROR:  functions in index predicate must be marked 
> IMMUTABLE".  However, if you created the index when the function was 
> IMMUTABLE, but later on you updated the function and mistakenly removed the 
> IMMUTABLE key, you will not get any error to alert you that there is an index 
> based on this function and it should remain IMMUTABLE.
> 
> I suggest triggering error message also when updating a function that is used 
> by index if it is no longer IMMUTABLE

There's no way to truly verify a function is really immutable or
not. So, as mentioned in the documentation, the function volatility
categories are essentially a promise to the optimizer regarding the
function's behavior.

Even so, assuming users keep the volatility marks in line with the
actual behavior of their functions, it might be beneficial to prohibit
changes to the volatility category while it's being used for indices.
At the very least, that would protect indices from entering an
inconsistent state, at least on the surface.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Need Help On Upgrade

2023-07-10 Thread Johnathan Tiamoh
Sorry Adrian,

Below is the full error message.

2023-06-27 05:01:27.385 CDT| XXX (60930)|
APPNAME=[unknown]| DB=postgres| USER=grafana| PID=9061| 1| authentication|
PGE-28P01: FATAL:  password authentication failed for user "grafana"
2023-06-27 05:01:27.385 CDT| XXX (60930)|
APPNAME=[unknown]| DB=postgres| USER=grafana| PID=9061| 2| authentication|
PGE-28P01: DETAIL:  Connection matched pg_hba.conf line 113: "host
all all 0.0.0.0/0 md5"

On Mon, Jul 10, 2023 at 6:36 PM Adrian Klaver 
wrote:

> On 7/10/23 13:20, Johnathan Tiamoh wrote:
> >
> >> Ok Adrian,
> >>
> >> Postgres 9.5
> >>
> >>  postgresql.conf
> >>
> >> password_encryption  = ?  md5  > which is the default for 9.5
> >>
> >>  pg_hba.conf-> md5
> >>
> >>  Provide the relevant lines
> >>
> >> Postgres 14.8
> >>
> >>  postgresql.conf
> >>
> >> password_encryption  = ?   scram-sha-256  > which is the
> >> default for 14.8
> >>
> >>  pg_hba.conf  md5
> >
> > You have not answered:
> >
> > 2) Provide the error messages received when trying to connect.
> > [XXX| APPNAME=[unknown]| DB=proftpd| USER=proftpd|
> > PID=9053| 2| authentication| PGE-28P01: DETAIL:  Connection matched
> > pg_hba.conf line 113: "host    all all 0.0.0.0/0
> >  md5"
> >
>
> The above is not showing any actual error. There should be a part that says
> what failed when connecting.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Need Help On Upgrade

2023-07-10 Thread Adrian Klaver

On 7/10/23 13:20, Johnathan Tiamoh wrote:



Ok Adrian,

Postgres 9.5

      postgresql.conf

         password_encryption  = ?  md5  > which is the default for 9.5

      pg_hba.conf    -> md5

          Provide the relevant lines

Postgres 14.8

      postgresql.conf

         password_encryption  = ?   scram-sha-256  > which is the
default for 14.8

      pg_hba.conf      md5


You have not answered:

2) Provide the error messages received when trying to connect.
[XXX| APPNAME=[unknown]| DB=proftpd| USER=proftpd| 
PID=9053| 2| authentication| PGE-28P01: DETAIL:  Connection matched 
pg_hba.conf line 113: "host    all all 0.0.0.0/0 
 md5"




The above is not showing any actual error. There should be a part that says 
what failed when connecting.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Using "exit" to bring "goto" functionality.

2023-07-10 Thread Bryn Llewellyn
> pavel.steh...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> As it happens, Oracle's PL/SQL has a "goto" statement. But PL/pgSQL does 
>> not. (I assume that this is because "goto" is considered a bad thing.) But 
>> PL/SQL programmers do use it. However, the doc section:
> 
> The reason why PL/pgSQL has no "goto" statement is mainly technological.
> PL/pgSQL is an interpreter of high-level commands.  For this kind of
> interpreter, "goto" (an unstructured jump) cannot be implemented
> effectively.  PL/pgSQL is very simple and relatively fast (expressions are
> slow due to evaluation by the SQL executor), but "goto" cannot be
> implemented there.  The PL/pgSQL interpreter is very different from the
> more usual p-code interpreter.

It’s interesting to know that the reason that PL/pgSQL doesn’t support “goto” 
is an implementation restriction rather than a purist stance. Thanks!

I mentioned PL/SQL only to say that it does not support the premature exit from 
a block statement that PL/pgSQL _does_ support (and document). I accept, now, 
that I’ll never know the rationale for this.



Re: Using "exit" to bring "goto" functionality.

2023-07-10 Thread Pavel Stehule
Hi


> As it happens, Oracle's PL/SQL has a "goto" statement. But PL/pgSQL does
> not. (I assume that this is because "goto" is considered a bad thing.) But
> PL/SQL programmers do use it. However, the doc section:
>

The reason why PL/pgSQL has no "goto" statement is mainly technological.
PL/pgSQL is an interpreter of high-level commands.  For this kind of
interpreter, "goto" (an unstructured jump) cannot be implemented
effectively.  PL/pgSQL is very simple and relatively fast (expressions are
slow due to evaluation by the SQL executor), but "goto" cannot be
implemented there.  The PL/pgSQL interpreter is very different from the
more usual p-code interpreter.

Regards

Pavel


Re: Using "exit" to bring "goto" functionality.

2023-07-10 Thread Bryn Llewellyn
> x...@thebuild.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> What is the rationale for supporting what seems to be on its face this 
>> strange functionality?
> 
> It allows you to EXIT or CONTINUE a loop that is not the innermost one, by 
> naming the label of an outer loop. One can debate endlessly whether it's good 
> or bad to include that functionality, but it definitely has use cases.  I 
> would assume that at this point, it's not going anywhere. The relevant 
> documentation is here:
> 
> https://www.postgresql.org/docs/15/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS

Sorry, I should’ve said that I’d read this:

« When used with a BEGIN block, EXIT passes control to the next statement after 
the end of the block. Note that a label must be used for this purpose; an 
unlabeled EXIT is never considered to match a BEGIN block. (This is a change 
from pre-8.4 releases of PostgreSQL, which would allow an unlabeled EXIT to 
match a BEGIN block.) »

Yes, I see that the doc implies that the functionality that I asked about 
(premature exit from a block statement) will be supported indefinitely.

I can only assume that somebody implemented this back in the day and that 
nobody thought to say that it was a bad idea.

FYI, I tried both "exit" and "continue" outside of a loop in PL/SQL (using a 
fairly recent version of Oracle Database). There, each of these causes the error 
"must appear inside a loop" or "label 'B1' must label a LOOP statement".

Re: Need Help On Upgrade

2023-07-10 Thread Johnathan Tiamoh
> Ok Adrian,
>
> Postgres 9.5
>
>  postgresql.conf
>
> password_encryption  = ?  md5  > which is the default for 9.5
>
>  pg_hba.conf-> md5
>
>  Provide the relevant lines
>
> Postgres 14.8
>
>  postgresql.conf
>
> password_encryption  = ?   scram-sha-256  > which is the
> default for 14.8
>
>  pg_hba.conf  md5

You have not answered:

2) Provide the error messages received when trying to connect.
[XXX| APPNAME=[unknown]| DB=proftpd| USER=proftpd|
PID=9053| 2| authentication| PGE-28P01: DETAIL:  Connection matched
> pg_hba.conf line 113: "host    all all 0.0.0.0/0
md5"


On Mon, Jul 10, 2023 at 4:00 PM Adrian Klaver 
wrote:

> On 7/10/23 12:55, Johnathan Tiamoh wrote:
>
> Please reply to list also.
> Ccing list
>
> > Ok Adrian,
> >
> > Postgres 9.5
> >
> >  postgresql.conf
> >
> > password_encryption  = ?  md5  > which is the default for 9.5
> >
> >  pg_hba.conf-> md5
> >
> >  Provide the relevant lines
> >
> > Postgres 14.8
> >
> >  postgresql.conf
> >
> > password_encryption  = ?   scram-sha-256  > which is the
> > default for 14.8
> >
> >  pg_hba.conf  md5
>
> You have not answered:
>
> 2) Provide the error messages received when trying to connect.
>
> >
> >
> > Thank you.
> >
> >
> > On Mon, Jul 10, 2023 at 3:38 PM Adrian Klaver  > > wrote:
> >
> > On 7/10/23 11:15, Johnathan Tiamoh wrote:
> >  > Hello,
> >  >
> >  > I upgraded from postgresql 9.5 to 14.8. After the upgrade users
> > were not
> >  > able to connect due to password hash.
> >  >
> >  > The password encryption was on scram-sha-256. The password hash
> > was on
> >  > md5. in the pg_hba.conf file the authentication method was md5. I
> > do not
> >  > know why users could not connect because as per postgresql
> >  > documentation, that should work.
> >  >
> >
> > 1) You are going to have to be more specific about where and what the
> > settings are or where for both Postgres versions and the *.conf
> > files. So:
> >
> > Postgres 9.5
> >
> >  postgresql.conf
> >
> > password_encryption  = ?
> >
> >  pg_hba.conf
> >
> >  Provide the relevant lines
> >
> > Postgres 14.8
> >
> >  postgresql.conf
> >
> > password_encryption  = ?
> >
> >  pg_hba.conf
> >
> >  Provide the relevant lines
> >
> > 2) Provide the error messages received when trying to connect.
> >
> >
> >  > I have previously done a similar upgrade and did not face similar
> > issues.
> >  >
> >  >
> >  > Kind regards
> >  > Johnathan Tiamoh
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Need Help On Upgrade

2023-07-10 Thread Johnathan Tiamoh
Ok Adrian,

Postgres 9.5

postgresql.conf

   password_encryption  = ?  md5  > which is the default for 9.5

pg_hba.conf-> md5

Provide the relevant lines

Postgres 14.8

postgresql.conf

   password_encryption  = ?   scram-sha-256  > which is the default
for 14.8

pg_hba.conf  md5


Thank you.

On Mon, Jul 10, 2023 at 3:38 PM Adrian Klaver 
wrote:

> On 7/10/23 11:15, Johnathan Tiamoh wrote:
> > Hello,
> >
> > I upgraded from postgresql 9.5 to 14.8. After the upgrade users were not
> > able to connect due to password hash.
> >
> > The password encryption was on scram-sha-256. The password hash was on
> > md5. in the pg_hba.conf file the authentication method was md5. I do not
> > know why users could not connect because as per postgresql
> > documentation, that should work.
> >
>
> 1) You are going to have to be more specific about where and what the
> settings are or where for both Postgres versions and the *.conf files. So:
>
> Postgres 9.5
>
> postgresql.conf
>
>password_encryption  = ?
>
> pg_hba.conf
>
> Provide the relevant lines
>
> Postgres 14.8
>
> postgresql.conf
>
>password_encryption  = ?
>
> pg_hba.conf
>
> Provide the relevant lines
>
> 2) Provide the error messages received when trying to connect.
>
>
> > I have previously done a similar upgrade and did not face similar issues.
> >
> >
> > Kind regards
> > Johnathan Tiamoh
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Need Help On Upgrade

2023-07-10 Thread Adrian Klaver

On 7/10/23 12:55, Johnathan Tiamoh wrote:

Please reply to list also.
Ccing list


Ok Adrian,

Postgres 9.5

     postgresql.conf

        password_encryption  = ?  md5  > which is the default for 9.5

     pg_hba.conf    -> md5

         Provide the relevant lines

Postgres 14.8

     postgresql.conf

        password_encryption  = ?   scram-sha-256  > which is the 
default for 14.8


     pg_hba.conf      md5


You have not answered:

2) Provide the error messages received when trying to connect.




Thank you.


On Mon, Jul 10, 2023 at 3:38 PM Adrian Klaver > wrote:


On 7/10/23 11:15, Johnathan Tiamoh wrote:
 > Hello,
 >
 > I upgraded from postgresql 9.5 to 14.8. After the upgrade users
were not
 > able to connect due to password hash.
 >
 > The password encryption was on scram-sha-256. The password hash
was on
 > md5. in the pg_hba.conf file the authentication method was md5. I
do not
 > know why users could not connect because as per postgresql
 > documentation, that should work.
 >

1) You are going to have to be more specific about where and what the
settings are or where for both Postgres versions and the *.conf
files. So:

Postgres 9.5

     postgresql.conf

        password_encryption  = ?

     pg_hba.conf

         Provide the relevant lines

Postgres 14.8

     postgresql.conf

        password_encryption  = ?

     pg_hba.conf

         Provide the relevant lines

2) Provide the error messages received when trying to connect.


 > I have previously done a similar upgrade and did not face similar
issues.
 >
 >
 > Kind regards
 > Johnathan Tiamoh

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Need Help On Upgrade

2023-07-10 Thread Adrian Klaver

On 7/10/23 11:15, Johnathan Tiamoh wrote:

Hello,

I upgraded from postgresql 9.5 to 14.8. After the upgrade users were not 
able to connect due to password hash.


The password encryption was on scram-sha-256. The password hash was on 
md5. in the pg_hba.conf file the authentication method was md5. I do not 
know why users could not connect because as per postgresql 
documentation, that should work.




1) You are going to have to be more specific about where and what the 
settings are or where for both Postgres versions and the *.conf files. So:


Postgres 9.5

   postgresql.conf

  password_encryption  = ?

   pg_hba.conf

   Provide the relevant lines

Postgres 14.8

   postgresql.conf

  password_encryption  = ?

   pg_hba.conf

   Provide the relevant lines

2) Provide the error messages received when trying to connect.



I have previously done a similar upgrade and did not face similar issues.


Kind regards
Johnathan Tiamoh


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Using "exit" to bring "goto" functionality.

2023-07-10 Thread Adrian Klaver

On 7/10/23 12:08, DAVID ROTH wrote:

I am a newbie at my current job.
They have a separate log table for every table because they want all the 
columns.
I have been looking for a way to go to a common log table without requiring 
major changes to the application.

With your help, I have a proof of principle that I can demonstrate now.


FYI, you replied to the wrong thread.






On 07/10/2023 2:58 PM EDT Christophe Pettus  wrote:

  

On Jul 10, 2023, at 11:54, Bryn Llewellyn  wrote:

What is the rationale for supporting what seems to be on its face this strange 
functionality?


It allows you to EXIT or CONTINUE a loop that is not the innermost one, by 
naming the label of an outer loop.

One can debate endlessly whether it's good or bad to include that 
functionality, but it definitely has use cases.  I would assume that at this 
point, it's not going anywhere.

The relevant documentation is here:


https://www.postgresql.org/docs/15/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Using "exit" to bring "goto" functionality.

2023-07-10 Thread DAVID ROTH
I am a newbie at my current job.
They have a separate log table for every table because they want all the 
columns.
I have been looking for a way to go to a common log table without requiring 
major changes to the application.

With your help, I have a proof of principle that I can demonstrate now.



> On 07/10/2023 2:58 PM EDT Christophe Pettus  wrote:
> 
>  
> > On Jul 10, 2023, at 11:54, Bryn Llewellyn  wrote:
> > 
> > What is the rationale for supporting what seems to be on its face this 
> > strange functionality?
> 
> It allows you to EXIT or CONTINUE a loop that is not the innermost one, by 
> naming the label of an outer loop.
> 
> One can debate endlessly whether it's good or bad to include that 
> functionality, but it definitely has use cases.  I would assume that at this 
> point, it's not going anywhere.
> 
> The relevant documentation is here:
> 
>   
> https://www.postgresql.org/docs/15/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS




Re: Using "exit" to bring "goto" functionality.

2023-07-10 Thread Christophe Pettus



> On Jul 10, 2023, at 11:54, Bryn Llewellyn  wrote:
> 
> What is the rationale for supporting what seems to be on its face this 
> strange functionality?

It allows you to EXIT or CONTINUE a loop that is not the innermost one, by 
naming the label of an outer loop.

One can debate endlessly whether it's good or bad to include that 
functionality, but it definitely has use cases.  I would assume that at this 
point, it's not going anywhere.

The relevant documentation is here:


https://www.postgresql.org/docs/15/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS



Using "exit" to bring "goto" functionality.

2023-07-10 Thread Bryn Llewellyn
I wanted to see what error(s) would be reported if "exit" or "continue" is 
used, in PL/pgSQL, outside of a loop—and in particular if these were reported 
as syntax errors or runtime errors. There were no surprises with "continue". 
But I was surprised by this:

create procedure p()
  language plpgsql
as $body$
begin
  exit;
end;
$body$;

It causes the 42601 syntax error: EXIT cannot be used outside a loop, unless it 
has a label

"unless it has a label"? Eh?

I Googled for that message and found this:

https://postgrespro.com/list/thread-id/2364409 

“Thread: Better testing coverage and unified coding for plpgsql loops”, Tom 
Lane, 31 December 2017

The message is mentioned. But I couldn't find any opinion about the intention. 
I tried this (using Version 15.3):

create function f(n in int)
  returns text
  language plpgsql
as $body$
declare
  v text not null := 'a';
begin
  <<b1>>begin
v := v||'1';
exit b1 when length(v) > n;
v := v||'2';
exit b1 when length(v) > n;
v := v||'3';
  end b1;
  return v;
end;
$body$;

It completed without error. I tried "select f(1)" and then with the actuals "2" 
and "99"—and I got the outcomes that the code asks for. In other words, "exit" 
can, in at least some scenarios, be used to emulate "goto". (Rather like a 
premature "return" in the middle of a procedure.)

What is the rationale for supporting what seems to be on its face this strange 
functionality?

As it happens, Oracle's PL/SQL has a "goto" statement. But PL/pgSQL does not. 
(I assume that this is because "goto" is considered a bad thing.) But PL/SQL 
programmers do use it. However, the doc section:

https://www.postgresql.org/docs/current/plpgsql-porting.html 

"Porting from Oracle PL/SQL"

doesn't mention "goto". But "exit <label>" might help.

Re: Trigger Function question

2023-07-10 Thread Christophe Pettus



> On Jul 10, 2023, at 11:46, DAVID ROTH  wrote:
> 
> Is there a way to get new.* into a jsonb column?

The to_jsonb() function accepts a row type like NEW.*, and returns a JSONB 
object with the keys as column names.
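
For example, a minimal sketch (hypothetical table and trigger names) of a
generic logging trigger that stores the whole row as jsonb:

  CREATE TABLE audit_log (
      logged_at  timestamptz NOT NULL DEFAULT now(),
      table_name text        NOT NULL,
      row_data   jsonb       NOT NULL
  );

  CREATE FUNCTION log_row() RETURNS trigger
  LANGUAGE plpgsql AS $$
  BEGIN
      INSERT INTO audit_log (table_name, row_data)
      VALUES (TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME, to_jsonb(NEW));
      RETURN NEW;
  END;
  $$;

  CREATE TRIGGER some_table_log
      AFTER INSERT OR UPDATE ON some_table
      FOR EACH ROW EXECUTE FUNCTION log_row();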



Re: Trigger Function question

2023-07-10 Thread Marcos Pegoraro
>
> Is there a way to get new.* into a jsonb column?
>

select json_object_agg(js.key, js.value) from
json_each_text(row_to_json(new.*)) js

Marcos


Re: Trigger Function question

2023-07-10 Thread DAVID ROTH
I saw your message about a "few" columns and missed the new.* notation.
Is there a way to get new.* into a jsonb column?

> On 07/10/2023 2:38 PM EDT Christophe Pettus  wrote:
> 
>  
> > On Jul 10, 2023, at 11:37, DAVID ROTH  wrote:
> > 
> > Thanks for the example. I have a test trigger now that does that but my 
> > application needs all of the columns.
> 
> I'm not quite sure I understand.  Logging NEW.* and OLD.* *does* get all
> the columns, without having to run a specific query to find out which
> columns the table that caused the trigger to fire has.




Re: Trigger Function question

2023-07-10 Thread DAVID ROTH
I was hoping that NEW could be treated as a record or as an array similar to 
pg_argv.

> On 07/10/2023 2:31 PM EDT Christophe Pettus  wrote:
> 
>  
> > On Jul 10, 2023, at 11:29, DAVID ROTH  wrote:
> > 
> > I want to use a single trigger function to log multiple tables and the 
> > tables have different columns.  I can get the names of the columns from the 
> > catalog.  But I have not been able to figure out how to get NEW.x when x is 
> > not known until run time.
> 
> Unless you only want to log a subset of rows from each table, it's not 
> required that you get the specific columns.  Here's an example of how to do a 
> generic auditing trigger:
> 
>   https://wiki.postgresql.org/wiki/Audit_trigger
> 
> If it's supported on your platform, you might also look at the pg_audit 
> extension.




Re: Trigger Function question

2023-07-10 Thread Christophe Pettus



> On Jul 10, 2023, at 11:37, DAVID ROTH  wrote:
> 
> Thanks for the example. I have a test trigger now that does that but my 
> application needs all of the columns.

I'm not quite sure I understand.  Logging NEW.* and OLD.* *does* get all the
columns, without having to run a specific query to find out which columns the
table that caused the trigger to fire has.



Re: Trigger Function question

2023-07-10 Thread DAVID ROTH
Thanks for the example. I have a test trigger now that does that but my 
application needs all of the columns.

> On 07/10/2023 2:31 PM EDT Christophe Pettus  wrote:
> 
>  
> > On Jul 10, 2023, at 11:29, DAVID ROTH  wrote:
> > 
> > I want to use a single trigger function to log multiple tables and the 
> > tables have different columns.  I can get the names of the columns from the 
> > catalog.  But I have not been able to figure out how to get NEW.x when x is 
> > not known until run time.
> 
> Unless you only want to log a subset of rows from each table, it's not 
> required that you get the specific columns.  Here's an example of how to do a 
> generic auditing trigger:
> 
>   https://wiki.postgresql.org/wiki/Audit_trigger
> 
> If it's supported on your platform, you might also look at the pg_audit 
> extension.




Re: Trigger Function question

2023-07-10 Thread Christophe Pettus



> On Jul 10, 2023, at 11:29, DAVID ROTH  wrote:
> 
> I want to use a single trigger function to log multiple tables and the tables 
> have different columns.  I can get the names of the columns from the catalog. 
>  But I have not been able to figure out how to get NEW.x when x is not known 
> until run time.

Unless you only want to log a subset of rows from each table, it's not required 
that you get the specific columns.  Here's an example of how to do a generic 
auditing trigger:

https://wiki.postgresql.org/wiki/Audit_trigger

If it's supported on your platform, you might also look at the pg_audit 
extension.



Re: Trigger Function question

2023-07-10 Thread DAVID ROTH
I want to use a single trigger function to log multiple tables and the tables 
have different columns.  I can get the names of the columns from the catalog.  
But I have not been able to figure out how to get NEW.x when x is not known 
until run time.


> On 07/10/2023 2:23 PM EDT Christophe Pettus  wrote:
> 
>  
> > On Jul 10, 2023, at 11:20, DAVID ROTH  wrote:
> > 
> > In a trigger function, is there a way to get a list of all of the columns 
> > in the triggering table?
> 
> You can get the table that the trigger fired on with TG_TABLE_SCHEMA and 
> TG_TABLE_NAME, and then query the system catalogs to get a list of columns.
> 
> But can you tell us a bit more about your use-case?  You may be able to write 
> the trigger in such a way that it doesn't need to change behavior based on 
> the columns.




Re: Trigger Function question

2023-07-10 Thread Christophe Pettus



> On Jul 10, 2023, at 11:20, DAVID ROTH  wrote:
> 
> In a trigger function, is there a way to get a list of all of the columns in 
> the triggering table?

You can get the table that the trigger fired on with TG_TABLE_SCHEMA and 
TG_TABLE_NAME, and then query the system catalogs to get a list of columns.
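
For instance, a minimal sketch (not from this thread) that collects the
triggering table's column names from the catalogs inside a trigger function:

  CREATE FUNCTION show_columns() RETURNS trigger
  LANGUAGE plpgsql AS $$
  DECLARE
      cols text;
  BEGIN
      SELECT string_agg(attname::text, ', ' ORDER BY attnum)
        INTO cols
        FROM pg_attribute
       WHERE attrelid = format('%I.%I', TG_TABLE_SCHEMA, TG_TABLE_NAME)::regclass
         AND attnum > 0
         AND NOT attisdropped;
      RAISE NOTICE 'columns of %.%: %', TG_TABLE_SCHEMA, TG_TABLE_NAME, cols;
      RETURN NEW;
  END;
  $$;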

But can you tell us a bit more about your use-case?  You may be able to write 
the trigger in such a way that it doesn't need to change behavior based on the 
columns.



Trigger Function question

2023-07-10 Thread DAVID ROTH
In a trigger function, is there a way to get a list of all of the columns in 
the triggering table?
 
I want to be able to use a single trigger function with multiple tables.
 
Thanks
 

Need Help On Upgrade

2023-07-10 Thread Johnathan Tiamoh
Hello,

I upgraded from postgresql 9.5 to 14.8. After the upgrade users were not
able to connect due to password hash.

The password encryption was set to scram-sha-256. The password hash was md5.
In the pg_hba.conf file the authentication method was md5. I do not know
why users could not connect because, as per the postgresql documentation,
that should work.

I have previously done a similar upgrade and did not face similar issues.


Kind regards
Johnathan Tiamoh


Re: Is anyone using db_user_namespace?

2023-07-10 Thread Nathan Bossart
On Fri, Jun 30, 2023 at 02:56:08PM -0700, Nathan Bossart wrote:
> Over in pgsql-hackers, I've proposed removing the db_user_namespace
> parameter in v17 [0].  I am personally not aware of anyone using this
> parameter, but I wanted to give folks an opportunity to object in case they
> are using it (or are aware of someone who is).
> 
> [0] https://postgr.es/m/20230630200509.GA2830328%40nathanxps13

Bumping this thread.  As long as no objections materialize, I plan to
remove this parameter shortly.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: what causes new temp schemas to be created

2023-07-10 Thread Tom Lane
Ted Toth  writes:
> So it's an optimization to reuse existing but currently unused temp
> schemas, correct?

Exactly.  We could just destroy and recreate the old temp schema,
but that seems to lead to useless catalog churn.  Instead we just
destroy any old contents, if there's an existing temp schema in
the slot assigned to the current backend.

regards, tom lane




Re: what causes new temp schemas to be created

2023-07-10 Thread Ted Toth
So it's an optimization to reuse existing but currently unused temp
schemas, correct?

On Mon, Jul 10, 2023 at 11:22 AM David G. Johnston
 wrote:
>
> On Mon, Jul 10, 2023 at 9:20 AM Ted Toth  wrote:
>>
>> I don't see that the schema is removed when the session is over and I
>> see other sessions come along later and use it. I'm assuming here that
>> a session is started on connect and ended when the connection is
>> closed.
>>
>
> The first time a session needs a temporary schema it is assigned one which is 
> then immediately cleared out.
>
> David J.




Re: what causes new temp schemas to be created

2023-07-10 Thread David G. Johnston
On Mon, Jul 10, 2023 at 9:20 AM Ted Toth  wrote:

> I don't see that the schema is removed when the session is over and I
> see other sessions come along later and use it. I'm assuming here that
> a session is started on connect and ended when the connection is
> closed.
>
>
The first time a session needs a temporary schema it is assigned one which
is then immediately cleared out.

David J.


Re: what causes new temp schemas to be created

2023-07-10 Thread Ted Toth
I don't see that the schema is removed when the session is over and I
see other sessions come along later and use it. I'm assuming here that
a session is started on connect and ended when the connection is
closed.

On Mon, Jul 10, 2023 at 9:21 AM David G. Johnston
 wrote:
>
> On Mon, Jul 10, 2023 at 7:18 AM Ted Toth  wrote:
>>
>> When a temp table is created I see a pg_temp_NNN (for example
>> pg_temp_3, pg_toast_temp_3) schemas created when/why are additional
>> temp schemas created( pg_temp_4/pg_toast_temp_4)?
>>
>
> Temporary schemas are isolated to the session they are created in.  Hence, 
> you get multiple temporary schemas if you have concurrent sessions using 
> temporary objects.
>
> David J.




Prevent LSN pointer advancement in case of cursor exception (wal2json + pg_logical_slot_get_changes)

2023-07-10 Thread Mertez
Hi,

we would like to fetch changes from the wal using a combination of the
cursor and the sql function pg_logical_slot_get_changes that also takes
care about the position of the LSN pointer.

The decision to use a cursor in the Python script was to mitigate the
out-of-memory exceptions in case of large number of changes.

What we have observed is that when an exception occurs during one of the
cursor iterations, the function still advances the LSN pointer. The same
thing can be reproduced in plpgsql. Is this expected behaviour, and is
there a way to prevent it?

Interestingly, when we execute the same query without the cursor in an IDE
and manually interrupt the statement, the pointer does not advance.
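
One commonly suggested pattern (a sketch, not something verified in this
thread) is to read with pg_logical_slot_peek_changes(), which consumes
nothing, and to move the slot forward explicitly with
pg_replication_slot_advance() only after the batch has been processed
successfully:

  SELECT lsn, xid, data
    FROM pg_logical_slot_peek_changes('test_slot_bq_failures', NULL, NULL,
                                      'format-version', '2');

  -- ... process the rows, remembering the last LSN that was handled ...

  SELECT pg_replication_slot_advance('test_slot_bq_failures',
                                     '0/16B3748'::pg_lsn);  -- hypothetical LSN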

Thanks!
Mertoz

--

Version:
- plugin - wal2json v2
- PostgreSQL 14.7 on x86_64-pc-linux-gnu, compiled by Debian clang version
12.0.1, 64-bit

Scenarios:

-- create the slot
SELECT pg_create_logical_replication_slot('test_slot_bq_failures',
'wal2json');

-- prepare table
create table contact9 (id int, name varchar, email varchar, col01 varchar,
col02 varchar, updated_at timestamp);
alter table contact9 replica identity full;

-- check the number of records without affecting the LSN pointer
select count(*) from pg_logical_slot_peek_changes('test_slot_bq_failures',
NULL, NULL, 'include-xids', '1', 'include-schemas','1',
'include-timestamp',
'1','format-version','2','include-transaction','0','pretty-print','0','include-lsn','1');

-- insert dummy data before each scenario
insert into contact9
select
 id,
'User ' || id name,
'user' || id || '@example.com' email,
repeat(gen_random_uuid()::text, 10) col01, -- 36 x 10 = 360 chars
repeat(gen_random_uuid()::text, 10) col02,
current_timestamp updated_at
from (select  generate_series(1, 100) id ) a; -- large, test for manual
interrupt
-- from (select  generate_series(1, 100) id ) a;

-- SCENARIO 1
-- results:
--   1st run + manual interrupt: none (ERROR: canceling statement due to
user request) -> OK
--   2nd run: 1 000 000 -> OK
--   3r run: 0 -> OK  (LSN advanced in the previous step because of the
successful execution of the statement)
select count(*) from pg_logical_slot_get_changes('test_slot_bq_failures',
NULL, NULL, 'include-xids', '1', 'include-schemas','1',
'include-timestamp',
'1','format-version','2','include-transaction','0','pretty-print','0','include-lsn','1');

-- SCENARIO 2 (cursor, pgsql)
-- results:
--   1st run: 1 row + ERROR: Simulated error  -> OK
--   2nd run: 0 rows -> NOT OK as the first execution was not successful so
we should fetch again 1 row and then an exception should trigger
DO $$
DECLARE
  cur CURSOR FOR SELECT 'contact9' target_table_name, lsn, xid, data FROM
pg_logical_slot_get_changes('test_slot_bq_failures', NULL, NULL,
'include-xids', '1', 'include-schemas', '1', 'include-timestamp', '1',
'format-version', '2', 'include-transaction', '0', 'pretty-print', '0',
'include-lsn', '1');
  rec RECORD;
BEGIN
  OPEN cur;
  LOOP
FETCH cur INTO rec;
EXIT WHEN NOT FOUND;
RAISE NOTICE '%', rec; -- debug the record on the output
RAISE EXCEPTION 'Simulated error'; -- simulate exception after the
first retrieved row -> LSN should stay the same and on the next run we
should again see the first row?
  END LOOP;
  CLOSE cur;
END $$;


Re: Moving data from huge table slow, min() query on indexed column taking 38s

2023-07-10 Thread Dimitrios Apostolou

Thanks! I have completed the transfer by taking down the database and
working exclusively on it, but I still wonder how one could do it in
production without exclusive locks. The loop with small DELETE...INSERT
transactions I posted on the parent post bloats the table fast.

The link you posted contains very useful info; I was not expecting that
the constraints could blow up the memory like that. Comments from me:

Disabling and then re-enabling the foreign key constraints is easily done
with ALTER TABLE.
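
For example, one common pattern (a sketch with hypothetical table and
constraint names) is to drop the constraint before the bulk load and re-add
it afterwards, optionally as NOT VALID so the full check can run separately:

  ALTER TABLE child DROP CONSTRAINT child_parent_fkey;

  -- ... bulk load ...

  ALTER TABLE child
    ADD CONSTRAINT child_parent_fkey
    FOREIGN KEY (parent_id) REFERENCES parent (id) NOT VALID;

  ALTER TABLE child VALIDATE CONSTRAINT child_parent_fkey;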

Unfortunately it doesn't seem to be the same case for indices. One has to
create the table without indices, and then create the indices separately.
With such a process there is a risk of ending up with non-identical
table schemas.

By the way, with COPY one must use an intermediate file, right? There is
no way to COPY from table to table directly?


Thanks,
Dimitris

On Thu, 6 Jul 2023, Lorusso Domenico wrote:


Hello,
this is a standard problem during bulk copy.

Here are some suggestions; for example, disable indexes.

The main issue is related to indexes, lock escalation and log writing.
In other DBMSs you can turn logging off on the table, but PostgreSQL does not
seem to have this feature.

Anyway, taking an explicit exclusive table lock should prevent lock escalation.

So: disable indexes on the target table
lock both tables exclusively
insert the data
truncate the old table

If this doesn't work you can consider using the copy command.







On Thu, 6 Jul 2023 at 18:12, Dimitrios Apostolou  wrote:
  On Thu, 6 Jul 2023, Dimitrios Apostolou wrote:

  > + First I chose the method to DELETE...INSERT everything.

  Just to clarify, the query looked more like

     WITH rows AS ( DELETE FROM tbl_legacy RETURNING * )
       INSERT INTO tbl
         SELECT * FROM rows;

  >   I noticed that the postgres process was growing without bounds, up to
  >   the point that it consumed almost all of swap space and I had to kill
  >   the command. Any idea why that happens?

  Also note that my settings for work_mem, temp_buffers, shared_buffers etc
  are all well below the RAM size and postgres has never shown unbound
  growth again. Postgres version is 15.2.


  Dimitris





--
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]




Re: what causes new temp schemas to be created

2023-07-10 Thread Ron

On 7/10/23 09:20, David G. Johnston wrote:

On Mon, Jul 10, 2023 at 7:18 AM Ted Toth  wrote:

When a temp table is created I see a pg_temp_NNN (for example
pg_temp_3, pg_toast_temp_3) schemas created when/why are additional
temp schemas created( pg_temp_4/pg_toast_temp_4)?


Temporary schemas are isolated to the session they are created in. Hence, 
you get multiple temporary schemas if you have concurrent sessions using 
temporary objects.


IOW, temporary schemata are how Pg lets different sessions have temporary 
objects that use the same name?



--
Born in Arizona, moved to Babylonia.

Re: what causes new temp schemas to be created

2023-07-10 Thread David G. Johnston
On Mon, Jul 10, 2023 at 7:18 AM Ted Toth  wrote:

> When a temp table is created I see a pg_temp_NNN (for example
> pg_temp_3, pg_toast_temp_3) schemas created when/why are additional
> temp schemas created( pg_temp_4/pg_toast_temp_4)?
>
>
Temporary schemas are isolated to the session they are created in.  Hence,
you get multiple temporary schemas if you have concurrent sessions using
temporary objects.
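
A quick way to see this (a sketch, not from this thread) is to list the
per-session temporary schemas that currently exist in a database:

  SELECT nspname
    FROM pg_namespace
   WHERE nspname LIKE 'pg\_temp\_%'
      OR nspname LIKE 'pg\_toast\_temp\_%'
   ORDER BY nspname;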

David J.


what causes new temp schemas to be created

2023-07-10 Thread Ted Toth
When a temp table is created I see pg_temp_NNN schemas created (for
example pg_temp_3, pg_toast_temp_3). When/why are additional temp
schemas created (pg_temp_4/pg_toast_temp_4)?

Ted




Re: search_path for replica-mode

2023-07-10 Thread David G. Johnston
On Monday, July 10, 2023, André Kutepow  wrote:

> There is a trigger in the database
>
> SET search_path TO "$user", public;
>
> CREATE OR REPLACE FUNCTION art__a_iu_func()
> BEGIN
>   INSERT INTO table_z...   -- »table_z« is in schema public
> END;
>
> CREATE OR REPLACE TRIGGER art__a_iu
> AFTER INSERT OR UPDATE OF ak_nr
> ON art
> FOR EACH ROW
> EXECUTE FUNCTION art__a_iu_func();
>
> In a regular trigger, it works great!
> But, if I announce it, as:
>
> ALTER TABLE art ENABLE REPLICA TRIGGER art__a_iu;
> then I get an error:
>
> ERROR:  relation »table_z« does not exist at character xxx
> QUERY:  INSERT INTO table_z...
>
> Why doesn't the search_path work in replica mode?
> Is this intentional, or is it a bug?
> How else can I set the search_path for replica mode?
>
>
Relying on the external search_path for system-executed objects is
problematic.  Don't do it.  Either attach a SET clause to the function or
schema-qualify the references.
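
For example (a sketch reusing the function name from the quoted message):

  -- Option 1: pin the search path on the function itself
  ALTER FUNCTION art__a_iu_func() SET search_path = public;

  -- Option 2: schema-qualify the reference inside the function body, e.g.
  --   INSERT INTO public.table_z ...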

David J.


search_path for replica-mode

2023-07-10 Thread André Kutepow

There is a trigger in the database

   SET search_path TO "$user", public;

   CREATE OR REPLACE FUNCTION art__a_iu_func()
   BEGIN
  INSERT INTO table_z...   -- »table_z« is in schema public
   END;

   CREATE OR REPLACE TRIGGER art__a_iu
    AFTER INSERT OR UPDATE OF ak_nr
    ON art
    FOR EACH ROW
    EXECUTE FUNCTION art__a_iu_func();

In a regular trigger, it works great!
But, if I declare it as:

   ALTER TABLE art ENABLE REPLICA TRIGGER art__a_iu;
then I get an error:

   ERROR:  relation »table_z« does not exist at character xxx
   QUERY:  INSERT INTO table_z...

Why doesn't the search_path work in replica mode?
Is this intentional, or is it a bug?
How else can I set the search_path for replica mode?



Re: Query regarding managing Replication

2023-07-10 Thread Julien Rouhaud
On Mon, Jul 10, 2023 at 3:41 PM Ashok Patil  wrote:

> Hello Julien,
>
> I need some more help regarding replication.
>
> The primary server should be able to switch to the standby role and the
> standby server should be able to switch to the primary role.
> [image: image.png]
> In our case we maintain two servers, Server1 and Server2. Server1 will be
> active and Server2 will be passive.
> If Server1 crashes, Server2 becomes active; as a result it starts a
> Keyclock instance, and Keyclock connects to the standby postgres server.
>
> As per the current replication steps, we found that the standby server will
> always be in read-only mode. We want to switch it to read/write mode and the
> primary to read-only mode.
>
> Alternatively, it would also be fine if both primary and standby were always
> in read-write mode, but both should replicate data.
>
> I searched for options on the internet but did not find an exact match.
> Could you please propose some good solutions?
>
> Proper steps would be appreciated.
>

I don't know what KeyClock is, but I'm assuming that what you're looking
for is a High Availability solution, not how to set up replication, as
postgres itself only provides ways to perform specific actions (promote a
standby to primary, build a new standby...) and third-party tools add the
necessary logic for anything fancier.  For a general overview of how to
perform a failover you can look at the documentation at
https://www.postgresql.org/docs/current/high-availability.html.  If you're
looking for an HA solution, I would recommend looking into Patroni:
https://patroni.readthedocs.io/en/latest/


Re: INSERT UNIQUE row?

2023-07-10 Thread Erik Wienhold
> On 10/07/2023 04:25 CEST p...@pfortin.com wrote:
>
> On Sun, 9 Jul 2023 17:04:03 -0700 Adrian Klaver wrote:
>
> >On 7/9/23 15:58, p...@pfortin.com wrote:
> >> Hi,
> >>
> >> Trying to figure out how to insert new property addresses into an
> >> existing table.
> >>
> >> Can a UNIQUE constraint be applied to an entire row?  Adding UNIQUE to
> >> each column won't work in such a case since there are multiple properties
> >> * on the same street
> >> * in the same town
> >> * with the same number on different streets
> >> * etc...
> >
> >Does the locality you are in have something like the Property ID# and/or 
> >Parcel # / Geo ID shown here:
> >
> >https://property.whatcomcounty.us/propertyaccess/PropertySearch.aspx?cid=0
>
> Thanks!  Just getting started on this issue and this made me realize my
> current data source may not be the best...  Much appreciated!

Also keep in mind that it's not trivial to model addresses, even in a single
country.  Some database constraints may become a footgun.

https://www.mjt.me.uk/posts/falsehoods-programmers-believe-about-addresses/
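
As for the narrower question of constraining an entire row, the usual tool
is a table-level UNIQUE constraint spanning several columns (a sketch with
hypothetical column names):

  ALTER TABLE properties
    ADD CONSTRAINT properties_address_uniq
    UNIQUE (street_number, street, town, state, postal_code);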

--
Erik




Re: Query regarding managing Replication

2023-07-10 Thread Ashok Patil
Hello Julien,

I need some more help regarding replication.

The primary server should be able to switch to the standby role and the
standby server should be able to switch to the primary role.
[image: image.png]
In our case we maintain two servers, Server1 and Server2. Server1 will be
active and Server2 will be passive.
If Server1 crashes, Server2 becomes active; as a result it starts a
Keyclock instance, and Keyclock connects to the standby postgres server.

As per the current replication steps, we found that the standby server will
always be in read-only mode. We want to switch it to read/write mode and the
primary to read-only mode.

Alternatively, it would also be fine if both primary and standby were always
in read-write mode, but both should replicate data.

I searched for options on the internet but did not find an exact match.
Could you please propose some good solutions?

Proper steps would be appreciated.

Regards,
Ashok




On Mon, Jul 3, 2023 at 3:06 PM Julien Rouhaud  wrote:

> Hi,
>
> On Mon, Jul 03, 2023 at 02:07:54PM +0530, Ashok Patil wrote:
> >
> > I am able to set up replication on a single machine where primary and
> > stand-by are both on localhost.
> > I think you are right; the firewall might be blocking the backup
> > process.
>
> Agreed
>
> > Just one single question: is multi-master replication possible on
> > Postgres?
> > I searched on google but did not find a proper solution.
>
> This is not possible with vanilla postgres.  There are some commercial
> products
> / postgres forks that implement it, but I've never used them so I can't
> comment
> on how well they work nor how much they cost.
>