Re: ERROR: only immutable functions supported in continuous aggregate view

2023-03-12 Thread Martijn de Munnik
Ok, now I understand, thank you.


My solution is to create a materialized view with the intermediate values and
then create a normal view on top that uses those intermediate values and
the jsonb_build_object function to produce the desired result.
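
Something like this, with hypothetical table and column names (the
continuous-aggregate option is TimescaleDB's, not core PostgreSQL):

-- the continuous aggregate stores only immutable aggregate values
CREATE MATERIALIZED VIEW readings_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS bucket,
       device_id,
       avg(temperature) AS avg_temp,
       max(temperature) AS max_temp
FROM readings
GROUP BY bucket, device_id;

-- a plain view applies the non-immutable jsonb_build_object on top
CREATE VIEW readings_hourly_json AS
SELECT bucket,
       device_id,
       jsonb_build_object('avg', avg_temp, 'max', max_temp) AS payload
FROM readings_hourly;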


Kind regards,

Martijn de Munnik


On 2023-03-11 16:47, Tom Lane wrote:
> Martijn de Munnik  writes:
>> The error is caused by the jsonb_build_object function; is there any
>> reason why this function is not IMMUTABLE?
> It can invoke arbitrary datatype output functions, some of which are
> not immutable.  We unfortunately lack any infrastructure that would
> allow reasoning that "all the types used in this specific invocation
> have immutable output functions, so it'd be OK to consider this
> invocation immutable".  So we have to assume the worst when labeling
> jsonb_build_object.  (Even if we had such infrastructure, I doubt
> it would save you in this use-case, because it looks like some of
> the values you're converting are timestamps, which respond to the
> DateStyle and TimeZone settings.)
>
> You could, on these grounds, argue that jsonb_build_object and other
> functions with this problem actually need to be VOLATILE.  But we've
> established a project convention that I/O functions should be at
> worst STABLE, allowing calling functions to be STABLE as well.
>
>   regards, tom lane
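
As an aside, the TimeZone sensitivity mentioned above is easy to see; a small
sketch:

SET timezone = 'UTC';
SELECT jsonb_build_object('t', now());  -- renders the timestamp with +00:00
SET timezone = 'America/New_York';
SELECT jsonb_build_object('t', now());  -- same instant, different text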





Re: confusion between max_standby_archive_delay, max_standby_archive_delay and max_standby_archive_delay

2023-03-12 Thread Laurenz Albe
On Sun, 2023-03-12 at 01:53 +0530, Atul Kumar wrote:
> Could someone help me in telling the difference between these three parameters
> 1. max_standby_archive_delay
> 2. max_standby_streaming_delay
> 3. recovery_min_apply_delay
> 
> My basic motive is to make the standby database server delay applying
> changes to itself, if any data has been accidentally deleted/updated/truncated
> on the primary server.
> 
> Which parameter do I need to configure to serve this purpose? And
> when will the remaining two parameters be used?
> 
> It would be great if anyone can explain them with a brief example.

The parameter that does what you describe is "recovery_min_apply_delay".

The other parameters only deal with delaying replication in the face of a
replication conflict.

Note that changes are immediately shipped to the standby; what is delayed by
"recovery_min_apply_delay" is only the replay of the WAL information.

So you can recover from a logical problem like DROP TABLE by stopping the
standby, setting "recovery_target_time" to a time before the problem happened
and then restarting the standby.  Then recovery will stop before the problem
is replayed.
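
For example (a sketch; the delay and timestamps are arbitrary), on the standby:

-- keep replay four hours behind the primary
ALTER SYSTEM SET recovery_min_apply_delay = '4h';
SELECT pg_reload_conf();

-- after an accident at, say, 14:30, aim recovery just before it;
-- the recovery_target_* settings take effect when the standby is restarted
ALTER SYSTEM SET recovery_target_time = '2023-03-12 14:25:00';
ALTER SYSTEM SET recovery_target_action = 'pause';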

Yours,
Laurenz Albe




Re: confusion between max_standby_archive_delay, max_standby_archive_delay and max_standby_archive_delay

2023-03-12 Thread Ron

On 3/12/23 09:01, Laurenz Albe wrote:

> On Sun, 2023-03-12 at 01:53 +0530, Atul Kumar wrote:
> > Could someone help me in telling the difference between these three parameters
> > 1. max_standby_archive_delay
> > 2. max_standby_streaming_delay
> > 3. recovery_min_apply_delay
> > 
> > My basic motive is to make the standby database server delay applying
> > changes to itself, if any data has been accidentally deleted/updated/truncated
> > on the primary server.
> > 
> > Which parameter do I need to configure to serve this purpose? And
> > when will the remaining two parameters be used?
> > 
> > It would be great if anyone can explain them with a brief example.
> 
> The parameter that does what you describe is "recovery_min_apply_delay".
> 
> The other parameters only deal with delaying replication in the face of a
> replication conflict.
> 
> Note that changes are immediately shipped to the standby; what is delayed by
> "recovery_min_apply_delay" is only the replay of the WAL information.
> 
> So you can recover from a logical problem like DROP TABLE by stopping the
> standby, setting "recovery_target_time" to a time before the problem happened
> and then restarting the standby.  Then recovery will stop before the problem
> is replayed.


How do you determine what to set recovery_min_apply_delay to? 15 minutes...
an hour... 8 hours... a week?  (The OP's problem is better solved by restoring a
recent backup to a separate instance, while letting replication do its
DR/HA things.)


--
Born in Arizona, moved to Babylonia.




Re: confusion between max_standby_archive_delay, max_standby_archive_delay and max_standby_archive_delay

2023-03-12 Thread Laurenz Albe
On Sun, 2023-03-12 at 12:00 -0500, Ron wrote:
> On 3/12/23 09:01, Laurenz Albe wrote:
> > On Sun, 2023-03-12 at 01:53 +0530, Atul Kumar wrote:
> > > Could someone help me in telling the difference between these three parameters
> > > 1. max_standby_archive_delay
> > > 2. max_standby_streaming_delay
> > > 3. recovery_min_apply_delay
> > > 
> > > My basic motive is to make the standby database server delay applying
> > > changes to itself, if any data has been accidentally deleted/updated/truncated
> > > on the primary server.
> > > 
> > > Which parameter do I need to configure to serve this purpose? And
> > > when will the remaining two parameters be used?
> > > 
> > > It would be great if anyone can explain them with a brief example.
> > The parameter that does what you describe is "recovery_min_apply_delay".
> > 
> > The other parameters only deal with delaying replication in the face of a
> > replication conflict.
> > 
> > Note that changes are immediately shipped to the standby; what is delayed by
> > "recovery_min_apply_delay" is only the replay of the WAL information.
> > 
> > So you can recover from a logical problem like DROP TABLE by stopping the
> > standby, setting "recovery_target_time" to a time before the problem happened
> > and then restarting the standby.  Then recovery will stop before the problem
> > is replayed.
> 
> How do you determine what to set recovery_min_apply_delay to? 15 minutes...
> an hour... 8 hours... a week?  (The OP's problem is better solved by restoring a
> recent backup to a separate instance, while letting replication do its
> DR/HA things.)

That depends entirely on how you want to query the standby.  It is totally
irrelevant to the question of configuring a standby that is deliberately delayed
in order to recover from data loss caused by SQL.

Yours,
Laurenz Albe




Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-12 Thread Martin Goodson

Hello.

For reasons I won't bore you with, we compile PostgreSQL from source 
rather than use the standard packages for some of our databases.


We've compiled numerous PostgreSQL versions, from 11.1 to 14.4, using a
fairly generic and not particularly complicated compile script that has
worked successfully on dozens (possibly hundreds, I don't keep track :) )
of Red Hat boxes running numerous different versions of RHEL.


This script has worked without incident for *years*. That changed last week,
when we tried to compile PostgreSQL 12.9 on an RHEL 7.9 box and it
bombed out with an error we had never seen before.


To be honest, I'm not sure what's going wrong. I am by no means a Linux
sysadmin or compilation expert. I just run the script (and a variety of
other post-build steps ...)


Our basic process:

1. Install pre-requisite libraries/packages:

yum install pam-devel
yum install libxml2-devel
yum install libxslt-devel
yum install openldap
yum install openldap-devel
yum install uuid-devel
yum install readline-devel
yum install openssl-devel
yum install libicu-devel
yum install uuid-devel
yum install gcc
yum install make

2. Create a user to compile the source and own the software. For 
example, pgbuild


3. Build a couple of directories owned by the build user for the 
destination, source, etc. We then run the following script under the 
build user.


# locations and port are filled in per build
targetdir={directory to install postgresql into}
sourcedir={directory where the postgresql unzipped and untarred tarball has been located}
builddir={temporary build directory}
port={port number}

rm -Rf ${targetdir}
rm -Rf ${builddir}
mkdir ${targetdir}
mkdir ${builddir}
cd ${builddir}

${sourcedir}/configure --prefix=${targetdir} --with-pgport=${port} \
    --with-openssl \
    --with-ldap \
    --with-pam \
    --with-icu \
    --with-libxml \
    --with-ossp-uuid \
    --with-libxslt \
    --with-libedit-preferred \
    --with-gssapi \
    --enable-debug
rc=$?
if [ $rc -ne 0 ]
then
    # pause so the operator can decide whether to continue or abort
    echo " ERROR! configure returned non-zero code $rc - press RETURN to continue / Ctrl+C to abort"
    read ok
fi

make world
rc=$?
if [ $rc -ne 0 ]
then
    echo " ERROR! make world returned non-zero code $rc - press RETURN to continue / Ctrl+C to abort"
    read ok
fi

make check
rc=$?
if [ $rc -ne 0 ]
then
    echo " ERROR! make check returned non-zero code $rc - press RETURN to continue / Ctrl+C to abort"
    read ok
fi

make install-world
rc=$?
if [ $rc -ne 0 ]
then
    echo " ERROR! make install-world returned non-zero code $rc - press RETURN to continue / Ctrl+C to abort"
    read ok
fi


So, pretty straightforward stuff: run configure, make world, make check,
make install-world, with a little basic error checking after each step.


For years we've been able to run this script without issue, until last
week, when configure failed on one of our servers. After the usual
hundreds of lines of output, configure reported the following:


  checking for library containing gss_init_sec_context... no

  configure: error: could not find function 'gss_init_sec_context' required for GSSAPI


It then bombed out with rc 1, and the rest of the script aborted due to
our error checking.



A bit odd: nothing we'd seen before on the dozens of other compiles
across the enterprise.


Then I spotted that our library pre-install step doesn't include anything
for GSSAPI. A bit of a bug in our pre-reqs step; perhaps we've got away
with it previously because the libraries were already present, and this
one server in our whole estate doesn't have GSSAPI. I need to figure out
how to install GSSAPI, but that's a bit of a faff and I need to get this
build tested in a hurry.


So I simply removed the --with-gssapi, and tried again.

AND IT FAILED AGAIN.

This time it failed claiming it couldn't find the ldap library. Which is 
most -definitely- present.


I have no idea what's going on at this point. This script/process has
been in place for years, and we have *never* had issues like this
with it.


It gets weirder.

The compile step and make world steps work perfectly if the script is
run under root. Though, of course, the make check step fails. Running it
under root was inadvertent, but the fact that the compile and make steps
seemed to run successfully was a bit of a surprise.



So a fairly basic script that has been used for years suddenly fails on 
a fairly generic RHEL 7.9 server.


I am no compilation expert. Obviously. Have I missed something basic? As
I said, we've not seen problems like this before. Could there be some
sort of issue with the box's configuration? If it works for root but not
our usual build user, could there be a configuration problem with our
account? Can anyone offer any insight on what I need to check? At the
moment it all seems somewhat ... mystifying.


I am assuming there must be something wrong with the box/our
configuration somewhere, but where to look? If anyone can help - even if
it's to tell me I'm an idiot for missing one or more incredibly basic
things somehow - I would be very grateful.

Many thanks.

Regards,

M.

Re: Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-12 Thread Tom Lane
Martin Goodson  writes:
> So I simply removed the --with-gssapi, and tried again.
> AND IT FAILED AGAIN.
> This time it failed claiming it couldn't find the ldap library. Which is 
> most -definitely- present.

Hard to debug this sort of thing remotely when you don't supply the exact
error messages.  But ... do you have openldap-devel installed, or just
the base openldap package?

> The compile step and make world steps work perfectly if the script is 
> run under root.

That is odd.  Permissions problems on the libraries, maybe?

regards, tom lane




Re: Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-12 Thread Martin Goodson

On 12/03/2023 21:52, Tom Lane wrote:

> Martin Goodson  writes:
> > So I simply removed the --with-gssapi, and tried again.
> > AND IT FAILED AGAIN.
> > This time it failed claiming it couldn't find the ldap library. Which is
> > most -definitely- present.
> 
> Hard to debug this sort of thing remotely when you don't supply the exact
> error messages.  But ... do you have openldap-devel installed, or just
> the base openldap package?
> 
> > The compile step and make world steps work perfectly if the script is
> > run under root.
> 
> That is odd.  Permissions problems on the libraries, maybe?
> 
> regards, tom lane


Hi, Tom.

Sorry, I can get the complete log tomorrow - it's on my work PC, not at
home. I clearly made insufficient notes, for which I apologize :(


Not sure about permissions on the libraries. We just open up a session under
root and execute yum install <package>, and that has always worked in
the past. Not sure what I'd need to check - perhaps I can ask our
friendly neighbourhood UNIX sysadmin to look at those?


We did install openldap and openldap-devel, however:

yum install pam-devel
yum install libxml2-devel
yum install libxslt-devel
yum install openldap
yum install openldap-devel
yum install uuid-devel
yum install readline-devel
yum install openssl-devel
yum install libicu-devel
yum install uuid-devel
yum install gcc
yum install make

Regards,

M.


--
Martin Goodson.

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."





Re: Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-12 Thread Adrian Klaver

On 3/12/23 14:43, Martin Goodson wrote:
> Hello.
> 
> For reasons I won't bore you with, we compile PostgreSQL from source
> rather than use the standard packages for some of our databases.

> [...]

> So a fairly basic script that has been used for years suddenly fails on
> a fairly generic RHEL 7.9 server.
> 
> I am no compilation expert. Obviously. Have I missed something basic? As
> I said, we've not seen problems like this before. Could there be some
> sort of issue with the box's configuration? If it works for root but not
> our usual build user, could there be a configuration problem with our
> account? Can anyone offer any insight on what I need to check? At the
> moment it all seems somewhat ... mystifying.

SELinux issues?

Have you looked at the system logs to see if they shed any light?

> I am assuming there must be something wrong with the box/our
> configuration somewhere, but where to look? If anyone can help - even if
> it's to tell me I'm an idiot for missing one or more incredibly basic
> things somehow - I would be very grateful.
> 
> Many thanks.
> 
> Regards,
> 
> M.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Practice advice for use of %type in declaring a subprogram's formal arguments

2023-03-12 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> Do you (all) think that, notwithstanding this, it's a good plan to use 
>> "%type" for the declaration of a formal argument just for the reason that it 
>> documents your intention explicitly?
> 
> If my function is likely to be table-specific I would define its signature to 
> be:
> 
> function(i_var tablename)
> 
> i.e., it accepts a row of said table as the input.
> 
> As most functions don't, and likely shouldn't, care what source their 
> arguments come from, but instead perform an operation based upon the data 
> types of the inputs, using %type is not something I've ever really had 
> occasion to use.
> 
> Given that the source of record loses that metadata anyway, that would be a 
> further reason why I would just pretend the %type syntax doesn't exist.  I 
> suspect that on rare occasion its use could be beneficial, and maybe if I 
> remember it exists while writing one of those use cases up I'll use it, but 
> there doesn't seem to be much downside to just using concrete types 
> everywhere.  Code comments can handle intent well enough here, as opposed to 
> some, frankly, esoteric feature/syntax (which seems not all that SQL-ish but 
> I don't see where it is specifically noted as our extension).
> 
> Your comments do seem, at first glance at least, to refute the statement in 
> the documentation:
> 
> > The type of a column is referenced by writing table_name.column_name%TYPE. 
> > Using this feature can sometimes help make a function independent of 
> > changes to the definition of a table.
> 
> I mean, while it is indeed "independent of changes to the definition of a 
> table" so does simply writing "text"...the %type syntax seems like it should 
> follow the changes of the definition of a table...

Suppose you have a masters-and-details table pair where each table uses an 
autogenerated PK. A masters row, following the textbook, will also have a 
unique business key. Similarly, and in one plausible design, a details row will 
have a unique business identifier within the context of its masters row so that 
its unique business key will have two parts. Now you want to insert a new 
master row and a few details for it. This is a fine use case for a PL/pgSQL 
procedure with these input formal arguments:

— the new master’s unique business key, and some other facts for it.
— an array of “new within-master details, each with its within-master business 
ID and some other facts for it”

The code writes itself: "insert into masters values... returning PK into m_pk" 
followed by "insert into details... select... unnest(details_arr_in)". This, at 
least on the face of it, would be an obvious candidate for using %type. Both 
for the various input arguments and for the local variable, "m_pk", for the 
masters PK that gets inserted into the details table. Except for the fact that 
it doesn't live up to its promise. David said "the %type syntax seems like it 
should follow the changes of the definition of a table". I agree. But it 
doesn't. And I don't suppose that it ever will.
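
A sketch of that procedure, with hypothetical table, column, and type names,
using concrete types rather than %type for the reason just given:

create type detail_t as (business_id text, fact text);

create procedure insert_master_and_details(
  master_bk in text,
  master_fact in text,
  details_arr_in in detail_t[])
  language plpgsql
as $body$
declare
  m_pk int;
begin
  -- insert the new master row and capture its autogenerated PK
  insert into masters(business_key, fact)
  values (master_bk, master_fact)
  returning pk into m_pk;

  -- insert all the details for that master in one statement
  insert into details(master_pk, business_id, fact)
  select m_pk, d.business_id, d.fact
  from unnest(details_arr_in) as d;
end;
$body$;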

However, the requirement for "single point of definition" (hereinafter SPOD) is 
broader than just PL/pgsql local variables and ideally (but not usably) 
subprogram formal arguments. For example, route distance between two points on 
the surface of the earth, with agreed units, scale, precision, and the 
requirement to fall between zero and a sensible upper limit, is a good 
candidate for SPOD-ification. A domain gives you exaclty the mechanism you need.

I did this little test:

-- in cr-function.sql
create function s.f(v_in in s.num)
  returns text
  security definer
  set search_path = pg_catalog, pg_temp
  language plpgsql
as $body$
declare 
  r text not null := '';
begin
  select k::text into strict r from s.t where v = v_in;
  return r;
end;
$body$;

and

-- in test.sql
-- Deliberate poor definition of domain "s.num" (upper bound is too small).
create domain s.num as numeric constraint num_ok check(value > 0.0 and value <= 10.0);

create table s.t(k integer primary key, v s.num);
insert into s.t(k, v) values (1, 5);

\ir cr-function.sql
select s.f(5.0);

-- Improved definition of domain "s.num".
-- Using "create" rather than "alter" for maximum generality.
-- One might want to change the base type, too, in a different use case.
create domain s.num_new as numeric constraint num_ok check(value > 0.0 and value <= 20.0);

alter table s.t add column v_new s.num_new;
update s.t set v_new = v::s.num_new;
alter table s.t rename column v to v_old;
alter table s.t rename column v_new to v;
alter table s.t drop column v_old;
drop domain s.num cascade; --> drop cascades to function s.f(s.num)
alter domain s.num_new rename to num;
insert into s.t(k, v) values (2, 14.5);

\ir cr-function.sql
select s.f(14.5);

Using the domain, and everything that this implies when you want to change its
definition, means that you're forced to accept using "drop domain ... cascade".

Re: can't get psql authentication against Active Directory working

2023-03-12 Thread Stephen Frost
Greetings,

* Tomas Pospisek (t...@sourcepole.ch) wrote:
> On 25.02.23 00:52, Stephen Frost wrote:
> > * Tomas Pospisek (t...@sourcepole.ch) wrote:
> > > On 21.02.23 16:29, Stephen Frost wrote:
> > > > * Tomas Pospisek (t...@sourcepole.ch) wrote:
> > > > > On 20.02.23 15:17, Stephen Frost wrote:
> > > > > > * Tomas Pospisek (t...@sourcepole.ch) wrote:
> >>
> > > That [SSPI] document says:
> > > 
> > > "SSPI authentication only works when both server and client are running
> > > Windows, or, on non-Windows platforms, when GSSAPI is available."
> > > 
> > > I interpret that phrase like this:
> > > 
> > > * there's a case where both server and client are running Windows (doesn't
> > > apply to me)
> > > * there's a case where both are running non-Windows (doesn't apply to me
> > > either - server is Linux, client is Windows)
> > > 
> > > So that's an extremely terse docu that is not clear at all to me.
> > > 
> > > I'd suggest to change that phrase into:
> > > 
> > > "SSPI authentication works when both server and client are running 
> > > Windows.
> > > When server or client are on non-Windows platforms then those need to use
> > > GSSAPI."
> > > 
> > > - assuming that my interpretation of that super terse docu is correct of
> > > course.
> > 
> > No, that's not correct.  The blog post that I referenced is actually
> > using SSPI on the client and GSS on the server and it works because
> > they're compatible with each other on the wire.  Perhaps we could
> > improve the documentation you reference above but it's not actually
> > wrong as-is.  Perhaps this would be clearer:
> > 
> > SSPI and GSS are wireline compatible and can be mixed and matched
> > between clients and servers (where support for GSS is built into the
> > client library or the server), provided that there is an encryption
> > method which both will accept.
> > 
> > The encryption method bit is more of a historical artifact at this
> > point as modern systems have compatible AES-based encryption methods,
> > but that wasn't always the case.  It's also possible that it'll become
> > an issue in the future but at least for the moment most installations
> > have an AES-based compatible encryption method.
> 
> I have posted a suggestion for an improvement (via the form) to the
> pgsql-docs mailing list. Let's see what comes out of that.

Great, I've replied with my own suggestion.  Hopefully we can work
towards something that works for both of us (and everyone else) and then
I can commit the change.

> I like the idea to document the *why* behind SSPI and GSS, however I don't
> understand enough about it to propose a documentation improvement.

You did propose a documentation improvement, so I'm a bit confused by
this statement. ;)

> > > > > Even now I am unable to find *any* documentation on how to go about 
> > > > > doing
> > > > > auth from psql.exe to AD with SSPI. Would you have any pointers to
> > > > > documentation or a howto (or a blog post or a stackoverflow answer 
> > > > > or...).
> > > > 
> > > > Sure, here's a blog post that I wrote about doing exactly that:
> > > > 
> > > > https://www.crunchydata.com/blog/windows-active-directory-postgresql-gssapi-kerberos-authentication
> > > 
> > > Note that that document doesn't mention SSPI a single time.
> > 
> > It doesn't actually need to because SSPI is the default and it all just
> > works.
> 
> I can confirm the "just works" part since my setup now indeed *does* work.

Hah!  Great, glad to hear that.
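
As a quick sanity check that GSSAPI/SSPI is actually in use, the connected
session can query the pg_stat_gssapi view (present in v12 and later):

SELECT gss_authenticated, principal, encrypted
FROM pg_stat_gssapi
WHERE pid = pg_backend_pid();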

> However, the existing documentation on GSS/Kerberos/SSPI/Active Directory is
> extremely terse and sparse, so being more explicit and clearer would
> improve the situation, I think. Are you able and interested in changing the
> blog article?
> 
> I'd suggest to add a paragraph like this:
> 
> Using a PostgreSQL client on Windows
> 
> If you use a PostgreSQL client on Windows then that client should
> be able to automatically use Windows' SSPI API, which will do the
> authentication via Active Directory without any further
> configuration necessary.

I'm probably able to make a change to it, given that I wrote it.  I'll
see what I can do to improve on it.

> > I suppose I could have added in somewhere "SSPI is used on the
> > client for this", perhaps, but the blog post was more focused on "this
> > is what you do to make it work" and you don't really need to know that
> > the API that the client is using in this case happens to be called SSPI.
> > 
> > > The ultra terse Postgres docu is using terms such as "SAM", "negotiate
> > > mode", "UPN", without a reference to their definition, which leaves to
> > > Windows noobs like me (and equaly Windows non-noobs) without a clue.
> > 
> I'm certainly all for improving the documentation.  Not sure that just
> spelling those out would really be enough, and I'm rather confident that
> trying to explain all of how Kerberos and AD work wouldn't really be
> maintainable in our documentation.  Still, specific suggestion