Re: Support logical replication of DDLs

2022-05-08 Thread Dilip Kumar
On Sat, May 7, 2022 at 9:38 AM Amit Kapila  wrote:
>
> On Fri, May 6, 2022 at 10:21 PM Zheng Li  wrote:
> >
> > > Attached is a set of two patches as an attempt to evaluate this approach.
> > >
> >
> > Thanks for exploring this direction.
> >
> > I read the deparsing thread and your patch. Here is my thought:
> > 1. The main concern on maintainability of the deparsing code still
> > applies if we want to adapt it for DDL replication.
> >
>
> I agree that it adds to our maintainability effort, as every time we
> enhance any DDL or add a new DDL that needs to be replicated, we
> probably need to change the deparsing code. But OTOH this approach
> seems to provide better flexibility. So, in the long run, maybe the
> effort is worthwhile. I am not completely sure at this stage which
> approach is better, but I thought it worth evaluating this approach
> as Alvaro and Robert seem to prefer this idea.

+1. IMHO, with deparsing logic it would be easy to handle mixed DDL
commands, such as an ALTER TABLE that rewrites the table.  The only
downside is that we will have to write deparsing code for all the
utility commands, so there will be a huge amount of new code to
maintain.

> > 2. The search_path and role still need to be handled, in the deparsing
> > code. And I think it's actually more overhead to qualify every object
> > compared to just logging the search_path and enforcing it on the apply
> > worker.
> >
>
> But doing it in the deparsing code will have the benefit that the
> other plugins won't have to develop similar logic.

Right, this makes sense.
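For illustration, here is the kind of output at stake (an invented example, not output of the posted patch): a deparser that schema-qualifies every object makes the replicated command independent of the origin session's search_path, at the cost of qualifying each reference.

```sql
-- Command as typed on the publisher, with search_path = myschema:
CREATE TABLE t (a int, b text);

-- A hypothetical fully-qualified deparsed form, safe to apply on the
-- subscriber regardless of its search_path or role settings:
CREATE TABLE myschema.t (a pg_catalog.int4, b pg_catalog.text);
```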


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: failures in t/031_recovery_conflict.pl on CI

2022-05-08 Thread Tom Lane
Andres Freund  writes:
> On 2022-05-05 23:57:28 -0400, Tom Lane wrote:
>> Are you sure there's just one test that's failing?  I haven't checked
>> the buildfarm history close enough to be sure of that.  But if it's
>> true, disabling just that one would be fine (again, as a stopgap
>> measure).

> I looked through all the failures I found and it's two kinds of failures, both
> related to the deadlock test. So I'm thinking of skipping just that test as in
> the attached.

Per lapwing's latest results [1], this wasn't enough.  I'm again thinking
we should pull the whole test from the back branches.

regards, tom lane

[1] 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-05-07%2016%3A40%3A04




Re: First-draft release notes for next week's minor releases

2022-05-08 Thread Tom Lane
"Jonathan S. Katz"  writes:
> I found one small thing:

> -  because it didn't actually do anything with the bogus values;
> +  because it didn't actually do anything with the bogus values,

> Should be a "," or remove the "but" on the following line.

Done that way (with a ","), thanks for reviewing.

regards, tom lane




Re: failures in t/031_recovery_conflict.pl on CI

2022-05-08 Thread Andres Freund
Hi,

On 2022-05-08 11:28:34 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2022-05-05 23:57:28 -0400, Tom Lane wrote:
> >> Are you sure there's just one test that's failing?  I haven't checked
> >> the buildfarm history close enough to be sure of that.  But if it's
> >> true, disabling just that one would be fine (again, as a stopgap
> >> measure).
> 
> > I looked through all the failures I found and it's two kinds of failures, both
> > related to the deadlock test. So I'm thinking of skipping just that test as in
> > the attached.
> 
> Per lapwing's latest results [1], this wasn't enough.  I'm again thinking
> we should pull the whole test from the back branches.

That failure is different from the earlier failures though. I don't think it's
a timing issue in the test like the deadlock check one. I rather suspect it's
indicative of further problems in this area. Potentially the known problem
with RecoveryConflictInterrupt() running in the signal handler? I think Thomas
has a patch for that...

One failure in ~20 runs, on one animal doesn't seem worth disabling the test
for.

Greetings,

Andres Freund




Re: failures in t/031_recovery_conflict.pl on CI

2022-05-08 Thread Tom Lane
Andres Freund  writes:
> On 2022-05-08 11:28:34 -0400, Tom Lane wrote:
>> Per lapwing's latest results [1], this wasn't enough.  I'm again thinking
>> we should pull the whole test from the back branches.

> That failure is different from the earlier failures though. I don't think it's
> a timing issue in the test like the deadlock check one. I rather suspect it's
> indicative of further problems in this area.

Yeah, that was my guess too.

> Potentially the known problem
> with RecoveryConflictInterrupt() running in the signal handler? I think Thomas
> has a patch for that...

Maybe; or given that it's on v10, it could be telling us about some
yet-other problem we perhaps solved since then without realizing
it needed to be back-patched.

> One failure in ~20 runs, on one animal doesn't seem worth disabling the test
> for.

No one is going to thank us for shipping a known-unstable test case.
It does nothing to fix the problem; all it will lead to is possible
failures during package builds.  I have no idea whether any packagers
use "make check-world" rather than just "make check" while building.
But if they do, even fairly low-probability failures can be problematic.
(I still carry the scars I acquired while working at Red Hat and being
responsible for packaging mysql: at least back then, their test suite
was full of cases that mostly worked fine, except when getting stressed
in Red Hat's build farm.  Dealing with a test suite that fails 50% of
the time under load, while trying to push out an urgent security fix,
is NOT a pleasant situation.)

I'm happy to have this test in the stable branches once we have committed
fixes that address all known problems.  Until then, it will just be
a nuisance for anyone who is not a developer working on those problems.

regards, tom lane




Re: bogus: logical replication rows/cols combinations

2022-05-08 Thread Tomas Vondra
On 5/7/22 07:36, Amit Kapila wrote:
> On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera  wrote:
>>
>> On 2022-May-02, Amit Kapila wrote:
>>
>>
>>> I think it is possible to expose a list of publications for each
>>> walsender as it is stored in each walsenders
>>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender
>>> can have one such LogicalDecodingContext and we can probably share it
>>> via shared memory?
>>
>> I guess we need to create a DSM each time a walsender opens a
>> connection, at START_REPLICATION time.  Then ALTER PUBLICATION needs to
>> connect to all DSMs of all running walsenders and see if they are
>> reading from it.  Is that what you have in mind?  Alternatively, we
>> could have one DSM per publication with a PID array of all walsenders
>> that are sending it (each walsender needs to add its PID as it starts).
>> The latter might be better.
>>
> 
> While thinking about using DSM here, I came across one of your commits,
> f2f9fcb303, which seems to indicate that it is not a good idea to rely
> on it.  But I think you changed dynamic shared memory to fixed shared
> memory usage there because that was more suitable, not because DSM is
> unportable; and I see a commit bcbd940806 where we removed the 'none'
> option of dynamic_shared_memory_type.  So, I think it should be okay to
> use DSM in this context.  What do you think?
> 

Why would any of this be needed?

ALTER PUBLICATION will invalidate the RelationSyncEntry entries in all
walsenders, no? So AFAICS it should be enough to enforce the limitations
in get_rel_sync_entry, which is necessary anyway because the subscriber
may not be connected when ALTER PUBLICATION gets executed.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Finer grain log timestamps

2022-05-08 Thread David Fetter
Folks,

Please find attached a patch to change the sub-second granularity of
log timestamps from milliseconds to microseconds.

I started out working on a longer patch that would give people more
choices than whole seconds and microseconds, but there were a lot of
odd corner cases, including what I believe might have been a
requirement for C11, should we wish to get sub-microsecond
granularity.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
>From f4881d1669526597bdf608c7c59858f88314f8d1 Mon Sep 17 00:00:00 2001
From: David Fetter 
Date: Sun, 8 May 2022 13:24:33 -0700
Subject: [PATCH v1] Change log timestamps from milli- to microseconds

---
 doc/src/sgml/config.sgml   | 12 ++--
 src/backend/utils/error/elog.c | 14 +++---
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git doc/src/sgml/config.sgml doc/src/sgml/config.sgml
index 03986946a8..f00db09547 100644
--- doc/src/sgml/config.sgml
+++ doc/src/sgml/config.sgml
@@ -7174,17 +7174,17 @@ local0.*/var/log/postgresql
 
 
  %t
- Time stamp without milliseconds
+ Time stamp at second resolution
  no
 
 
  %m
- Time stamp with milliseconds
+ Time stamp with microseconds
  no
 
 
  %n
- Time stamp with milliseconds (as a Unix epoch)
+ Time stamp with microseconds (as a Unix epoch)
  no
 
 
@@ -7526,7 +7526,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a '
 This option emits log lines in comma-separated-values
 (CSV) format,
 with these columns:
-time stamp with milliseconds,
+time stamp with microseconds,
 user name,
 database name,
 process ID,
@@ -7557,7 +7557,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a '
 
 CREATE TABLE postgres_log
 (
-  log_time timestamp(3) with time zone,
+  log_time timestamp(6) with time zone,
   user_name text,
   database_name text,
   process_id integer,
@@ -7682,7 +7682,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
 
  timestamp
  string
- Time stamp with milliseconds
+ Time stamp with microseconds
 
 
  user
diff --git src/backend/utils/error/elog.c src/backend/utils/error/elog.c
index 55ee5423af..4698e32ab7 100644
--- src/backend/utils/error/elog.c
+++ src/backend/utils/error/elog.c
@@ -2295,7 +2295,7 @@ char *
 get_formatted_log_time(void)
 {
 	pg_time_t	stamp_time;
-	char		msbuf[13];
+	char		msbuf[16];
 
 	/* leave if already computed */
 	if (formatted_log_time[0] != '\0')
@@ -2315,13 +2315,13 @@ get_formatted_log_time(void)
 	 * nonempty or CSV mode can be selected.
 	 */
 	pg_strftime(formatted_log_time, FORMATTED_TS_LEN,
-	/* leave room for milliseconds... */
-				"%Y-%m-%d %H:%M:%S     %Z",
+	/* leave room for microseconds... */
+				"%Y-%m-%d %H:%M:%S        %Z",
 				pg_localtime(&stamp_time, log_timezone));
 
 	/* 'paste' milliseconds into place... */
-	sprintf(msbuf, ".%03d", (int) (saved_timeval.tv_usec / 1000));
-	memcpy(formatted_log_time + 19, msbuf, 4);
+	sprintf(msbuf, ".%06d", (int) saved_timeval.tv_usec);
+	memcpy(formatted_log_time + 19, msbuf, 7);
 
 	return formatted_log_time;
 }
@@ -2652,9 +2652,9 @@ log_line_prefix(StringInfo buf, ErrorData *edata)
 		saved_timeval_set = true;
 	}
 
-	snprintf(strfbuf, sizeof(strfbuf), "%ld.%03d",
+	snprintf(strfbuf, sizeof(strfbuf), "%ld.%06d",
 			 (long) saved_timeval.tv_sec,
-			 (int) (saved_timeval.tv_usec / 1000));
+			 (int) saved_timeval.tv_usec);
 
 	if (padding != 0)
 		appendStringInfo(buf, "%*s", padding, strfbuf);
-- 
2.35.1



Re: Finer grain log timestamps

2022-05-08 Thread Justin Pryzby
On Sun, May 08, 2022 at 08:44:51PM +, David Fetter wrote:
>  CREATE TABLE postgres_log
>  (
> -  log_time timestamp(3) with time zone,
> +  log_time timestamp(6) with time zone,

Please also update the corresponding thing in doc/src/sgml/file-fdw.sgml

It looks like the patch I suggested to include a reminder about this was never
applied.
https://www.postgresql.org/message-id/10995044.nUPlyArG6x@aivenronan

See also:
e568ed0eb07239b7e53d948565ebaeb6f379630f
0830d21f5b01064837dc8bd910ab31a5b7a1101a




Re: Finer grain log timestamps

2022-05-08 Thread David Fetter
On Sun, May 08, 2022 at 04:12:27PM -0500, Justin Pryzby wrote:
> On Sun, May 08, 2022 at 08:44:51PM +, David Fetter wrote:
> >  CREATE TABLE postgres_log
> >  (
> > -  log_time timestamp(3) with time zone,
> > +  log_time timestamp(6) with time zone,
> 
> Please also update the corresponding thing in doc/src/sgml/file-fdw.sgml

Thanks for looking this over, and please find attached the next
version.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
>From f6772fc0378879fd5c7fbf8cb320ee68f66341d2 Mon Sep 17 00:00:00 2001
From: David Fetter 
Date: Sun, 8 May 2022 13:24:33 -0700
Subject: [PATCH v2] Change log timestamps from milli- to microseconds

---
 doc/src/sgml/config.sgml   | 12 ++--
 doc/src/sgml/file-fdw.sgml |  2 +-
 src/backend/utils/error/elog.c | 14 +++---
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git doc/src/sgml/config.sgml doc/src/sgml/config.sgml
index 03986946a8..f00db09547 100644
--- doc/src/sgml/config.sgml
+++ doc/src/sgml/config.sgml
@@ -7174,17 +7174,17 @@ local0.*/var/log/postgresql
 
 
  %t
- Time stamp without milliseconds
+ Time stamp at second resolution
  no
 
 
  %m
- Time stamp with milliseconds
+ Time stamp with microseconds
  no
 
 
  %n
- Time stamp with milliseconds (as a Unix epoch)
+ Time stamp with microseconds (as a Unix epoch)
  no
 
 
@@ -7526,7 +7526,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a '
 This option emits log lines in comma-separated-values
 (CSV) format,
 with these columns:
-time stamp with milliseconds,
+time stamp with microseconds,
 user name,
 database name,
 process ID,
@@ -7557,7 +7557,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a '
 
 CREATE TABLE postgres_log
 (
-  log_time timestamp(3) with time zone,
+  log_time timestamp(6) with time zone,
   user_name text,
   database_name text,
   process_id integer,
@@ -7682,7 +7682,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
 
  timestamp
  string
- Time stamp with milliseconds
+ Time stamp with microseconds
 
 
  user
diff --git doc/src/sgml/file-fdw.sgml doc/src/sgml/file-fdw.sgml
index 5b98782064..c3efcbd679 100644
--- doc/src/sgml/file-fdw.sgml
+++ doc/src/sgml/file-fdw.sgml
@@ -242,7 +242,7 @@ CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;
 
 
 CREATE FOREIGN TABLE pglog (
-  log_time timestamp(3) with time zone,
+  log_time timestamp(6) with time zone,
   user_name text,
   database_name text,
   process_id integer,
diff --git src/backend/utils/error/elog.c src/backend/utils/error/elog.c
index 55ee5423af..4698e32ab7 100644
--- src/backend/utils/error/elog.c
+++ src/backend/utils/error/elog.c
@@ -2295,7 +2295,7 @@ char *
 get_formatted_log_time(void)
 {
 	pg_time_t	stamp_time;
-	char		msbuf[13];
+	char		msbuf[16];
 
 	/* leave if already computed */
 	if (formatted_log_time[0] != '\0')
@@ -2315,13 +2315,13 @@ get_formatted_log_time(void)
 	 * nonempty or CSV mode can be selected.
 	 */
 	pg_strftime(formatted_log_time, FORMATTED_TS_LEN,
-	/* leave room for milliseconds... */
-				"%Y-%m-%d %H:%M:%S     %Z",
+	/* leave room for microseconds... */
+				"%Y-%m-%d %H:%M:%S        %Z",
 				pg_localtime(&stamp_time, log_timezone));
 
 	/* 'paste' milliseconds into place... */
-	sprintf(msbuf, ".%03d", (int) (saved_timeval.tv_usec / 1000));
-	memcpy(formatted_log_time + 19, msbuf, 4);
+	sprintf(msbuf, ".%06d", (int) saved_timeval.tv_usec);
+	memcpy(formatted_log_time + 19, msbuf, 7);
 
 	return formatted_log_time;
 }
@@ -2652,9 +2652,9 @@ log_line_prefix(StringInfo buf, ErrorData *edata)
 		saved_timeval_set = true;
 	}
 
-	snprintf(strfbuf, sizeof(strfbuf), "%ld.%03d",
+	snprintf(strfbuf, sizeof(strfbuf), "%ld.%06d",
 			 (long) saved_timeval.tv_sec,
-			 (int) (saved_timeval.tv_usec / 1000));
+			 (int) saved_timeval.tv_usec);
 
 	if (padding != 0)
 		appendStringInfo(buf, "%*s", padding, strfbuf);
-- 
2.35.1



Re: failures in t/031_recovery_conflict.pl on CI

2022-05-08 Thread Andres Freund
Hi,

On 2022-05-08 13:59:09 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2022-05-08 11:28:34 -0400, Tom Lane wrote:
> >> Per lapwing's latest results [1], this wasn't enough.  I'm again thinking
> >> we should pull the whole test from the back branches.
> 
> > That failure is different from the earlier failures though. I don't think it's
> > a timing issue in the test like the deadlock check one. I rather suspect it's
> > indicative of further problems in this area.
> 
> Yeah, that was my guess too.
> 
> > Potentially the known problem
> > with RecoveryConflictInterrupt() running in the signal handler? I think Thomas
> > has a patch for that...
> 
> Maybe; or given that it's on v10, it could be telling us about some
> yet-other problem we perhaps solved since then without realizing
> it needed to be back-patched.
> 
> > One failure in ~20 runs, on one animal doesn't seem worth disabling the test
> > for.
> 
> No one is going to thank us for shipping a known-unstable test case.

IDK, hiding failures indicating bugs isn't really better, at least if it
doesn't look like a bug in the test. But you seem to have a stronger opinion
on this than me, so I'll skip the entire test for now :/

Greetings,

Andres Freund




Re: wrong fds used for refilenodes after pg_upgrade relfilenode changes

2022-05-08 Thread Thomas Munro
On Sat, May 7, 2022 at 4:52 PM Thomas Munro  wrote:
> I think we'll probably also want to invent a way
> to report which backend is holding up progress, since without that
> it's practically impossible for an end user to understand why their
> command is hanging.

Simple idea: how about logging the PID of processes that block
progress for too long?  In the attached, I arbitrarily picked 5
seconds as the wait time between LOG messages.  Also, DEBUG1 messages
let you see the processing speed on, e.g., buildfarm animals.  Thoughts?

To test this, kill -STOP a random backend, and then try an ALTER
DATABASE SET TABLESPACE in another backend.  Example output:

DEBUG:  waiting for all backends to process ProcSignalBarrier generation 1
LOG:  still waiting for pid 1651417 to accept ProcSignalBarrier
STATEMENT:  alter database mydb set tablespace ts1;
LOG:  still waiting for pid 1651417 to accept ProcSignalBarrier
STATEMENT:  alter database mydb set tablespace ts1;
LOG:  still waiting for pid 1651417 to accept ProcSignalBarrier
STATEMENT:  alter database mydb set tablespace ts1;
LOG:  still waiting for pid 1651417 to accept ProcSignalBarrier
STATEMENT:  alter database mydb set tablespace ts1;

... then kill -CONT:

DEBUG:  finished waiting for all backends to process ProcSignalBarrier
generation 1

Another thought is that it might be nice to be able to test with a
dummy PSB that doesn't actually do anything.  You could use it to see
how fast your system processes it, while doing various other things,
and to find/debug problems in other code that fails to handle
interrupts correctly.  Here's an attempt at that.  I guess it could go
into a src/test/modules/something instead of core, but on the other
hand the PSB itself has to be in core anyway, so maybe not.  Thoughts?
No documentation yet, just seeing if people think this is worth
having... better names/ideas welcome.

To test this, just SELECT pg_test_procsignal_barrier().
From 9fe475b557143cc324ce14de4f29ad8bef206fc8 Mon Sep 17 00:00:00 2001
From: Thomas Munro 
Date: Mon, 9 May 2022 10:24:01 +1200
Subject: [PATCH 1/2] Add logging for ProcSignalBarrier mechanism.

To help with diagnosis of systems that are not processing
ProcSignalBarrier requests promptly, add a LOG message every 5 seconds
if we seem to be wedged.  Although you could already see this state via
wait events in pg_stat_activity, the log message also shows the PID of
the process that is preventing progress.

Also add DEBUG1 logging around the whole loop.
---
 src/backend/storage/ipc/procsignal.c | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/src/backend/storage/ipc/procsignal.c b/src/backend/storage/ipc/procsignal.c
index 00d66902d8..4fe6f087db 100644
--- a/src/backend/storage/ipc/procsignal.c
+++ b/src/backend/storage/ipc/procsignal.c
@@ -393,6 +393,11 @@ WaitForProcSignalBarrier(uint64 generation)
 {
 	Assert(generation <= pg_atomic_read_u64(&ProcSignal->psh_barrierGeneration));
 
+	elog(DEBUG1,
+		 "waiting for all backends to process ProcSignalBarrier generation "
+		 UINT64_FORMAT,
+		 generation);
+
 	for (int i = NumProcSignalSlots - 1; i >= 0; i--)
 	{
 		ProcSignalSlot *slot = &ProcSignal->psh_slot[i];
@@ -407,13 +412,22 @@ WaitForProcSignalBarrier(uint64 generation)
 		oldval = pg_atomic_read_u64(&slot->pss_barrierGeneration);
 		while (oldval < generation)
 		{
-			ConditionVariableSleep(&slot->pss_barrierCV,
-								   WAIT_EVENT_PROC_SIGNAL_BARRIER);
+			if (ConditionVariableTimedSleep(&slot->pss_barrierCV,
+											5000,
+											WAIT_EVENT_PROC_SIGNAL_BARRIER))
+				elog(LOG,
+					 "still waiting for pid %d to accept ProcSignalBarrier",
+					 slot->pss_pid);
 			oldval = pg_atomic_read_u64(&slot->pss_barrierGeneration);
 		}
 		ConditionVariableCancelSleep();
 	}
 
+	elog(DEBUG1,
+		 "finished waiting for all backends to process ProcSignalBarrier generation "
+		 UINT64_FORMAT,
+		 generation);
+
 	/*
 	 * The caller is probably calling this function because it wants to read
 	 * the shared state or perform further writes to shared state once all
-- 
2.30.2

From 266528b553dcc128d2d38039ae821fe515ab1db1 Mon Sep 17 00:00:00 2001
From: Thomas Munro 
Date: Mon, 9 May 2022 11:06:48 +1200
Subject: [PATCH 2/2] Add pg_test_procsignal_barrier().

A superuser-only test function to exercise the ProcSignalBarrier
mechanism.  It has no effect, other than to interrupt every process
connected to shared memory and wait for them all to acknowledge the
request.
---
 src/backend/storage/ipc/procsignal.c | 20 
 src/include/catalog/pg_proc.dat  |  4 
 src/include/storage/procsignal.h |  1 +
 3 files changed, 25 insertions(+)

diff --git a/src/backend/storage/ipc/procsignal.c b/src/backend/storage/ipc/procsignal.c
index 4fe6f087db..93c31bc6b9 100644
--- a/src/backend/storage/ipc/procsignal.c
+++ b/src/backend/storage/ipc/procsignal.c
@@ -541,6 +541,10 @@ ProcessProcSignalBarrier(void)
 type = (ProcSignalBarrierType) pg_rightm

should check interrupts in BuildRelationExtStatistics ?

2022-05-08 Thread Justin Pryzby
Restarting a large instance took twice as long as I expected due to not
checking interrupts in (at least) statext_ndistinct_build.  Long enough that I
attached (and was able to attach) a debugger to verify, which I think is too
long.  I think it could cause issues for a high-availability cluster or other
script if it takes too long to shut down.

The tables being auto-analyzed have 9 extended stats objects, each with stats
target=10.  7 of those are (ndistinct) stats on 4 simple columns plus 1
expression (5 total).  And the other 2 stats objects are expressional stats
(necessarily on a single expression).




Re: Possible corruption by CreateRestartPoint at promotion

2022-05-08 Thread Michael Paquier
On Fri, May 06, 2022 at 07:58:43PM +0900, Michael Paquier wrote:
> And I have spent a bit of time on this stuff to finish with the attached.  It
> will be a plus to get that done on HEAD for beta1, so I'll try to deal
> with it on Monday.  I am still a bit stressed about the back branches
> as concurrent checkpoints are possible via CreateCheckPoint() from the
> startup process (not the case of HEAD), but the stable branches will
> have a new point release very soon so let's revisit this choice there
> later.  v6 attached includes a TAP test, but I don't intend to include
> it as it is expensive.

Okay, applied this one on HEAD after going back-and-forth on it for
the last couple of days.  I have found myself shaping the patch in
what looks like its simplest form, by applying the check based on an
older checkpoint to all the fields updated in the control file, with
the check on DB_IN_ARCHIVE_RECOVERY applying to the addition of
DB_SHUTDOWNED_IN_RECOVERY (I was initially surprised that this was
having side effects on pg_rewind) and the minRecoveryPoint
calculations.

Now, it would be nice to get a test for this stuff, and we are going
to need something cheaper than what's been proposed upthread.  This
comes down to the point of being able to put a deterministic stop
in a restart point while it is processing, meaning that we need to
interact with one of the internal routines of CheckPointGuts().  One
fancy way to do so would be to forcibly take a LWLock to stuck the
restart point until it is released.  Using a SQL function for that
would be possible, if not artistic.  Perhaps we don't need such a
function though, if we could stuck arbitrarily the internals of a 
checkpoint?  Any ideas?
--
Michael


signature.asc
Description: PGP signature


Re: Cryptohash OpenSSL error queue in FIPS enabled builds

2022-05-08 Thread Michael Paquier
On Tue, Apr 26, 2022 at 03:15:24PM +0200, Daniel Gustafsson wrote:
> On 26 Apr 2022, at 03:55, Michael Paquier  wrote:
>> I am a bit annoyed to assume that having only a localized
>> ERR_clear_error() in the error code path of the init() call is the
>> only problem that would occur, only because that's the first one we'd
>> see in a hash computation.
> 
> It's also the only one in this case since the computation won't get past the
> init step with the error, no?  The queue will be cleared for each computation so
> the risk of cross contamination is removed.

I was wondering about the case where an error is applied while
updating or finishing the cryptohash, not just the creation or the
initialization.  But cleaning up the queue when beginning a
computation is fine enough.

>> Okay.  I did not recall the full error stack used in pgcrypto.  It is
>> annoying to not get from OpenSSL the details of the error, though.
>> With FIPS enabled, one computing a hash with pgcrypto would just know
>> about the initialization error, but would miss why the computation
>> failed.  It looks like we could use a new error code to tell
>> px_strerror() to look at the OpenSSL error queue instead of one of the
>> hardcoded strings.  Just saying.
> 
> I looked at that briefly, and might revisit it during the 16 cycle, but it does
> have a smell of diminishing returns to it.  With non-OpenSSL code ripped out
> from pgcrypto it's clearly more interesting than before.

Clearly.

For the sake of the archives, this patch series has been applied as
17ec5fa, 0250a16 and ee97d46.
--
Michael


signature.asc
Description: PGP signature


Re: failures in t/031_recovery_conflict.pl on CI

2022-05-08 Thread Andres Freund
On 2022-05-08 15:11:39 -0700, Andres Freund wrote:
> But you seem to have a stronger opinion on this than me, so I'll skip the
> entire test for now :/

And done.




Re: failures in t/031_recovery_conflict.pl on CI

2022-05-08 Thread Tom Lane
Andres Freund  writes:
> On 2022-05-08 15:11:39 -0700, Andres Freund wrote:
>> But you seem to have a stronger opinion on this than me, so I'll skip the
>> entire test for now :/

> And done.

Thanks, I appreciate that.

regards, tom lane




RE: Data is copied twice when specifying both child and parent table in publication

2022-05-08 Thread wangw.f...@fujitsu.com
On Tue, Apr 28, 2022 9:22 AM Shi, Yu/侍 雨  wrote:
> Thanks for your patches.
> 
> Here's a comment on the patch for REL14.
Thanks for your comments.

> +	appendStringInfo(&cmd, "SELECT DISTINCT ns.nspname, c.relname\n"
> +					 " FROM pg_catalog.pg_publication_tables t\n"
> +					 "  JOIN pg_catalog.pg_namespace ns\n"
> +					 "   ON ns.nspname = t.schemaname\n"
> +					 "  JOIN pg_catalog.pg_class c\n"
> +					 "   ON c.relname = t.tablename AND c.relnamespace = ns.oid\n"
> +					 " WHERE t.pubname IN (%s)\n"
> +					 "  AND (c.relispartition IS FALSE\n"
> +					 "   OR NOT EXISTS\n"
> +					 "    (SELECT 1 FROM pg_partition_ancestors(c.oid) as relid\n"
> +					 "     WHERE relid IN\n"
> +					 "      (SELECT DISTINCT (schemaname || '.' || tablename)::regclass::oid\n"
> +					 "       FROM pg_catalog.pg_publication_tables t\n"
> +					 "       WHERE t.pubname IN (%s))\n"
> +					 "     AND relid != c.oid))\n",
> +					 pub_names.data, pub_names.data);
> 
> I think we can use an alias like 'pa' for pg_partition_ancestors, and modify the
> SQL as follows.
> 
> +	appendStringInfo(&cmd, "SELECT DISTINCT ns.nspname, c.relname\n"
> +					 " FROM pg_catalog.pg_publication_tables t\n"
> +					 "  JOIN pg_catalog.pg_namespace ns\n"
> +					 "   ON ns.nspname = t.schemaname\n"
> +					 "  JOIN pg_catalog.pg_class c\n"
> +					 "   ON c.relname = t.tablename AND c.relnamespace = ns.oid\n"
> +					 " WHERE t.pubname IN (%s)\n"
> +					 "  AND (c.relispartition IS FALSE\n"
> +					 "   OR NOT EXISTS\n"
> +					 "    (SELECT 1 FROM pg_partition_ancestors(c.oid) pa\n"
> +					 "     WHERE pa.relid IN\n"
> +					 "      (SELECT DISTINCT (t.schemaname || '.' || t.tablename)::regclass::oid\n"
> +					 "       FROM pg_catalog.pg_publication_tables t\n"
> +					 "       WHERE t.pubname IN (%s))\n"
> +					 "     AND pa.relid != c.oid))\n",
> +					 pub_names.data, pub_names.data);
Fix it.

In addition, I tried modifying the approach for HEAD.
I enhanced the API of the function pg_get_publication_tables, changing the
parameter type from 'text' to 'any'. Then we can use this function to get
tables from one publication or an array of publications. Any thoughts on
this approach?
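As a hypothetical illustration of the proposed interface (invented calls, assuming the 'any' parameter change described above), the subscriber could resolve tables across several publications in a single call:

```sql
-- One publication, as today:
SELECT * FROM pg_get_publication_tables('pub1');

-- Several publications at once, so duplicates between a partitioned
-- table's root and its leaves can be resolved centrally:
SELECT * FROM pg_get_publication_tables(ARRAY['pub1', 'pub2']);
```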

Attach new patches.
The patch for HEAD:
1. Modify the approach. Enhance the API of function pg_get_publication_tables to
handle one publication or an array of publications.
The patch for REL14:
1. Improve the table sync SQL. [suggestions by Shi yu]

Regards,
Wang wei


HEAD_v2-0001-Fix-data-replicated-twice-when-specifying-PUBLISH.patch
Description:  HEAD_v2-0001-Fix-data-replicated-twice-when-specifying-PUBLISH.patch


REL14_v4-0001-Fix-data-replicated-twice-when-specifying-PUBLISH.patch
Description:  REL14_v4-0001-Fix-data-replicated-twice-when-specifying-PUBLISH.patch


Re: Rewriting the test of pg_upgrade as a TAP test - take three - remastered set

2022-05-08 Thread Michael Paquier
On Sun, May 01, 2022 at 09:27:18PM -0700, Noah Misch wrote:
> On Fri, Apr 01, 2022 at 10:16:48AM +0900, Michael Paquier wrote:
>> commit 322becb wrote:
>
> Nothing checks the command result, so the test file passes even if each of
> these createdb calls fails.  Other run_log() calls in this file have the same
> problem.  This particular one should be command_ok() or similar.

All of them could rely on command_ok(), as they should never fail, so
switched this way.

> --host and --port are redundant in a PostgreSQL::Test::Cluster::run_log call,
> because that call puts equivalent configuration in the environment.  Other
> calls in the file have the same redundant operands.  (No other test file has
> redundant --host or --port.)

Right.  Removed all that.

>> +# Grab any regression options that may be passed down by caller.
>> +my $extra_opts_val = $ENV{EXTRA_REGRESS_OPT} || "";
> 
> Typo: s/_OPT/_OPTS/

Oops, fixed.

>> +my @extra_opts = split(/\s+/, $extra_opts_val);
> 
> src/test/recovery/t/027_stream_regress.pl and the makefiles treat
> EXTRA_REGRESS_OPTS as a shell fragment.  To be compatible, use the
> src/test/recovery/t/027_stream_regress.pl approach.  Affected usage patterns
> are not very important, but since the tree has code for it, you may as well
> borrow that code.  These examples witness the difference:

So the pattern of EXTRA_REGRESS_OPTS being used in the Makefiles is
the decision point here.  Makes sense.
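The difference can be sketched with an invented option value (illustration only; EXTRA_REGRESS_OPTS here is set by hand):

```shell
# Hypothetical option value containing a quoted space:
EXTRA_REGRESS_OPTS="--temp-config='/tmp/extra dir/test.conf'"

# A naive split(/\s+/, ...) would yield two broken tokens:
#   --temp-config='/tmp/extra    and    dir/test.conf'

# Treating the value as a shell fragment preserves the quoting:
eval "set -- $EXTRA_REGRESS_OPTS"
echo "$1"    # prints: --temp-config=/tmp/extra dir/test.conf
```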

>> -# Create databases with names covering the ASCII bytes other than NUL, BEL,
>> -# LF, or CR.  BEL would ring the terminal bell in the course of this test, and
>> -# it is not otherwise a special case.  PostgreSQL doesn't support the rest.
>> -dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)
>> -if (i != 7 && i != 10 && i != 13) printf "%c", i }'
>> -# Exercise backslashes adjacent to double quotes, a Windows special case.
>> -dbname1='\"\'$dbname1'\\"\\\'
> 
> This rewrite dropped the exercise of backslashes adjacent to double quotes.

Damn, thanks.  If I am reading that right, this could be done with the
following addition in generate_db(), adding double quotes surrounded
by backslashes before and after the database name: 
$dbname = '\\"\\' . $dbname . '\\"\\';

All these fixes lead me to the attached patch.  Does that look fine to
you?

Thanks,
--
Michael
diff --git a/src/bin/pg_upgrade/t/002_pg_upgrade.pl b/src/bin/pg_upgrade/t/002_pg_upgrade.pl
index 05bf161843..4002982dc0 100644
--- a/src/bin/pg_upgrade/t/002_pg_upgrade.pl
+++ b/src/bin/pg_upgrade/t/002_pg_upgrade.pl
@@ -21,9 +21,11 @@ sub generate_db
 		next if $i == 7 || $i == 10 || $i == 13;# skip BEL, LF, and CR
 		$dbname = $dbname . sprintf('%c', $i);
 	}
-	$node->run_log(
-		[ 'createdb', '--host', $node->host, '--port', $node->port, $dbname ]
-	);
+
+	# Exercise backslashes adjacent to double quotes, a Windows special
+	# case.
+	$dbname = '\\"\\' . $dbname . '\\"\\';
+	$node->command_ok([ 'createdb', $dbname ]);
 }
 
 # The test of pg_upgrade requires two clusters, an old one and a new one
@@ -70,12 +72,7 @@ if (defined($ENV{olddump}))
 
 	# Load the dump using the "postgres" database as "regression" does
 	# not exist yet, and we are done here.
-	$oldnode->command_ok(
-		[
-			'psql',   '-X',   '-f', $olddumpfile,
-			'--port', $oldnode->port, '--host', $oldnode->host,
-			'postgres'
-		]);
+	$oldnode->command_ok([ 'psql', '-X', '-f', $olddumpfile, 'postgres' ]);
 }
 else
 {
@@ -87,8 +84,7 @@ else
 	generate_db($oldnode, 91, 127);
 
 	# Grab any regression options that may be passed down by caller.
-	my $extra_opts_val = $ENV{EXTRA_REGRESS_OPT} || "";
-	my @extra_opts = split(/\s+/, $extra_opts_val);
+	my $extra_opts = $ENV{EXTRA_REGRESS_OPTS} || "";
 
 	# --dlpath is needed to be able to find the location of regress.so
 	# and any libraries the regression tests require.
@@ -100,19 +96,19 @@ else
 	# --inputdir points to the path of the input files.
 	my $inputdir = "$srcdir/src/test/regress";
 
-	my @regress_command = [
-		$ENV{PG_REGRESS}, @extra_opts,
-		'--dlpath',   $dlpath,
-		'--max-concurrent-tests', '20',
-		'--bindir=',  '--host',
-		$oldnode->host,   '--port',
-		$oldnode->port,   '--schedule',
-		"$srcdir/src/test/regress/parallel_schedule", '--outputdir',
-		$outputdir,   '--inputdir',
-		$inputdir
-	];
-
-	my $rc = run_log(@regress_command);
+	my $rc =
+	  system($ENV{PG_REGRESS}
+		  . "$extra_opts "
+		  . "--dlpath=\"$dlpath\" "
+		  . "--bindir= "
+		  . "--host="
+		  . $oldnode->host . " "
+		  . "--port="
+		  . $oldnode->port . " "
+		  . "--schedule=$srcdir/src/test/regress/parallel_schedule "
+		  . "--max-concurrent-tests=20 "
+		  . "--inputdir=\"$inputdir\" "
+		  . "--outputdir=\"$outputdir\"");
 	if ($rc != 0)
 	{
 		# Dump out the regression diffs file, 

Re: should check interrupts in BuildRelationExtStatistics ?

2022-05-08 Thread Michael Paquier
On Sun, May 08, 2022 at 07:01:08PM -0500, Justin Pryzby wrote:
> Restarting a large instance took twice as long as I expected due to not
> checking interrupts in (at least) statext_ndistinct_build.  Long enough that I
> attached (and was able to attach) a debugger to verify, which I think is too
long.  I think it could cause issues for a high-availability cluster or other
> script if it takes too long to shut down.

Hmm.  That's annoying.

> The tables being auto-analyzed have 9 extended stats objects, each with stats
> target=10.  7 of those are (ndistinct) stats on 4 simple columns plus 1
> expression (5 total).  And the other 2 stats objects are expressional stats
> (necessarily on a single expression).

How long can the backend remain unresponsive?  I don't think that
anybody would object to the addition of some CHECK_FOR_INTERRUPTS() in
areas where it would be efficient to make the shutdown quicker, but
we need to think carefully about the places where we'd want to add
these.
--
Michael




Re: should check interrupts in BuildRelationExtStatistics ?

2022-05-08 Thread Tom Lane
Michael Paquier  writes:
> How long can the backend remain unresponsive?  I don't think that
> anybody would object to the addition of some CHECK_FOR_INTERRUPTS() in
> areas where it would be efficient to make the shutdown quicker, but
> we need to think carefully about the places where we'd want to add
> these.

CHECK_FOR_INTERRUPTS is really quite cheap, just a test-and-branch.
I wouldn't put it in a *very* tight loop, but one test per row
processed while gathering stats is unlikely to be a problem.

regards, tom lane




Re: Query generates infinite loop

2022-05-08 Thread Corey Huinker
On Wed, May 4, 2022 at 3:01 PM Jeff Janes  wrote:

> On Wed, Apr 20, 2022 at 5:43 PM Tom Lane  wrote:
>
>> I wrote:
>> > it's true that infinities as generate_series endpoints are going
>> > to work pretty oddly, so I agree with the idea of forbidding 'em.
>>
>> > Numeric has infinity as of late, so the numeric variant would
>> > need to do this too.
>>
>> Oh --- looks like numeric generate_series() already throws error for
>> this, so we should just make the timestamp variants do the same.
>>
>
> The regression test you added for this change causes an infinite loop when
> run against an unpatched server with --install-check.  That is a bit
> unpleasant.  Is there something we can and should do about that?  I was
> expecting regression test failures of course but not an infinite loop
> leading towards disk exhaustion.
>
> Cheers,
>
> Jeff
>

This came up once before
https://www.postgresql.org/message-id/CAB7nPqQUuUh_W3s55eSiMnt901Ud3meF7f_96yPkKcqfd1ZaMg%40mail.gmail.com


Re: bogus: logical replication rows/cols combinations

2022-05-08 Thread Amit Kapila
On Sun, May 8, 2022 at 11:41 PM Tomas Vondra
 wrote:
>
> On 5/7/22 07:36, Amit Kapila wrote:
> > On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera  
> > wrote:
> >>
> >> On 2022-May-02, Amit Kapila wrote:
> >>
> >>
> >>> I think it is possible to expose a list of publications for each
> >>> walsender as it is stored in each walsenders
> >>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender
> >>> can have one such LogicalDecodingContext and we can probably share it
> >>> via shared memory?
> >>
> >> I guess we need to create a DSM each time a walsender opens a
> >> connection, at START_REPLICATION time.  Then ALTER PUBLICATION needs to
> >> connect to all DSMs of all running walsenders and see if they are
> >> reading from it.  Is that what you have in mind?  Alternatively, we
> >> could have one DSM per publication with a PID array of all walsenders
> >> that are sending it (each walsender needs to add its PID as it starts).
> >> The latter might be better.
> >>
> >
> > While thinking about using DSM here, I came across one of your commits
> > f2f9fcb303 which seems to indicate that it is not a good idea to rely
> > on it but I think you have changed dynamic shared memory to fixed
> > shared memory usage because that was more suitable rather than DSM is
> > not portable. Because I see a commit bcbd940806 where we have removed
> > the 'none' option of dynamic_shared_memory_type. So, I think it should
> > be okay to use DSM in this context. What do you think?
> >
>
> Why would any of this be needed?
>
> ALTER PUBLICATION will invalidate the RelationSyncEntry entries in all
> walsenders, no? So AFAICS it should be enough to enforce the limitations
> in get_rel_sync_entry,
>

Yes, that should be sufficient to enforce the limitations in
get_rel_sync_entry(), but it will lead to the following behavior:
a. The Alter Publication command will succeed, but the error will only
be logged later, so the user needs to check the logs and take
appropriate action. Until then, the walsender will be stuck in an
error loop: it will restart and hit the same error until the user
intervenes.
b. As we use historic snapshots, even after the user takes action, say
by changing the publication, it won't be reflected. So, the user's
only option would be to drop their subscription.

Am I missing something? If not, then are we okay with such behavior?
If yes, then I think it would be much easier implementation-wise and
probably advisable at this point. We can document it so that users are
careful and can take the necessary action if they get into such a
situation. In any case, we can improve this in the future, as you also
suggested earlier.

> which is necessary anyway because the subscriber
> may not be connected when ALTER PUBLICATION gets executed.
>

If we are not okay with the resultant behavior of detecting this in
get_rel_sync_entry(), then we can solve this in some other way as
Alvaro has indicated in one of his responses which is to detect that
at start replication time probably in the subscriber-side.

-- 
With Regards,
Amit Kapila.




Re: Query generates infinite loop

2022-05-08 Thread Tom Lane
Corey Huinker  writes:
> On Wed, May 4, 2022 at 3:01 PM Jeff Janes  wrote:
>> On Wed, Apr 20, 2022 at 5:43 PM Tom Lane  wrote:
>>> Oh --- looks like numeric generate_series() already throws error for
>>> this, so we should just make the timestamp variants do the same.

> This came up once before
> https://www.postgresql.org/message-id/CAB7nPqQUuUh_W3s55eSiMnt901Ud3meF7f_96yPkKcqfd1ZaMg%40mail.gmail.com

Oh!  I'd totally forgotten that thread, but given that discussion,
and particularly the counterexample at

https://www.postgresql.org/message-id/16807.1456091547%40sss.pgh.pa.us

it now feels to me like maybe this change was a mistake.  Perhaps
instead of the committed change, we ought to go the other way and
rip out the infinity checks in numeric generate_series().

In view of tomorrow's minor-release wrap, there is not time for
the sort of more leisured discussion that I now think this topic
needs.  I propose to revert eafdf9de0 et al before the wrap,
and think about this at more length before doing anything.

regards, tom lane




Re: Query generates infinite loop

2022-05-08 Thread Corey Huinker
On Mon, May 9, 2022 at 12:02 AM Tom Lane  wrote:

> Corey Huinker  writes:
> > On Wed, May 4, 2022 at 3:01 PM Jeff Janes  wrote:
> >> On Wed, Apr 20, 2022 at 5:43 PM Tom Lane  wrote:
> >>> Oh --- looks like numeric generate_series() already throws error for
> >>> this, so we should just make the timestamp variants do the same.
>
> > This came up once before
> >
> https://www.postgresql.org/message-id/CAB7nPqQUuUh_W3s55eSiMnt901Ud3meF7f_96yPkKcqfd1ZaMg%40mail.gmail.com
>
> Oh!  I'd totally forgotten that thread, but given that discussion,
> and particularly the counterexample at
>
> https://www.postgresql.org/message-id/16807.1456091547%40sss.pgh.pa.us
>
> it now feels to me like maybe this change was a mistake.  Perhaps
> instead of the committed change, we ought to go the other way and
> rip out the infinity checks in numeric generate_series().
>

The infinite-upper-bound-with-LIMIT-pushdown counterexample makes sense,
but it seems like we're using generate_series() only because we lack a
function that generates a series of N elements without a specified upper
bound, something like

 generate_finite_series( start, step, num_elements )

And if we did that, I'd lobby for having one that takes dates as well as
one that takes timestamps, because that was my reason for starting the
thread above.


Re: Logical replication timeout problem

2022-05-08 Thread Amit Kapila
On Fri, May 6, 2022 at 12:42 PM wangw.f...@fujitsu.com
 wrote:
>
> On Fri, May 6, 2022 at 9:54 AM Masahiko Sawada  wrote:
> > On Wed, May 4, 2022 at 7:18 PM Amit Kapila  wrote:
> > >
> > > On Mon, May 2, 2022 at 8:07 AM Masahiko Sawada 
> > wrote:
> > > >
> > > > On Mon, May 2, 2022 at 11:32 AM Amit Kapila 
> > wrote:
> > > > >
> > > > >
> > > > > So, shall we go back to the previous approach of using a separate
> > > > > function update_replication_progress?
> > > >
> > > > Ok, agreed.
> > > >
> > >
> > > Attached, please find the updated patch accordingly. Currently, I have
> > > prepared it for HEAD, if you don't see any problem with this, we can
> > > prepare the back-branch patches based on this.
> >
> > Thank you for updating the patch. Looks good to me.
> Thanks for your review.
>
> Improved the back-branch patches according to the discussion.
>

Thanks. The patch LGTM. I'll push and back-patch this after the
current minor release is done unless there are more comments related
to this work.

-- 
With Regards,
Amit Kapila.




Re: Estimating HugePages Requirements?

2022-05-08 Thread Michael Paquier
On Fri, May 06, 2022 at 10:13:18AM -0700, Nathan Bossart wrote:
> On Tue, Apr 26, 2022 at 10:34:06AM +0900, Michael Paquier wrote:
>> Yes, the redirection issue would apply to all the run-time GUCs.
> 
> Should this be tracked as an open item for v15?  There was another recent
> report about the extra log output [0].

That makes two complaints on two separate threads, so an open item
seems appropriate to adjust this behavior.

I have looked at the patch posted at [1], and I don't quite understand
why you need the extra dance with log_min_messages.  Why don't you
just set the GUC at the end of the code path in PostmasterMain() where
we print non-runtime-computed parameters?  I am not really worrying
about users deciding to set log_min_messages to PANIC in
postgresql.conf when it comes to postgres -C, TBH, as they'd miss the
FATAL messages if the command is attempted on a server already
starting.

Please see the attached.

[1]: https://www.postgresql.org/message-id/20220328173503.GA137769@nathanxps13
--
Michael
From 3b8a7f8079955cd59a5a318adf6938cdd3c29c6b Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Mon, 9 May 2022 15:50:19 +0900
Subject: [PATCH v4] Silence extra logging with 'postgres -C'.

Presently, the server may emit a variety of extra log messages when
inspecting GUC values.  For example, when inspecting a runtime-computed
GUC, the server will always emit a "database system is shut down" LOG
(unless the user has set log_min_messages higher than LOG).  To avoid
these extra log messages, this change sets log_min_messages to FATAL
when -C is used (even if set to PANIC in postgresql.conf).  At FATAL,
the user will still receive messages explaining why a GUC's value cannot
be inspected.
---
 src/backend/postmaster/postmaster.c | 10 ++
 doc/src/sgml/runtime.sgml   |  2 +-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index ce4007bb2c..38b63bc215 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -913,6 +913,16 @@ PostmasterMain(int argc, char *argv[])
 			puts(config_val ? config_val : "");
 			ExitPostmaster(0);
 		}
+
+		/*
+		 * A runtime-computed GUC will be printed later on.  As this initiates
+		 * a server startup sequence, silence any LOG messages that may show
+		 * up in the output generated.  FATAL and more severe messages remain
+		 * visible, even if log_min_messages is set to PANIC in
+		 * postgresql.conf, as they explain why a value cannot be inspected.
+		 */
+		SetConfigOption("log_min_messages", "FATAL", PGC_INTERNAL,
+		PGC_S_OVERRIDE);
 	}
 
 	/* Verify that DataDir looks reasonable */
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 4465c876b1..62cec614d3 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1448,7 +1448,7 @@ export PG_OOM_ADJUST_VALUE=0
 server must be shut down to view this runtime-computed parameter.
 This might look like:
 
-$ postgres -D $PGDATA -C shared_memory_size_in_huge_pages 2> /dev/null
+$ postgres -D $PGDATA -C shared_memory_size_in_huge_pages
 3170
 $ grep ^Hugepagesize /proc/meminfo
 Hugepagesize:   2048 kB
-- 
2.36.0


