Re: Autogenerate some wait events code and documentation

2023-09-05 Thread Drouvot, Bertrand

Hi,

On 9/6/23 5:44 AM, Michael Paquier wrote:

On Wed, Sep 06, 2023 at 04:20:23AM +0200, Erik Wienhold wrote:

FYI: https://en.wikipedia.org/wiki/Sentence_spacing


That was an interesting read.  Thanks.


+1, thanks!

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-09-05 Thread Drouvot, Bertrand

Hi,

On 9/5/23 7:44 AM, Michael Paquier wrote:

On Mon, Sep 04, 2023 at 02:14:58PM +0200, Drouvot, Bertrand wrote:

 # Build the descriptions.  There are in camel-case.
 # LWLock and Lock classes do not need any modifications.

Nit: 2 whitespaces before "There are in camel"


The whitespaces are intentional, 


Oh ok, out of curiosity, why are 2 whitespaces intentional?


Then, the only diff is:

< Client,WalSenderWaitWal,Waiting for WAL to be flushed in WAL sender process
---

Client,WalSenderWaitForWAL,Waiting for WAL to be flushed in WAL sender process


That said, it looks good to me.


Ah, good catch.  I did not think about cross-checking the data in the
new view before and after the patch set.  This rename needs to happen
in 0001.

Please find v5 attached.  How does that look?


Thanks!

That looks good. I just noticed that v5 did re-introduce the "issue" that
was fixed in 00e49233a9.

Also, v5 needs a rebase due to f691f5b80a.

Attaching v6 taking care of the 2 points mentioned above.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 6a94fc1a00ae784b12e15f498a5afba4020377f9 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Tue, 5 Sep 2023 14:32:37 +0900
Subject: [PATCH v6 1/2] Rename wait events with more consistent camelcase
 style

This will help in the introduction of more simplifications with the
generation of wait event data using wait_event_names.txt.  The event
names used the same case-insensitive characters, hence applying lower()
or upper() to the monitoring queries allows the detection of the same
events as before this change.
---
 src/backend/libpq/pqmq.c  |  2 +-
 src/backend/replication/walsender.c   |  2 +-
 src/backend/storage/ipc/shm_mq.c  |  6 +-
 .../utils/activity/wait_event_names.txt   | 90 +--
 4 files changed, 50 insertions(+), 50 deletions(-)
   3.2% src/backend/storage/ipc/
  93.3% src/backend/utils/activity/
   3.3% src/backend/

diff --git a/src/backend/libpq/pqmq.c b/src/backend/libpq/pqmq.c
index 253181f47a..38b042804c 100644
--- a/src/backend/libpq/pqmq.c
+++ b/src/backend/libpq/pqmq.c
@@ -182,7 +182,7 @@ mq_putmessage(char msgtype, const char *s, size_t len)
break;
 
(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
-WAIT_EVENT_MQ_PUT_MESSAGE);
+
WAIT_EVENT_MESSAGE_QUEUE_PUT_MESSAGE);
ResetLatch(MyLatch);
CHECK_FOR_INTERRUPTS();
}
diff --git a/src/backend/replication/walsender.c 
b/src/backend/replication/walsender.c
index 46e48492ac..e250b0567e 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -1654,7 +1654,7 @@ WalSndWaitForWal(XLogRecPtr loc)
if (pq_is_send_pending())
wakeEvents |= WL_SOCKET_WRITEABLE;
 
-   WalSndWait(wakeEvents, sleeptime, 
WAIT_EVENT_WAL_SENDER_WAIT_WAL);
+   WalSndWait(wakeEvents, sleeptime, 
WAIT_EVENT_WAL_SENDER_WAIT_FOR_WAL);
}
 
/* reactivate latch so WalSndLoop knows to continue */
diff --git a/src/backend/storage/ipc/shm_mq.c b/src/backend/storage/ipc/shm_mq.c
index d134a09a19..06d6b73527 100644
--- a/src/backend/storage/ipc/shm_mq.c
+++ b/src/backend/storage/ipc/shm_mq.c
@@ -1017,7 +1017,7 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const 
void *data,
 * cheaper than setting one that has been reset.
 */
(void) WaitLatch(MyLatch, WL_LATCH_SET | 
WL_EXIT_ON_PM_DEATH, 0,
-WAIT_EVENT_MQ_SEND);
+
WAIT_EVENT_MESSAGE_QUEUE_SEND);
 
/* Reset the latch so we don't spin. */
ResetLatch(MyLatch);
@@ -1163,7 +1163,7 @@ shm_mq_receive_bytes(shm_mq_handle *mqh, Size 
bytes_needed, bool nowait,
 * setting one that has been reset.
 */
(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
-WAIT_EVENT_MQ_RECEIVE);
+
WAIT_EVENT_MESSAGE_QUEUE_RECEIVE);
 
/* Reset the latch so we don't spin. */
ResetLatch(MyLatch);
@@ -1252,7 +1252,7 @@ shm_mq_wait_internal(shm_mq *mq, PGPROC **ptr, 
BackgroundWorkerHandle *handle)
 
/* Wait to be signaled. */
(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
-WAIT_EVENT_MQ_INTERNAL);
+
WAIT_EVENT_MESSAGE_QUEUE_INTERNAL);
 
/*

Re: Autogenerate some wait events code and documentation

2023-09-04 Thread Drouvot, Bertrand

Hi,

On 8/29/23 8:41 AM, Michael Paquier wrote:

On Tue, Aug 29, 2023 at 08:17:10AM +0200, Drouvot, Bertrand wrote:
That may be a bug in the matrix because of bb90022, as git am can be
easily pissed. 


git am does not complain anymore.


+   # Generate the element name for the enums based on the
+   # description.  The C symbols are prefixed with "WAIT_EVENT_".

Nit: 2 whitespaces before "The C"

# Build the descriptions.  There are in camel-case.
# LWLock and Lock classes do not need any modifications.

Nit: 2 whitespaces before "There are in camel"

+   my $waiteventdescription = '';
+   if (   $waitclassname eq 'WaitEventLWLock'

Nit: Too many whitespaces after the "if ("?? (I guess pgperltidy would
fix it).


I have double-checked the docs generated, while on it, and I am not
seeing anything missing, particularly for the LWLock and Lock parts..


I did compare the output of select * from pg_wait_events order by 1,2 and
ignored the case (with and without the patch series).
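
As a hedged sketch (not the exact commands used for this comparison), such a
check can be done by dumping the view with lower-cased names on each build and
diffing the two files:

-- psql; run once on the unpatched build and once on the patched one
\o /tmp/wait_events_patched.txt
SELECT type, lower(name) AS name, description
FROM pg_wait_events
ORDER BY 1, 2;
\o
-- then: diff /tmp/wait_events_unpatched.txt /tmp/wait_events_patched.txt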

Then, the only diff is:

< Client,WalSenderWaitWal,Waiting for WAL to be flushed in WAL sender process
---

Client,WalSenderWaitForWAL,Waiting for WAL to be flushed in WAL sender process


That said, it looks good to me.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-08-28 Thread Drouvot, Bertrand

Hi,

On 8/28/23 10:04 AM, Michael Paquier wrote:

On Mon, Jul 17, 2023 at 10:16:02AM +0900, Michael Paquier wrote:

So you mean to switch a line that now looks like that:
WAIT_EVENT_FOO_BAR   FooBar"Waiting on Foo Bar."
To that:
FOO_BAR   "Waiting on Foo Bar."
Or even that:
WAIT_EVENT_FOO_BAR   "Waiting on Foo Bar."

Sure, it is an improvement for any wait events that use WAIT_EVENT_
when searching them, but this adds more magic into the LWLock and Lock
areas if the same conversion is applied there.  Or am I right to
assume that you'd mean to *not* do any of that for these two classes?
These can be treated as exceptions in the script when generating the
wait event names from the enum elements, of course.


I have looked again at that, and switching wait_event_names.txt to use
two columns made of the typedef definitions and the docs like is not a
problem:
FOO_BAR   "Waiting on Foo Bar."

WAIT_EVENT_ is appended to the typedef definitions in the script.  The
wait event names like "FooBar" are generated from the enums by
splitting using their underscores and doing some lc().  Lock and
LWLock don't need to change.  This way, it is easy to grep the wait
events from the source code and match them with wait_event_names.txt.

Thoughts or comments?


Agree that, done that way, one could easily grep the events from the source code
and match them with wait_event_names.txt. So I don't think the "search" issue in
the code is still a concern with the current proposal.
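
As a hedged sketch of that derivation (the real generate-wait_event_types.pl
handles exceptions such as Lock/LWLock and may differ in details), in Perl:

# "FOO_BAR" -> "FooBar": split on underscores, lowercase each part,
# then capitalize it.
my $enum = 'FOO_BAR';
my $camel = join '', map { ucfirst(lc($_)) } split /_/, $enum;
print "$camel\n";    # prints "FooBar"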

FWIW, I'm getting:

$ git am v3-000*
Applying: Rename wait events with more consistent camelcase style
Applying: Remove column for wait event names in wait_event_names.txt
error: patch failed: src/backend/utils/activity/wait_event_names.txt:261
error: src/backend/utils/activity/wait_event_names.txt: patch does not apply
Patch failed at 0002 Remove column for wait event names in wait_event_names.txt

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-21 Thread Drouvot, Bertrand

Hi,

On 8/20/23 10:07 AM, Michael Paquier wrote:

On Sat, Aug 19, 2023 at 06:30:12PM +0200, Drouvot, Bertrand wrote:

Thanks, fixed in v10.


Okay.  I have done an extra review of it, simplifying a few things in
the function, the comments and the formatting, and applied the patch.


Thanks!

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-19 Thread Drouvot, Bertrand

Hi,

On 8/19/23 12:00 PM, Michael Paquier wrote:

On Sat, Aug 19, 2023 at 11:35:01AM +0200, Drouvot, Bertrand wrote:

Hi,

On 8/18/23 12:31 PM, Michael Paquier wrote:
Thanks! Please find attached v9 fixing this typo.


I was looking at v8, and this looks pretty good to me. 


Great!


I have
spotted a few minor things.

+  proretset => 't', provolatile => 's', prorettype => 'record',
This function should be volatile.  


Oh right, I copied/pasted this and should have paid more attention to
that part. Fixed in v10 attached.


By the way, shouldn't the results of GetWaitEventExtensionNames() in
the SQL function be freed?  I mean that:
 for (int idx = 0; idx < nbextwaitevents; idx++)
 pfree(waiteventnames[idx]);
 pfree(waiteventnames);



I don't think it's needed. The reason is that they are palloc'd in a
short-lived memory context while executing pg_get_wait_events().
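
A hedged sketch of the idea (the GetWaitEventExtensionNames() signature is
assumed from the snippet above, and this is not the actual pg_get_wait_events()
code): anything palloc'd while a SQL-callable function runs lives in a memory
context that is reset shortly after the call returns.

    int         nbextwaitevents;
    char      **waiteventnames = GetWaitEventExtensionNames(&nbextwaitevents);

    for (int idx = 0; idx < nbextwaitevents; idx++)
        elog(DEBUG1, "custom wait event: %s", waiteventnames[idx]);

    /* no pfree() needed: the short-lived context cleans this up */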


+   /* Handling extension custom wait events */
Nit warning: s/Handling/Handle/, or "Handling of".


Thanks, fixed in v10.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 40eae29b80920d3859ec78bd2485036ca4ab4f6d Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v10] Add catalog pg_wait_events

Adding a new system view, namely pg_wait_events, that describes the wait
events.
---
 doc/src/sgml/monitoring.sgml  | 13 ++-
 doc/src/sgml/system-views.sgml| 64 +
 src/backend/Makefile  |  3 +-
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  8 +-
 .../activity/generate-wait_event_types.pl | 54 ++-
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/wait_event.c   | 44 +
 src/backend/utils/activity/wait_event_funcs.c | 93 +++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/include/utils/wait_event.h|  1 +
 .../modules/worker_spi/t/001_worker_spi.pl|  6 ++
 src/test/regress/expected/rules.out   |  4 +
 src/test/regress/expected/sysviews.out| 16 
 src/test/regress/sql/sysviews.sql |  4 +
 src/tools/msvc/Solution.pm|  3 +-
 src/tools/msvc/clean.bat  |  1 +
 19 files changed, 318 insertions(+), 11 deletions(-)
  21.1% doc/src/sgml/
  59.5% src/backend/utils/activity/
   3.5% src/include/catalog/
   4.5% src/test/regress/expected/
   4.5% src/test/
   3.0% src/tools/msvc/
   3.6% src/

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 70511a2388..6c53c1ed91 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -1103,7 +1103,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   
11:34   0:00 postgres: ser
   &wait_event_types;
 

- Here is an example of how wait events can be viewed:
+ Here are examples of how wait events can be viewed:
 
 
 SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event 
is NOT NULL;
@@ -1112,6 +1112,17 @@ SELECT pid, wait_event_type, wait_event FROM 
pg_stat_activity WHERE wait_event i
  2540 | Lock| relation
  6644 | LWLock  | ProcArray
 (2 rows)
+
+
+
+SELECT a.pid, a.wait_event, w.description
+FROM pg_stat_activity a JOIN pg_wait_events w
+ON (a.wait_event_type = w.type AND a.wait_event = w.name)
+WHERE wait_event is NOT NULL and a.state = 'active';
+-[ RECORD 1 ]--
+pid | 686674
+wait_event  | WALInitSync
+description | Waiting for a newly initialized WAL file to reach durable storage
 

 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 57b228076e..2b35c2f91b 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -221,6 +221,11 @@
   views
  
 
+ 
+  pg_wait_events
+  wait events
+ 
+
 

   
@@ -4825,4 +4830,63 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
  
 
+
+ 
+  pg_wait_events
+
+  
+   pg_wait_events
+  
+
+  
+   The view pg_wait_events provides description about 
the
+   wait events.
+  
+
+  
+   pg_wait_events Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   type text
+  
+  
+   Wait event type
+  
+ 
+
+ 
+  
+   name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
+   description text
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/Makefile b/src/backend/Makefile
index 1c

Re: WIP: new system catalog pg_wait_event

2023-08-19 Thread Drouvot, Bertrand

Hi,

On 8/18/23 12:31 PM, Michael Paquier wrote:

On Fri, Aug 18, 2023 at 10:56:55AM +0200, Drouvot, Bertrand wrote:

Okay, using the plural form in v8 attached.


Noting in passing:
- Here is an example of how wait events can be viewed:
+ Here is are examples of how wait events can be viewed:
s/is are/are/.


Thanks! Please find attached v9 fixing this typo.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 430cc8cb79c7cf3ea811760e8aadd53cab7ff9e3 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v9] Add catalog pg_wait_events

Adding a new system view, namely pg_wait_events, that describes the wait
events.
---
 doc/src/sgml/monitoring.sgml  | 13 ++-
 doc/src/sgml/system-views.sgml| 64 +
 src/backend/Makefile  |  3 +-
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  8 +-
 .../activity/generate-wait_event_types.pl | 54 ++-
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/wait_event.c   | 44 +
 src/backend/utils/activity/wait_event_funcs.c | 93 +++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/include/utils/wait_event.h|  1 +
 .../modules/worker_spi/t/001_worker_spi.pl|  6 ++
 src/test/regress/expected/rules.out   |  4 +
 src/test/regress/expected/sysviews.out| 16 
 src/test/regress/sql/sysviews.sql |  4 +
 src/tools/msvc/Solution.pm|  3 +-
 src/tools/msvc/clean.bat  |  1 +
 19 files changed, 318 insertions(+), 11 deletions(-)
  21.0% doc/src/sgml/
  59.5% src/backend/utils/activity/
   3.5% src/include/catalog/
   4.5% src/test/regress/expected/
   4.5% src/test/
   3.0% src/tools/msvc/
   3.6% src/

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 70511a2388..6c53c1ed91 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -1103,7 +1103,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   
11:34   0:00 postgres: ser
   &wait_event_types;
 

- Here is an example of how wait events can be viewed:
+ Here are examples of how wait events can be viewed:
 
 
 SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event 
is NOT NULL;
@@ -1112,6 +1112,17 @@ SELECT pid, wait_event_type, wait_event FROM 
pg_stat_activity WHERE wait_event i
  2540 | Lock| relation
  6644 | LWLock  | ProcArray
 (2 rows)
+
+
+
+SELECT a.pid, a.wait_event, w.description
+FROM pg_stat_activity a JOIN pg_wait_events w
+ON (a.wait_event_type = w.type AND a.wait_event = w.name)
+WHERE wait_event is NOT NULL and a.state = 'active';
+-[ RECORD 1 ]--
+pid | 686674
+wait_event  | WALInitSync
+description | Waiting for a newly initialized WAL file to reach durable storage
 

 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 57b228076e..2b35c2f91b 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -221,6 +221,11 @@
   views
  
 
+ 
+  pg_wait_events
+  wait events
+ 
+
 

   
@@ -4825,4 +4830,63 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
  
 
+
+ 
+  pg_wait_events
+
+  
+   pg_wait_events
+  
+
+  
+   The view pg_wait_events provides description about 
the
+   wait events.
+  
+
+  
+   pg_wait_events Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   type text
+  
+  
+   Wait event type
+  
+ 
+
+ 
+  
+   name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
+   description text
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/Makefile b/src/backend/Makefile
index 1c929383c4..3e275ac759 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -134,7 +134,7 @@ storage/lmgr/lwlocknames.h: 
storage/lmgr/generate-lwlocknames.pl storage/lmgr/lw
$(MAKE) -C storage/lmgr lwlocknames.h lwlocknames.c
 
 utils/activity/wait_event_types.h: utils/activity/generate-wait_event_types.pl 
utils/activity/wait_event_names.txt
-   $(MAKE) -C utils/activity wait_event_types.h pgstat_wait_event.c
+   $(MAKE) -C utils/activity wait_event_types.h pgstat_wait_event.c 
wait_event_funcs_data.c
 
 # run this unconditionally to avoid needing to know its dependencies here:
 submake-catalog-headers:
@@ -311,6 +311,7 @@ maintainer-clean: distclean
  storage/lmgr

Re: WIP: new system catalog pg_wait_event

2023-08-18 Thread Drouvot, Bertrand

Hi,

On 8/18/23 12:37 AM, Michael Paquier wrote:

On Thu, Aug 17, 2023 at 04:37:22PM +0900, Masahiro Ikeda wrote:

On 2023-08-17 14:53, Drouvot, Bertrand wrote:
BTW, is it better to start with "Waiting" like any other lines?
For example, "Waiting for custom event \"worker_spi_main\" defined by an
extension".


Using "Waiting for custom event" is a good term here. 


Yeah, done in v8 just shared up-thread.


I am wondering
if this should be s/extension/module/, as an event could be set by a
loaded library, which may not be set with CREATE EXTENSION.



I think that removing "extension" sounds weird given the fact that the
wait event type is "Extension" (that could lead to confusion).

I did use "defined by an extension module" in v8 (also that's aligned with
the wording used in wait_event.h).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-18 Thread Drouvot, Bertrand

Hi,

On 8/17/23 9:37 AM, Masahiro Ikeda wrote:

Hi,

On 2023-08-17 14:53, Drouvot, Bertrand wrote:

The followings are additional comments for v7.

1)


I am not sure that "pg_wait_event" is a good idea for the name if the
new view.  How about "pg_wait_events" instead, in plural form?  There
is more than one wait event listed.

I'd prefer the singular form. There are a lot of places where it's already used
(pg_database, pg_user, pg_namespace... to name a few) and it looks like using
the plural form is the exception.


Since I got the same feeling as Michael-san that "pg_wait_events" would be
better, I checked the names of all system views. I think the singular form
seems to be the exception for views, though it is usually used for system
catalogs.

https://www.postgresql.org/docs/devel/views.html
https://www.postgresql.org/docs/devel/catalogs.html


Okay, using the plural form in v8 attached.



2)

$ctmp must be $wctmp?

+    rename($wctmp, "$output_path/wait_event_funcs_data.c")
+  || die "rename: $ctmp to $output_path/wait_event_funcs_data.c: $!";



Nice catch, thanks, fixed in v8.



3)

"List all the wait events type, name and descriptions" is enough?

+ * List all the wait events (type, name) and their descriptions.



Agree, used "List all the wait events type, name and description" in v8
(using description in singular as compared to your proposal)


4)

Why don't you add the usage of the view in the monitoring.sgml?

```
    
  Here is an example of how wait events can be viewed:


SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event 
is NOT NULL;
  pid  | wait_event_type | wait_event
--+-+
  2540 | Lock    | relation
  6644 | LWLock  | ProcArray
(2 rows)

    
```

ex.
postgres(3789417)=# SELECT a.pid, a.wait_event_type, a.wait_event, 
w.description FROM pg_stat_activity a JOIN pg_wait_event w ON 
(a.wait_event_type = w.type AND a.wait_event = w.name) WHERE wait_event is NOT 
NULL;


Agree, I think that's a good idea.

I'd prefer to also add a filter on "state='active'" for this example (I think
that would provide a "real" use case, as the sessions are not waiting if they
are not active).

Done in v8.


5)

Though I'm not familiar with the value, do we need procost?



I think it makes sense to add a "small" one as it's done.
BTW, while looking at it, I changed prorows to a more "accurate"
value.


Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 027a83d37d52eee52e3c82bcf814c6d5949f7ccf Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v8] Add catalog pg_wait_events

Adding a new system view, namely pg_wait_events, that describes the wait
events.
---
 doc/src/sgml/monitoring.sgml  | 13 ++-
 doc/src/sgml/system-views.sgml| 64 +
 src/backend/Makefile  |  3 +-
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  8 +-
 .../activity/generate-wait_event_types.pl | 54 ++-
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/wait_event.c   | 44 +
 src/backend/utils/activity/wait_event_funcs.c | 93 +++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/include/utils/wait_event.h|  1 +
 .../modules/worker_spi/t/001_worker_spi.pl|  6 ++
 src/test/regress/expected/rules.out   |  4 +
 src/test/regress/expected/sysviews.out| 16 
 src/test/regress/sql/sysviews.sql |  4 +
 src/tools/msvc/Solution.pm|  3 +-
 src/tools/msvc/clean.bat  |  1 +
 19 files changed, 318 insertions(+), 11 deletions(-)
 create mode 100644 src/backend/utils/activity/wait_event_funcs.c

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 70511a2388..bfa658f7a8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -1103,7 +1103,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   
11:34   0:00 postgres: ser
   &wait_event_types;
 

- Here is an example of how wait events can be viewed:
+ Here is are examples of how wait events can be viewed:
 
 
 SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event 
is NOT NULL;
@@ -1112,6 +1112,17 @@ SELECT pid, wait_event_type, wait_event FROM 
pg_stat_activity WHERE wait_event i
  2540 | Lock| relation
  6644 | LWLock  | ProcArray
 (2 rows)
+
+
+
+SELECT a.pid, a.wait_event, w.description
+FROM pg_stat

Re: Synchronizing slots from primary to standby

2023-08-16 Thread Drouvot, Bertrand

Hi,

On 8/14/23 11:52 AM, shveta malik wrote:



We (myself and Ajin) performed the tests to compute the lag in standby
slots as compared to primary slots with different numbers of slot-sync
workers configured.



Thanks!


3 DBs were created, each with 30 tables and each table having one
logical-pub/sub configured. So this made a total of 90 logical
replication slots to be synced. Then the workload was run for approx 10
mins. During this workload, at regular intervals, primary and standby
slots' lsns were captured (from pg_replication_slots) and compared. At
each capture, the intent was to know how much each standby's slot is
lagging behind the corresponding primary's slot, by taking the distance
between the confirmed_flush_lsn of the primary and standby slot. Then we took
the average (integer value) of this distance over the span of the 10 min
workload 


Thanks for the explanations, makes sense to me.


and this is what we got:

With max_slot_sync_workers=1, average-lag =  42290.3563
With max_slot_sync_workers=2, average-lag =  24585.1421
With max_slot_sync_workers=3, average-lag =  14964.9215

This shows that more workers have better chances to keep logical
replication slots in sync for this case.



Agree.
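
(For reference, a hedged sketch of one way such a per-slot confirmed_flush_lsn
distance can be computed; the captured_slot_lsns table is hypothetical and the
actual harness used above may differ.)

-- captured_slot_lsns: hypothetical table holding, per capture, the
-- confirmed_flush_lsn seen on the primary and on the standby
SELECT slot_name,
       avg(pg_wal_lsn_diff(primary_confirmed_flush_lsn,
                           standby_confirmed_flush_lsn))::bigint AS avg_lag_bytes
FROM captured_slot_lsns
GROUP BY slot_name;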


Another statistic, if it interests you: we ran a frequency test as
well (by changing code, unit-test sort of) to figure out the
'total number of times synchronization done' with different numbers of
sync-slot workers configured. Same 3 DBs setup with each DB having 30
logical replication slots. With 'max_slot_sync_workers' set at 1, 2
and 3, the total number of synchronizations done was 15874, 20205 and
23414 respectively. Note: this is not on the same machine where we
captured the lsn-gap data, it is on a little less efficient machine, but it
gives almost the same picture

Next we are planning to capture this data for a smaller number of slots,
like 10, 30, 50 etc. It may happen that the benefit of multiple workers
over a single worker in such cases could be less, but let's have the
data to verify that.



Thanks a lot for those numbers and for the testing!

Do you think it would make sense to also get the numbers using
the pg_failover_slots module (and compare the pg_failover_slots numbers with
the "one worker" case here)? The idea is to check whether the patch introduces
some overhead as compared to pg_failover_slots.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-16 Thread Drouvot, Bertrand

Hi,

On 8/17/23 3:57 AM, Michael Paquier wrote:

On Thu, Aug 17, 2023 at 10:53:02AM +0900, Masahiro Ikeda wrote:

BTW, although I think this is outside the scope of this patch,
it might be a good idea to be able to add a description to the
API for custom wait events.


Somebody on twitter has raised this point.


I think I know him ;-)


 I am not sure that we need
to go down to that for the sake of this view, but I'm OK to..
Disagree and Commit to any consensus reached on this matter.


Yeah, I changed my mind on this. I think it's better to keep control over
what is displayed in the description field for this new system view.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-16 Thread Drouvot, Bertrand

Hi,

On 8/17/23 3:53 AM, Masahiro Ikeda wrote:

Hi,

Thank you for creating the patch!
I think it is a very useful view as a user.

I will share some thoughts about the v6 patch.



Thanks for looking at it!


1)

The regular expression needs to be changed in generate-wait_event_types.pl.
I have compared the documentation with the output of the pg_wait_event
view and found the following differences.

* For parameter names, the substitution for underscore is missing.
-ArchiveCleanupCommand Waiting for archive_cleanup_command to complete
-ArchiveCommand Waiting for archive_command to complete
+ArchiveCleanupCommand Waiting for archive-cleanup-command to complete
+ArchiveCommand Waiting for archive-command to complete
-RecoveryEndCommand Waiting for recovery_end_command to complete
+RecoveryEndCommand Waiting for recovery-end-command to complete
-RestoreCommand Waiting for restore_command to complete
+RestoreCommand Waiting for restore-command to complete



Yeah, nice catch. v7, just shared up-thread, replaces "-" with "_"
in such cases. But I'm not sure the pg_wait_event description field needs
to be 100% aligned with the documentation: wouldn't it be better to replace
"-" with " " in such cases in pg_wait_event?


* The HTML tag match is not shortest match.
-frozenid Waiting to update pg_database.datfrozenxid and pg_database.datminmxid
+frozenid Waiting to update datminmxid



Nice catch, fixed in v7.
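
A hedged sketch of the greedy vs. shortest-match behaviour in Perl (the exact
substitution used in v7 may differ):

# With the greedy pattern, everything between the first and the last tag
# is swallowed; stripping each tag individually keeps the full text.
my $desc = 'Waiting to update <structname>pg_database</structname>.'
         . '<structfield>datfrozenxid</structfield> and '
         . '<structname>pg_database</structname>.'
         . '<structfield>datminmxid</structfield>';

(my $greedy = $desc) =~ s/<.*>(.*?)<.*>/$1/g;
# -> "Waiting to update datminmxid"

(my $fixed = $desc) =~ s/<[^>]+>//g;
# -> "Waiting to update pg_database.datfrozenxid and pg_database.datminmxid"

print "$greedy\n$fixed\n";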


* There are two blanks before "about".


This is coming from wait_event_names.txt, thanks for pointing it out.
I just submitted a patch [1] to fix that.


Also " for heavy weight is
   removed (is it intended?)
-LockManager Waiting to read or update information about "heavyweight" locks
+LockManager Waiting to read or update information  about heavyweight locks



Not intended, fixed in v7.


* Do we need "worker_spi_main" in the description? The name column
   shows the same one, so it could be omitted.

pg_wait_event
worker_spi_main Custom wait event "worker_spi_main" defined by extension




Do you mean remove the wait event name from the description in case of custom
extension wait events? I'd prefer to keep it; otherwise the descriptions would
all be the same for custom wait events.


2)

Would it be better to add "extension" meaning unassigned?


Extensions can add Extension and LWLock types to the list shown in Table 28.8 
and Table 28.12. In some cases, the name of LWLock assigned by an extension 
will not be available in all server processes; It might be reported as just 
“extension” rather than the extension-assigned name.




Yeah, that could make sense, but I think that should be a dedicated patch for
the documentation.


3)

Would index == els be better for the following Assert?
+    Assert(index <= els);



Agree, done in v7.


4)

There is a typo. alll -> all
+    /* Allocate enough space for alll entries */



Thanks! Fixed in v7.

[1]: 
https://www.postgresql.org/message-id/dd836027-2e9e-4df9-9fd9-7527cd1757e1%40gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-16 Thread Drouvot, Bertrand

Hi,

On 8/16/23 2:08 PM, Michael Paquier wrote:

On Wed, Aug 16, 2023 at 01:43:35PM +0200, Drouvot, Bertrand wrote:

Yeah, agree, done that way in v6 (also added a test in 001_worker_spi.pl
to ensure that "worker_spi_main" is reported in pg_wait_event).


-typedef struct WaitEventExtensionEntryByName
-{
-   charwait_event_name[NAMEDATALEN];   /* hash key */
-   uint16  event_id;   /* wait event ID */
-} WaitEventExtensionEntryByName;

I'd rather keep all these structures local to wait_event.c, as these
cover the internals of the hash tables.

Could you switch GetWaitEventExtensionEntries() so as it returns a
list of strings or an array of char*?  We probably can live with the
list for that.



Yeah, I was not sure about this (returning a list of
WaitEventExtensionEntryByName or a list of wait event names) while working on v6.

That's true that the only need here is to get the names of the custom wait
events. Returning only the names would allow us to move the
WaitEventExtensionEntryByName definition back to the wait_event.c file.

It makes sense to me, done in v7 attached and renamed the function to 
GetWaitEventExtensionNames().


+   charbuf[NAMEDATALEN + 44]; //"Custom wait event \"%Name%\" defined 
by extension" +
Incorrect comment.  This would be simpler as a StringInfo.


Yeah and probably less error prone: done in v7.
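
A hedged sketch of what the StringInfo version can look like (variable names
assumed, not the exact v7 code):

    StringInfoData buf;

    initStringInfo(&buf);
    appendStringInfo(&buf,
                     "Custom wait event \"%s\" defined by extension",
                     waiteventname);    /* assumed variable name */
    values[2] = CStringGetTextDatum(buf.data);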

While at it, v7 is deliberately not calling "pfree(waiteventnames)" and
"resetStringInfo(&buf)" in pg_get_wait_events(): the reason is that they are
palloc'd in a short-lived memory context while executing pg_get_wait_events().

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 43baf166f14aeca4ef0979d93e9ed34fdc323a09 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v7] Add catalog pg_wait_event

Adding a new system view, namely pg_wait_event, that describes the wait
events.
---
 doc/src/sgml/system-views.sgml| 64 +
 src/backend/Makefile  |  3 +-
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  8 +-
 .../activity/generate-wait_event_types.pl | 54 ++-
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/wait_event.c   | 44 +
 src/backend/utils/activity/wait_event_funcs.c | 93 +++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/include/utils/wait_event.h|  1 +
 .../modules/worker_spi/t/001_worker_spi.pl|  6 ++
 src/test/regress/expected/rules.out   |  4 +
 src/test/regress/expected/sysviews.out| 16 
 src/test/regress/sql/sysviews.sql |  4 +
 src/tools/msvc/Solution.pm|  3 +-
 src/tools/msvc/clean.bat  |  1 +
 18 files changed, 306 insertions(+), 10 deletions(-)
  16.3% doc/src/sgml/
  63.1% src/backend/utils/activity/
   3.7% src/include/catalog/
   3.0% src/test/modules/worker_spi/t/
   4.8% src/test/regress/expected/
   3.2% src/tools/msvc/
   5.5% src/

diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 57b228076e..b2ed309fe2 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -221,6 +221,11 @@
   views
  
 
+ 
+  pg_wait_event
+  wait events
+ 
+
 

   
@@ -4825,4 +4830,63 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
  
 
+
+ 
+  pg_wait_event
+
+  
+   pg_wait_event
+  
+
+  
+   The view pg_wait_event provides description about 
the
+   wait events.
+  
+
+  
+   pg_wait_event Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   type text
+  
+  
+   Wait event type
+  
+ 
+
+ 
+  
+   name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
+   description text
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/Makefile b/src/backend/Makefile
index 1c929383c4..3e275ac759 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -134,7 +134,7 @@ storage/lmgr/lwlocknames.h: 
storage/lmgr/generate-lwlocknames.pl storage/lmgr/lw
$(MAKE) -C storage/lmgr lwlocknames.h lwlocknames.c
 
 utils/activity/wait_event_types.h: utils/activity/generate-wait_event_types.pl 
utils/activity/wait_event_names.txt
-   $(MAKE) -C utils/activity wait_event_types.h pgstat_wait_event.c
+   $(MAKE) -C utils/activity wait_event_types.h pgstat_wait_event.c 
wait_event_funcs_data.c
 
 # run this unconditionally to avoid needing to know its dependencies here:
 submake-

Fix an entry in wait_event_names.txt

2023-08-16 Thread Drouvot, Bertrand

Hi hackers,

While working on [1] it has been noticed by Masahiro-san that the description 
field
in the new pg_wait_event view contains 2 blanks for one row.

It turns out that it comes from wait_event_names.txt (added in fa88928).

Attached a tiny patch to fix this entry in wait_event_names.txt (I did check 
that no
other entries are in the same case).

[1]: 
https://www.postgresql.org/message-id/735fbd560ae914c96faaa23cc8d9a118%40oss.nttdata.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 8ed089064bc92f0e721ecff93fdb40cbb25c310e Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Thu, 17 Aug 2023 04:04:44 +
Subject: [PATCH v1] Fix incorrect entry in wait_event_names.txt

fa88928 has introduced wait_event_names.txt, and one of its entries had
some documentation fields with two blanks.
---
 src/backend/utils/activity/wait_event_names.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 100.0% src/backend/utils/activity/

diff --git a/src/backend/utils/activity/wait_event_names.txt 
b/src/backend/utils/activity/wait_event_names.txt
index f9e01e33b1..4d74f0068e 100644
--- a/src/backend/utils/activity/wait_event_names.txt
+++ b/src/backend/utils/activity/wait_event_names.txt
@@ -332,7 +332,7 @@ WAIT_EVENT_DOCONLY  ReplicationOriginState  "Waiting to 
read or update the progres
 WAIT_EVENT_DOCONLY ReplicationSlotIO   "Waiting for I/O on a 
replication slot."
 WAIT_EVENT_DOCONLY LockFastPath"Waiting to read or update a process' 
fast-path lock information."
 WAIT_EVENT_DOCONLY BufferMapping   "Waiting to associate a data block with 
a buffer in the buffer pool."
-WAIT_EVENT_DOCONLY LockManager "Waiting to read or update information  
about heavyweight locks."
+WAIT_EVENT_DOCONLY LockManager "Waiting to read or update information 
about heavyweight locks."
 WAIT_EVENT_DOCONLY PredicateLockManager"Waiting to access predicate 
lock information used by serializable transactions."
 WAIT_EVENT_DOCONLY ParallelHashJoin"Waiting to synchronize workers 
during Parallel Hash Join plan execution."
 WAIT_EVENT_DOCONLY ParallelQueryDSA"Waiting for parallel query 
dynamic shared memory allocation."
-- 
2.34.1



Re: WIP: new system catalog pg_wait_event

2023-08-16 Thread Drouvot, Bertrand

Hi,

On 8/16/23 8:22 AM, Michael Paquier wrote:

On Wed, Aug 16, 2023 at 07:04:53AM +0200, Drouvot, Bertrand wrote:

I'd prefer the singular form. There are a lot of places where it's already used
(pg_database, pg_user, pg_namespace... to name a few) and it looks like using
the plural form is the exception.


Okay, fine by me.  


Great, singular form is being used in v6 attached.


Also, I would remove the "wait_event_" prefixes to
the field names for the attribute names.



It looks like it has already been done in v5.


Yeah, now that af720b4c50 is done, I'll add the custom wait events handling
in v6.


Thanks.  I guess that the hash tables had better remain local to
wait_event.c.


Yeah, agree, done that way in v6 (also added a test in 001_worker_spi.pl
to ensure that "worker_spi_main" is reported in pg_wait_event).

I added a "COLLATE "C"" in the "order by" clause's test that we added in
src/test/regress/sql/sysviews.sql (the check was failing on my environment).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From f55bc01ca03b80116664b03b0174bc97dc01b6b9 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v6] Add catalog pg_wait_event

Adding a new system view, namely pg_wait_event, that describes the wait
events.
---
 doc/src/sgml/system-views.sgml| 64 +
 src/backend/Makefile  |  3 +-
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  8 +-
 .../activity/generate-wait_event_types.pl | 46 -
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/wait_event.c   | 50 --
 src/backend/utils/activity/wait_event_funcs.c | 94 +++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/include/utils/wait_event.h|  7 ++
 .../modules/worker_spi/t/001_worker_spi.pl|  6 ++
 src/test/regress/expected/rules.out   |  4 +
 src/test/regress/expected/sysviews.out| 16 
 src/test/regress/sql/sysviews.sql |  4 +
 src/tools/msvc/Solution.pm|  3 +-
 src/tools/msvc/clean.bat  |  1 +
 18 files changed, 304 insertions(+), 17 deletions(-)
  15.7% doc/src/sgml/
  62.3% src/backend/utils/activity/
   3.6% src/include/catalog/
   4.2% src/include/utils/
   4.6% src/test/regress/expected/
   4.6% src/test/
   3.1% src/tools/msvc/

diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 57b228076e..b2ed309fe2 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -221,6 +221,11 @@
   views
  
 
+ 
+  pg_wait_event
+  wait events
+ 
+
 

   
@@ -4825,4 +4830,63 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
  
 
+
+ 
+  pg_wait_event
+
+  
+   pg_wait_event
+  
+
+  
+   The view pg_wait_event provides description about 
the
+   wait events.
+  
+
+  
+   pg_wait_event Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   type text
+  
+  
+   Wait event type
+  
+ 
+
+ 
+  
+   name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
+   description text
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/Makefile b/src/backend/Makefile
index 1c929383c4..3e275ac759 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -134,7 +134,7 @@ storage/lmgr/lwlocknames.h: 
storage/lmgr/generate-lwlocknames.pl storage/lmgr/lw
$(MAKE) -C storage/lmgr lwlocknames.h lwlocknames.c
 
 utils/activity/wait_event_types.h: utils/activity/generate-wait_event_types.pl 
utils/activity/wait_event_names.txt
-   $(MAKE) -C utils/activity wait_event_types.h pgstat_wait_event.c
+   $(MAKE) -C utils/activity wait_event_types.h pgstat_wait_event.c 
wait_event_funcs_data.c
 
 # run this unconditionally to avoid needing to know its dependencies here:
 submake-catalog-headers:
@@ -311,6 +311,7 @@ maintainer-clean: distclean
  storage/lmgr/lwlocknames.c \
  storage/lmgr/lwlocknames.h \
  utils/activity/pgstat_wait_event.c \
+ utils/activity/wait_event_funcs_data.c \
  utils/activity/wait_event_types.h \
  utils/adt/jsonpath_gram.c \
  utils/adt/jsonpath_gram.h \
diff --git a/src/backend/catalog/system_views.sql 
b/src/backend/catalog/system_views.sql
index af65af6bdd..f86a4dd770 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1342,3 +1342,

Re: WIP: new system catalog pg_wait_event

2023-08-15 Thread Drouvot, Bertrand

Hi,

On 8/14/23 6:37 AM, Michael Paquier wrote:

On Thu, Aug 10, 2023 at 08:09:34PM +0200, Drouvot, Bertrand wrote:

Agree that's worth it given the fact that iterating one more time is not that
costly here.


I have reviewed v4, and finished by putting my hands on it to see what
I could do.


Thanks!



There are two things
that we could do:
- Hide that behind a macro defined in wait_event_funcs.c.
- Feed the data generated here into a static structure, like:
+static const struct
+{
+   const char *type;
+   const char *name;
+   const char *description;
+}

After experimenting with both, I've found the latter a tad cleaner, so
the attached version does that.


Yeah, looking at what you've done in v5, I also agree that's better
than what has been done in v4 (I also think it will be easier to maintain).


I am not sure that "pg_wait_event" is a good idea for the name if the
new view.  How about "pg_wait_events" instead, in plural form?  There
is more than one wait event listed.



I'd prefer the singular form. There are a lot of places where it's already used
(pg_database, pg_user, pg_namespace... to name a few) and it looks like using
the plural form is the exception.


One log entry in Solution.pm has missed the addition of a reference to
wait_event_funcs_data.c.



Oh right, thanks for fixing it in v5.


And..  src/backend/Makefile missed two updates for maintainer-clean & co,
no?
 

Oh right, thanks for fixing it in v5.


One thing that the patch is still missing is the handling of custom
wait events for extensions. 


Yeah, now that af720b4c50 is done, I'll add the custom wait events handling
in v6.


So this still requires more code.  I was
thinking about listing these events as:
- Type: "Extension"
- Name: What a module has registered.
- Description: "Custom wait event \"%Name%\" defined by extension".


That sounds good to me, I'll do that.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-10 Thread Drouvot, Bertrand

Hi,

On 8/9/23 9:56 AM, Michael Paquier wrote:

On Tue, Aug 08, 2023 at 10:16:37AM +0200, Drouvot, Bertrand wrote:

Please find attached v3 adding the wait event types.


+-- There will surely be at least 9 wait event types, 240 wait events and at
+-- least 27 related to WAL
+select count(distinct(wait_event_type)) > 8 as ok_type,
+   count(*) > 239 as ok,
+   count(*) FILTER (WHERE description like '%WAL%') > 26 AS ok_wal_desc
+from pg_wait_event;
The point is to check the execution of this function, so this could be
simpler, like that or a GROUP BY clause with the event type:
SELECT count(*) > 0 FROM pg_wait_event;
SELECT wait_event_type, count(*) > 0 AS has_data FROM pg_wait_event
   GROUP BY wait_event_type ORDER BY wait_event_type;



Thanks for looking at it!
Right, so v4 attached is just testing "SELECT count(*) > 0 FROM pg_wait_event;",
which does look like enough to test.


+   printf $ic "\tmemset(values, 0, sizeof(values));\n";
+   printf $ic "\tmemset(nulls, 0, sizeof(nulls));\n\n";
+   printf $ic "\tvalues[0] = CStringGetTextDatum(\"%s\");\n", $last;
+   printf $ic "\tvalues[1] = CStringGetTextDatum(\"%s\");\n", 
$wev->[1];
+   printf $ic "\tvalues[2] = CStringGetTextDatum(\"%s\");\n\n", 
$new_desc;

That's overcomplicated for some code generated.  Wouldn't it be
simpler to generate a list of elements, with the code inserting the
tuples materialized looping over it?



Yeah, agree, thanks!
In v4, the perl script now appends the wait events to a List this way:

"
printf $ic "\telement = (wait_event_element *) 
palloc(sizeof(wait_event_element));\n";

printf $ic "\telement->wait_event_type = \"%s\";\n", $last;
printf $ic "\telement->wait_event_name = \"%s\";\n", $wev->[1];
printf $ic "\telement->wait_event_description = \"%s\";\n\n", 
$new_desc;

printf $ic "\twait_event = lappend(wait_event, element);\n\n";
"

And the C function pg_get_wait_events() now iterates over this List.
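
A hedged sketch of that iteration (struct and field names as shown above; the
surrounding materialized-SRF setup, i.e. rsinfo, values[] and nulls[], is
assumed):

    ListCell   *lc;

    foreach(lc, wait_event)
    {
        wait_event_element *element = (wait_event_element *) lfirst(lc);

        values[0] = CStringGetTextDatum(element->wait_event_type);
        values[1] = CStringGetTextDatum(element->wait_event_name);
        values[2] = CStringGetTextDatum(element->wait_event_description);

        tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc, values, nulls);
    }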


+   my $new_desc = substr $wev->[2], 1, -2;
+   $new_desc =~ s/'/\\'/g;
+   $new_desc =~ s/<.*>(.*?)<.*>/$1/g;
+   $new_desc =~ s//$1/g;
+   $new_desc =~ s/; see.*$//;
Better to document what this does,


good idea...

I had to turn them "on" one by one to recall why they are there...;-)
Done in v4.


the contents produced look good.


yeap



+   rename($ictmp, "$output_path/pg_wait_event_insert.c")
+ || die "rename: $ictmp to $output_path/pg_wait_event_insert.c: $!";

  # seems nicer to not add that as an include path for the whole backend.
  waitevent_sources = files(
'wait_event.c',
+  'pg_wait_event.c',
  )

This could use a name referring to SQL functions, say
wait_event_funcs.c, with a wait_event_data.c or a
wait_event_funcs_data.c?


That sounds better indeed, thanks! v4 is using wait_event_funcs.c and
wait_event_funcs_data.c.



+   # Don't generate .c (except pg_wait_event_insert.c) and .h files for
+   # Extension, LWLock and Lock, these are handled independently.
+   my $is_exception = $waitclass eq 'WaitEventExtension' ||
+  $waitclass eq 'WaitEventLWLock' ||
+  $waitclass eq 'WaitEventLock';
Perhaps it would be cleaner to use a separate loop?


Agree that's worth it given the fact that iterating one more time is not that
costly here.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From aa3e81810daaba7b1b2e05219ef662f2fd0607f8 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v4] pg_wait_event

Adding a new system view, namely pg_wait_event, that describes the wait events.
---
 doc/src/sgml/system-views.sgml| 64 +
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  6 +-
 .../activity/generate-wait_event_types.pl | 46 -
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/wait_event_funcs.c | 69 +++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/test/regress/expected/rules.out   |  4 ++
 src/test/regress/expected/sysviews.out|  7 ++
 src/test/regress/sql/sysviews.sql |  3 +
 src/tools/msvc/clean.bat  |  1 +
 13 files changed, 209 insertions(+), 6 deletions(-)
  24.4% doc/src/sgml/
  58.5% src/backend/utils/activity/
   5.9

Re: WIP: new system catalog pg_wait_event

2023-08-08 Thread Drouvot, Bertrand

Hi,

On 8/8/23 7:37 AM, Drouvot, Bertrand wrote:

Hi,

On 8/8/23 5:05 AM, Michael Paquier wrote:

On Tue, Aug 08, 2023 at 11:53:32AM +0900, Kyotaro Horiguchi wrote:

As I mentioned in another thread, I'm uncertain about our stance on
the class id of the wait event. If a class acts as a namespace, we
should include it in the view. Otherwise, if the class id is just an
attribute of the wait event, we should make the event name unique.


Including the class name in the view makes the most sense to me, FWIW,
as it could be also possible that one reuses an event name in the
existing in-core list, but for extensions.  That's of course not
something I would recommend.


Thanks Kyotaro-san and Michael for the feedback. I do agree and will
add the class name.



Please find attached v3 adding the wait event types.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From b54aa2dd33835a83bfe08a99338874200512fdf0 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v3] pg_wait_event

Adding a new system view, namely pg_wait_event, that describes the wait events.
---
 doc/src/sgml/system-views.sgml| 64 +++
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  6 +-
 .../activity/generate-wait_event_types.pl | 77 +--
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/pg_wait_event.c| 41 ++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/test/regress/expected/rules.out   |  4 +
 src/test/regress/expected/sysviews.out| 11 +++
 src/test/regress/sql/sysviews.sql |  7 ++
 src/tools/msvc/clean.bat  |  1 +
 13 files changed, 200 insertions(+), 26 deletions(-)
  21.6% doc/src/sgml/
  56.7% src/backend/utils/activity/
   5.2% src/include/catalog/
   7.4% src/test/regress/expected/
   4.0% src/test/regress/sql/
   4.9% src/

diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 57b228076e..ed26c8326f 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -221,6 +221,11 @@
   views
  
 
+ 
+  pg_wait_event
+  wait events
+ 
+
 

   
@@ -4825,4 +4830,63 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
  
 
+
+ 
+  pg_wait_event
+
+  
+   pg_wait_event
+  
+
+  
+   The view pg_wait_event provides description about 
the
+   wait events.
+  
+
+  
+   pg_wait_event Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   wait_event_type text
+  
+  
+   Wait event type
+  
+ 
+
+ 
+  
+   wait_event_name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
+   description texte
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/catalog/system_views.sql 
b/src/backend/catalog/system_views.sql
index af65af6bdd..f86a4dd770 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1342,3 +1342,6 @@ CREATE VIEW pg_stat_subscription_stats AS
 ss.stats_reset
 FROM pg_subscription as s,
  pg_stat_get_subscription_stats(s.oid) as ss;
+
+CREATE VIEW pg_wait_event AS
+SELECT * FROM pg_get_wait_events() AS we;
diff --git a/src/backend/utils/activity/.gitignore 
b/src/backend/utils/activity/.gitignore
index d77079285b..ad089a0b63 100644
--- a/src/backend/utils/activity/.gitignore
+++ b/src/backend/utils/activity/.gitignore
@@ -1,2 +1,3 @@
 /pgstat_wait_event.c
 /wait_event_types.h
+/pg_wait_event_insert.c
diff --git a/src/backend/utils/activity/Makefile 
b/src/backend/utils/activity/Makefile
index f1117745d4..8595e6ea77 100644
--- a/src/backend/utils/activity/Makefile
+++ b/src/backend/utils/activity/Makefile
@@ -32,10 +32,14 @@ OBJS = \
pgstat_subscription.o \
pgstat_wal.o \
pgstat_xact.o \
+   pg_wait_event.o \
wait_event.o
 
 include $(top_srcdir)/src/backend/common.mk
 
+pg_wait_event.o: pg_wait_event_insert.c
+pg_wait_event_insert.c: wait_event_types.h
+
 wait_event.o: pgstat_wait_event.c
 pgstat_wait_event.c: wait_event_types.h
touch $@
@@ -44,4 +48,4 @@ wait_event_types.h: 
$(top_srcdir)/src/backend/utils/activity/wait_event_names.tx
$(PERL) $(srcdir)/generate-wait_event_types.pl --code $<
 
 maintainer-clean: clean
-   rm -f wait_event_types.h pgstat_wait_event.c
+   rm -f wait_event_types.h pgstat_wait_event.c pg_wait_event_insert.c
diff --git a/src/backend/utils/activity/generate-wait_event_types.pl 
b/src/backend/utils/activity/generate-wait_event_types.pl
index 56335e873

Re: Synchronizing slots from primary to standby

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/8/23 7:01 AM, shveta malik wrote:

On Mon, Aug 7, 2023 at 3:17 PM Drouvot, Bertrand
 wrote:


Hi,

On 8/4/23 1:32 PM, shveta malik wrote:

On Fri, Aug 4, 2023 at 2:44 PM Drouvot, Bertrand
 wrote:

On 7/28/23 4:39 PM, Bharath Rupireddy wrote:




Agreed. That is why in the v10/v11 patches, we have different infra for
the sync-slot worker, i.e., it is not relying on the "logical replication
background worker" anymore.


yeah saw that, looks like the right way to go to me.


Maybe we should start some tests/benchmark with only one sync worker to get 
numbers
and start from there?


Yes, we can do that performance testing to figure out the difference
between the two modes. I will try to get some statistics on this.



Great, thanks!

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/8/23 5:05 AM, Michael Paquier wrote:

On Tue, Aug 08, 2023 at 11:53:32AM +0900, Kyotaro Horiguchi wrote:

As I mentioned in another thread, I'm uncertain about our stance on
the class id of the wait event. If a class acts as a namespace, we
should include it in the view. Otherwise, if the class id is just an
attribute of the wait event, we should make the event name unique.


Including the class name in the view makes the most sense to me, FWIW,
as it could be also possible that one reuses an event name in the
existing in-core list, but for extensions.  That's of course not
something I would recommend.


Thanks Kyotaro-san and Michael for the feedback. I do agree and will
add the class name.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Fix badly generated entries in wait_event_names.txt

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/8/23 1:25 AM, Michael Paquier wrote:

On Mon, Aug 07, 2023 at 10:34:43AM +0200, Drouvot, Bertrand wrote:

fa88928470 introduced src/backend/utils/activity/wait_event_names.txt, which has
been auto-generated. The auto-generation had parsing errors leading to bad
entries in wait_event_names.txt (when the same "WAIT_EVENT" name could be found
as a substring of another one, the descriptions were merged into one of them).


Thanks for the patch, applied. 


Thanks!


I have double-checked the contents of
the tables and the five entries you have pointed out are the only ones
with duplicated descriptions.


Yeah, did the same on my side before submitting the patch, and all looks
good now.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: WIP: new system catalog pg_wait_event

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/7/23 10:23 AM, Drouvot, Bertrand wrote:

Hi,

On 8/4/23 5:08 PM, Tom Lane wrote:

"Drouvot, Bertrand"  writes:

Now that fa88928470 automatically generates code and documentation
related to wait events, why not expose the wait events' descriptions
through a system catalog relation?


I think you'd be better off making this a view over a set-returning
function.  The nearby work to allow run-time extensibility of the
set of wait events is not going to be happy with a static catalog.


Oh right, good point, thanks! I'll come back with a new patch version to make
use of an SRF instead.


Please find attached v2 making use of SRF.

v2:

- adds a new pg_wait_event.c that acts as the "template" for the SRF
- generates pg_wait_event_insert.c (through generate-wait_event_types.pl) that
is included into pg_wait_event.c

That way I think it's flexible enough to add more code if needed in the SRF.

The patch also:

- updates the doc
- works with autoconf and meson
- adds a simple test

I'm adding a new CF entry for it.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 63f1316b2c160b72bdc1bcff6272e2888fce0db4 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Sat, 5 Aug 2023 12:39:42 +
Subject: [PATCH v2] pg_wait_event

Adding a new system view, namely pg_wait_event, that describes the wait events.
---
 doc/src/sgml/system-views.sgml| 55 ++
 src/backend/catalog/system_views.sql  |  3 +
 src/backend/utils/activity/.gitignore |  1 +
 src/backend/utils/activity/Makefile   |  6 +-
 .../activity/generate-wait_event_types.pl | 76 +--
 src/backend/utils/activity/meson.build|  1 +
 src/backend/utils/activity/pg_wait_event.c| 41 ++
 src/include/catalog/pg_proc.dat   |  6 ++
 src/include/utils/meson.build |  4 +-
 src/test/regress/expected/rules.out   |  3 +
 src/test/regress/expected/sysviews.out|  8 ++
 src/test/regress/sql/sysviews.sql |  4 +
 src/tools/msvc/clean.bat  |  1 +
 13 files changed, 183 insertions(+), 26 deletions(-)
  19.8% doc/src/sgml/
  60.6% src/backend/utils/activity/
   5.3% src/include/catalog/
   5.7% src/test/regress/expected/
   3.0% src/test/regress/sql/
   5.3% src/

diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 57b228076e..e9cbeff682 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -221,6 +221,11 @@
   views
  
 
+ 
+  pg_wait_event
+  wait events
+ 
+
 

   
@@ -4825,4 +4830,54 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
  
 
+
+ 
+  pg_wait_event
+
+  
+   pg_wait_event
+  
+
+  
+   The view pg_wait_event provides description about 
the
+   wait events.
+  
+
+  
+   pg_wait_event Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   wait_event_name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
   description text
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/catalog/system_views.sql 
b/src/backend/catalog/system_views.sql
index af65af6bdd..f86a4dd770 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1342,3 +1342,6 @@ CREATE VIEW pg_stat_subscription_stats AS
 ss.stats_reset
 FROM pg_subscription as s,
  pg_stat_get_subscription_stats(s.oid) as ss;
+
+CREATE VIEW pg_wait_event AS
+SELECT * FROM pg_get_wait_events() AS we;
diff --git a/src/backend/utils/activity/.gitignore 
b/src/backend/utils/activity/.gitignore
index d77079285b..ad089a0b63 100644
--- a/src/backend/utils/activity/.gitignore
+++ b/src/backend/utils/activity/.gitignore
@@ -1,2 +1,3 @@
 /pgstat_wait_event.c
 /wait_event_types.h
+/pg_wait_event_insert.c
diff --git a/src/backend/utils/activity/Makefile 
b/src/backend/utils/activity/Makefile
index f1117745d4..8595e6ea77 100644
--- a/src/backend/utils/activity/Makefile
+++ b/src/backend/utils/activity/Makefile
@@ -32,10 +32,14 @@ OBJS = \
pgstat_subscription.o \
pgstat_wal.o \
pgstat_xact.o \
+   pg_wait_event.o \
wait_event.o
 
 include $(top_srcdir)/src/backend/common.mk
 
+pg_wait_event.o: pg_wait_event_insert.c
+pg_wait_event_insert.c: wait_event_types.h
+
 wait_event.o: pgstat_wait_event.c
 pgstat_wait_event.c: wait_event_types.h
touch $@
@@ -44,4 +48,4 @@ wait_event_types.h: 
$(top_srcdir)/src/backend/utils/activity/wait_event_names.tx
$(PERL) $(srcdir)/generate-wait_event_types.pl --code $<
 
 maintainer-clean: clean
-   rm -f wait_event_types.h pgstat_wait_event.c
+   rm -f wait_event_types.h pgstat_wait_event.c pg_wait_event_insert.c
diff 

Re: Synchronizing slots from primary to standby

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/4/23 1:32 PM, shveta malik wrote:

On Fri, Aug 4, 2023 at 2:44 PM Drouvot, Bertrand
 wrote:

On 7/28/23 4:39 PM, Bharath Rupireddy wrote:



Sorry to be late, but I gave it a second thought and I wonder if we really need
this design (i.e., start a logical replication background worker on the standby
to sync the slots).

Wouldn't that be simpler to "just" update the sync slots "metadata"
as the https://github.com/EnterpriseDB/pg_failover_slots module (mentioned by 
Peter
up-thread) is doing?
(making use of LogicalConfirmReceivedLocation(), LogicalIncreaseXminForSlot()
and LogicalIncreaseRestartDecodingForSlot(), If I read synchronize_one_slot() 
correctly).



Agreed. It would be simpler to just update the metadata. I think you
have not got a chance to review the latest posted patch ('v10-0003')
yet; it does the same.


Thanks for the feedback! Yeah, I did not look at v10 in detail and was
looking at the email thread only.

Indeed, I now see that 0003 does update the metadata in local_slot_advance(),
that's great!



But I do not quite get how we can do it without starting a
background worker?


Yeah, agree that we still need background workers.
What I meant was to avoid using a "logical replication background worker"
(aka through logicalrep_worker_launch()) to sync the slots.


The question here is how many background workers we
need to have. Will one be sufficient or do we need one per db (as done
earlier by the original patches in this thread) or are we good with
dividing work among some limited number of workers?

I feel syncing all slots in one worker may increase the lag between
subsequent syncs for a particular slot, and if the number of slots is
huge, the chances of losing slot data in case of failure are higher.
Starting one worker per DB also might not be that efficient as it will
increase load on the system (both in terms of background workers and
network traffic), especially when the number of DBs is large.
Thus, starting at most 'n' workers, where 'n' is decided by a GUC, and
dividing the work/DBs among them looks like a better option to me.
Please see the discussion in and around the email at [1]

[1]: 
https://www.postgresql.org/message-id/CAJpy0uCT%2BnpL4eUvCWiV_MBEri9ixcUgJVDdsBCJSqLd0oD1fQ%40mail.gmail.com


Thanks for the link! If I read the email thread correctly, this discussion
was before V10 (which is the first version making use of
LogicalConfirmReceivedLocation(), LogicalIncreaseXminForSlot(),
LogicalIncreaseRestartDecodingForSlot()), that is, before the metadata-only
sync was implemented.

While I agree that the approach of splitting the sync load among workers with the
new max_slot_sync_workers GUC seems reasonable without the metadata-only sync
feature (pre V10), I'm not sure that, with the metadata-only sync in place, the
extra complexity of managing multiple sync workers is needed.

Maybe we should run some tests/benchmarks with only one sync worker to get numbers
and start from there?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Fix badly generated entries in wait_event_names.txt

2023-08-07 Thread Drouvot, Bertrand

Hi hackers,

fa88928470 introduced src/backend/utils/activity/wait_event_names.txt that has
been auto-generated. The auto-generation had parsing errors leading to bad
entries in wait_event_names.txt (when the same "WAIT_EVENT" name can be found as
a substring of another one, then descriptions were merged in one of them).

Fixing those bad entries in the attached.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From fe669c646ea93b66b36d0aa97bb5665f0dd734b7 Mon Sep 17 00:00:00 2001
From: bdrouvotAWS 
Date: Mon, 7 Aug 2023 07:12:11 +
Subject: [PATCH v1] Fix badly generated entries in wait_event_names.txt

fa88928470 introduced src/backend/utils/activity/wait_event_names.txt that has
been auto-generated. The auto-generation had parsing errors leading to bad
entries in wait_event_names.txt (when the same "WAIT_EVENT" name can be found as
a substring of another one, then descriptions were merged in one of them.)

Fixing those bad entries.
---
 src/backend/utils/activity/wait_event_names.txt | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
 100.0% src/backend/utils/activity/

diff --git a/src/backend/utils/activity/wait_event_names.txt 
b/src/backend/utils/activity/wait_event_names.txt
index 2ea4789b00..fcd9d2c63c 100644
--- a/src/backend/utils/activity/wait_event_names.txt
+++ b/src/backend/utils/activity/wait_event_names.txt
@@ -137,7 +137,7 @@ WAIT_EVENT_REPLICATION_ORIGIN_DROP  ReplicationOriginDrop   
"Waiting for a replicat
 WAIT_EVENT_REPLICATION_SLOT_DROP   ReplicationSlotDrop "Waiting for a 
replication slot to become inactive so it can be dropped."
 WAIT_EVENT_RESTORE_COMMAND RestoreCommand  "Waiting for  to complete."
 WAIT_EVENT_SAFE_SNAPSHOT   SafeSnapshot"Waiting to obtain a valid 
snapshot for a READ ONLY DEFERRABLE transaction."
-WAIT_EVENT_SYNC_REPSyncRep "Waiting for confirmation from a remote server 
during synchronous replication. Waiting to read or update information about the 
state of synchronous replication."
+WAIT_EVENT_SYNC_REPSyncRep "Waiting for confirmation from a remote server 
during synchronous replication."
 WAIT_EVENT_WAL_RECEIVER_EXIT   WalReceiverExit "Waiting for the WAL receiver 
to exit."
 WAIT_EVENT_WAL_RECEIVER_WAIT_START WalReceiverWaitStart"Waiting for 
startup process to send initial data for streaming replication."
 WAIT_EVENT_XACT_GROUP_UPDATE   XactGroupUpdate "Waiting for the group leader 
to update transaction status at end of a parallel operation."
@@ -177,9 +177,9 @@ WAIT_EVENT_BUFFILE_READ BufFileRead "Waiting for a 
read from a buffered file."
 WAIT_EVENT_BUFFILE_WRITE   BufFileWrite"Waiting for a write to a 
buffered file."
 WAIT_EVENT_BUFFILE_TRUNCATEBufFileTruncate "Waiting for a buffered file to 
be truncated."
 WAIT_EVENT_CONTROL_FILE_READ   ControlFileRead "Waiting for a read from the 
pg_control file."
-WAIT_EVENT_CONTROL_FILE_SYNC   ControlFileSync "Waiting for the 
pg_control file to reach durable storage. Waiting for an 
update to the pg_control file to reach durable storage."
+WAIT_EVENT_CONTROL_FILE_SYNC   ControlFileSync "Waiting for the 
pg_control file to reach durable storage."
 WAIT_EVENT_CONTROL_FILE_SYNC_UPDATEControlFileSyncUpdate   "Waiting for an 
update to the pg_control file to reach durable storage."
-WAIT_EVENT_CONTROL_FILE_WRITE  ControlFileWrite"Waiting for a write to 
the pg_control file. Waiting for a write to update the 
pg_control file."
+WAIT_EVENT_CONTROL_FILE_WRITE  ControlFileWrite"Waiting for a write to 
the pg_control file."
 WAIT_EVENT_CONTROL_FILE_WRITE_UPDATE   ControlFileWriteUpdate  "Waiting for a 
write to update the pg_control file."
 WAIT_EVENT_COPY_FILE_READ  CopyFileRead"Waiting for a read during a 
file copy operation."
 WAIT_EVENT_COPY_FILE_WRITE CopyFileWrite   "Waiting for a write during a 
file copy operation."
@@ -241,9 +241,9 @@ WAIT_EVENT_WAL_COPY_WRITE   WALCopyWrite"Waiting for a 
write when creating a new
 WAIT_EVENT_WAL_INIT_SYNC   WALInitSync "Waiting for a newly 
initialized WAL file to reach durable storage."
 WAIT_EVENT_WAL_INIT_WRITE  WALInitWrite"Waiting for a write while 
initializing a new WAL file."
 WAIT_EVENT_WAL_READWALRead "Waiting for a read from a WAL file."
-WAIT_EVENT_WAL_SYNCWALSync "Waiting for a WAL file to reach durable 
storage. Waiting for data to reach durable storage while assigning a new WAL 
sync method."
+WAIT_EVENT_WAL_SYNCWALSync "Waiting for a WAL file to reach durable 
storage."
 WAIT_EVENT_WAL_SYNC_METHOD_ASSIGN  WALSyncMethodAssign "Waiting for 
data to reach durable storage while assigning a new WAL sync method."
-WAIT_EVENT_WAL_WRITE   WALWrite"Waiting for a write to a WAL file. 
Waiting for WAL buffers to be written to disk."
+WAIT_EVENT_WAL_WRITE   WALWrite"Waiting for a write to a WAL file."
 
 
 #
-- 
2.34.1



Re: WIP: new system catalog pg_wait_event

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/4/23 5:08 PM, Tom Lane wrote:

"Drouvot, Bertrand"  writes:

Now that fa88928470 generates automatically code and documentation
related to wait events, why not exposing the wait events description
through a system catalog relation?


I think you'd be better off making this a view over a set-returning
function.  The nearby work to allow run-time extensibility of the
set of wait events is not going to be happy with a static catalog.


Oh right, good point, thanks! I'll come back with a new patch version that makes
use of an SRF instead.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: "duplicated" wait events

2023-08-07 Thread Drouvot, Bertrand

Hi,

On 8/7/23 7:46 AM, Kyotaro Horiguchi wrote:

At Fri, 4 Aug 2023 17:07:51 +0200, "Drouvot, Bertrand" 
 wrote in

SyncRep currently appears in "IPC" and "LWLock" (see [2])
WALWrite currently appears in "IO" and "LWLock" (see [2])

I think that can lead to confusion and it would be better to avoid
duplicate wait event
name across Wait Class (and so fix those 2 ones above), what do you
think?


I think it would be handy to be able to summarize wait events by event
names, instead of classes. In other words, grouping wait events
across all classes according to their purpose. From that perspective,
I'm rather a fan of consolidating event names (that is, the opposite of
your proposal).



I could agree if they had the same descriptions and behaviors.

For example with WALWrite:

WALWrite in "IO" is described as "Waiting for a write to a WAL file" and
is described in "LWLock" as "Waiting for WAL buffers to be written to disk".
Having two distinct descriptions seems to suggest the wait events are not the 
same.

Also I think it makes sense to distinguish both as the LWLock one could
be reported as waiting in LWLockReportWaitStart() while trying to grab the
lock in LW_SHARED mode (as done in GetLastSegSwitchData()).
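
To illustrate the two distinct code paths (a rough sketch only, simplified from
memory, not the exact backend code):

#include "postgres.h"
#include "storage/lwlock.h"
#include "utils/wait_event.h"

static void
illustrate_walwrite_waits(void)
{
	/*
	 * 1) The IO wait event: explicitly reported by the backend around the
	 *    actual write of WAL buffers (as XLogWrite() does).
	 */
	pgstat_report_wait_start(WAIT_EVENT_WAL_WRITE);
	/* ... pg_pwrite() of the WAL buffers would happen here ... */
	pgstat_report_wait_end();

	/*
	 * 2) The LWLock wait event: reported internally by LWLockAcquire()
	 *    (through LWLockReportWaitStart()) only if the lock is contended,
	 *    e.g. when GetLastSegSwitchData() takes WALWriteLock in LW_SHARED.
	 */
	LWLockAcquire(WALWriteLock, LW_SHARED);
	/* ... read the last segment switch data ... */
	LWLockRelease(WALWriteLock);
}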

I think there are 2 things to address:

1) fix the current duplicates
2) put safeguard in place

As for 1), it could easily be done by updating wait_event_names.txt (renaming the
duplicates) in the master branch. As the duplication was introduced in 14a9101091
(so starting as of PG 13), we would need to update monitoring.sgml for <= 16
should we want to back-patch.

2) could be handled in generate-wait_event_types.pl on master.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




"duplicated" wait events

2023-08-04 Thread Drouvot, Bertrand

Hi hackers,

while working on the new system catalog pg_wait_event (see [1]) I noticed that 
some wait
events are currently "duplicated":

postgres=# select wait_event_name,count(*) from pg_wait_event group by 
wait_event_name having count(*) > 1;
 wait_event_name | count
-+---
 SyncRep | 2
 WALWrite| 2
(2 rows)

Indeed:

SyncRep currently appears in "IPC" and "LWLock" (see [2])
WALWrite currently appears in "IO" and "LWLock" (see [2])

I think that can lead to confusion and it would be better to avoid duplicate 
wait event
name across Wait Class (and so fix those 2 ones above), what do you think?

[1]: 
https://www.postgresql.org/message-id/0e2ae164-dc89-03c3-cf7f-de86378053ac%40gmail.com
[2]: https://www.postgresql.org/docs/current/monitoring-stats.html

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




WIP: new system catalog pg_wait_event

2023-08-04 Thread Drouvot, Bertrand

Hi hackers,

Now that fa88928470 generates automatically code and documentation
related to wait events, why not exposing the wait events description
through a system catalog relation? (the idea has been proposed on twitter
by Yves Colin [1]).

I think that could be useful to:

- join this new relation with pg_stat_activity and have a quick
understanding of what the sessions are waiting for (if any).
- quickly compare the wait events across versions (what are the new
ones if any,..)

Please find attached a POC patch creating this new system catalog
pg_wait_event.

The patch:

- updates the documentation
- adds a new option to generate-wait_event_types.pl to generate the 
pg_wait_event.dat
- creates the pg_wait_event.h
- works with autoconf

It currently does not:

- work with meson (still has to be done)
- add tests (not sure they're needed)
- create an index on the new system catalog (not sure it's needed as the data fits
in 4 pages (8kB default size)).

Outcome example:

postgres=# select a.pid, a.application_name, a.wait_event,d.description from 
pg_stat_activity a, pg_wait_event d where a.wait_event = d.wait_event_name and 
state='active';
   pid   | application_name | wait_event  |
description
-+--+-+---
 2937546 | pgbench  | WALInitSync | Waiting for a newly initialized WAL 
file to reach durable storage
(1 row)

There is still some work to be done to generate the pg_wait_event.dat file,
especially when the same wait event name can be found in multiple places (like
for example "WALWrite" in IO and LWLock),
leading to:

postgres=# select * from pg_wait_event where wait_event_name = 'WALWrite';
 wait_event_name |   description
-+--
 WALWrite| Waiting for a write to a WAL file. Waiting for WAL buffers 
to be written to disk
 WALWrite| Waiting for WAL buffers to be written to disk
(2 rows)

which is obviously not right (we'll probably have to bring the wait class name
into the game).

I'm sharing it now (even if it's still WIP) so that you can start sharing your 
thoughts
about it.

[1]: https://twitter.com/Ycolin/status/1676598065048743948

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From c56ce13af7193bf0d679ec0a7533ab686464f34e Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Tue, 1 Aug 2023 03:55:51 +
Subject: [PATCH v1] pg_wait_event

Adding a new shared catalog relation, namely pg_wait_event, that describes the
wait events.

The content is auto-generated with generate-wait_event_types.pl.
---
 doc/src/sgml/catalogs.sgml| 62 ++
 src/backend/catalog/.gitignore|  1 +
 src/backend/catalog/Makefile  | 13 ++--
 src/backend/catalog/catalog.c |  4 +-
 .../activity/generate-wait_event_types.pl | 65 ++-
 src/include/catalog/catversion.h  |  2 +-
 src/include/catalog/pg_wait_event.h   | 45 +
 7 files changed, 184 insertions(+), 8 deletions(-)
  26.7% doc/src/sgml/
  16.5% src/backend/catalog/
  33.6% src/backend/utils/activity/
  23.0% src/include/catalog/

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 307ad88b50..f181a5cedb 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -369,6 +369,11 @@
   pg_user_mapping
   mappings of users to foreign servers
  
+
+ 
+  pg_wait_event
+  wait events description
+ 
 

   
@@ -9668,4 +9673,61 @@ SCRAM-SHA-256$:&l
   
  
 
+
+ 
+  pg_wait_event
+
+  
+   pg_wait_event
+  
+
+  
+   The catalog pg_wait_event stores description about
+   the wait events.
+  
+
+  
+   Unlike most system catalogs, pg_wait_event
+   is shared across all databases of a cluster: there is only one
+   copy of pg_wait_event per cluster, not
+   one per database.
+  
+
+  
+   pg_wait_event Columns
+   
+
+ 
+  
+   Column Type
+  
+  
+   Description
+  
+ 
+
+
+
+ 
+  
+   wait_event_name text
+  
+  
+   Wait event name
+  
+ 
+
+ 
+  
+   description text
+  
+  
+   Wait event description
+  
+ 
+
+   
+  
+ 
+
 
diff --git a/src/backend/catalog/.gitignore b/src/backend/catalog/.gitignore
index 237ff54165..7f2f3045dc 100644
--- a/src/backend/catalog/.gitignore
+++ b/src/backend/catalog/.gitignore
@@ -4,3 +4,4 @@
 /system_constraints.sql
 /pg_*_d.h
 /bki-stamp
+/pg_wait_event.dat
diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile
index a60107bf94..68a97fbdcd 100644
--- a/src/backend/catalog/Makefile
+++ b/src/backend/catalog/Makefile
@@ -72,7 +72,8 @@ CATALOG_HEADER

Re: Split index and table statistics into different types of stats

2023-08-04 Thread Drouvot, Bertrand

Hi,

On 7/10/23 11:14 AM, Daniel Gustafsson wrote:

On 4 Apr 2023, at 12:04, Drouvot, Bertrand  wrote:
On 4/3/23 11:47 PM, Gregory Stark (as CFM) wrote:

On Thu, 16 Mar 2023 at 05:25, Drouvot, Bertrand
 wrote:


My plan was to get [1] done before resuming working on the "Split index and table 
statistics into different types of stats" one.
[1]: https://commitfest.postgresql.org/42/4106/

Generate pg_stat_get_xact*() functions with Macros ([1]) was committed March 27.
There's only a few days left in this CF. Would you like to leave this
here? Should it be marked Needs Review or Ready for Commit? Or should
we move it to the next CF now?


I moved it to the next commitfest and marked the target version as v17.


This patch no longer applies (with tests failing when it did), and the thread
has stalled.  I'm marking this returned with feedback for now, please feel free
to resubmit to a future CF with a new version of the patch.


Thanks for the update.
I'll resume working on it and re-submit once done.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-08-04 Thread Drouvot, Bertrand

Hi,

On 7/28/23 4:39 PM, Bharath Rupireddy wrote:

On Mon, Jul 24, 2023 at 9:00 AM Amit Kapila  wrote:



2. All candidate standbys will start one slot sync worker per logical
slot which might not be scalable.


Yeah, that doesn't sound like a good idea but IIRC, the proposed patch
is using one worker per database (for all slots corresponding to a
database).


Right. It's based on one worker for each database.


Is having one (or a few more - not
necessarily one for each logical slot) worker for all logical slots
enough?


I guess for a large number of slots there is a possibility of a large
gap in syncing the slots which probably means we need to retain
corresponding WAL for a much longer time on the primary. If we can
prove that the gap won't be large enough to matter then this would be
probably worth considering otherwise, I think we should find a way to
scale the number of workers to avoid the large gap.


I think the gap is largely determined by the time taken to advance
each slot and the amount of WAL that each logical slot moves ahead on
primary. 


Sorry to be late, but I gave it a second thought and I wonder if we really need
this design (i.e., start a logical replication background worker on the standby
to sync the slots).

Wouldn't that be simpler to "just" update the sync slots "metadata"
as the https://github.com/EnterpriseDB/pg_failover_slots module (mentioned by 
Peter
up-thread) is doing?
(making use of LogicalConfirmReceivedLocation(), LogicalIncreaseXminForSlot()
and LogicalIncreaseRestartDecodingForSlot(), If I read synchronize_one_slot() 
correctly).
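
To make it concrete, here is a very rough sketch of such a metadata-only sync
(illustration only; the helper name is mine, and it assumes the remote slot's
values have already been fetched from the primary and the local slot is
acquired):

#include "postgres.h"
#include "access/xlogdefs.h"
#include "replication/logical.h"

static void
sync_slot_metadata(XLogRecPtr remote_confirmed_lsn,
				   TransactionId remote_catalog_xmin,
				   XLogRecPtr remote_restart_lsn)
{
	/* Move the local (synced) slot's metadata, without decoding anything. */
	LogicalConfirmReceivedLocation(remote_confirmed_lsn);
	LogicalIncreaseXminForSlot(remote_confirmed_lsn, remote_catalog_xmin);
	LogicalIncreaseRestartDecodingForSlot(remote_confirmed_lsn,
										  remote_restart_lsn);
}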


I've measured the time it takes for
pg_logical_replication_slot_advance with different amounts of WAL on my
system. It took 2595ms/5091ms/31238ms to advance the slot by
3.7GB/7.3GB/13GB respectively. To put things into perspective here,
imagine there are 3 logical slots to sync for a single slot sync
worker and each of them needs to advance the slot by
3.7GB/7.3GB/13GB of WAL. The slot sync worker gets to slot 1 again
after 2595ms+5091ms+31238ms (~40sec), gets to slot 2 again after
advance time of slot 1 with amount of WAL that the slot has moved
ahead on primary during 40sec, gets to slot 3 again after advance time
of slot 1 and slot 2 with amount of WAL that the slot has moved ahead
on primary and so on. If WAL generation on the primary is pretty fast,
and if the logical slot moves pretty fast on the primary, the time it
takes for a single sync worker to sync a slot can increase.


That would be way "faster" and we would probably not need to
worry that much about the number of "sync" workers (if they "just" have
to sync the slots' "metadata") as proposed above.

Thoughts?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-08-04 Thread Drouvot, Bertrand

Hi,

On 7/24/23 4:32 AM, Bharath Rupireddy wrote:

On Fri, Jul 21, 2023 at 5:16 PM shveta malik  wrote:



Here are my thoughts about this feature:


Thanks for looking at it!




Important considerations:

1. Does this design guarantee the row versions required by subscribers
aren't removed on candidate standbys as raised here -
https://www.postgresql.org/message-id/20220218222319.yozkbhren7vkjbi5%40alap3.anarazel.de?

It seems safe with logical decoding on standbys feature. Also, a
test-case from upthread is already in patch sets (in v9 too)
https://www.postgresql.org/message-id/CAAaqYe9FdKODa1a9n%3Dqj%2Bw3NiB9gkwvhRHhcJNginuYYRCnLrg%40mail.gmail.com.
However, we need to verify the use cases extensively.


Agree. We also discussed up-thread that we'd have to drop any "sync" slots if 
they
are invalidated. And they should be re-created based on the 
synchronize_slot_names.


Please feel free to add the list if I'm missing anything.



We'd also have to ensure that "sync" slots can't be consumed on the standby 
(this has been
discussed up-thread).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-07-10 Thread Drouvot, Bertrand




On 7/11/23 12:52 AM, Michael Paquier wrote:

On Mon, Jul 10, 2023 at 09:11:36AM +0200, Alvaro Herrera wrote:

I don't like this bit, because it means the .txt file is now ungreppable
as source of the enum name.  Things become mysterious and people have to
track down the event name by reading the Perl generating script.
It's annoying.  I'd rather have the extra column, even if it means a
little duplicity.


Hmm.  I can see your point that we'd lose the direct relationship
between the enum and string when running a single `git grep` from the
tree, still attempting to do that does not actually lead to much
information gained?  Personally, I usually grep for code when looking
for consistent information across various paths in the tree.  Wait
events are very different: each enum is used in a single place in the
tree making their grep search the equivalent of looking at
wait_event_names.txt anyway?



Before commit fa88928470 one could find the relationship between the enum and 
the name
in wait_event.c (a simple git grep would provide it).

With commit fa88928470 in place, one could find the relationship between the 
enum and the name
in wait_event_names.txt (a simple git grep would provide it).

With the proposal we are discussing here, once the build is done and the
pgstat_wait_event.c file is generated, we have the same "grep" capability as pre
commit fa88928470 (except that "git grep" can't be used and one would need things like
find . -name "*.c" -exec grep -il "WAIT_EVENT_CHECKPOINTER_MAIN" {} \;)

I agree that it is less "obvious" than pre fa88928470, but it is still doable.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-07-09 Thread Drouvot, Bertrand




On 7/10/23 7:20 AM, Michael Paquier wrote:

On Mon, Jul 10, 2023 at 07:05:30AM +0200, Drouvot, Bertrand wrote:

Yeah, there is one in generate-wait_event_types.pl. I was wondering
whether to add one in wait_event_names.txt too (as this is the place where
new wait events would be added, if any).


Hmm.  Something like that could be done, for instance:

  #   src/backend/utils/activity/wait_event_types.h
-#  typedef enum definitions for wait events.
+#  typedef enum definitions for wait events, generated from the first
+#  field.


Yeah, it looks a good place for it.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-07-09 Thread Drouvot, Bertrand

Hi,

On 7/9/23 9:36 AM, Michael Paquier wrote:

On Sun, Jul 09, 2023 at 09:15:34AM +0200, Drouvot, Bertrand wrote:

I also noticed that you now provide the culprit line in case of parsing
failure (thanks for that).


Yes, that's mentioned in the commit message I quickly wrote in 0002.


  #
-#   "C symbol in enums" "format in the system views" "description in the docs"
+#   "format in the system views" "description in the docs"

Should we add a note here about the impact of the "format in the system views"
column on the auto-generated enum (aka how it is generated based on its format)?


There is one, 


Yeah, there is one in generate-wait_event_types.pl. I was wondering
whether to add one in wait_event_names.txt too (as this is the place where
new wait events would be added, if any).


but now that I look at it WAIT_EVENT repeated twice does
not look great, so this could use "FooBarName" or equivalent:


+1 for "FooBarName"

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-07-09 Thread Drouvot, Bertrand

Hi,

On 7/9/23 6:32 AM, Michael Paquier wrote:

On Fri, Jul 07, 2023 at 01:49:24PM +0900, Michael Paquier wrote:

Hmm.  If we go down this road I would make the choice of simplicity
and remove entirely a column, then, generating the snakecase from the
camelcase or vice-versa (say like a $string =~ s/([a-z]+)/$1_/g;),
even if it means having slightly incompatible strings showing to the
users. And I'd rather minimize the number of exceptions we need to
handle in this automation (aka no exception rules for some keywords
like "SSL" or "WAL", etc.).


After pondering more about that, the attached patch set does exactly
that.


Thanks!


 Patch 0001 includes an update of the wait event names so as
these are more consistent with the enum elements generated.  With this
change, users can apply lower() or upper() across monitoring queries
and still get the same results as before.  An exception was the
message queue events, which the enums used "MQ" but the event names
used "MessageQueue", but this concerns only four lines of code in the
backend.  The newly-generated enum elements match with the existing
ones, except for MQ.




Patch 0002 introduces a set of simplifications for the format of
wait_event_names.txt:
- Removal of the first column for the enums.
- Removal of the quotes for the event name.  We have a single keyword
for these, so that's kind of annoying to cope with that for new
entries.
- Build of the enum elements using the event names, by applying a
rebuild as simple as that:
+   $waiteventenumname =~ s/([a-z])([A-Z])/$1_$2/g;
+   $waiteventenumname = uc($waiteventenumname);

Thoughts?


That's great and it does simplify the wait_event_names.txt format (and the
impact on "MQ" does not seem like a big deal).

I also noticed that you now provide the culprit line in case of parsing
failure (thanks for that).

 #
-#   "C symbol in enums" "format in the system views" "description in the docs"
+#   "format in the system views" "description in the docs"

Should we add a note here about the impact of the "format in the system views"
column on the auto-generated enum (aka how it is generated based on its format)?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-07-04 Thread Drouvot, Bertrand

Hi,

On 7/3/23 9:11 AM, Michael Paquier wrote:

On Mon, Jul 03, 2023 at 03:57:42PM +0900, Michael Paquier wrote:


Thanks for looking at it and having fixed the issues that were present in
v10.


I think that we should add some options to the perl script to be more
selective with the files generated.  How about having two options
called --docs and --code to select one or the other, then limit what
gets generated in each path?  I guess that it would be cleaner if we
error in the case where both options are defined, and just use some
gotos to redirect to each foreach loop to limit extra indentations in
the script.  This would avoid the need to remove the C and H files
from the docs, additionally, which is what the Makefile in doc/ does.

I have fixed all the issues I've found in v11 attached, except for the
last one (I have added the OUTDIR trick for reference, but that's
incorrect and incomplete).  Could you look at this part?


Ah.  It took me a few extra minutes, but I think that we should set
"capture" to "false", no?  It looks like meson is getting confused,
expecting something in stdout but the new script generates a few
files, and does not output anything.  That's different from the other
doc-related perl scripts.
--


Yeah, with "capture" set to "false" then ninja alldocs does not error out
and wait_event_types.sgml gets generated.

I'll look at the extra options --code and --docs.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: ProcessStartupPacket(): database_name and user_name truncation

2023-07-03 Thread Drouvot, Bertrand

Hi,

On 7/3/23 10:34 PM, Nathan Bossart wrote:

On Sat, Jul 01, 2023 at 04:02:06PM +0200, Drouvot, Bertrand wrote:

Please find V2 attached, where it fails as soon as the database name or
user name is detected as overlength.


Thanks, Bertrand.  I chickened out and ended up committing v1 for now
(i.e., simply removing the truncation code).  I didn't like the idea of
trying to keep the new error messages consistent with code in faraway
files, and the startup packet length limit is already pretty aggressive, so
I'm a little less concerned about lugging around long names.  Plus, I think
v2 had some subtle interactions with db_user_namespace (maybe for the
better), but I didn't spend too much time looking at that since
db_user_namespace will likely be removed soon.


Thanks Nathan for the feedback and explanations, I think that fully makes sense.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: ProcessStartupPacket(): database_name and user_name truncation

2023-07-01 Thread Drouvot, Bertrand

Hi,

On 6/30/23 7:32 PM, Drouvot, Bertrand wrote:

Hi,

On 6/30/23 5:54 PM, Tom Lane wrote:

Nathan Bossart  writes:

After taking another look at this, I wonder if it'd be better to fail as
soon as we see the database or user name is too long instead of lugging
them around when authentication is destined to fail.


If we're agreed that we aren't going to truncate these identifiers,
that seems like a reasonable way to handle it.



Yeah agree, thanks Nathan for the idea.
I'll work on a new patch version proposal.



Please find V2 attached, where it fails as soon as the database name or
user name is detected as overlength.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 7747032129fb66891805a8a2b5e06cbce8df2d2a Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Wed, 21 Jun 2023 18:28:13 +
Subject: [PATCH v2] Reject incoming username and database name in case of
 overlength

---
 src/backend/postmaster/postmaster.c | 25 -
 1 file changed, 16 insertions(+), 9 deletions(-)
 100.0% src/backend/postmaster/

diff --git a/src/backend/postmaster/postmaster.c 
b/src/backend/postmaster/postmaster.c
index 4c49393fc5..03289f2093 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -2183,9 +2183,25 @@ retry1:
valptr = buf + valoffset;
 
if (strcmp(nameptr, "database") == 0)
+   {
+   /* Overlength database name has been provided. 
*/
+   if (strlen(valptr) >= NAMEDATALEN)
+   ereport(FATAL,
+   
(errcode(ERRCODE_UNDEFINED_DATABASE),
+errmsg("database 
\"%s\" does not exist", valptr)));
+
port->database_name = pstrdup(valptr);
+   }
else if (strcmp(nameptr, "user") == 0)
+   {
+   /* Overlength user name has been provided. */
+   if (strlen(valptr) >= NAMEDATALEN)
+   ereport(FATAL,
+   
(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
+errmsg("role \"%s\" 
does not exist", valptr)));
+
port->user_name = pstrdup(valptr);
+   }
else if (strcmp(nameptr, "options") == 0)
port->cmdline_options = pstrdup(valptr);
else if (strcmp(nameptr, "replication") == 0)
@@ -2290,15 +2306,6 @@ retry1:
}
}
 
-   /*
-* Truncate given database and user names to length of a Postgres name.
-* This avoids lookup failures when overlength names are given.
-*/
-   if (strlen(port->database_name) >= NAMEDATALEN)
-   port->database_name[NAMEDATALEN - 1] = '\0';
-   if (strlen(port->user_name) >= NAMEDATALEN)
-   port->user_name[NAMEDATALEN - 1] = '\0';
-
if (am_walsender)
MyBackendType = B_WAL_SENDER;
else
-- 
2.34.1



Re: ProcessStartupPacket(): database_name and user_name truncation

2023-06-30 Thread Drouvot, Bertrand

Hi,

On 6/30/23 5:54 PM, Tom Lane wrote:

Nathan Bossart  writes:

After taking another look at this, I wonder if it'd be better to fail as
soon as we see the database or user name is too long instead of lugging
them around when authentication is destined to fail.


If we're agreed that we aren't going to truncate these identifiers,
that seems like a reasonable way to handle it.



Yeah agree, thanks Nathan for the idea.
I'll work on a new patch version proposal.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-06-29 Thread Drouvot, Bertrand

Hi Kuroda-san,

On 6/29/23 12:22 PM, Hayato Kuroda (Fujitsu) wrote:

Dear Drouvot,

Hi, I'm also interested in the feature. Followings are my high-level comments.
I did not mention some detailed notations because this patch is not at the 
stage.
And very sorry that I could not follow all of this discussions.



Thanks for looking at it and your feedback!

All I've done so far is to provide, in April, a rebased version of the existing
patch.

I'll have a closer look at the code, at your feedback and at Amit's while
working on the new version, which will:

- take care of slot invalidation
- ensure that synchronized slots can't be consumed until the standby gets
promoted

as discussed up-thread.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-06-29 Thread Drouvot, Bertrand

Hi,

On 6/29/23 12:36 PM, Amit Kapila wrote:

On Wed, Jun 28, 2023 at 12:19 PM Drouvot, Bertrand
 wrote:

Yeah, I think once the slot is dropped we just have to wait for the slot to
be re-created on the standby according to the new synchronize_slot_names GUC.

Assuming the initial slot "creation" on the standby (coming from the 
synchronize_slot_names usage)
is working "correctly" then it should also work "correctly" once the slot is 
dropped.



I also think so.


If we agree that a synchronized slot can not/should not be consumed (will 
implement this behavior) then
I think the proposed scenario above should make sense, do you agree?



Yeah, I also can't think of a use case for this. So, we can probably
disallow it and document the same. I guess if we came across a use
case for this, we can rethink allowing to consume the changes from
synchronized slots.


Yeah, agreed. I'll work on a new version that deals with invalidated slots that
way and that ensures that a synchronized slot can't be consumed (until the
standby gets promoted).
 
Regards,


--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-06-27 Thread Drouvot, Bertrand

Hi,

On 6/26/23 12:34 PM, Amit Kapila wrote:

On Mon, Jun 26, 2023 at 11:15 AM Drouvot, Bertrand
 wrote:


On 6/20/23 12:22 PM, Amit Kapila wrote:

On Mon, Jun 19, 2023 at 9:56 PM Drouvot, Bertrand
 wrote:



In such a case (slot valid on the primary but invalidated on the standby) then 
I think we
could drop and recreate the invalidated slot on the standby.



Will it be safe? Because after recreating the slot, it will reserve
the new WAL location and build the snapshot based on that which might
miss some important information in the snapshot. For example, to
update the slot's position with new information from the primary, the
patch uses pg_logical_replication_slot_advance() which means it will
process all records and update the snapshot via
DecodeCommit->SnapBuildCommitTxn().


Your concern is that the slot could have been consumed on the standby?

I mean, if we suppose the "synchronized" slot can't be consumed on the standby,
then dropping/recreating such an invalidated slot would be ok?



That also may not be sufficient because as soon as the slot is
invalidated/dropped, the required WAL could be removed on standby.



Yeah, I think once the slot is dropped we just have to wait for the slot to
be re-created on the standby according to the new synchronize_slot_names GUC.

Assuming the initial slot "creation" on the standby (coming from the 
synchronize_slot_names usage)
is working "correctly" then it should also work "correctly" once the slot is 
dropped.

If we agree that a synchronized slot can not/should not be consumed (will 
implement this behavior) then
I think the proposed scenario above should make sense, do you agree?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-06-25 Thread Drouvot, Bertrand

Hi,

On 6/20/23 12:22 PM, Amit Kapila wrote:

On Mon, Jun 19, 2023 at 9:56 PM Drouvot, Bertrand
 wrote:



In such a case (slot valid on the primary but invalidated on the standby) then 
I think we
could drop and recreate the invalidated slot on the standby.



Will it be safe? Because after recreating the slot, it will reserve
the new WAL location and build the snapshot based on that which might
miss some important information in the snapshot. For example, to
update the slot's position with new information from the primary, the
patch uses pg_logical_replication_slot_advance() which means it will
process all records and update the snapshot via
DecodeCommit->SnapBuildCommitTxn().


Your concern is that the slot could have been consumed on the standby?

I mean, if we suppose the "synchronized" slot can't be consumed on the standby,
then dropping/recreating such an invalidated slot would be ok?
 
Asking, because I'm not sure we should allow consumption of a "synchronized" slot

until the standby gets promoted.

When the patch was initially proposed, logical decoding from a standby
was not implemented yet.
 

The other related thing is that do we somehow need to ensure that WAL
is replayed on standby before moving the slot's position to the target
location received from the primary?


Yeah, will check if this is currently done that way in the patch proposal.


BTW, does the patch handle drop of logical slots on standby when the
same slot is dropped on publisher/primary?



from what I've seen, yes it looks like it behaves that way (will look closer).



Okay, I have asked because I don't see a call to ReplicationSlotDrop()
in the patch.



Right. I'd need to look closer to understand how it works (for the moment
the "only" thing I've done was the re-base shared up-thread).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: ProcessStartupPacket(): database_name and user_name truncation

2023-06-21 Thread Drouvot, Bertrand

Hi,

On 6/22/23 1:37 AM, Michael Paquier wrote:

On Wed, Jun 21, 2023 at 12:55:15PM -0700, Nathan Bossart wrote:

LGTM.  I think this can wait for v17 since the current behavior has been
around since 2001 and AFAIK this is the first report.  While it's arguably
a bug fix, the patch also breaks some cases that work today.


Agreed that anything discussed on this thread does not warrant a
backpatch.


Fully agree, the CF entry [1] has been tagged as "Target Version 17".

[1] https://commitfest.postgresql.org/43/4383/

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: ProcessStartupPacket(): database_name and user_name truncation

2023-06-21 Thread Drouvot, Bertrand



On 6/21/23 4:22 PM, Drouvot, Bertrand wrote:

Hi,

On 6/21/23 3:43 PM, Tom Lane wrote:

Kyotaro Horiguchi  writes:

At Wed, 21 Jun 2023 09:43:50 +0200, "Drouvot, Bertrand" 
 wrote in

Trying to connect with the 64 bytes name:
$ psql -d 
psql: error: connection to server on socket "/tmp/.s.PGSQL.55448"
failed: FATAL: database "äää" does not
exist



IMHO, I'm not sure we should allow connections without the exact name
being provided. In that sense, I think we might want to consider
outright rejecting the establishment of a connection when the given
database name doesn't fit the startup packet, since the database with
the exact given name cannot be found.


I think I agree.  I don't like the proposed patch at all, because it's
making completely unsupportable assumptions about what encoding the
names are given in.  Simply failing to match when a name is overlength
sounds safer.



Yeah, that's another, "cleaner" option.

I'll propose a patch to make it fail even for the non-multibyte case then (so
that multibyte and non-multibyte behave the same, i.e. fail when an overlength
name is detected).


Please find attached a patch doing so (which is basically a revert of 
d18c1d1f51).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 1a40e13385752ef05b9602d1040e73dbb14d0c7e Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Wed, 21 Jun 2023 18:28:13 +
Subject: [PATCH v1] Reject incoming username and database name in case of
 overlength

---
 src/backend/postmaster/postmaster.c | 9 -
 1 file changed, 9 deletions(-)
 100.0% src/backend/postmaster/

diff --git a/src/backend/postmaster/postmaster.c 
b/src/backend/postmaster/postmaster.c
index 4c49393fc5..0b1de9efb2 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -2290,15 +2290,6 @@ retry1:
}
}
 
-   /*
-* Truncate given database and user names to length of a Postgres name.
-* This avoids lookup failures when overlength names are given.
-*/
-   if (strlen(port->database_name) >= NAMEDATALEN)
-   port->database_name[NAMEDATALEN - 1] = '\0';
-   if (strlen(port->user_name) >= NAMEDATALEN)
-   port->user_name[NAMEDATALEN - 1] = '\0';
-
if (am_walsender)
MyBackendType = B_WAL_SENDER;
else
-- 
2.34.1



Re: ProcessStartupPacket(): database_name and user_name truncation

2023-06-21 Thread Drouvot, Bertrand

Hi,

On 6/21/23 3:43 PM, Tom Lane wrote:

Kyotaro Horiguchi  writes:

At Wed, 21 Jun 2023 09:43:50 +0200, "Drouvot, Bertrand" 
 wrote in

Trying to connect with the 64 bytes name:
$ psql -d 
psql: error: connection to server on socket "/tmp/.s.PGSQL.55448"
failed: FATAL: database "äää" does not
exist



IMHO, I'm not sure we should allow connections without the exact name
being provided. In that sense, I think we might want to consider
outright rejecting the establishment of a connection when the given
database name doesn't fit the startup packet, since the database with
the exact given name cannot be found.


I think I agree.  I don't like the proposed patch at all, because it's
making completely unsupportable assumptions about what encoding the
names are given in.  Simply failing to match when a name is overlength
sounds safer.



Yeah, that's another, "cleaner" option.

I'll propose a patch to make it fail even for the non-multibyte case then (so
that multibyte and non-multibyte behave the same, i.e. fail when an overlength
name is detected).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




ProcessStartupPacket(): database_name and user_name truncation

2023-06-21 Thread Drouvot, Bertrand

Hi hackers,

Please find attached a patch to truncate (in ProcessStartupPacket())
port->database_name and port->user_name in such a way as to not break
multibyte character boundaries.

Indeed, currently, one could create a database that way:

postgres=# create database ;
NOTICE:  identifier "" will be truncated to 
"äää"
CREATE DATABASE

The database name has been truncated from 64 bytes to 62 bytes thanks to
pg_mbcliplen(), which ensures not to break multibyte character boundaries.

postgres=# select datname, OCTET_LENGTH(datname),encoding from pg_database;
 datname | octet_length | encoding
-+--+--
 äää |   62 |6

Trying to connect with the 64 bytes name:

$ psql -d 
psql: error: connection to server on socket "/tmp/.s.PGSQL.55448" failed: FATAL:  
database "äää" does not exist


It fails because the truncation done in ProcessStartupPacket():

"
if (strlen(port→database_name) >= NAMEDATALEN)
port→database_name[NAMEDATALEN - 1] = '\0';
"

does not take care of multibyte character boundaries.

On the other hand it works with non multibyte character involved:

postgres=# create database 
abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijke;
NOTICE:  identifier "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijke" 
will be truncated to "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk"
CREATE DATABASE

postgres=# select datname, OCTET_LENGTH(datname),encoding from pg_database;
 datname | octet_length 
| encoding
-+--+--
 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk |   63 
|6

The database name is truncated to 63 bytes, and then using the 64-byte name
works:

$ psql -d abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijke
psql (16beta1)
Type "help" for help.

The comment in ProcessStartupPacket() states:

"
/*
 * Truncate given database and user names to length of a Postgres name.
 * This avoids lookup failures when overlength names are given.
 */
"

The last sentence is not right in the case of multibyte characters (as seen
in the first example).

About the patch:

As the database encoding is not yet known in ProcessStartupPacket()
(and we are not even sure the provided database exists), the proposed
patch does not rely on pg_mbcliplen() but on pg_encoding_mbcliplen().

The proposed patch uses the client encoding, which it determines this way (see the sketch after this list):

- use the one requested in the startup packet (if we come across it)
- use the one from the locale (if we did not find a client encoding request
in the startup packet)
- use PG_SQL_ASCII (if none of the above have been satisfied)
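
Here is a minimal sketch of the clipping itself, assuming the client encoding
has been resolved as per the list above (the helper below is hypothetical, for
illustration only, not the patch as posted):

#include "postgres.h"
#include "mb/pg_wchar.h"

static void
clip_startup_name(char *name, int client_encoding)
{
	int			len = strlen(name);

	if (len >= NAMEDATALEN)
	{
		/* Clip without cutting a multibyte character in the middle. */
		int			clipped = pg_encoding_mbcliplen(client_encoding, name,
													len, NAMEDATALEN - 1);

		name[clipped] = '\0';
	}
}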

Happy to discuss any other thoughts or suggestions if any.

With the proposed patch in place, using the first example above (and the
64 bytes name) we would get:

$ PGCLIENTENCODING=LATIN1 psql -d 
psql: error: connection to server on socket "/tmp/.s.PGSQL.55448" failed: FATAL:  
database "äää" does not exist

but this one would allow us to connect:

$ PGCLIENTENCODING=UTF8 psql -d 
psql (16beta1)
Type "help" for help.

The patch does not provide a documentation update or a related TAP test (but
those could be added if we feel the need).

Looking forward to your feedback,

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 6ccb4f03f56dda9e9a2616c6e0875a97a8711d72 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Tue, 20 Jun 2023 10:26:16 +
Subject: [PATCH v1] multibyte truncation for database and user name

database_name and user_name truncation done in ProcessStartupPacket() do not
take care of multibyte character boundary. In case of multibyte, this somehow
breaks the initial goal to avoid lookup failures when overlength names are 
given.
---
 src/backend/postmaster/postmaster.c | 48 ++---
 1 file changed, 44 insertions(+), 4 deletions(-)
 100.0% src/backend/postmaster/

diff --git a/src/backend/postmaster/postmaster.c 
b/src/backend/postmaster/postmaster.c
index 4c49393fc5..72991c4eab 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -1951,6 +1951,7 @@ ProcessStartupPacket(Port *port, bool ssl_done, bool 
gss_done)
char   *buf;
ProtocolVersion proto;
MemoryContext oldcontext;
+   int client_encoding = -1;
 
pq_startmsgread();
 
@@ -2238,6 +2239,18 @@ retry1:
{

Re: Synchronizing slots from primary to standby

2023-06-19 Thread Drouvot, Bertrand

Hi,

On 6/19/23 12:03 PM, Amit Kapila wrote:

On Mon, Jun 19, 2023 at 11:34 AM Drouvot, Bertrand
 wrote:


Also I think we need to handle the case of invalidated replication slot(s): 
should
we drop/recreate it/them? (as the main goal is to have sync slot(s) on the 
standby).



Do you intend to ask what happens to logical slots invalidated (due to
say max_slot_wal_keep_size) on publisher? I think those should be
invalidated on standby too. 


Agree that it should behave that way.


Another thought whether there is chance
that the slot on standby gets invalidated due to conflict (say
required rows removed on primary)?


That's the scenario I had in mind when asking the question above.


I think in such cases the slot on
primary/publisher should have been dropped/invalidated by that time.


I don't think so.

For example, such a scenario could occur:

- there is no physical slot between the standby and the primary
- the standby is shut down
- logical decoding on the primary is moving forward and now there are vacuum
operations that will conflict on the standby
- the standby starts and reports the logical slot being invalidated (while it is
not on the primary)

In such a case (slot valid on the primary but invalidated on the standby) then 
I think we
could drop and recreate the invalidated slot on the standby.


BTW, does the patch handle drop of logical slots on standby when the
same slot is dropped on publisher/primary?



from what I've seen, yes it looks like it behaves that way (will look closer).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Synchronizing slots from primary to standby

2023-06-18 Thread Drouvot, Bertrand

Hi,

On 6/16/23 11:56 AM, Amit Kapila wrote:

On Mon, Apr 17, 2023 at 7:37 PM Drouvot, Bertrand
 wrote:


Please find attached V5 (a rebase of V4 posted up-thread).

In addition to the "rebasing" work, the TAP test adds a test about conflict 
handling (logical slot invalidation)
relying on the work done in the "Minimal logical decoding on standby" patch 
series.

I did not look more at the patch (than what's was needed for the rebase) but 
plan to do so.



Are you still planning to continue working on this? 


Yes, I think it would be great to have such a feature in core.


Some miscellaneous
comments while going through this patch are as follows?


Thanks! I'll look at them and will try to come back to you by
mid of next week.

Also I think we need to handle the case of invalidated replication slot(s): 
should
we drop/recreate it/them? (as the main goal is to have sync slot(s) on the 
standby).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Support to define custom wait events for extensions

2023-06-15 Thread Drouvot, Bertrand

Hi,

On 6/15/23 10:00 AM, Michael Paquier wrote:

On Thu, Jun 15, 2023 at 03:06:01PM +0900, Masahiro Ikeda wrote:

Currently, only one PG_WAIT_EXTENSION event can be used as a
wait event for extensions. Therefore, in environments where multiple
extensions are installed, it could take time to identify which
extension is the bottleneck.


Thanks for taking the time to implement a patch to do that.


+1 thanks for it!




I want to know your feedbacks. Please feel free to comment.


I think that's been cruelly missed.


Yeah, that would clearly help to diagnose which extension(s) is/are causing the 
waits (if any).

I did not look at the code yet (will do) but just wanted to chime in to support 
the idea.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

2023-06-09 Thread Drouvot, Bertrand

Hi,

On 6/9/23 11:20 AM, Masahiro Ikeda wrote:

Hi,

On 2023-06-09 13:26, Drouvot, Bertrand wrote:

Hi,

On 6/9/23 1:15 AM, Michael Paquier wrote:

On Thu, Jun 08, 2023 at 10:57:55AM +0900, Masahiro Ikeda wrote:

(Excuse me for cutting in, and this is not directly related to the thread.)
+1. I'm interested in the feature.

This is just an example and it would probably be useful for other users. IMO, at
least, it's better to improve the specification whereby the "Extension"
wait event type has only the "Extension" wait event.


I hope that nobody would counter-argue you here.  In my opinion, we
should just introduce an API that allows extensions to retrieve wait
event numbers that are allocated by the backend under
PG_WAIT_EXTENSION, in a fashion similar to GetNamedLWLockTranche().
Say something like:
int GetExtensionWaitEvent(const char *wait_event_name);


+1, that's something I have in mind to work on once/if this patch is/gets
committed.


Thanks for replying. If you are ok, I'll try to make a patch
to allow extensions to define custom wait events.


Great, thank you!

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

2023-06-08 Thread Drouvot, Bertrand

Hi,

On 6/9/23 1:15 AM, Michael Paquier wrote:

On Thu, Jun 08, 2023 at 10:57:55AM +0900, Masahiro Ikeda wrote:

(Excuse me for cutting in, and this is not directly related to the thread.)
+1. I'm interested in the feature.

This is just an example and it would probably be useful for other users. IMO, at
least, it's better to improve the specification that "Extension"
wait event type has only the "Extension" wait event.


I hope that nobody would counter-argue you here.  In my opinion, we
should just introduce an API that allows extensions to retrieve wait
event numbers that are allocated by the backend under
PG_WAIT_EXTENSION, in a fashion similar to GetNamedLWLockTranche().
Say something like:
int GetExtensionWaitEvent(const char *wait_event_name);


+1, that's something I have in mind to work on once/if this patch gets
committed.
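
To make this a bit more concrete, here is a rough sketch (purely hypothetical:
GetExtensionWaitEvent() does not exist yet, and the wait event name and helper
below are made up) of how an extension could consume such an API, mirroring the
way WaitLatch() is already used with PG_WAIT_EXTENSION:

#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"
#include "utils/wait_event.h"

/* resolved once per backend; GetExtensionWaitEvent() is the proposed API */
static uint32 my_ext_wait_event = 0;

static void
my_extension_wait(long timeout_ms)
{
	if (my_ext_wait_event == 0)
		my_ext_wait_event = GetExtensionWaitEvent("my_extension_main");

	/* would show up in pg_stat_activity.wait_event instead of "Extension" */
	(void) WaitLatch(MyLatch,
			 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
			 timeout_ms,
			 my_ext_wait_event);
	ResetLatch(MyLatch);
	CHECK_FOR_INTERRUPTS();
}

That would give per-extension visibility without each extension having to invent
its own mechanism.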

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Let's make PostgreSQL multi-threaded

2023-06-07 Thread Drouvot, Bertrand

Hi,

On 6/8/23 12:37 AM, Jeremy Schneider wrote:

On 6/7/23 2:39 PM, Thomas Kellerer wrote:

Tomas Vondra schrieb am 07.06.2023 um 21:20:


I did google search for "oracle threaded_execution" and browsed a bit;
didn't see anything that seems earth shattering so far.


FWIW, I recall Karl Arao's wiki page: 
https://karlarao.github.io/karlaraowiki/#%2212c%20threaded_execution%22
where some performance and memory consumption studies have been done.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: is pg_log_standby_snapshot() really needed?

2023-06-07 Thread Drouvot, Bertrand

Hi,

On 6/7/23 8:50 PM, Jaime Casanova wrote:

On Wed, Jun 7, 2023 at 5:19 AM Drouvot, Bertrand
 wrote:


Hi,

On 6/7/23 7:32 AM, Jaime Casanova wrote:


So, I wonder if that function is really needed because as I said I
solved it with already existing functionality. Or if it is really
needed maybe it is a bug that a CHECKPOINT and pg_switch_wal() have
the same effect?



Even if CHECKPOINT and pg_switch_wal() do produce the same effect, I think
they are expensive (as compared to pg_log_standby_snapshot(), which does nothing
but emit an xl_running_xacts record).

For this reason, I think pg_log_standby_snapshot() is worth to have/keep.



CHECKPOINT could be expensive in a busy system, but the problem
pg_log_standby_snapshot() is solving is about a no-activity system,
and in a no-activity system CHECKPOINT is very fast.


It's a no-activity system only at the time the logical replication slot is being created.
That means that even if the system is "non active" at that time, the checkpoint
may still have a lot of work to do.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: is pg_log_standby_snapshot() really needed?

2023-06-07 Thread Drouvot, Bertrand

Hi,

On 6/7/23 7:32 AM, Jaime Casanova wrote:

Hi,

I'm testing the ability to have a logical replica subscribed from a standby.

Of course, I'm doing this in a laboratory with no activity so
everything gets stuck after creating the subscription (the main slot).
This is clearly because every time it creates a temp slot to copy
a table it needs the running xacts from the primary.

Now, I was solving this by executing CHECKPOINT on the primary, and
also noted that pg_switch_wal() works too. After that, I read about
pg_log_standby_snapshot().

So, I wonder if that function is really needed because as I said I
solved it with already existing functionality. Or if it is really
needed maybe it is a bug that a CHECKPOINT and pg_switch_wal() have
the same effect?



Even if CHECKPOINT and pg_switch_wal() do produce the same effect, I think
they are expensive (as compared to pg_log_standby_snapshot(), which does nothing
but emit an xl_running_xacts record).

For this reason, I think pg_log_standby_snapshot() is worth to have/keep.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Test slots invalidations in 035_standby_logical_decoding.pl only if dead rows are removed

2023-05-31 Thread Drouvot, Bertrand

Hi,

On 5/30/23 12:34 PM, Drouvot, Bertrand wrote:

Hi hackers,

Please find attached a patch proposal to $SUBJECT.

Indeed, we have seen occurrences in [1] that some slots were
not invalidated (while we expected vacuum to remove dead rows
leading to slots invalidation on the standby).

Though we don't have strong evidence that this
was due to transactions holding back the global xmin (as vacuum did
not run in verbose mode), the suspicion is high enough (as Tom pointed
out, the test is broken on its face (see [1])).

The proposed patch:

- set autovacuum = off on the primary (as autovacuum is the usual suspect
for holding global xmin).
- Ensure that vacuum is able to remove dead rows before launching
the slots invalidation tests.
- If after 10 attempts the vacuum is still not able to remove the dead
rows then the slots invalidation tests are skipped: that should be pretty
rare, as currently the majority of the tests are green (with only one attempt).

While at it, the patch also addresses the nitpicks mentioned by Robert in [2].

[1]: 
https://www.postgresql.org/message-id/flat/OSZPR01MB6310CFFD7D0DCD60A05DB1C3FD4A9%40OSZPR01MB6310.jpnprd01.prod.outlook.com#71898e088d2a57564a1bd9c41f3e6f36
[2]: 
https://www.postgresql.org/message-id/CA%2BTgmobHGpU2ZkChgKifGDLaf_%2BmFA7njEpeTjfyNf_msCZYew%40mail.gmail.com



Please find attached V2 that, instead of the above proposal, waits for a new 
snapshot
that has a newer horizon before doing the vacuum (as proposed by Andres in [1]).

So, V2:

- set autovacuum = off on the primary (as autovacuum is the usual suspect
for holding global xmin).
- waits for a new snapshot that has a newer horizon before doing the vacuum(s).
- addresses the nitpicks mentioned by Robert in [2].

V2 also keeps the verbose mode for the vacuum(s) (as done in V1), as it may help
for further analysis if needed.

[1]: 
https://www.postgresql.org/message-id/20230530152426.ensapay7pozh7ozn%40alap3.anarazel.de
[2]: 
https://www.postgresql.org/message-id/CA%2BTgmobHGpU2ZkChgKifGDLaf_%2BmFA7njEpeTjfyNf_msCZYew%40mail.gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 383bfcf39257d4542b35ffe2ab56b43182ca2dea Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Tue, 30 May 2023 07:54:02 +
Subject: [PATCH v2] Ensure vacuum is able to remove dead rows in
 035_standby_logical_decoding

We want to ensure that vacuum was able to remove dead rows (aka no other
transactions holding back global xmin) before testing for slots invalidation
on the standby.
---
 .../t/035_standby_logical_decoding.pl | 76 +--
 1 file changed, 37 insertions(+), 39 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index 64beec4bd3..bd426f3068 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -185,7 +185,7 @@ sub change_hot_standby_feedback_and_wait_for_xmins
 # Check conflicting status in pg_replication_slots.
 sub check_slots_conflicting_status
 {
-   my ($conflicting) = @_;
+   my ($conflicting, $details) = @_;
 
if ($conflicting)
{
@@ -193,7 +193,7 @@ sub check_slots_conflicting_status
'postgres', qq(
 select bool_and(conflicting) from 
pg_replication_slots;));
 
-   is($res, 't', "Logical slots are reported as conflicting");
+   is($res, 't', "logical slots are reported as conflicting 
$details");
}
else
{
@@ -201,7 +201,7 @@ sub check_slots_conflicting_status
'postgres', qq(
select bool_or(conflicting) from 
pg_replication_slots;));
 
-   is($res, 'f', "Logical slots are reported as non conflicting");
+   is($res, 'f', "logical slots are reported as non conflicting 
$details");
}
 }
 
@@ -256,6 +256,22 @@ sub check_for_invalidation
) or die "Timed out waiting confl_active_logicalslot to be updated";
 }
 
+# Launch $sql and wait for a new snapshot that has a newer horizon before
+# doing the vacuum with $vac_option on $to_vac.
+sub wait_until_vacuum_can_remove
+{
+   my ($vac_option, $sql, $to_vac) = @_;
+
+   my $xid = $node_primary->safe_psql('testdb', qq[$sql
+   
select txid_current();]);
+
+   $node_primary->poll_query_until('testdb',
+   "SELECT (select txid_snapshot_xmin(txid_current_snapshot()) - 
$xid) > 0")
+ or die "new snapshot does not have a newer horizon";
+
+   $node_primary->safe_psql(&#

Re: BF animal dikkop reported a failure in 035_standby_logical_decoding

2023-05-30 Thread Drouvot, Bertrand

Hi,

On 5/30/23 5:24 PM, Andres Freund wrote:

Hi,

On 2023-05-29 14:31:24 +0200, Drouvot, Bertrand wrote:

On 5/29/23 1:03 PM, Tom Lane wrote:

but I wouldn't be surprised if something in the logical replication
mechanism itself could be running a transaction at the wrong instant.

Some of the other recovery tests set
autovacuum = off
to try to control such problems, but I'm not sure how much of
a solution that really is.


One option I can think of is to:

1) set autovacuum = off (as it looks like the usual suspect).
2) trigger the vacuum in verbose mode (as suggested by Shi-san) and,
depending on its output, run the "invalidation" test or re-launch the vacuum,
re-check the output
and so on (n times max). If n is reached, then skip this test.


I think the best fix would be to wait for a new snapshot that has a newer
horizon, before doing the vacuum full.



Thanks for the proposal! I think that's a great idea, I'll look at it
and update the patch I've submitted in [1] accordingly.


[1]: 
https://www.postgresql.org/message-id/bf67e076-b163-9ba3-4ade-b9fc51a3a8f6%40gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Test slots invalidations in 035_standby_logical_decoding.pl only if dead rows are removed

2023-05-30 Thread Drouvot, Bertrand

Hi hackers,

Please find attached a patch proposal to $SUBJECT.

Indeed, we have seen occurrences in [1] that some slots were
not invalidated (while we expected vacuum to remove dead rows
leading to slots invalidation on the standby).

Though we don't have strong evidence that this
was due to transactions holding back the global xmin (as vacuum did
not run in verbose mode), the suspicion is high enough (as Tom pointed
out, the test is broken on its face (see [1])).

The proposed patch:

- set autovacuum = off on the primary (as autovacuum is the usual suspect
for holding global xmin).
- Ensure that vacuum is able to remove dead rows before launching
the slots invalidation tests.
- If after 10 attempts the vacuum is still not able to remove the dead
rows then the slots invalidation tests are skipped: that should be pretty
rare, as currently the majority of the tests are green (with only one attempt).

While at it, the patch also addresses the nitpicks mentioned by Robert in [2].

[1]: 
https://www.postgresql.org/message-id/flat/OSZPR01MB6310CFFD7D0DCD60A05DB1C3FD4A9%40OSZPR01MB6310.jpnprd01.prod.outlook.com#71898e088d2a57564a1bd9c41f3e6f36
[2]: 
https://www.postgresql.org/message-id/CA%2BTgmobHGpU2ZkChgKifGDLaf_%2BmFA7njEpeTjfyNf_msCZYew%40mail.gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom a5c6ef80cae1fb7606ff46679422238fdceb5cc8 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Tue, 30 May 2023 07:54:02 +
Subject: [PATCH v1] Ensure vacuum did remove dead rows in
 035_standby_logical_decoding

We want to ensure that vacuum was able to remove dead rows (aka no other
transactions holding back global xmin) before testing for slots invalidation
on the standby.
---
 .../t/035_standby_logical_decoding.pl | 263 ++
 1 file changed, 150 insertions(+), 113 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index 64beec4bd3..d26d9bcace 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -9,6 +9,7 @@ use warnings;
 use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
+use Time::HiRes qw(usleep);
 
 my ($stdin, $stdout, $stderr,
$cascading_stdout, $cascading_stderr, $subscriber_stdin,
@@ -23,6 +24,9 @@ my $node_subscriber = 
PostgreSQL::Test::Cluster->new('subscriber');
 my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;
 my $psql_timeout = IPC::Run::timer($default_timeout);
 my $res;
+my ($vacret, $vacstdout, $vacstderr) = ('', '', '');
+my $nb_vac = 0;
+my $vac_attempts = 10;
 
 # Name for the physical slot on primary
 my $primary_slotname = 'primary_physical';
@@ -185,7 +189,7 @@ sub change_hot_standby_feedback_and_wait_for_xmins
 # Check conflicting status in pg_replication_slots.
 sub check_slots_conflicting_status
 {
-   my ($conflicting) = @_;
+   my ($conflicting, $details) = @_;
 
if ($conflicting)
{
@@ -193,7 +197,7 @@ sub check_slots_conflicting_status
'postgres', qq(
 select bool_and(conflicting) from 
pg_replication_slots;));
 
-   is($res, 't', "Logical slots are reported as conflicting");
+   is($res, 't', "logical slots are reported as conflicting 
$details");
}
else
{
@@ -201,7 +205,7 @@ sub check_slots_conflicting_status
'postgres', qq(
select bool_or(conflicting) from 
pg_replication_slots;));
 
-   is($res, 'f', "Logical slots are reported as non conflicting");
+   is($res, 'f', "logical slots are reported as non conflicting 
$details");
}
 }
 
@@ -256,6 +260,29 @@ sub check_for_invalidation
) or die "Timed out waiting confl_active_logicalslot to be updated";
 }
 
+# Try ($vac_attempts times max) to vacuum until it is able to remove dead rows.
+sub vacuum_until_can_remove
+{
+   my ($vac_option, $sql, $to_vac) = @_;
+
+   ($vacret, $vacstdout, $vacstderr) = ('', '', '');
+   $nb_vac = 0;
+
+   while ($nb_vac <= $vac_attempts)
+   {
+   last if ($vacstderr =~ /0 dead row versions cannot be removed 
yet/ ||
+$vacstderr =~ /0 are dead but not yet 
removable/);
+   # This should trigger the conflict
+   ($vacret, $vacstdout, $vacstderr) = $node_primary->psql(
+   'testdb', qq[
+ $sql
+ VACUUM $vac_option verbose $to_vac;
+ INSERT INTO flush_wal DEFAULT VALUES; -- see create 
table flush_wal
+   ]);
+   $nb_vac++;
+   usleep(100_000);
+   }
+}
 
 # Initialize primary node
 ##

Re: BF animal dikkop reported a failure in 035_standby_logical_decoding

2023-05-29 Thread Drouvot, Bertrand

Hi,

On 5/29/23 1:03 PM, Tom Lane wrote:

"Drouvot, Bertrand"  writes:

On 5/26/23 9:27 AM, Yu Shi (Fujitsu) wrote:

Is it possible that the vacuum command didn't remove tuples and then the
conflict was not triggered?



The flush_wal table added by Andres should guarantee that the WAL is flushed, so
the only reason I can think of is indeed that the vacuum did not remove tuples
(but I don't get why/how that could be the case).


This test is broken on its face:

   CREATE TABLE conflict_test(x integer, y text);
   DROP TABLE conflict_test;
   VACUUM full pg_class;

There will be something VACUUM can remove only if there were no other
transactions holding back global xmin --- and there's not even a delay
here to give any such transaction a chance to finish.

Background autovacuum is the most likely suspect for breaking that,


Oh right, I did not think autovacuum could start during this test, but yeah,
there is no reason it could not.


but I wouldn't be surprised if something in the logical replication
mechanism itself could be running a transaction at the wrong instant.

Some of the other recovery tests set
autovacuum = off
to try to control such problems, but I'm not sure how much of
a solution that really is.


One option I can think of is to:

1) set autovacuum = off (as it looks like the usual suspect).
2) trigger the vacuum in verbose mode (as suggested by Shi-san) and,
depending on its output, run the "invalidation" test or re-launch the vacuum,
re-check the output
and so on (n times max). If n is reached, then skip this test.

As this test is currently failing randomly (and it looks like there are more
successes than failures, even without autovacuum = off), the test should still
validate that the invalidation works as expected for the large majority of runs
(and skipping the test should be pretty rare then).

Would that make sense?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: BF animal dikkop reported a failure in 035_standby_logical_decoding

2023-05-29 Thread Drouvot, Bertrand

Hi,

On 5/26/23 9:27 AM, Yu Shi (Fujitsu) wrote:

Hi hackers,

I saw a buildfarm failure on "dikkop"[1]. It failed in
035_standby_logical_decoding.pl, because the slots row_removal_inactiveslot and
row_removal_activeslot are not invalidated after vacuum.


Thanks for sharing!



regress_log_035_standby_logical_decoding:
```
[12:15:05.943](4.442s) not ok 22 - inactiveslot slot invalidation is logged 
with vacuum on pg_class
[12:15:05.945](0.003s)
[12:15:05.946](0.000s) #   Failed test 'inactiveslot slot invalidation is 
logged with vacuum on pg_class'
#   at t/035_standby_logical_decoding.pl line 238.
[12:15:05.948](0.002s) not ok 23 - activeslot slot invalidation is logged with 
vacuum on pg_class
[12:15:05.949](0.001s)
[12:15:05.950](0.000s) #   Failed test 'activeslot slot invalidation is logged 
with vacuum on pg_class'
#   at t/035_standby_logical_decoding.pl line 244.
[13:38:26.977](5001.028s) # poll_query_until timed out executing this query:
# select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where 
datname = 'testdb'
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
[13:38:26.980](0.003s) not ok 24 - confl_active_logicalslot updated
[13:38:26.982](0.002s)
[13:38:26.982](0.000s) #   Failed test 'confl_active_logicalslot updated'
#   at t/035_standby_logical_decoding.pl line 251.
Timed out waiting confl_active_logicalslot to be updated at 
t/035_standby_logical_decoding.pl line 251.
```

035_standby_logical_decoding.pl:
```
# This should trigger the conflict
$node_primary->safe_psql(
'testdb', qq[
   CREATE TABLE conflict_test(x integer, y text);
   DROP TABLE conflict_test;
   VACUUM pg_class;
   INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal
]);

$node_primary->wait_for_replay_catchup($node_standby);

# Check invalidation in the logfile and in pg_stat_database_conflicts
check_for_invalidation('row_removal_', $logstart, 'with vacuum on pg_class');
```

Is it possible that the vacuum command didn't remove tuples and then the
conflict was not triggered? 


The flush_wal table added by Andres should guarantee that the WAL is flushed, so
the only reason I can think of is indeed that the vacuum did not remove tuples
(but I don't get why/how that could be the case).


It seems we can't confirm this because there is not
enough information. 


Right, and looking at its status history most of the time the test is green 
(making it
even more difficult to diagnose).


Maybe "vacuum verbose" can be used to provide more
information.


I can see that dikkop "belongs" to Tomas (adding Tomas to this thread).
Tomas, do you think it would be possible to run 035_standby_logical_decoding.pl
manually a few times with "vacuum verbose" in the test mentioned above? (or any
other way you can think of that would help diagnose this random failure?).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Cleaning up nbtree after logical decoding on standby work

2023-05-29 Thread Drouvot, Bertrand

Hi,

On 5/26/23 7:28 PM, Peter Geoghegan wrote:

On Fri, May 26, 2023 at 1:56 AM Alvaro Herrera  wrote:

I suppose you're not thinking of applying this to current master, but
instead just leave it for when pg17 opens, right?  I mean, clearly it
seems far too invasive to put it in after beta1.


I was planning on targeting 16 with this. Although I only posted a
patch recently, I complained about the problems in this area shortly
after the code first went in. It's fairly obvious to me that the
changes made to nbtree went quite a bit further than they needed to.


Thanks Peter for the call out and the follow up on this!

As you already mentioned in this thread, all the changes I've done in
61b313e47e were purely "mechanical" as the main goal was to move forward the
logical decoding on standby thread and..


Admittedly that's partly because I'm an expert on this particular
code.



it was not obvious to me (as I'm not as much of an expert in this area as you are) that
many of those changes were "excessive".


Now, to be fair to Bertrand, it *looks* more complicated than it
really is, in large part due to the obscure case where VACUUM finishes
an interrupted page split (during page deletion), which itself goes on
to cause a page split one level up.


Thanks ;-)

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: pgsql: TAP test for logical decoding on standby

2023-05-24 Thread Drouvot, Bertrand

Hi,

On 5/23/23 5:15 PM, Robert Haas wrote:

On Sat, Apr 8, 2023 at 5:26 AM Andres Freund  wrote:

TAP test for logical decoding on standby


Small nitpicks:

1. The test names generated by check_slots_conflicting_status() start
with a capital letter, while most other test names start with a
lower-case letter.



Yeah, not sure that would deserve a fix on its own, but if we address 2.
then let's do 1. too.


2. The function is called 7 times, 6 with a true argument and 1 with a
false argument, but the test name only depends on whether the argument
is true or false, so we get the same test name 6 times. Maybe there's
not a reasonable way to do better, I'm not sure, but it's not ideal.



I agree that's not ideal (but one could still figure out which one is
failing if any by looking at the perl script).

If we want to "improve" this, what about passing a second argument that
would provide more context in the test name?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: PG 16 draft release notes ready

2023-05-20 Thread Drouvot, Bertrand

Hi,

On 5/19/23 2:29 PM, Bruce Momjian wrote:

On Fri, May 19, 2023 at 09:49:18AM +0200, Drouvot, Bertrand wrote:

Thanks!

"
This adds the function pg_log_standby_snapshot(). TEXT?:
"

My proposal:

This adds the function pg_log_standby_snapshot() to log details of the current 
snapshot
to WAL. If the primary is idle, the slot creation on a standby can take a while.
This function can be used on the primary to speed up the logical slot creation 
on
the standby.


Yes, I got this concept from the commit message, but I am unclear on
what is actually happening so I can clearly explain it.  Slot creation
on the standby needs a snapshot, and that is only created when there is
activity, or happens periodically, and this forces it to happen, or
something?  And what snapshot is this?  The current session's?



It's the snapshot of running transactions (aka the xl_running_xacts WAL record)
that is used during the logical slot creation to determine if the logical decoding
can find a consistent state to start with.

On a primary this WAL record is being emitted during the logical slot creation, 
but on a standby
we can't write WAL records (so we are waiting for the primary to emit it).

Outside of logical slot creation, this WAL record is also emitted during 
checkpoint or periodically
by the bgwriter.
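
For reference, here is a simplified sketch of what the function boils down to
(not the exact code in the tree, the error reporting details differ):

#include "postgres.h"
#include "fmgr.h"
#include "access/xlog.h"
#include "storage/standby.h"
#include "utils/pg_lsn.h"

Datum
pg_log_standby_snapshot(PG_FUNCTION_ARGS)
{
	XLogRecPtr	recptr;

	/* WAL cannot be written during recovery, so reject it on a standby */
	if (RecoveryInProgress())
		ereport(ERROR,
				(errmsg("pg_log_standby_snapshot() cannot be executed during recovery")));

	/* emit an xl_running_xacts record, as a checkpoint or the bgwriter would */
	recptr = LogStandbySnapshot();

	PG_RETURN_LSN(recptr);
}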

What about?

This adds the function pg_log_standby_snapshot() to emit the WAL record that 
contains the list
of running transactions.

If the primary is idle, the logical slot creation on a standby can take a while
(waiting for this WAL record to be replayed to determine if the logical decoding
can find a consistent state to start with).

In that case, this new function can be used on the primary to speed up the 
logical slot
creation on the standby.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: PG 16 draft release notes ready

2023-05-19 Thread Drouvot, Bertrand

Hi,

On 5/18/23 10:49 PM, Bruce Momjian wrote:

I have completed the first draft of the PG 16 release notes.  You can
see the output here:

https://momjian.us/pgsql_docs/release-16.html

I will adjust it to the feedback I receive;  that URL will quickly show
all updates.



Thanks!

"
This adds the function pg_log_standby_snapshot(). TEXT?:
"

My proposal:

This adds the function pg_log_standby_snapshot() to log details of the current 
snapshot
to WAL. If the primary is idle, the slot creation on a standby can take a while.
This function can be used on the primary to speed up the logical slot creation 
on
the standby.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

2023-05-19 Thread Drouvot, Bertrand

Hi,

On 5/19/23 12:36 AM, Michael Paquier wrote:

On Thu, May 18, 2023 at 12:28:20PM -0400, Robert Haas wrote:

I mean, I agree that it would probably be hard to measure any real
performance difference. But I'm not sure that's a good reason to add
cycles to a path where we don't really need to.


Honestly, I am not sure that it's worth worrying about performance
here,


Same feeling here, as those new functions will be used "only" from
pg_stat_get_activity() / pg_stat_get_backend_wait_event().


or perhaps you know of some external stuff that could set the
extension class type in a code path hot enough that it would matter.


And that would matter only when making use of pg_stat_get_activity()
/ pg_stat_get_backend_wait_event() at the time the "extension is waiting"
on this wait event.

While at it, I think that making use of an enum might also open the door
(need to think more about it) to allowing extensions to set their own wait events,
similar to what RequestNamedLWLockTranche()/GetNamedLWLockTranche() are doing.

Currently we have "only" the "extension" wait event which is not that useful 
when
there is multiples extensions installed in a database.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

2023-05-16 Thread Drouvot, Bertrand

Hi,

On 5/16/23 8:16 AM, Michael Paquier wrote:

On Mon, May 15, 2023 at 10:07:04AM +0200, Drouvot, Bertrand wrote:

This is preliminary work to autogenerate some wait events
code and documentation done in [1].

The patch introduces 2 new "wait events" (WAIT_EVENT_EXTENSION
and WAIT_EVENT_BUFFER_PIN) and their associated functions
(pgstat_get_wait_extension() and pgstat_get_wait_bufferpin() resp.)

Please note that that would not break extensions outside contrib
that make use of the existing PG_WAIT_EXTENSION.


I have looked at this one, and I think that's OK for what you are
aiming at here (in addition to my previous message that I hope
provides enough context about the whys and the hows).


Thanks!

Please find V2 attached, it adds WaitEventBufferPin and WaitEventExtension to
src/tools/pgindent/typedefs.list (that was done in [1]...).

[1]: 
https://www.postgresql.org/message-id/flat/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98%40gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 4c9ca1389b0aa9f7eae34c3b72f15dbc7da39783 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Sat, 13 May 2023 07:59:08 +
Subject: [PATCH v2] Introducing WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

---
 contrib/dblink/dblink.c  |  4 +-
 contrib/pg_prewarm/autoprewarm.c |  4 +-
 contrib/postgres_fdw/connection.c|  6 +--
 src/backend/storage/buffer/bufmgr.c  |  2 +-
 src/backend/storage/ipc/standby.c|  2 +-
 src/backend/utils/activity/wait_event.c  | 66 +---
 src/include/utils/wait_event.h   | 20 ++-
 src/test/modules/test_shm_mq/setup.c |  2 +-
 src/test/modules/test_shm_mq/test.c  |  2 +-
 src/test/modules/worker_spi/worker_spi.c |  2 +-
 src/tools/pgindent/typedefs.list |  2 +
 11 files changed, 93 insertions(+), 19 deletions(-)
   8.8% contrib/dblink/
   4.6% contrib/pg_prewarm/
   9.3% contrib/postgres_fdw/
   3.4% src/backend/storage/buffer/
   3.2% src/backend/storage/ipc/
  47.9% src/backend/utils/activity/
  14.2% src/include/utils/
   4.5% src/test/modules/test_shm_mq/
   3.7% src/

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 55f75eff36..f167cb71d4 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -203,7 +203,7 @@ dblink_get_conn(char *conname_or_str,
dblink_connstr_check(connstr);
 
/* OK to make connection */
-   conn = libpqsrv_connect(connstr, PG_WAIT_EXTENSION);
+   conn = libpqsrv_connect(connstr, WAIT_EVENT_EXTENSION);
 
if (PQstatus(conn) == CONNECTION_BAD)
{
@@ -293,7 +293,7 @@ dblink_connect(PG_FUNCTION_ARGS)
dblink_connstr_check(connstr);
 
/* OK to make connection */
-   conn = libpqsrv_connect(connstr, PG_WAIT_EXTENSION);
+   conn = libpqsrv_connect(connstr, WAIT_EVENT_EXTENSION);
 
if (PQstatus(conn) == CONNECTION_BAD)
{
diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c
index 93835449c0..d0efc9e524 100644
--- a/contrib/pg_prewarm/autoprewarm.c
+++ b/contrib/pg_prewarm/autoprewarm.c
@@ -237,7 +237,7 @@ autoprewarm_main(Datum main_arg)
(void) WaitLatch(MyLatch,
 WL_LATCH_SET | 
WL_EXIT_ON_PM_DEATH,
 -1L,
-PG_WAIT_EXTENSION);
+WAIT_EVENT_EXTENSION);
}
else
{
@@ -264,7 +264,7 @@ autoprewarm_main(Datum main_arg)
(void) WaitLatch(MyLatch,
 WL_LATCH_SET | 
WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
 delay_in_ms,
-PG_WAIT_EXTENSION);
+WAIT_EVENT_EXTENSION);
}
 
/* Reset the latch, loop. */
diff --git a/contrib/postgres_fdw/connection.c 
b/contrib/postgres_fdw/connection.c
index da32d503bc..25d0c43b64 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -530,7 +530,7 @@ connect_pg_server(ForeignServer *server, UserMapping *user)
/* OK to make connection */
conn = libpqsrv_connect_params(keywords, values,
   
false,   /* expand_dbname */
-  
PG_WAIT_EXTENSION);
+  
WAIT_EVENT_EXTENSION);
 
if (!conn || PQstatus(conn) != CONNECTION_OK)
  

Re: walsender performance regression due to logical decoding on standby changes

2023-05-15 Thread Drouvot, Bertrand

Hi,

On 5/15/23 4:39 PM, Bharath Rupireddy wrote:

On Mon, May 15, 2023 at 6:14 PM Masahiko Sawada  wrote:


On Mon, May 15, 2023 at 1:49 PM Thomas Munro  wrote:


Do we need to add an open item for this issue in
https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items? If yes, can
anyone in this loop add one?


I do think we need one for this issue, so I just added it.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

2023-05-15 Thread Drouvot, Bertrand

Hi,

On 5/16/23 7:14 AM, Michael Paquier wrote:

On Mon, May 15, 2023 at 06:01:02PM -0700, Andres Freund wrote:

On 2023-05-16 09:38:54 +0900, Michael Paquier wrote:

On Mon, May 15, 2023 at 05:17:16PM -0700, Andres Freund wrote:



These are the two things refactored in the patch, explaining the what.
The reason behind the why is to make the script in charge of
generating all these structures and functions consistent for all the
wait event classes, simply.  Treating all the wait event classes
together eases greatly the generation of the documentation, so that it
is possible to enforce an ordering of the tables of each class used to
list each wait event type attached to them.


Right, it does "fix" the ordering issue (for BufferPin and Extension)
that I've described in the patch introduction in [1]:

"
  so that PG_WAIT_LWLOCK, PG_WAIT_LOCK, PG_WAIT_BUFFER_PIN and 
PG_WAIT_EXTENSION are not autogenerated.


This results in having the wait event part of the documentation "monitoring-stats" not
ordered as compared to the "Wait Event Types" table.
.
.
.
"

Thanks Michael for having provided this detailed explanation (my patch
introduction clearly was missing some context as Andres pointed out).

[1]: 
https://www.postgresql.org/message-id/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98%40gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: walsender performance regression due to logical decoding on standby changes

2023-05-15 Thread Drouvot, Bertrand

Hi,

On 5/15/23 4:19 AM, Zhijie Hou (Fujitsu) wrote:

On Friday, May 12, 2023 7:58 PM Bharath Rupireddy 
 wrote:


On Wed, May 10, 2023 at 3:23 PM Drouvot, Bertrand

That's not the case with the attached v2 patch. Please have a look.




Thanks for V2! It does look good to me and I like the fact that
WalSndWakeup() does not need to loop over all the walsender slots
anymore (for both the physical and logical cases).


Thanks for updating the patch. I did some simple primary->standby replication 
test for the
patch and can see the degradation doesn't happen in the replication after 
applying it[1].



Thanks for the performance regression measurement!

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-05-15 Thread Drouvot, Bertrand

Hi,

On 5/6/23 4:23 AM, Michael Paquier wrote:

On Thu, May 04, 2023 at 08:39:49AM +0200, Drouvot, Bertrand wrote:

On 5/1/23 1:59 AM, Michael Paquier wrote:
I'm not sure I like it. First, it does break the "Note" ordering as compared
to the current documentation. That's not a big deal though.

Secondly, what if we need to add some note(s) in the future for
another wait class? Having all the notes after all the tables are
generated would sound weird to me.


Appending these notes at the end of all the tables does not strike me
as a big deal, TBH.  But, well, my sole opinion is not the final choice
either.  For now, I am mostly tempted to keep the generation script as
minimalistic as possible.



I agree that's not a big deal and I'm not against having these notes at the end
of all the tables.


We could discuss another approach for the "Note" part if there is a
need to add one for an existing/new wait class though.




I mean, that was more of a nit comment from my side.


Documenting what's expected from the wait event classes is critical in
the .txt file as that's what developers are going to look at when
adding a new wait event.  Adding them in the header is less appealing
to me considering that is it now generated, and the docs provide a lot
of explanation as well.



Your argument that the header is now generated makes me change my mind: I
now think that having the comments in the .txt file is enough.


This has as extra consequence to require a change in
wait_event.h so as PG_WAIT_BUFFER_PIN is renamed to PG_WAIT_BUFFERPIN,
equally fine by me.  Logically, this rename should be done in a patch
of its own, for clarity.


Yes, I can look at it.
[...]
Agree, I'll look at this.


Thanks!


Please find the dedicated patch proposal in [1].

[1]: 
https://www.postgresql.org/message-id/c6f35117-4b20-4c78-1df5-d3056010dcf5%40gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

2023-05-15 Thread Drouvot, Bertrand

Hi hackers,

Please find attached a patch to $SUBJECT.

This is preliminary work to autogenerate some wait events
code and documentation done in [1].

The patch introduces 2 new "wait events" (WAIT_EVENT_EXTENSION
and WAIT_EVENT_BUFFER_PIN) and their associated functions
(pgstat_get_wait_extension() and pgstat_get_wait_bufferpin() resp.)

Please note that that would not break extensions outside contrib
that make use of the existing PG_WAIT_EXTENSION.

[1]: 
https://www.postgresql.org/message-id/flat/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9...@gmail.com

Looking forward to your feedback,

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
From b48b0189067b6ce799636e469a10b0e265ff4473 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Sat, 13 May 2023 07:59:08 +
Subject: [PATCH v1] Introducing WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN

---
 contrib/dblink/dblink.c  |  4 +-
 contrib/pg_prewarm/autoprewarm.c |  4 +-
 contrib/postgres_fdw/connection.c|  6 +--
 src/backend/storage/buffer/bufmgr.c  |  2 +-
 src/backend/storage/ipc/standby.c|  2 +-
 src/backend/utils/activity/wait_event.c  | 66 +---
 src/include/utils/wait_event.h   | 20 ++-
 src/test/modules/test_shm_mq/setup.c |  2 +-
 src/test/modules/test_shm_mq/test.c  |  2 +-
 src/test/modules/worker_spi/worker_spi.c |  2 +-
 10 files changed, 91 insertions(+), 19 deletions(-)
   8.9% contrib/dblink/
   4.7% contrib/pg_prewarm/
   9.4% contrib/postgres_fdw/
   3.4% src/backend/storage/buffer/
   3.3% src/backend/storage/ipc/
  48.6% src/backend/utils/activity/
  14.4% src/include/utils/
   4.6% src/test/modules/test_shm_mq/

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 55f75eff36..f167cb71d4 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -203,7 +203,7 @@ dblink_get_conn(char *conname_or_str,
dblink_connstr_check(connstr);
 
/* OK to make connection */
-   conn = libpqsrv_connect(connstr, PG_WAIT_EXTENSION);
+   conn = libpqsrv_connect(connstr, WAIT_EVENT_EXTENSION);
 
if (PQstatus(conn) == CONNECTION_BAD)
{
@@ -293,7 +293,7 @@ dblink_connect(PG_FUNCTION_ARGS)
dblink_connstr_check(connstr);
 
/* OK to make connection */
-   conn = libpqsrv_connect(connstr, PG_WAIT_EXTENSION);
+   conn = libpqsrv_connect(connstr, WAIT_EVENT_EXTENSION);
 
if (PQstatus(conn) == CONNECTION_BAD)
{
diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c
index 93835449c0..d0efc9e524 100644
--- a/contrib/pg_prewarm/autoprewarm.c
+++ b/contrib/pg_prewarm/autoprewarm.c
@@ -237,7 +237,7 @@ autoprewarm_main(Datum main_arg)
(void) WaitLatch(MyLatch,
 WL_LATCH_SET | 
WL_EXIT_ON_PM_DEATH,
 -1L,
-PG_WAIT_EXTENSION);
+WAIT_EVENT_EXTENSION);
}
else
{
@@ -264,7 +264,7 @@ autoprewarm_main(Datum main_arg)
(void) WaitLatch(MyLatch,
 WL_LATCH_SET | 
WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
 delay_in_ms,
-PG_WAIT_EXTENSION);
+WAIT_EVENT_EXTENSION);
}
 
/* Reset the latch, loop. */
diff --git a/contrib/postgres_fdw/connection.c 
b/contrib/postgres_fdw/connection.c
index da32d503bc..25d0c43b64 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -530,7 +530,7 @@ connect_pg_server(ForeignServer *server, UserMapping *user)
/* OK to make connection */
conn = libpqsrv_connect_params(keywords, values,
   
false,   /* expand_dbname */
-  
PG_WAIT_EXTENSION);
+  
WAIT_EVENT_EXTENSION);
 
if (!conn || PQstatus(conn) != CONNECTION_OK)
ereport(ERROR,
@@ -863,7 +863,7 @@ pgfdw_get_result(PGconn *conn, const char *query)
   
WL_LATCH_SET | WL_SOCKET_READABLE |
   
WL_EXIT_ON_PM_DEATH,
   
PQsocket(conn),
-  

Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-11 Thread Drouvot, Bertrand




On 5/10/23 12:41 PM, Amit Kapila wrote:

On Tue, May 9, 2023 at 12:44 PM Drouvot, Bertrand




Pushed this yesterday.



Thanks!

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: walsender performance regression due to logical decoding on standby changes

2023-05-10 Thread Drouvot, Bertrand

Hi,

On 5/10/23 8:36 AM, Bharath Rupireddy wrote:

On Wed, May 10, 2023 at 12:33 AM Andres Freund  wrote:


Unfortunately I have found the following commit to have caused a performance
regression:

commit e101dfac3a53c20bfbf1ca85d30a368c2954facf

The problem is that, on a standby, after the change - as needed to for the
approach to work - the call to WalSndWakeup() in ApplyWalRecord() happens for
every record, instead of only happening when the timeline is changed (or WAL
is flushed or ...).

WalSndWakeup() iterates over all walsender slots, regardless of whether in
use. For each of the walsender slots it acquires a spinlock.

When replaying a lot of small-ish WAL records I found the startup process to
spend the majority of the time in WalSndWakeup(). I've not measured it very
precisely yet, but the overhead is significant (~35% slowdown), even with the
default max_wal_senders. If that's increased substantially, it obviously gets
worse.




Thanks Andres for the call out! I do agree that this is a concern.


The only saving grace is that this is not an issue on the primary.


Yeah.


+1




My current guess is that mis-using a condition variable is the best bet. I
think it should work to use ConditionVariablePrepareToSleep() before a
WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use
ConditionVariableSleep(). The latch set from ConditionVariableBroadcast()
would still cause the necessary wakeup.


Yeah, I think that "mis-using" a condition variable is a valid option. Unless 
I'm missing
something, I don't think there is anything wrong with using a CV that way (aka 
not using
ConditionVariableTimedSleep() or ConditionVariableSleep() in this particular 
case).



How about something like the attached? Recovery and subscription tests
don't complain with the patch.


Thanks Bharath for looking at it!

I launched a full Cirrus CI test with it but it failed on one environment (did
not look into the details,
just sharing this here): https://cirrus-ci.com/task/6570140767092736

Also I have a few comments:

@@ -1958,7 +1959,7 @@ ApplyWalRecord(XLogReaderState *xlogreader, XLogRecord 
*record, TimeLineID *repl
 * --
 */
if (AllowCascadeReplication())
-   WalSndWakeup(switchedTLI, true);
+   ConditionVariableBroadcast(&WalSndCtl->cv);

I think the comment above this change needs to be updated.


@@ -3368,9 +3370,13 @@ WalSndWait(uint32 socket_events, long timeout, uint32 
wait_event)
WaitEvent   event;

ModifyWaitEvent(FeBeWaitSet, FeBeWaitSetSocketPos, socket_events, NULL);
+
+   ConditionVariablePrepareToSleep(&WalSndCtl->cv);
if (WaitEventSetWait(FeBeWaitSet, timeout, &event, 1, wait_event) == 1 
&&
(event.events & WL_POSTMASTER_DEATH))
proc_exit(1);
+
+   ConditionVariableCancelSleep();

Maybe worth updating the comment above WalSndWait() too? (to explain why CV
handling is part of it).


@@ -108,6 +109,8 @@ typedef struct
 */
boolsync_standbys_defined;

+   ConditionVariable cv;

Worth giving it a more meaningful name? (to give a clue as to when it is used)


I think we also need to update the comment above WalSndWakeup():

"
 * For cascading replication we need to wake up physical walsenders separately
 * from logical walsenders (see the comment before calling WalSndWakeup() in
 * ApplyWalRecord() for more details).
"

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: walsender performance regression due to logical decoding on standby changes

2023-05-09 Thread Drouvot, Bertrand

Hi,

On 5/9/23 11:00 PM, Andres Freund wrote:

Hi,

On 2023-05-09 13:38:24 -0700, Jeff Davis wrote:

On Tue, 2023-05-09 at 12:02 -0700, Andres Freund wrote:

I don't think the approach of not having any sort of "registry" of
whether
anybody is waiting for the replay position to be updated is
feasible. Iterating over all walsenders slots is just too expensive -


Would it work to use a shared counter for the waiters (or, two
counters, one for physical and one for logical), and just early exit if
the count is zero?


That doesn't really fix the problem - once you have a single walsender
connected, performance is bad again.



Just to clarify, do you mean that if there is only one remaining active 
walsender that, say,
has been located at slot n, then we’d still have to loop from 0 to n in 
WalSndWakeup()?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-09 Thread Drouvot, Bertrand

Hi,

On 5/9/23 8:02 AM, Amit Kapila wrote:

On Mon, May 8, 2023 at 1:45 PM Drouvot, Bertrand
 wrote:




Why not initialize the cascading standby node just before the standby
promotion test: "Test standby promotion and logical decoding behavior
after the standby gets promoted."? That way we will avoid any unknown
side-effects of cascading standby and it will anyway look more logical
to initialize it where the test needs it.



Yeah, that's even better. Moved the physical slot creation on the standby
and the cascading standby initialization to where they are "strictly" needed in V2
attached.

Also ensuring that hsf is set to on on the cascading standby, to be on the
safe side of things.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 0292d14a62b72b38197d434d85ee2c6a7f2cec00 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Tue, 9 May 2023 06:43:59 +
Subject: [PATCH v2] fix retaining WAL test

---
 .../t/035_standby_logical_decoding.pl | 36 +--
 1 file changed, 18 insertions(+), 18 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index ad1def2474..2b4a688330 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -276,20 +276,6 @@ $node_standby->append_conf('postgresql.conf',
max_replication_slots = 5]);
 $node_standby->start;
 $node_primary->wait_for_replay_catchup($node_standby);
-$node_standby->safe_psql('testdb', qq[SELECT * FROM 
pg_create_physical_replication_slot('$standby_physical_slotname');]);
-
-###
-# Initialize cascading standby node
-###
-$node_standby->backup($backup_name);
-$node_cascading_standby->init_from_backup(
-   $node_standby, $backup_name,
-   has_streaming => 1,
-   has_restoring => 1);
-$node_cascading_standby->append_conf('postgresql.conf',
-   qq[primary_slot_name = '$standby_physical_slotname']);
-$node_cascading_standby->start;
-$node_standby->wait_for_replay_catchup($node_cascading_standby, $node_primary);
 
 ###
 # Initialize subscriber node
@@ -503,9 +489,6 @@ check_slots_conflicting_status(1);
 # Verify that invalidated logical slots do not lead to retaining WAL.
 ##
 
-# Wait for the cascading standby to catchup before removing the WAL file(s)
-$node_standby->wait_for_replay_catchup($node_cascading_standby, $node_primary);
-
 # Get the restart_lsn from an invalidated slot
 my $restart_lsn = $node_standby->safe_psql('postgres',
"SELECT restart_lsn from pg_replication_slots WHERE slot_name = 
'vacuum_full_activeslot' and conflicting is true;"
@@ -777,9 +760,26 @@ $node_standby->reload;
 $node_primary->psql('postgres', q[CREATE DATABASE testdb]);
 $node_primary->safe_psql('testdb', qq[CREATE TABLE decoding_test(x integer, y 
text);]);
 
-# Wait for the standby to catchup before creating the slots
+# Wait for the standby to catchup before initializing the cascading standby
 $node_primary->wait_for_replay_catchup($node_standby);
 
+# Create a physical replication slot on the standby.
+# Keep this step after the "Verify that invalidated logical slots do not lead
+# to retaining WAL" test (as the physical slot on the standby could prevent the
+# WAL file removal).
+$node_standby->safe_psql('testdb', qq[SELECT * FROM 
pg_create_physical_replication_slot('$standby_physical_slotname');]);
+
+# Initialize cascading standby node
+$node_standby->backup($backup_name);
+$node_cascading_standby->init_from_backup(
+   $node_standby, $backup_name,
+   has_streaming => 1,
+   has_restoring => 1);
+$node_cascading_standby->append_conf('postgresql.conf',
+   qq[primary_slot_name = '$standby_physical_slotname'
+  hot_standby_feedback = on]);
+$node_cascading_standby->start;
+
 # create the logical slots
 create_logical_slots($node_standby, 'promotion_');
 
-- 
2.34.1



Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-08 Thread Drouvot, Bertrand

Hi,

On 5/8/23 4:42 AM, Amit Kapila wrote:

On Sat, May 6, 2023 at 9:33 PM Drouvot, Bertrand
 wrote:


On 5/6/23 3:28 PM, Amit Kapila wrote:

On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand
 wrote:



Next steps:
=
1. The first thing is we should verify this theory by adding some LOG
in KeepLogSeg() to see if the _logSegNo is reduced due to the value
returned by XLogGetReplicationSlotMinimumLSN().


Yeah, will do that early next week.


It's done with the following changes:

https://github.com/bdrouvot/postgres/commit/79e1bd9ab429a22f876b9364eb8a0da2dacaaef7#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bL7454-R7514

With that in place, there is one failing test here: 
https://cirrus-ci.com/task/5173216310722560

Where we can see:

2023-05-08 07:42:56.301 UTC [18038][checkpointer] LOCATION:  
UpdateMinRecoveryPoint, xlog.c:2500
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG:  0: 
CreateRestartPoint: After XLByteToSeg RedoRecPtr is 0/498, _logSegNo is 4
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION:  
CreateRestartPoint, xlog.c:7271
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG:  0: KeepLogSeg: 
segno changed to 4 due to XLByteToSeg
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION:  KeepLogSeg, 
xlog.c:7473
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG:  0: KeepLogSeg: 
segno changed to 3 due to XLogGetReplicationSlotMinimumLSN()
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION:  KeepLogSeg, 
xlog.c:7483
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG:  0: 
CreateRestartPoint: After KeepLogSeg RedoRecPtr is 0/498, endptr is 
0/4000148, _logSegNo is 3
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION:  
CreateRestartPoint, xlog.c:7284
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG:  0: BDT1 about to 
call RemoveOldXlogFiles in CreateRestartPoint
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION:  
CreateRestartPoint, xlog.c:7313
2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG:  0: attempting to 
remove WAL segments older than log file 0002

So the suspicion about XLogGetReplicationSlotMinimumLSN() was correct 
(_logSegNo moved from
4 to 3 due to XLogGetReplicationSlotMinimumLSN()).
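
For context, here is a simplified sketch of the KeepLogSeg() logic at play (the
real code in xlog.c also handles max_slot_wal_keep_size and wal_keep_size, so
this is not a verbatim copy):

#include "postgres.h"
#include "access/xlog.h"
#include "access/xlog_internal.h"
#include "replication/slot.h"

static void
KeepLogSeg_sketch(XLogRecPtr recptr, XLogSegNo *logSegNo)
{
	XLogSegNo	segno;
	XLogRecPtr	keep;

	XLByteToSeg(recptr, segno, wal_segment_size);

	/* minimum restart_lsn across ALL slots, the physical ones included */
	keep = XLogGetReplicationSlotMinimumLSN();
	if (keep != InvalidXLogRecPtr && keep < recptr)
		XLByteToSeg(keep, segno, wal_segment_size);

	/* don't allow removal of segments still needed by the slots */
	if (segno < *logSegNo)
		*logSegNo = segno;	/* that's the 4 -> 3 move in the logs above */
}

So the physical slot on the standby is enough to hold _logSegNo back there, even
though all the logical slots have been invalidated.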


What about postponing the physical slot creation on the standby and the
cascading standby node initialization after this test?



Yeah, that is also possible. But, I have a few questions regarding
that: (a) There doesn't seem to be a physical slot on cascading
standby, if I am missing something, can you please point me to the
relevant part of the test?


That's right. There is a physical slot only on the primary and on the standby.

What I meant up-thread is to also postpone the cascading standby node
initialization until after this test (once the physical slot on the standby is created).

Please find attached a proposal doing so.


(b) Which test is currently dependent on
the physical slot on standby?


Not a test, but the cascading standby initialization (with the
"primary_slot_name" parameter) depends on it.

Also, I now think it's better to have the physical slot on the standby + hsf
set to on on the
cascading standby (coming from the standby backup).

The idea is to avoid any risk of logical slot invalidation on the cascading standby
in the
standby promotion test.

That was not the case before the attached proposal though (hsf was off on the 
cascading standby).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 38c67b0fd8f2d83e97a93108484fe23c881dd91c Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Mon, 8 May 2023 07:02:50 +
Subject: [PATCH v1] fix retaining WAL test

---
 .../t/035_standby_logical_decoding.pl | 38 ++-
 1 file changed, 21 insertions(+), 17 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index ad1def2474..4bda706eae 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -276,20 +276,6 @@ $node_standby->append_conf('postgresql.conf',
max_replication_slots = 5]);
 $node_standby->start;
 $node_primary->wait_for_replay_catchup($node_standby);
-$node_standby->safe_psql('testdb', qq[SELECT * FROM 
pg_create_physical_replication_slot('$standby_physical_slotname');]);
-
-###
-# Initialize cascading standby node
-###
-$node_standby->backup($backup_name);
-$node_cascading_standby->init_from_backup(
-   $node_standby, $backup_name,
-   has_streaming => 1,
-   has_restoring => 1);
-$node_cascading_standby->append_conf('postgresql.conf',
-   qq[primary_slot_name = '$standby_physical_slotname']);
-$no

Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-06 Thread Drouvot, Bertrand

Hi,

On 5/6/23 3:28 PM, Amit Kapila wrote:

On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand
 wrote:


There is 2 runs with this extra info in place:

A successful one: https://cirrus-ci.com/task/6528745436086272
A failed one: https://cirrus-ci.com/task/4558139312308224



Thanks, I think I got some clue as to why this test is failing
randomly. 


Great, thanks!


Observations:

1. In the failed run, the KeepLogSeg(), reduced the _logSegNo to 3
which is the reason for the failure because now the standby won't be
able to remove/recycle the WAL file corresponding to segment number 3
which the test was expecting.


Agree.


2.  We didn't expect the KeepLogSeg() to reduce the _logSegNo because
all logical slots were invalidated. However, I think we forgot that
both standby and primary have physical slots which might also
influence the XLogGetReplicationSlotMinimumLSN() calculation in
KeepLogSeg().


Oh right...


Next steps:
=
1. The first thing is we should verify this theory by adding some LOG
in KeepLogSeg() to see if the _logSegNo is reduced due to the value
returned by XLogGetReplicationSlotMinimumLSN().


Yeah, will do that early next week.


2. The reason for the required file not being removed in the primary
is also that it has a physical slot which prevents the file removal.


Yeah, agree. But this one is not an issue as we are not
checking for the WAL file removal on the primary, do you agree?


3. If the above theory is correct then I see a few possibilities to
fix this test (a) somehow ensure that restart_lsn of the physical slot
on standby is advanced up to the point where we can safely remove the
required files; (b) just create a separate test case by initializing a
fresh node for primary and standby where we only have logical slots on
standby. This will be a bit costly but probably less risky. (c) any
better ideas?



(c): Since, I think, the physical slot on the primary is not a concern for
the reason mentioned above, then instead of (b):

What about postponing the physical slot creation on the standby and the
cascading standby node initialization until after this test?

That way, this test would be done without a physical slot on the standby and
we could also get rid of the "Wait for the cascading standby to catchup before
removing the WAL file(s)" part.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-06 Thread Drouvot, Bertrand

Hi,

On 5/6/23 4:10 AM, Amit Kapila wrote:

On Fri, May 5, 2023 at 7:53 PM Drouvot, Bertrand
 wrote:


On 5/5/23 2:28 PM, Amit Kapila wrote:

On Fri, May 5, 2023 at 5:36 PM Drouvot, Bertrand


So, even on a successful test, we can see that the WAL file we expect to be 
removed on the standby has not been recycled on the primary before the test.



Okay, one possibility of not removing on primary is that at the time
of checkpoint (when we compute RedoRecPtr), the wal_switch and insert
are not yet performed because in that case it will compute the
RedoRecPtr as a location before those operations which would be *3
file. However, it is not clear how is that possible except from a
background checkpoint happening at that point but from LOGs, it
appears that the checkpoint triggered by test has recycled the wal
files.


I think we need to add more DEBUG info to find that
out. Can you please try to print 'RedoRecPtr', '_logSegNo', and
recptr?



Yes, will do.



Okay, thanks, please try to print similar locations on standby in
CreateRestartPoint().



The extra information is displayed that way:

https://github.com/bdrouvot/postgres/commit/a3d6d58d105b379c04a17a1129bfb709302588ca#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bR6822-R6830
https://github.com/bdrouvot/postgres/commit/a3d6d58d105b379c04a17a1129bfb709302588ca#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bR7269-R7271
https://github.com/bdrouvot/postgres/commit/a3d6d58d105b379c04a17a1129bfb709302588ca#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bR7281-R7284

There are 2 runs with this extra info in place:

A successful one: https://cirrus-ci.com/task/6528745436086272
A failed one: https://cirrus-ci.com/task/4558139312308224

For both the testrun.zip is available in the Artifacts section.

Sharing this now in case you want to have a look (I'll have a look at them 
early next week on my side).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-05 Thread Drouvot, Bertrand




On 5/5/23 2:28 PM, Amit Kapila wrote:

On Fri, May 5, 2023 at 5:36 PM Drouvot, Bertrand
 wrote:

It seems that for some reason the current WAL file is not switched.


Oh wait, here is a NON failing one: https://cirrus-ci.com/task/5086849685782528 
(I modified the
.cirrus.yml so that we can download the "testrun.zip" file even if the test is 
not failing).

So, in this testrun.zip we can see, that the test is ok:

$ grep -i retain 
./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding
[10:06:08.789](0.000s) ok 19 - invalidated logical slots do not lead to 
retaining WAL

and that the WAL file we expect to be removed is:

$ grep "WAL file is" 
./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding
[10:06:08.789](0.925s) # BDT WAL file is 
/Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/00010003

This WAL file has been removed by the standby:

$ grep -i 00010003 
./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log
 | grep -i recy
2023-05-05 10:06:08.787 UTC [17521][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010003"

But on the primary, it has been recycled way after that time:

$ grep -i 00010003 
./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log
 | grep -i recy
2023-05-05 10:06:13.370 UTC [16785][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010003"

As, the checkpoint on the primary after the WAL file switch only recycled (001 
and 002):

$ grep -i recycled 
./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log
2023-05-05 10:05:57.196 UTC [16785][checkpointer] LOG:  0: checkpoint 
complete: wrote 4 buffers (3.1%); 0 WAL file(s) added, 0 removed, 0 recycled; 
write=0.001 s, sync=0.001 s, total=0.027 s; sync files=0, longest=0.000 s, 
average=0.000 s; distance=11219 kB, estimate=11219 kB; lsn=0/260, redo 
lsn=0/228
2023-05-05 10:06:08.138 UTC [16785][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010001"
2023-05-05 10:06:08.138 UTC [16785][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010002"
2023-05-05 10:06:08.138 UTC [16785][checkpointer] LOG:  0: checkpoint 
complete: wrote 20 buffers (15.6%); 0 WAL file(s) added, 0 removed, 2 recycled; 
write=0.001 s, sync=0.001 s, total=0.003 s; sync files=0, longest=0.000 s, 
average=0.000 s; distance=32768 kB, estimate=32768 kB; lsn=0/4D0, redo 
lsn=0/498


So, even on a successful test, we can see that the WAL file we expect to be 
removed on the standby has not been recycled on the primary before the test.


I think we need to add more DEBUG info to find that
out. Can you please try to print 'RedoRecPtr', '_logSegNo', and
recptr?



Yes, will do.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-05 Thread Drouvot, Bertrand




On 5/5/23 12:58 PM, Amit Kapila wrote:

On Fri, May 5, 2023 at 4:02 PM Drouvot, Bertrand
 wrote:



How did you conclude that 00010003 is the file the
test is expecting to be removed?


because I added a note in the test that way:

"
@@ -535,6 +539,7 @@ $node_standby->safe_psql('postgres', 'checkpoint;');

 # Verify that the WAL file has not been retained on the standby
 my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;
+note "BDT WAL file is $standby_walfile";
 ok(!-f "$standby_walfile",
"invalidated logical slots do not lead to retaining WAL");
"

so that I can check in the test log file:

grep "WAL file is" 
./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding
[08:23:31.931](2.217s) # BDT WAL file is 
/Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/00010003

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-05 Thread Drouvot, Bertrand




On 5/5/23 11:29 AM, Amit Kapila wrote:

On Fri, May 5, 2023 at 1:16 PM Drouvot, Bertrand
 wrote:



After multiple attempts, I got one failing one.

Issue is that we expect this file to be removed:

[07:24:27.261](0.899s) #WAL file is 
/Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/00010003

But the standby emits:

2023-05-05 07:24:27.216 UTC [17909][client backend] 
[035_standby_logical_decoding.pl][3/6:0] LOG:  statement: checkpoint;
2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG:  restartpoint starting: 
immediate wait
2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG:  attempting to remove 
WAL segments older than log file 0002

So it seems the test is not right (missing activity??), not sure why yet.



Can you try to print the value returned by
XLogGetReplicationSlotMinimumLSN() in KeepLogSeg() on standby? Also,
please try to print "attempting to remove WAL segments ..." on the
primary. We can see, if by any chance some slot is holding us to
remove the required WAL file.



I turned DEBUG2 on. We can also see on the primary:

2023-05-05 08:23:30.843 UTC [16833][checkpointer] LOCATION:  
CheckPointReplicationSlots, slot.c:1576
2023-05-05 08:23:30.844 UTC [16833][checkpointer] DEBUG:  0: snapshot of 
0+0 running transaction ids (lsn 0/4D0 oldest xid 746 latest complete 745 
next xid 746)
2023-05-05 08:23:30.844 UTC [16833][checkpointer] LOCATION:  
LogCurrentRunningXacts, standby.c:1377
2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOG:  0: BDT1 about to 
call RemoveOldXlogFiles in CreateCheckPoint
2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION:  CreateCheckPoint, 
xlog.c:6835
2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOG:  0: attempting to 
remove WAL segments older than log file 0002
2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION:  
RemoveOldXlogFiles, xlog.c:3560
2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010001"
2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION:  RemoveXlogFile, 
xlog.c:3708
2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010002"
2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION:  RemoveXlogFile, 
xlog.c:3708
2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG:  0: 
SlruScanDirectory invoking callback on pg_subtrans/

So, 00010003 is not removed on the primary.

It has been recycled on:

2023-05-05 08:23:38.605 UTC [16833][checkpointer] DEBUG:  0: recycled write-ahead log 
file "00010003"

Which is later than the test:

[08:23:31.931](0.000s) not ok 19 - invalidated logical slots do not lead to 
retaining WAL

FWIW, the failing test with DEBUG2 can be found there: 
https://cirrus-ci.com/task/5615316688961536

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-05 Thread Drouvot, Bertrand




On 5/5/23 9:04 AM, Amit Kapila wrote:

On Fri, May 5, 2023 at 11:08 AM Drouvot, Bertrand
 wrote:


On 5/4/23 6:43 AM, Amit Kapila wrote:

On Thu, May 4, 2023 at 8:37 AM vignesh C  wrote:


Thanks for posting the updated patch, I had run this test in a loop of
100 times to verify that there was no failure because of race
conditions. The 100 times execution passed successfully.

One suggestion:
"wal file" should be changed to "WAL file":
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;



Thanks for the verification. I have pushed the patch.



It looks like there is still something wrong with this test as there
are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - 
Meson).



Is it possible for you to point me to those failures?


I'll try to reproduce with more debug infos.



Okay, thanks!



After multiple attempts, I got one failing one.

Issue is that we expect this file to be removed:

[07:24:27.261](0.899s) #WAL file is 
/Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/00010003

But the standby emits:

2023-05-05 07:24:27.216 UTC [17909][client backend] 
[035_standby_logical_decoding.pl][3/6:0] LOG:  statement: checkpoint;
2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG:  restartpoint starting: 
immediate wait
2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG:  attempting to remove 
WAL segments older than log file 0002

So it seems the test is not right (missing activity??), not sure why yet.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-05 Thread Drouvot, Bertrand

Hi,

On 5/5/23 9:11 AM, vignesh C wrote:

On Fri, 5 May 2023 at 12:34, Amit Kapila  wrote:


On Fri, May 5, 2023 at 11:08 AM Drouvot, Bertrand
 wrote:


On 5/4/23 6:43 AM, Amit Kapila wrote:

On Thu, May 4, 2023 at 8:37 AM vignesh C  wrote:


Thanks for posting the updated patch, I had run this test in a loop of
100 times to verify that there was no failure because of race
conditions. The 100 times execution passed successfully.

One suggestion:
"wal file" should be changed to "WAL file":
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;



Thanks for the verification. I have pushed the patch.



It looks like there is still something wrong with this test as there
are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - 
Meson).



Is it possible for you to point me to those failures?


I think these failures are occurring in cfbot; one such instance is at:
https://cirrus-ci.com/task/6642271152504832?logs=test_world#L39
https://api.cirrus-ci.com/v1/artifact/task/6642271152504832/testrun/build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding



Yeah, thanks, that's one of them.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-04 Thread Drouvot, Bertrand

Hi,

On 5/4/23 6:43 AM, Amit Kapila wrote:

On Thu, May 4, 2023 at 8:37 AM vignesh C  wrote:


Thanks for posting the updated patch, I had run this test in a loop of
100 times to verify that there was no failure because of race
conditions. The 100 times execution passed successfully.

One suggestion:
"wal file" should be changed to "WAL file":
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;



Thanks for the verification. I have pushed the patch.



It looks like there is still something wrong with this test as there
are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - 
Meson).

I'll try to reproduce with more debug infos.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-05-03 Thread Drouvot, Bertrand

Hi,

On 5/2/23 4:50 AM, Thomas Munro wrote:

[patch]


This is not a review of the perl/make/meson glue/details, but I just
wanted to say thanks for working on this Bertrand & Michael, at a
quick glance that .txt file looks like it's going to be a lot more fun
to maintain!


Thanks Thomas! Yeah and probably less error prone too ;-)

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Autogenerate some wait events code and documentation

2023-05-03 Thread Drouvot, Bertrand

Hi,

On 5/1/23 1:59 AM, Michael Paquier wrote:

On Fri, Apr 28, 2023 at 02:29:13PM +0200, Drouvot, Bertrand wrote:

On 4/27/23 8:13 AM, Michael Paquier wrote:


But do we need to merge more data than necessary?  We could do things
in the simplest fashion possible while making the docs and code
user-friendly in the ordering: just add a section for Lock and LWLocks
in waiteventnames.txt with an extra comment in their headers and/or
data files to tell that waiteventnames.txt also needs a refresh.


Agree that it would fix the doc ordering and that we could do that.


Not much a fan of the part where a full paragraph of the SGML docs is
added to the .txt, particularly with the new handling for "Notes".


I understand your concern.


I'd rather shape the perl script to be minimalistic and simpler, even
if it means moving this paragraph about LWLocks after all the tables
are generated.


I'm not sure I like it. First, it does break the "Note" ordering as compared
to the current documentation. That's not a big deal though.

Secondly, what if we need to add some note(s) in the future for another wait
class? Having all the notes after all the tables are generated would sound
weird to me.

We could discuss another approach for the "Note" part if there is a need to add 
one for an existing/new wait class
though.



Do we also need the comments in the generated header as well?  My
initial impression was to just move these as comments of the .txt file
because that's where the new events would be added, as the .txt is the
main point of reference.



Oh I see. The advantage of the previous approach is to have them at both places 
(.txt and header).
But that said I understand your point about having the perl script minimalistic 
and simpler.


Please note that it creates 2 new "wait events":
WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN.


Noted.  Makes sense here.


Yup and that may help to add "custom" wait event for extensions too (need to 
think about it once
this refactoring is done).


So, the change here..
+   # Exception here
+   if ($last =~ /^BufferPin/)
+   {
+  $last = "Buffer_Pin";
+   }

..  Implies the two following changes:
typedef enum
  {
-   WAIT_EVENT_BUFFER_PIN = PG_WAIT_BUFFER_PIN
+   WAIT_EVENT_BUFFER_PIN = PG_WAIT_BUFFERPIN
  } WaitEventBufferPin;
[...]
  static const char *
-pgstat_get_wait_buffer_pin(WaitEventBufferPin w)
+pgstat_get_wait_bufferpin(WaitEventBufferPin w)

I would be OK to remove this exception in the script as it does not
change anything for the end user (the wait event string is still
reported as "BufferPin").  This way, we keep things simpler in the
script. 


Good point, agree.


This has as extra consequence to require a change in
wait_event.h so as PG_WAIT_BUFFER_PIN is renamed to PG_WAIT_BUFFERPIN,
equally fine by me.  Logically, this rename should be done in a patch
of its own, for clarity.


Yes, I can look at it.



@@ -185,6 +193,7 @@ distprep:
 $(MAKE) -C utilsdistprep
 $(MAKE) -C utils/adtjsonpath_gram.c jsonpath_gram.h jsonpath_scan.c
 $(MAKE) -C utils/misc   guc-file.c
+   $(MAKE) -C utils/actvitywait_event_types.h pgstat_wait_event.c
Incorrect order, and incorrect name (s/actvity/activity/, lacking an
'i').



Nice catch.


+printf $h $header_comment, 'wait_event_types.h';
+printf $h "#ifndef WAITEVENTNAMES_H\n";
+printf $h "#define WAITEVENTNAMES_H\n\n";
Inconsistency detected here.

It seems to me that we'd better have a .gitignore in utils/activity/
for the new files.



Agree.


@@ -237,7 +237,7 @@ autoprewarm_main(Datum main_arg)
 (void) WaitLatch(MyLatch,
  WL_LATCH_SET | WL_EXIT_ON_PM_DEATH,
  -1L,
-PG_WAIT_EXTENSION);
+WAIT_EVENT_EXTENSION);

Perhaps this should also be part of a first, separate patch, with the
introduction of the new pgstat_get_wait_extension/bufferpin()
functions.  Okay, it is not a big deal because the main patch
generates the enum for extensions which would be used here, but for
the sake of history clarity I'd rather refactor and rename all that
first.



Agree, I'll look at this.


The choices of LWLOCK and LOCK for the internal names was a bit
surprising, while we can be consistent with the rest and use "LWLock"
and "Lock".

Attached is a v7 with the portions I have adjusted, which is mostly OK
by me at this point.  We are still away from the next CF, but I'll
look at that again when the v17 branch opens.


Thanks for the v7! I did not look at the details but just replied to this 
thread.

I'll look at v7 when the v17 branch opens and propose the separate patch
mentioned above at that time too.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-03 Thread Drouvot, Bertrand

Hi,

On 5/4/23 6:43 AM, Amit Kapila wrote:

On Thu, May 4, 2023 at 8:37 AM vignesh C  wrote:


Thanks for posting the updated patch, I had run this test in a loop of
100 times to verify that there was no failure because of race
conditions. The 100 times execution passed successfully.

One suggestion:
"wal file" should be changed to "WAL file":
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;



Thanks for the verification. I have pushed the patch.



Thanks!

I've marked the CF entry as Committed and moved the associated PostgreSQL 16 
Open Item
to "resolved before 16beta1".

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-03 Thread Drouvot, Bertrand

Hi,

On 5/3/23 12:29 PM, Amit Kapila wrote:

On Tue, May 2, 2023 at 4:52 PM Drouvot, Bertrand
 wrote:



As per your suggestion, changing the insert ordering (like in V8 attached) 
makes it now work on the failing environment too.



I think it is better to use wait_for_replay_catchup() to wait for
standby to catch up.


Oh right, that's a discussion we already had in [1], I should have thought 
about it.


 I have changed that and a comment in the
attached. I'll push this tomorrow unless there are further comments.



LGTM, thanks!

[1]: 
https://www.postgresql.org/message-id/acbac69e-9ae8-c546-3216-8ecb38e7a93d%40gmail.com

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-05-02 Thread Drouvot, Bertrand

Hi,

On 5/2/23 8:28 AM, Amit Kapila wrote:

On Fri, Apr 28, 2023 at 2:24 PM Drouvot, Bertrand
 wrote:


I can see V7 failing on "Cirrus CI / macOS - Ventura - Meson" only (other 
machines are not complaining).

It does fail on "invalidated logical slots do not lead to retaining WAL", see 
https://cirrus-ci.com/task/4518083541336064

I'm not sure why it is failing, any idea?



I think the reason for the failure is that on standby, the test is not
able to remove the file corresponding to the invalid slot. You are
using pg_switch_wal() to generate a switch record and I think you need
one more WAL-generating statement after that to achieve your purpose
which is that during checkpoint, the test removes the WAL file
corresponding to an invalid slot. Just doing checkpoint on primary may
not serve the need as that doesn't lead to any new insertion of WAL on
standby. Is your v6 failing in the same environment?


Thanks for the feedback!

No V6 was working fine.


If not, then it
is probably due to the reason that the test is doing insert after
pg_switch_wal() in that version. Why did you change the order of
insert in v7?



I thought doing the insert before the switch was OK and, as my local test
was running fine, I did not reconsider the ordering.


BTW, you can confirm the failure by changing the DEBUG2 message in
RemoveOldXlogFiles() to LOG. In the case, where the test fails, it may
not remove the WAL file corresponding to an invalid slot whereas it
will remove the WAL file when the test succeeds.


Yeah, I added more debug information and what I can see is that the WAL file
we want to see removed is "00010003" while the standby emits:

"
2023-05-02 10:03:28.351 UTC [16971][checkpointer] LOG:  attempting to remove 
WAL segments older than log file 0002
2023-05-02 10:03:28.351 UTC [16971][checkpointer] LOG:  recycled write-ahead log file 
"00010002"
"

As per your suggestion, changing the insert ordering (like in V8 attached) 
makes it now work on the failing environment too.
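
For the record, the primary-side activity in V8 now looks like this
(reformatted here for readability), with the insert done after the WAL switch:

# Generate some activity and switch WAL file on the primary
# (the insert now comes after pg_switch_wal())
$node_primary->safe_psql(
	'postgres', "create table retain_test(a int);
	             select pg_switch_wal();
	             insert into retain_test values(1);
	             checkpoint;");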

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom defaeb000b09eab9ba7b5d08c032f81b5bd72a53 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Fri, 28 Apr 2023 07:27:20 +
Subject: [PATCH v8] Add a test to verify that invalidated logical slots do not
 lead to retaining WAL.

---
 .../t/035_standby_logical_decoding.pl | 39 ++-
 1 file changed, 37 insertions(+), 2 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index 66d264f230..a6b640d7fd 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -500,9 +500,44 @@ $node_standby->restart;
 check_slots_conflicting_status(1);
 
 ##
-# Verify that invalidated logical slots do not lead to retaining WAL
+# Verify that invalidated logical slots do not lead to retaining WAL.
 ##
-# X TODO
+
+# Before removing WAL file(s), ensure the cascading standby catch up
+$node_standby->wait_for_replay_catchup($node_cascading_standby,
+   $node_primary);
+
+# Get the restart_lsn from an invalidated slot
+my $restart_lsn = $node_standby->safe_psql('postgres',
+   "SELECT restart_lsn from pg_replication_slots WHERE slot_name = 
'vacuum_full_activeslot' and conflicting is true;"
+);
+
+chomp($restart_lsn);
+
+# As pg_walfile_name() can not be executed on the standby,
+# get the WAL file name associated to this lsn from the primary
+my $walfile_name = $node_primary->safe_psql('postgres',
+   "SELECT pg_walfile_name('$restart_lsn')");
+
+chomp($walfile_name);
+
+# Generate some activity and switch WAL file on the primary
+$node_primary->safe_psql(
+   'postgres', "create table retain_test(a int);
+select 
pg_switch_wal();
+insert 
into retain_test values(1);
+
checkpoint;");
+
+# Wait for the standby to catch up
+$node_primary->wait_for_catchup($node_standby);
+
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;
+ok(!-f "$standby_walfile",
+   "invalidated logical slots do not lead to retaining WAL");
 
 ##
 # Recovery conflict: Invalidate conflicting slots, including in-use slots
-- 
2.34.1



Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-28 Thread Drouvot, Bertrand

Hi,

On 4/28/23 5:55 AM, Amit Kapila wrote:

On Wed, Apr 26, 2023 at 7:53 PM Drouvot, Bertrand
 wrote:

+# Get the restart_lsn from an invalidated slot
+my $restart_lsn = $node_standby->safe_psql('postgres',
+ "SELECT restart_lsn from pg_replication_slots WHERE slot_name =
'vacuum_full_activeslot' and conflicting is true;"
+);
+
+chomp($restart_lsn);
+
+# Get the WAL file name associated to this lsn on the primary
+my $walfile_name = $node_primary->safe_psql('postgres',
+ "SELECT pg_walfile_name('$restart_lsn')");
+
+chomp($walfile_name);
+
+# Check the WAL file is still on the primary
+ok(-f $node_primary->data_dir . '/pg_wal/' . $walfile_name,
+ "WAL file still on the primary");

How is it guaranteed that the WAL file corresponding to the
invalidated slot on standby will still be present on primary?


The slot(s) have been invalidated by the "vacuum full" test just above
this one. So I think the WAL we are looking for is the last one being used
by the primary. As no activity has happened on it since the vacuum full, it
looks to me like it should still be present.

But I may have missed something, and maybe there is no guarantee that this WAL
file is still there in all cases.
In that case, I think it's better to remove this test (it does not provide
added value here).

Test removed in V7 attached.


Can you
please explain the logic behind this test a bit more like how the WAL
file switch helps you to achieve the purpose?



The idea was to generate enough WAL switches on the primary to ensure
the WAL file had been removed.

I gave it another thought and I think we can skip the check that the WAL file
is not on the primary anymore. That way, one WAL switch seems to be enough
to see it removed on the standby.

It's done in V7.

V7 does not do more "extra tests" than necessary, and I think it's probably
better like this.

I can see V7 failing on "Cirrus CI / macOS - Ventura - Meson" only (other 
machines are not complaining).

It does fail on "invalidated logical slots do not lead to retaining WAL", see 
https://cirrus-ci.com/task/4518083541336064

I'm not sure why it is failing, any idea?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 2ab08214415023505244c954a6a5ebf42ec9aebb Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Fri, 28 Apr 2023 07:27:20 +
Subject: [PATCH v7] Add a test to verify that invalidated logical slots do not
 lead to retaining WAL.

---
 .../t/035_standby_logical_decoding.pl | 39 ++-
 1 file changed, 37 insertions(+), 2 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index 66d264f230..b32c1002b0 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -500,9 +500,44 @@ $node_standby->restart;
 check_slots_conflicting_status(1);
 
 ##
-# Verify that invalidated logical slots do not lead to retaining WAL
+# Verify that invalidated logical slots do not lead to retaining WAL.
 ##
-# X TODO
+
+# Before removing WAL file(s), ensure the cascading standby catch up
+$node_standby->wait_for_replay_catchup($node_cascading_standby,
+   $node_primary);
+
+# Get the restart_lsn from an invalidated slot
+my $restart_lsn = $node_standby->safe_psql('postgres',
+   "SELECT restart_lsn from pg_replication_slots WHERE slot_name = 
'vacuum_full_activeslot' and conflicting is true;"
+);
+
+chomp($restart_lsn);
+
+# As pg_walfile_name() can not be executed on the standby,
+# get the WAL file name associated to this lsn from the primary
+my $walfile_name = $node_primary->safe_psql('postgres',
+   "SELECT pg_walfile_name('$restart_lsn')");
+
+chomp($walfile_name);
+
+# Generate some activity and switch WAL file on the primary
+$node_primary->safe_psql(
+   'postgres', "create table retain_test(a int);
+insert 
into retain_test values(1);
+select 
pg_switch_wal();
+
checkpoint;");
+
+# Wait for the standby to catch up
+$node_primary->wait_for_catchup($node_standby);
+
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;
+ok(!-f "$standby_walfile",
+   "invalidated logical slots do not lead to retaining WAL");
 
 ##
 # Recovery conflict: Invalidate conflicting slots, including in-use slots
-- 
2.34.1



Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-27 Thread Drouvot, Bertrand




On 4/27/23 11:54 AM, Amit Kapila wrote:

On Thu, Apr 27, 2023 at 1:05 PM Drouvot, Bertrand
 wrote:


On 4/27/23 5:37 AM, Amit Kapila wrote:

On Wed, Apr 26, 2023 at 4:41 PM Drouvot, Bertrand
 wrote:

+When in recovery, the default value of target_lsn is $node->lsn('replay')
+instead. This is needed when the publisher passed to
wait_for_subscription_sync()
+is a standby node.

I think this will be useful whenever wait_for_catchup has been called
for a standby node (where self is a standby node). I have tried even
by commenting wait_for_subscription_sync in the new test then it fails
for $node_standby->wait_for_catchup('tap_sub');. So instead, how about
a comment like: "When in recovery, the default value of target_lsn is
$node->lsn('replay') instead which ensures that the cascaded standby
has caught up to what has been replayed on the standby."?



I did it that way because wait_for_subscription_sync() was the first case I had
to work on, but I do agree that your wording better describes the intent of the
new code.

Changed in V7 attached.



Pushed.



Thanks!

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Fix a test case in 035_standby_logical_decoding.pl

2023-04-27 Thread Drouvot, Bertrand

Hi,

On 4/27/23 11:53 AM, Amit Kapila wrote:

On Thu, Apr 27, 2023 at 2:16 PM Drouvot, Bertrand
 wrote:


On 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:

Hi hackers,

In 035_standby_logical_decoding.pl, I think that the check in the following test
case should be performed on the standby node, instead of the primary node, as
the slot is created on the standby node.


Oh right, the current test is not done on the right node, thanks!


The test currently passes because it
only checks the return value of psql. It might be more appropriate to check the
error message.


Agree, and it's consistent with what is being done in 006_logical_decoding.pl.


Please see the attached patch.



+
+($result, $stdout, $stderr) = $node_standby->psql(
   'otherdb',
   "SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', 
NULL, NULL) ORDER BY lsn DESC LIMIT 1;"
-),
-3,
-'replaying logical slot from another database fails');
+);
+ok( $stderr =~
+ m/replication slot "behaves_ok_activeslot" was not created in this 
database/,
+   "replaying logical slot from another database fails");


That does look good to me.



I agree that that the check should be done on standby but how does it
make a difference to check the error message or return value? Won't it
be the same for both primary and standby?



Yes, that would be the same. I think the original idea from Shi-san (please
correct me if I'm wrong) was, while moving this test to the right node, to
also make it even better.


Nit: I wonder whether, while at it, we could also remove the
"ORDER BY lsn DESC LIMIT 1" part (which was already there).
It does not change anything regarding the test but looks more appropriate to me.



It may not make a difference as this is anyway an error case but it
looks more logical to LIMIT by 1 as you are fetching a single LSN
value and it will be consistent with other tests in this file and the
test case in the file 006_logical_decoding.pl.



Yeah, I think it all depends on how one sees this test (as testing a failure,
or as trying to get a result and seeing it fail). That was a nit, so I don't
have a strong opinion on this one.


BTW, in the same test, I see it uses wait_for_catchup() in one place
and wait_for_replay_catchup() in another place after Insert. Isn't it
better to use wait_for_replay_catchup in both places?



They are both using the 'replay' mode, so both are perfectly correct, but for
consistency (and as they don't use the same "target_lsn", 'write' vs 'flush')
I think that using wait_for_replay_catchup() in both places (which uses the
'flush' target lsn) is a good idea.
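
i.e. (sketch of what both places would end up doing):

# Wait for the standby to catch up, using the same helper in both places
$node_primary->wait_for_replay_catchup($node_standby);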

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Fix a test case in 035_standby_logical_decoding.pl

2023-04-27 Thread Drouvot, Bertrand

Hi,

On 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:

Hi hackers,

In 035_standby_logical_decoding.pl, I think that the check in the following test
case should be performed on the standby node, instead of the primary node, as
the slot is created on the standby node.


Oh right, the current test is not done on the right node, thanks!


The test currently passes because it
only checks the return value of psql. It might be more appropriate to check the
error message. 


Agree, and it's consistent with what is being done in 006_logical_decoding.pl.


Please see the attached patch.



+
+($result, $stdout, $stderr) = $node_standby->psql(
 'otherdb',
 "SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', 
NULL, NULL) ORDER BY lsn DESC LIMIT 1;"
-),
-3,
-'replaying logical slot from another database fails');
+);
+ok( $stderr =~
+ m/replication slot "behaves_ok_activeslot" was not created in this 
database/,
+   "replaying logical slot from another database fails");


That does look good to me.

Nit: I wonder whether, while at it, we could also remove the
"ORDER BY lsn DESC LIMIT 1" part (which was already there).
It does not change anything regarding the test but looks more appropriate to me.
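
i.e. the call would simply become (sketch):

($result, $stdout, $stderr) = $node_standby->psql(
	'otherdb',
	"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL);");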

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-27 Thread Drouvot, Bertrand

Hi,

On 4/27/23 5:37 AM, Amit Kapila wrote:

On Wed, Apr 26, 2023 at 4:41 PM Drouvot, Bertrand
 wrote:

+When in recovery, the default value of target_lsn is $node->lsn('replay')
+instead. This is needed when the publisher passed to
wait_for_subscription_sync()
+is a standby node.

I think this will be useful whenever wait_for_catchup has been called
for a standby node (where self is a standby node). I have tried even
by commenting wait_for_subscription_sync in the new test then it fails
for $node_standby->wait_for_catchup('tap_sub');. So instead, how about
a comment like: "When in recovery, the default value of target_lsn is
$node->lsn('replay') instead which ensures that the cascaded standby
has caught up to what has been replayed on the standby."?



I did it that way because wait_for_subscription_sync() was the first case I had
to work on, but I do agree that your wording better describes the intent of the
new code.

Changed in V7 attached.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom af8ab8460a70fe9b70836be021a1a98b2ba6ddd3 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Mon, 24 Apr 2023 05:13:23 +
Subject: [PATCH v7] Add subscription to the standby test in
 035_standby_logical_decoding.pl

Adding one test, to verify that subscription to the standby is possible.
---
 src/test/perl/PostgreSQL/Test/Cluster.pm  | 21 -
 .../t/035_standby_logical_decoding.pl | 91 ++-
 2 files changed, 107 insertions(+), 5 deletions(-)
  18.9% src/test/perl/PostgreSQL/Test/
  81.0% src/test/recovery/t/

diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm 
b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 6f7f4e5de4..bc9b5dc644 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2611,8 +2611,14 @@ When doing physical replication, the standby is usually 
identified by
 passing its PostgreSQL::Test::Cluster instance.  When doing logical
 replication, standby_name identifies a subscription.
 
-The default value of target_lsn is $node->lsn('write'), which ensures
-that the standby has caught up to what has been committed on the primary.
+When not in recovery, the default value of target_lsn is $node->lsn('write'),
+which ensures that the standby has caught up to what has been committed on
+the primary.
+
+When in recovery, the default value of target_lsn is $node->lsn('replay')
+instead which ensures that the cascaded standby has caught up to what has been
+replayed on the standby.
+
 If you pass an explicit value of target_lsn, it should almost always be
 the primary's write LSN; so this parameter is seldom needed except when
 querying some intermediate replication node rather than the primary.
@@ -2644,7 +2650,16 @@ sub wait_for_catchup
}
if (!defined($target_lsn))
{
-   $target_lsn = $self->lsn('write');
+   my $isrecovery = $self->safe_psql('postgres', "SELECT 
pg_is_in_recovery()");
+   chomp($isrecovery);
+   if ($isrecovery eq 't')
+   {
+   $target_lsn = $self->lsn('replay');
+   }
+   else
+   {
+   $target_lsn = $self->lsn('write');
+   }
}
print "Waiting for replication conn "
  . $standby_name . "'s "
diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index b8f5311fe9..f6d6447412 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -10,12 +10,17 @@ use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
 
-my ($stdin, $stdout, $stderr, $cascading_stdout, $cascading_stderr, $ret, 
$handle, $slot);
+my ($stdin, $stdout,$stderr,
+   $cascading_stdout,  $cascading_stderr,  $subscriber_stdin,
+   $subscriber_stdout, $subscriber_stderr, $ret,
+   $handle,$slot);
 
 my $node_primary = PostgreSQL::Test::Cluster->new('primary');
 my $node_standby = PostgreSQL::Test::Cluster->new('standby');
 my $node_cascading_standby = 
PostgreSQL::Test::Cluster->new('cascading_standby');
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
 my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;
+my $psql_timeout= IPC::Run::timer($default_timeout);
 my $res;
 
 # Name for the physical slot on primary
@@ -267,7 +272,8 @@ $node_standby->init_from_backup(
has_streaming => 1,
has_restoring => 1);
 $node_standby->append_conf('postgresql.conf',
-   q

Re: Autogenerate some wait events code and documentation

2023-04-26 Thread Drouvot, Bertrand

Hi,

On 4/26/23 6:51 PM, Drouvot, Bertrand wrote:

Hi,

On 4/25/23 7:15 AM, Michael Paquier wrote:

Will do, no problem at all.



Please find attached V5 addressing the previous comments except
the "ordering" one (need to look deeper at this).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom b9a270c72ede800f2819b326aef7a7144027d861 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Sat, 22 Apr 2023 10:37:56 +
Subject: [PATCH v5] Generating wait_event_types.h, pgstat_wait_event.c and
 waiteventnames.sgml

Purpose is to auto-generate those files based on the newly created 
waiteventnames.txt file.

Having auto generated files would help to avoid:

- wait event without description like observed in 
https://www.postgresql.org/message-id/CA%2BhUKGJixAHc860Ej9Qzd_z96Z6aoajAgJ18bYfV3Lfn6t9%3D%2BQ%40mail.gmail.com
- orphaned wait event like observed in 
https://www.postgresql.org/message-id/CA%2BhUKGK6tqm59KuF1z%2Bh5Y8fsWcu5v8%2B84kduSHwRzwjB2aa_A%40mail.gmail.com
---
 doc/src/sgml/.gitignore   |   1 +
 doc/src/sgml/Makefile |   5 +-
 doc/src/sgml/filelist.sgml|   1 +
 doc/src/sgml/meson.build  |  10 +
 doc/src/sgml/monitoring.sgml  | 792 +-
 src/backend/Makefile  |  13 +-
 src/backend/utils/activity/Makefile   |  10 +
 .../utils/activity/generate-waiteventnames.pl | 204 +
 src/backend/utils/activity/meson.build|  24 +
 src/backend/utils/activity/wait_event.c   | 567 +
 src/backend/utils/activity/waiteventnames.txt | 245 ++
 src/include/Makefile  |   2 +-
 src/include/utils/.gitignore  |   1 +
 src/include/utils/meson.build |  18 +
 src/include/utils/wait_event.h| 214 +
 src/tools/msvc/Solution.pm|  18 +
 src/tools/msvc/clean.bat  |   3 +
 17 files changed, 556 insertions(+), 1572 deletions(-)
  36.2% doc/src/sgml/
  52.9% src/backend/utils/activity/
   8.6% src/include/utils/

diff --git a/doc/src/sgml/.gitignore b/doc/src/sgml/.gitignore
index d8e3dab338..e8cbd687d5 100644
--- a/doc/src/sgml/.gitignore
+++ b/doc/src/sgml/.gitignore
@@ -17,6 +17,7 @@
 /errcodes-table.sgml
 /keywords-table.sgml
 /version.sgml
+/waiteventnames.sgml
 # Assorted byproducts from building the above
 /postgres-full.xml
 /INSTALL.html
diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile
index 71cbef230f..01a6aa8187 100644
--- a/doc/src/sgml/Makefile
+++ b/doc/src/sgml/Makefile
@@ -58,7 +58,7 @@ override XSLTPROCFLAGS += --stringparam pg.version 
'$(VERSION)'
 
 GENERATED_SGML = version.sgml \
features-supported.sgml features-unsupported.sgml errcodes-table.sgml \
-   keywords-table.sgml
+   keywords-table.sgml waiteventnames.sgml
 
 ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML)
 
@@ -111,6 +111,9 @@ errcodes-table.sgml: 
$(top_srcdir)/src/backend/utils/errcodes.txt generate-errco
 keywords-table.sgml: $(top_srcdir)/src/include/parser/kwlist.h $(wildcard 
$(srcdir)/keywords/sql*.txt) generate-keywords-table.pl
$(PERL) $(srcdir)/generate-keywords-table.pl $(srcdir) > $@
 
+waiteventnames.sgml: 
$(top_srcdir)/src/backend/utils/activity/waiteventnames.txt 
$(top_srcdir)/src/backend/utils/activity/generate-waiteventnames.pl
+   $(PERL) 
$(top_srcdir)/src/backend/utils/activity/generate-waiteventnames.pl $< > $@
+   rm pgstat_wait_event.c; rm wait_event_types.h
 
 ##
 ## Generation of some text files.
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 0d6be9a2fa..9ab235d36a 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -42,6 +42,7 @@
 
 
 
+
 
 
 
diff --git a/doc/src/sgml/meson.build b/doc/src/sgml/meson.build
index c6d77b5a15..53e4505b97 100644
--- a/doc/src/sgml/meson.build
+++ b/doc/src/sgml/meson.build
@@ -46,6 +46,16 @@ doc_generated += custom_target('errcodes-table.sgml',
   capture: true,
 )
 
+doc_generated += custom_target('waiteventnames.sgml',
+  input: files(
+'../../../src/backend/utils/activity/waiteventnames.txt'),
+  output: 'waiteventnames.sgml',
+  command: [perl, 
files('../../../src/backend/utils/activity/generate-waiteventnames.pl'), 
'@INPUT@'],
+  build_by_default: false,
+  install: false,
+  capture: true,
+)
+
 # FIXME: this actually has further inputs, adding depfile support to
 # generate-keywords-table.pl is probably the best way to address that
 # robustly.
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 99f7f95c39..c8648fca18 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -1100,74 +1100,7 @@ postgres   27093  0.0  0.0  30096  2752 

Re: Autogenerate some wait events code and documentation

2023-04-26 Thread Drouvot, Bertrand

Hi,

On 4/25/23 7:15 AM, Michael Paquier wrote:

On Mon, Apr 24, 2023 at 09:03:53AM +0200, Drouvot, Bertrand wrote:

Oh right, fixed.


I may tweak a few things if I put my hands on it, but that looks
pretty solid seen from here.. 


Glad to hear! ;-)


I have spotted a few extra issues.

One thing I have noticed with v4 is that the order of the tables
generated in wait_event_types.h and the SGML docs is inconsistent with
previous versions, and these are not in an alphabetical order.  HEAD
orders them as Activity, BufferPin, Client, Extension, IO, IPC, Lock,
LWLock and Timeout.  This patch switches the order to become
different, and note that the first table describing each of the wait
event type classes gets it right.



Right, the ordering being somewhat broken is also something I mentioned
initially when I first presented this patch up-thread. That's also due to the
fact that this patch does not autogenerate PG_WAIT_LWLOCK, PG_WAIT_LOCK,
PG_WAIT_BUFFER_PIN and PG_WAIT_EXTENSION.


It seems to me that you should apply an extra ordering in
generate-waiteventnames.pl to make sure that the tables are printed in
the same order as previously, around here:
+# Generate the output files
+foreach $waitclass (keys %hashwe) {



Yeah, but that would still affect only the auto-generated ones, and would then
result in the wait event part of the "monitoring-stats" documentation not being
ordered consistently with the "Wait Event Types" table.

And as we have only one "include" in doc/src/sgml/monitoring.sgml for all the
auto-generated ones, the ordering would still be broken.
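
(For reference, the extra ordering itself would be a one-liner in
generate-waiteventnames.pl, something like the sketch below, but as said it
would only order the auto-generated part:)

# Sketch: emit the wait classes in a stable, alphabetical order
foreach my $waitclass (sort keys %hashwe)
{
	# ... generate the enum/table for this wait class, as before ...
}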


(The table describing all the wait event types could be removed from
the SGML docs as well, at the cost of having their description in the
new .txt file.  However, as these are long, it would make the .txt
file much messier, so not doing this extra part is OK for me.)


Right, but that might be a valuable option to also fix the ordering issue
mentioned above (need to look deeper at this).



- * Use this category when a process is waiting because it has no work to do,
- * unless the "Client" or "Timeout" category describes the situation better.
- * Typically, this should only be used for background processes

wait_event.h includes a set of comments describing each category, that
this patch removes.  Rather than removing this information, which is
helpful to have around, why not making them comments of
waiteventnames.txt instead?  Losing this information would be sad.



Yeah, good point, I'll look at this.


+#   src/backend/utils/activity/pgstat_wait_event.c
+#  c functions to get the wait event name based on the enum
Nit.  'c' should be upper-case.

 }
+
 if (IsNewer(
 'src/include/storage/lwlocknames.h',
Not wrong, but this is an unrelated diff.



Yeah, probably due to a pgindent run.


+if %DIST%==1 if exist src\backend\utils\activity\pgstat_wait_event.c del /q 
src\backend\utils\activity\pgstat_wait_event.c
  if %DIST%==1 if exist src\backend\storage\lmgr\lwlocknames.h del /q 
src\backend\storage\lmgr\lwlocknames.h
+if %DIST%==1 if exist src\backend\utils\activity\wait_event_types.h del /q 
src\backend\utils\activity\wait_event_types.h
The order here is off a bit.  Missed that previously..

perltidy had better be run on generate-waiteventnames.pl (I can do
that myself, no worries), as a couple of lines' format don't seem
quite in line.


Will do, no problem at all.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-26 Thread Drouvot, Bertrand

Hi,

On 4/26/23 11:58 AM, Yu Shi (Fujitsu) wrote:

On Mon, Apr 24, 2023 8:07 PM Drouvot, Bertrand  
wrote:



I think that's because when replaying a checkpoint record, the startup process
of standby only saves the information of the checkpoint, and we need to wait for
the checkpointer to perform a restartpoint (see RecoveryRestartPoint), right? If
so, could we force a checkpoint on standby? After this, the standby should have
completed the restartpoint and we don't need to wait.



Thanks for looking at it!

Oh right, that looks like a good way to ensure the WAL file is removed on
the standby so that we don't need to wait.

Implemented that way in V6 attached and that works fine.
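
Concretely, the restartpoint is now requested explicitly (relevant lines of
the attached V6):

# Request a checkpoint on the standby to trigger the WAL file(s) removal
$node_standby->safe_psql('postgres', 'checkpoint;');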


Besides, would it be better to wait for the cascading standby? If the WAL
file needed by the cascading standby is removed on the standby, the subsequent
test will fail.


Good catch! I agree that we have to wait on the cascading standby before 
removing
the WAL files. It's done in V6 (and the test is not failing anymore if we set a
recovery_min_apply_delay to 5s on the cascading standby).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 79f554eaf8185a2d34dc48ba31f1a3b3cd09c185 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Tue, 25 Apr 2023 06:02:17 +
Subject: [PATCH v6] Add retained WAL test in 035_standby_logical_decoding.pl

Adding one test, to verify that invalidated logical slots do not lead to
retaining WAL.
---
 .../t/035_standby_logical_decoding.pl | 69 ++-
 1 file changed, 67 insertions(+), 2 deletions(-)
 100.0% src/test/recovery/t/

diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index f6d6447412..ea9ca46995 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -495,9 +495,74 @@ $node_standby->restart;
 check_slots_conflicting_status(1);
 
 ##
-# Verify that invalidated logical slots do not lead to retaining WAL
+# Verify that invalidated logical slots do not lead to retaining WAL.
 ##
-# X TODO
+
+# Before removing WAL files, ensure the cascading standby catch up
+$node_standby->wait_for_replay_catchup($node_cascading_standby, $node_primary);
+
+# Get the restart_lsn from an invalidated slot
+my $restart_lsn = $node_standby->safe_psql('postgres',
+   "SELECT restart_lsn from pg_replication_slots WHERE slot_name = 
'vacuum_full_activeslot' and conflicting is true;"
+);
+
+chomp($restart_lsn);
+
+# Get the WAL file name associated to this lsn on the primary
+my $walfile_name = $node_primary->safe_psql('postgres',
+   "SELECT pg_walfile_name('$restart_lsn')");
+
+chomp($walfile_name);
+
+# Check the WAL file is still on the primary
+ok(-f $node_primary->data_dir . '/pg_wal/' . $walfile_name,
+   "WAL file still on the primary");
+
+# Get the number of WAL files on the standby
+my $nb_standby_files = $node_standby->safe_psql('postgres',
+   "SELECT COUNT(*) FROM pg_ls_dir('pg_wal')");
+
+chomp($nb_standby_files);
+
+# Switch WAL files on the primary
+my @c = (1 .. $nb_standby_files);
+
+$node_primary->safe_psql('postgres', "create table retain_test(a int)");
+
+for (@c)
+{
+   $node_primary->safe_psql(
+   'postgres', "SELECT pg_switch_wal();
+  insert into retain_test values("
+ . $_ . ");");
+}
+
+# Ask for a checkpoint
+$node_primary->safe_psql('postgres', 'checkpoint;');
+
+# Check that the WAL file has not been retained on the primary
+ok(!-f $node_primary->data_dir . '/pg_wal/' . $walfile_name,
+   "WAL file not on the primary anymore");
+
+# Wait for the standby to catch up
+$node_primary->wait_for_catchup($node_standby);
+
+# Generate another WAL switch, more activity and a checkpoint
+$node_primary->safe_psql(
+   'postgres', "SELECT pg_switch_wal();
+ insert into retain_test values(1);");
+$node_primary->safe_psql('postgres', 'checkpoint;');
+
+# Wait for the standby to catch up
+$node_primary->wait_for_catchup($node_standby);
+
+# Request a checkpoint on the standby to trigger the WAL file(s) removal
+$node_standby->safe_psql('postgres', 'checkpoint;');
+
+# Verify that the wal file has not been retained on the standby
+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;
+ok( !-f "$standby_walfile",
+"invalidated logical slots do not lead to retaining WAL");
 
 ##
 # Recovery conflict: Invalidate conflicting slots, including in-use slots
-- 
2.34.1



Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-26 Thread Drouvot, Bertrand

Hi,

On 4/26/23 11:12 AM, vignesh C wrote:

On Wed, 26 Apr 2023 at 13:45, Drouvot, Bertrand

There was one typo in the commit message, "subscribtion" should be
"subscription"; the rest of the changes look good to me:
Subject: [PATCH v5] Add subscribtion to the standby test in
  035_standby_logical_decoding.pl

Adding one test, to verify that subscribtion to the standby is possible.



Oops, at least I repeated it twice ;-)
Fixed in V6 that I just shared up-thread.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com




Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-26 Thread Drouvot, Bertrand

Hi,

On 4/26/23 12:27 PM, Alvaro Herrera wrote:

diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm 
b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 6f7f4e5de4..819667d42a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2644,7 +2644,16 @@ sub wait_for_catchup
}
if (!defined($target_lsn))
{
-   $target_lsn = $self->lsn('write');
+   my $isrecovery = $self->safe_psql('postgres', "SELECT 
pg_is_in_recovery()");
+   chomp($isrecovery);
+   if ($isrecovery eq 't')
+   {
+   $target_lsn = $self->lsn('replay');
+   }
+   else
+   {
+   $target_lsn = $self->lsn('write');
+   }


Please modify the function's documentation to account for this code change.



Good point, thanks! Done in V6 attached.

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.comFrom 2a8b8bdc32671b33b2b7c0fa1a79f8d7624ae4a6 Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Mon, 24 Apr 2023 05:13:23 +
Subject: [PATCH v6] Add subscription to the standby test in
 035_standby_logical_decoding.pl

Adding one test, to verify that subscription to the standby is possible.
---
 src/test/perl/PostgreSQL/Test/Cluster.pm  | 21 -
 .../t/035_standby_logical_decoding.pl | 91 ++-
 2 files changed, 107 insertions(+), 5 deletions(-)
  18.8% src/test/perl/PostgreSQL/Test/
  81.1% src/test/recovery/t/

diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm 
b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 6f7f4e5de4..9117554c07 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2611,8 +2611,14 @@ When doing physical replication, the standby is usually 
identified by
 passing its PostgreSQL::Test::Cluster instance.  When doing logical
 replication, standby_name identifies a subscription.
 
-The default value of target_lsn is $node->lsn('write'), which ensures
-that the standby has caught up to what has been committed on the primary.
+When not in recovery, the default value of target_lsn is $node->lsn('write'),
+which ensures that the standby has caught up to what has been committed on
+the primary.
+
+When in recovery, the default value of target_lsn is $node->lsn('replay')
+instead. This is needed when the publisher passed to 
wait_for_subscription_sync()
+is a standby node.
+
 If you pass an explicit value of target_lsn, it should almost always be
 the primary's write LSN; so this parameter is seldom needed except when
 querying some intermediate replication node rather than the primary.
@@ -2644,7 +2650,16 @@ sub wait_for_catchup
}
if (!defined($target_lsn))
{
-   $target_lsn = $self->lsn('write');
+   my $isrecovery = $self->safe_psql('postgres', "SELECT 
pg_is_in_recovery()");
+   chomp($isrecovery);
+   if ($isrecovery eq 't')
+   {
+   $target_lsn = $self->lsn('replay');
+   }
+   else
+   {
+   $target_lsn = $self->lsn('write');
+   }
}
print "Waiting for replication conn "
  . $standby_name . "'s "
diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index b8f5311fe9..f6d6447412 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -10,12 +10,17 @@ use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
 
-my ($stdin, $stdout, $stderr, $cascading_stdout, $cascading_stderr, $ret, 
$handle, $slot);
+my ($stdin, $stdout,$stderr,
+   $cascading_stdout,  $cascading_stderr,  $subscriber_stdin,
+   $subscriber_stdout, $subscriber_stderr, $ret,
+   $handle,$slot);
 
 my $node_primary = PostgreSQL::Test::Cluster->new('primary');
 my $node_standby = PostgreSQL::Test::Cluster->new('standby');
 my $node_cascading_standby = 
PostgreSQL::Test::Cluster->new('cascading_standby');
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
 my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;
+my $psql_timeout= IPC::Run::timer($default_timeout);
 my $res;
 
 # Name for the physical slot on primary
@@ -267,7 +272,8 @@ $node_standby->init_from_backup(
has_streaming => 1,
has_restoring => 1);
 $node_standby->append_conf('postgresql.conf',
-   qq[primary_slot_name = '$primary_slotname']);
+   qq[primary_slot_name = '$primary_slotname'
+   max_replication_slots = 5]);
 $node_standby->start;
 $node_primary->wait_for_replay_catchup($node_standby);
 $node_standby->safe_psql('testdb', qq[SELECT * FROM 
pg_create_physical_replication_slo

Re: Add two missing tests in 035_standby_logical_decoding.pl

2023-04-26 Thread Drouvot, Bertrand

Hi,

On 4/26/23 6:06 AM, vignesh C wrote:

On Tue, 25 Apr 2023 at 12:51, Drouvot, Bertrand
 wrote:
Thanks for the updated patch.
Few comments:


Thanks for looking at it!


1) subscriber_stdout and subscriber_stderr are not required for this
test case; we could remove them. I was able to remove those variables
and run the test successfully:
+$node_subscriber->start;
+
+my %psql_subscriber = (
+   'subscriber_stdin'  => '',
+   'subscriber_stdout' => '',
+   'subscriber_stderr' => '');
+$psql_subscriber{run} = IPC::Run::start(
+   [ 'psql', '-XA', '-f', '-', '-d',
$node_subscriber->connstr('postgres') ],
+   '<',
+   \$psql_subscriber{subscriber_stdin},
+   '>',
+   \$psql_subscriber{subscriber_stdout},
+   '2>',
+   \$psql_subscriber{subscriber_stderr},
+   $psql_timeout);

I ran it like:
my %psql_subscriber = (
'subscriber_stdin' => '');
$psql_subscriber{run} = IPC::Run::start(
[ 'psql', '-XA', '-f', '-', '-d', $node_subscriber->connstr('postgres') ],
'<',
\$psql_subscriber{subscriber_stdin},
$psql_timeout);



Not using all 3 std* handles is also the case, for example, in 021_row_visibility.pl
and 032_relfilenode_reuse.pl, where "stderr" is set but does not seem to be used.

I don't think it's a problem to keep them all, and I think it's better to have
them redirected to dedicated places.
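
For illustration, a rough sketch of how those dedicated handles end up being
used later in the test (tap_pub, tap_sub and $standby_connstr are placeholders,
not names from the patch):

# SQL is appended to the captured stdin and pumped into the background psql
# session; its output and errors accumulate in their own buffers, which can
# be inspected if something goes wrong.
$psql_subscriber{subscriber_stdin} .= qq[
  CREATE SUBSCRIPTION tap_sub
    CONNECTION '$standby_connstr'
    PUBLICATION tap_pub
    WITH (copy_data = off);
];
$psql_subscriber{run}->pump_nb;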


2) Also, we have changed the default timeout here; why is this change required:
  my $node_cascading_standby =
PostgreSQL::Test::Cluster->new('cascading_standby');
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
  my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;
+my $psql_timeout= IPC::Run::timer(2 * $default_timeout);


I think I used 021_row_visibility.pl as an example. But I agree that other .pl
files use timeout_default as the psql_timeout and that the default is enough in
our case. So, V5 attached uses the default.
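
For reference, that boils down to the following (a sketch of what V5 does,
rather than the doubled value from before):

# Reuse the TAP default timeout for the background psql session.
my $psql_timeout = IPC::Run::timer($PostgreSQL::Test::Utils::timeout_default);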

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

From 7d06ba14cce372b60e859b2274933b7c1a25b26a Mon Sep 17 00:00:00 2001
From: Bertrand Drouvot 
Date: Mon, 24 Apr 2023 05:13:23 +
Subject: [PATCH v5] Add subscription to the standby test in
 035_standby_logical_decoding.pl

Adding one test to verify that subscription to the standby is possible.
---
 src/test/perl/PostgreSQL/Test/Cluster.pm  | 11 ++-
 .../t/035_standby_logical_decoding.pl | 91 ++-
 2 files changed, 99 insertions(+), 3 deletions(-)
   7.5% src/test/perl/PostgreSQL/Test/
  92.4% src/test/recovery/t/

diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm 
b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 6f7f4e5de4..819667d42a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2644,7 +2644,16 @@ sub wait_for_catchup
}
if (!defined($target_lsn))
{
-   $target_lsn = $self->lsn('write');
+   my $isrecovery = $self->safe_psql('postgres', "SELECT 
pg_is_in_recovery()");
+   chomp($isrecovery);
+   if ($isrecovery eq 't')
+   {
+   $target_lsn = $self->lsn('replay');
+   }
+   else
+   {
+   $target_lsn = $self->lsn('write');
+   }
}
print "Waiting for replication conn "
  . $standby_name . "'s "
diff --git a/src/test/recovery/t/035_standby_logical_decoding.pl 
b/src/test/recovery/t/035_standby_logical_decoding.pl
index b8f5311fe9..f6d6447412 100644
--- a/src/test/recovery/t/035_standby_logical_decoding.pl
+++ b/src/test/recovery/t/035_standby_logical_decoding.pl
@@ -10,12 +10,17 @@ use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
 
-my ($stdin, $stdout, $stderr, $cascading_stdout, $cascading_stderr, $ret, 
$handle, $slot);
+my ($stdin, $stdout,$stderr,
+   $cascading_stdout,  $cascading_stderr,  $subscriber_stdin,
+   $subscriber_stdout, $subscriber_stderr, $ret,
+   $handle,$slot);
 
 my $node_primary = PostgreSQL::Test::Cluster->new('primary');
 my $node_standby = PostgreSQL::Test::Cluster->new('standby');
 my $node_cascading_standby = 
PostgreSQL::Test::Cluster->new('cascading_standby');
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
 my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;
+my $psql_timeout= IPC::Ru
