Re: Noah Misch
> On Tue, Jul 25, 2023 at 01:56:41PM +0530, Bharath Rupireddy wrote:
> > I've observed the following failure once in one of my Cirrus CI runs
> > on Windows Server on HEAD:
> > 
> > timed out waiting for match: (?^:User was holding shared buffer pin
> > for too long) at
> > C:/cirrus/src/test/recovery/t/031_recovery_conflict.pl line 318.
> > # Postmaster PID for node "primary" is 696
> > 
> > https://api.cirrus-ci.com/v1/artifact/task/5272062399348736/testrun/build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict
> > 
> > https://github.com/BRupireddy/postgres/runs/15296698158
> 
> Known:
> https://postgr.es/m/flat/20220409045515.35ypjzddp25v72ou%40alap3.anarazel.de
> https://postgr.es/m/flat/CA+hUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO=ney...@mail.gmail.com
> 
> A good next step would be patch "Fix recovery conflict SIGUSR1 handling" from
> https://postgr.es/m/ca+hukg+hi5p1j_8cveqllwnsvyjh4rntf04fywkenxftrh2...@mail.gmail.com
> (near the end of that second thread).

I am also seeing problems with t/031_recovery_conflict.pl, on the
newly added s390x architecture on apt.postgresql.org. The failure is
flaky, but has been popping up in build logs often enough.

I managed to reproduce it from the shell by running the test in a loop a
few times (a rough sketch of that loop is included after the output below).
The failure looks like this:

echo "# +++ tap check in src/test/recovery +++" && rm -rf 
'/home/myon/postgresql/pg/postgresql/build/src/test/recovery'/tmp_check && 
/bin/mkdir -p 
'/home/myon/postgresql/pg/postgresql/build/src/test/recovery'/tmp_check && cd 
/home/myon/postgresql/pg/postgresql/build/../src/test/recovery && 
TESTLOGDIR='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log'
 
TESTDATADIR='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check'
 
PATH="/home/myon/postgresql/pg/postgresql/build/tmp_install/usr/lib/postgresql/17/bin:/home/myon/postgresql/pg/postgresql/build/src/test/recovery:$PATH"
 
LD_LIBRARY_PATH="/home/myon/postgresql/pg/postgresql/build/tmp_install/usr/lib/s390x-linux-gnu"
  PGPORT='65432' 
top_builddir='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../..'
 
PG_REGRESS='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../../src/test/regress/pg_regress'
 /usr/bin/prove -I /home/myon/postgresql/pg/postgresql/build/../src/test/perl/ 
-I /home/myon/postgresql/pg/postgresql/build/../src/test/recovery --verbose 
t/031_recovery_conflict.pl
# +++ tap check in src/test/recovery +++
t/031_recovery_conflict.pl ..
# issuing query via background psql:
#     BEGIN;
#     DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM 
test_recovery_conflict_table1;
#     FETCH FORWARD FROM test_recovery_conflict_cursor;
ok 1 - buffer pin conflict: cursor with conflicting pin established
ok 2 - buffer pin conflict: logfile contains terminated connection due to 
recovery conflict
ok 3 - buffer pin conflict: stats show conflict on standby
# issuing query via background psql:
#         BEGIN;
#         DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM 
test_recovery_conflict_table1;
#         FETCH FORWARD FROM test_recovery_conflict_cursor;
#
ok 4 - snapshot conflict: cursor with conflicting snapshot established
ok 5 - snapshot conflict: logfile contains terminated connection due to 
recovery conflict
ok 6 - snapshot conflict: stats show conflict on standby
# issuing query via background psql:
#         BEGIN;
#         LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
#         SELECT 1;
#
ok 7 - lock conflict: conflicting lock acquired
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 7.
Dubious, test returned 255 (wstat 65280, 0xff00)
All 7 subtests passed

Test Summary Report
-------------------
t/031_recovery_conflict.pl (Wstat: 65280 Tests: 7 Failed: 0)
  Non-zero exit status: 255
  Parse errors: No plan found in TAP output
Files=1, Tests=7, 186 wallclock secs ( 0.01 usr  0.00 sys +  0.74 cusr  0.71 
csys =  1.46 CPU)
Result: FAIL
make: *** [Makefile:23: check] Error 1
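
For reference, "running it in a loop" means essentially just repeating that
invocation; roughly something like the following sketch (not the literal
command I used; PROVE_TESTS just restricts the run to this one test):

cd /home/myon/postgresql/pg/postgresql/build/src/test/recovery
for i in $(seq 1 20); do
    make check PROVE_TESTS=t/031_recovery_conflict.pl || break
done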

That is, in the output above the test file just exits after test 7, without
running the rest.

Is it dying because it's running into a deadlock instead of the expected
error message? The standby does report a "buffer deadlock with recovery"
where the test is waiting for the relation-lock message:

timed out waiting for match: (?^:User was holding a relation lock for too long) 
at t/031_recovery_conflict.pl line 318.
2023-08-04 12:20:38.061 UTC [1920073] 031_recovery_conflict.pl FATAL:  
terminating connection due to conflict with recovery
2023-08-04 12:20:38.061 UTC [1920073] 031_recovery_conflict.pl DETAIL:  User 
transaction caused buffer deadlock with recovery.
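
A quick way to see which conflict DETAIL the standby actually emitted in a
given run is something like this (log path as in the TAP output further down,
relative to src/test/recovery in the build tree; adjust as needed):

grep -E 'DETAIL: +User (was holding a relation lock|transaction caused buffer deadlock)' \
    tmp_check/log/031_recovery_conflict_standby.log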

Christoph

031_recovery_conflict_primary.log:
2023-08-04 12:20:36.343 UTC [1920003] LOG:  starting PostgreSQL 17devel (Ubuntu 
17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 
11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2023-08-04 12:20:36.343 UTC [1920003] LOG:  listening on Unix socket 
"/tmp/GoFBiPMX9Y/.s.PGSQL.55090"
2023-08-04 12:20:36.345 UTC [1920006] LOG:  database system was shut down at 
2023-08-04 12:20:36 UTC
2023-08-04 12:20:36.351 UTC [1920003] LOG:  database system is ready to accept 
connections
2023-08-04 12:20:36.445 UTC [1920011] 031_recovery_conflict.pl LOG:  statement: 
CREATE TABLESPACE test_recovery_conflict_tblspc LOCATION ''
2023-08-04 12:20:36.475 UTC [1920013] 031_recovery_conflict.pl LOG:  received 
replication command: SHOW data_directory_mode
2023-08-04 12:20:36.475 UTC [1920013] 031_recovery_conflict.pl STATEMENT:  SHOW 
data_directory_mode
2023-08-04 12:20:36.480 UTC [1920013] 031_recovery_conflict.pl LOG:  received 
replication command: SHOW wal_segment_size
2023-08-04 12:20:36.480 UTC [1920013] 031_recovery_conflict.pl STATEMENT:  SHOW 
wal_segment_size
2023-08-04 12:20:36.485 UTC [1920013] 031_recovery_conflict.pl LOG:  received 
replication command: IDENTIFY_SYSTEM
2023-08-04 12:20:36.485 UTC [1920013] 031_recovery_conflict.pl STATEMENT:  
IDENTIFY_SYSTEM
2023-08-04 12:20:36.491 UTC [1920013] 031_recovery_conflict.pl LOG:  received 
replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup',  
PROGRESS,  CHECKPOINT 'fast',  WAIT 0,  MANIFEST 'yes',  TARGET 'client')
2023-08-04 12:20:36.491 UTC [1920013] 031_recovery_conflict.pl STATEMENT:  
BASE_BACKUP ( LABEL 'pg_basebackup base backup',  PROGRESS,  CHECKPOINT 'fast', 
 WAIT 0,  MANIFEST 'yes',  TARGET 'client')
2023-08-04 12:20:36.498 UTC [1920004] LOG:  checkpoint starting: immediate 
force wait
2023-08-04 12:20:36.505 UTC [1920004] LOG:  checkpoint complete: wrote 7 
buffers (5.5%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s, 
sync=0.001 s, total=0.008 s; sync files=0, longest=0.000 s, average=0.000 s; 
distance=11360 kB, estimate=11360 kB; lsn=0/2000060, redo lsn=0/2000028
2023-08-04 12:20:36.519 UTC [1920014] 031_recovery_conflict.pl LOG:  received 
replication command: SHOW data_directory_mode
2023-08-04 12:20:36.519 UTC [1920014] 031_recovery_conflict.pl STATEMENT:  SHOW 
data_directory_mode
2023-08-04 12:20:36.520 UTC [1920014] 031_recovery_conflict.pl LOG:  received 
replication command: CREATE_REPLICATION_SLOT "pg_basebackup_1920014" TEMPORARY 
PHYSICAL ( RESERVE_WAL)
2023-08-04 12:20:36.520 UTC [1920014] 031_recovery_conflict.pl STATEMENT:  
CREATE_REPLICATION_SLOT "pg_basebackup_1920014" TEMPORARY PHYSICAL ( 
RESERVE_WAL)
2023-08-04 12:20:36.530 UTC [1920014] 031_recovery_conflict.pl LOG:  received 
replication command: IDENTIFY_SYSTEM
2023-08-04 12:20:36.530 UTC [1920014] 031_recovery_conflict.pl STATEMENT:  
IDENTIFY_SYSTEM
2023-08-04 12:20:36.536 UTC [1920014] 031_recovery_conflict.pl LOG:  received 
replication command: START_REPLICATION SLOT "pg_basebackup_1920014" 0/2000000 
TIMELINE 1
2023-08-04 12:20:36.536 UTC [1920014] 031_recovery_conflict.pl STATEMENT:  
START_REPLICATION SLOT "pg_basebackup_1920014" 0/2000000 TIMELINE 1
2023-08-04 12:20:36.746 UTC [1920013] 031_recovery_conflict.pl LOG:  temporary 
file: path "base/pgsql_tmp/pgsql_tmp1920013.0", size 137324
2023-08-04 12:20:36.746 UTC [1920013] 031_recovery_conflict.pl STATEMENT:  
BASE_BACKUP ( LABEL 'pg_basebackup base backup',  PROGRESS,  CHECKPOINT 'fast', 
 WAIT 0,  MANIFEST 'yes',  TARGET 'client')
2023-08-04 12:20:36.899 UTC [1920025] standby LOG:  received replication 
command: IDENTIFY_SYSTEM
2023-08-04 12:20:36.899 UTC [1920025] standby STATEMENT:  IDENTIFY_SYSTEM
2023-08-04 12:20:36.935 UTC [1920025] standby LOG:  received replication 
command: START_REPLICATION 0/3000000 TIMELINE 1
2023-08-04 12:20:36.935 UTC [1920025] standby STATEMENT:  START_REPLICATION 
0/3000000 TIMELINE 1
2023-08-04 12:20:36.990 UTC [1920027] 031_recovery_conflict.pl LOG:  statement: 
CREATE DATABASE test_db
2023-08-04 12:20:37.036 UTC [1920029] 031_recovery_conflict.pl LOG:  statement: 
CREATE TABLE test_recovery_conflict_table1(a int, b int);
2023-08-04 12:20:37.037 UTC [1920029] 031_recovery_conflict.pl LOG:  statement: 
INSERT INTO test_recovery_conflict_table1 SELECT i % 3, 0 FROM 
generate_series(1,20) i;
2023-08-04 12:20:37.037 UTC [1920029] 031_recovery_conflict.pl LOG:  statement: 
CREATE TABLE test_recovery_conflict_table2(a int, b int);
2023-08-04 12:20:37.223 UTC [1920031] 031_recovery_conflict.pl LOG:  statement: 
SELECT pg_current_wal_flush_lsn()
2023-08-04 12:20:37.241 UTC [1920033] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/3429B90' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:20:37.283 UTC [1920037] 031_recovery_conflict.pl LOG:  statement: 
BEGIN;
2023-08-04 12:20:37.305 UTC [1920037] 031_recovery_conflict.pl LOG:  statement: 
INSERT INTO test_recovery_conflict_table1 VALUES (1,0);
2023-08-04 12:20:37.310 UTC [1920037] 031_recovery_conflict.pl LOG:  statement: 
ROLLBACK;
2023-08-04 12:20:37.314 UTC [1920037] 031_recovery_conflict.pl LOG:  statement: 
BEGIN;
2023-08-04 12:20:37.314 UTC [1920037] 031_recovery_conflict.pl LOG:  statement: 
LOCK test_recovery_conflict_table1;
2023-08-04 12:20:37.314 UTC [1920037] 031_recovery_conflict.pl LOG:  statement: 
COMMIT;
2023-08-04 12:20:37.324 UTC [1920039] 031_recovery_conflict.pl LOG:  statement: 
SELECT pg_current_wal_flush_lsn()
2023-08-04 12:20:37.335 UTC [1920041] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/3429C50' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:20:37.369 UTC [1920043] 031_recovery_conflict.pl LOG:  statement: 
VACUUM test_recovery_conflict_table1;
2023-08-04 12:20:37.389 UTC [1920045] 031_recovery_conflict.pl LOG:  statement: 
SELECT pg_current_wal_flush_lsn()
2023-08-04 12:20:37.400 UTC [1920047] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/3430000' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:20:37.549 UTC [1920049] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/3430000' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:20:37.624 UTC [1920057] 031_recovery_conflict.pl LOG:  statement: 
INSERT INTO test_recovery_conflict_table1 SELECT i, 0 FROM 
generate_series(1,20) i
2023-08-04 12:20:37.654 UTC [1920059] 031_recovery_conflict.pl LOG:  statement: 
SELECT pg_current_wal_flush_lsn()
2023-08-04 12:20:37.678 UTC [1920061] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/3432470' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:20:37.716 UTC [1920063] 031_recovery_conflict.pl LOG:  statement: 
UPDATE test_recovery_conflict_table1 SET a = a + 1 WHERE a > 2;
2023-08-04 12:20:37.734 UTC [1920065] 031_recovery_conflict.pl LOG:  statement: 
VACUUM test_recovery_conflict_table1;
2023-08-04 12:20:37.751 UTC [1920067] 031_recovery_conflict.pl LOG:  statement: 
SELECT pg_current_wal_flush_lsn()
2023-08-04 12:20:37.766 UTC [1920069] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/34329A8' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:20:37.980 UTC [1920077] 031_recovery_conflict.pl LOG:  statement: 
DROP TABLE test_recovery_conflict_table1;
2023-08-04 12:20:38.061 UTC [1920079] 031_recovery_conflict.pl LOG:  statement: 
SELECT pg_current_wal_flush_lsn()
2023-08-04 12:20:38.165 UTC [1920081] 031_recovery_conflict.pl LOG:  statement: 
SELECT '0/34330C0' <= replay_lsn AND state = 'streaming'
                 FROM pg_catalog.pg_stat_replication
                 WHERE application_name IN ('standby', 'walreceiver')
2023-08-04 12:23:41.116 UTC [1920003] LOG:  received immediate shutdown request
2023-08-04 12:23:41.123 UTC [1920003] LOG:  database system is shut down

031_recovery_conflict_standby.log:
2023-08-04 12:20:36.887 UTC [1920020] LOG:  starting PostgreSQL 17devel (Ubuntu 
17~~devel-1) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 
11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2023-08-04 12:20:36.887 UTC [1920020] LOG:  listening on Unix socket 
"/tmp/GoFBiPMX9Y/.s.PGSQL.55091"
2023-08-04 12:20:36.891 UTC [1920023] LOG:  database system was interrupted; 
last known up at 2023-08-04 12:20:36 UTC
2023-08-04 12:20:36.891 UTC [1920023] LOG:  entering standby mode
2023-08-04 12:20:36.891 UTC [1920023] LOG:  redo starts at 0/2000028
2023-08-04 12:20:36.891 UTC [1920023] WARNING:  unexpected directory entry 
"16384" found in pg_tblspc/
2023-08-04 12:20:36.891 UTC [1920023] DETAIL:  All directory entries in 
pg_tblspc/ should be symbolic links.
2023-08-04 12:20:36.891 UTC [1920023] HINT:  Remove those directories, or set 
allow_in_place_tablespaces to ON transiently to let recovery complete.
2023-08-04 12:20:36.891 UTC [1920023] LOG:  consistent recovery state reached 
at 0/2000100
2023-08-04 12:20:36.892 UTC [1920020] LOG:  database system is ready to accept 
read-only connections
2023-08-04 12:20:36.936 UTC [1920024] LOG:  started streaming WAL from primary 
at 0/3000000 on timeline 1
2023-08-04 12:20:37.350 UTC [1920035] 031_recovery_conflict.pl LOG:  statement: 
BEGIN;
2023-08-04 12:20:37.350 UTC [1920035] 031_recovery_conflict.pl LOG:  statement: 
DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM 
test_recovery_conflict_table1;
2023-08-04 12:20:37.354 UTC [1920035] 031_recovery_conflict.pl LOG:  statement: 
FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-04 12:20:37.355 UTC [1920035] 031_recovery_conflict.pl LOG:  statement: 
;
2023-08-04 12:20:37.385 UTC [1920023] LOG:  recovery still waiting after 13.382 
ms: recovery conflict on buffer pin
2023-08-04 12:20:37.385 UTC [1920023] CONTEXT:  WAL redo at 0/3429C50 for 
Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0, 
redirected: [], dead: [21], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-04 12:20:37.426 UTC [1920035] 031_recovery_conflict.pl FATAL:  
terminating connection due to conflict with recovery
2023-08-04 12:20:37.426 UTC [1920035] 031_recovery_conflict.pl DETAIL:  User 
was holding shared buffer pin for too long.
2023-08-04 12:20:37.426 UTC [1920035] 031_recovery_conflict.pl HINT:  In a 
moment you should be able to reconnect to the database and repeat your command.
2023-08-04 12:20:37.428 UTC [1920023] LOG:  recovery finished waiting after 
56.327 ms: recovery conflict on buffer pin
2023-08-04 12:20:37.428 UTC [1920023] CONTEXT:  WAL redo at 0/3429C50 for 
Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0, 
redirected: [], dead: [21], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-04 12:20:37.601 UTC [1920055] 031_recovery_conflict.pl LOG:  statement: 
SELECT confl_bufferpin FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-04 12:20:37.684 UTC [1920053] 031_recovery_conflict.pl LOG:  statement: 
BEGIN;
2023-08-04 12:20:37.693 UTC [1920053] 031_recovery_conflict.pl LOG:  statement: 
DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM 
test_recovery_conflict_table1;
2023-08-04 12:20:37.695 UTC [1920053] 031_recovery_conflict.pl LOG:  statement: 
FETCH FORWARD FROM test_recovery_conflict_cursor;
2023-08-04 12:20:37.700 UTC [1920053] 031_recovery_conflict.pl LOG:  statement: 
;
2023-08-04 12:20:37.788 UTC [1920023] LOG:  recovery still waiting after 19.486 
ms: recovery conflict on snapshot
2023-08-04 12:20:37.788 UTC [1920023] DETAIL:  Conflicting process: 1920053.
2023-08-04 12:20:37.788 UTC [1920023] CONTEXT:  WAL redo at 0/34329A8 for 
Heap2/PRUNE: snapshotConflictHorizon: 738, nredirected: 18, ndead: 0, nunused: 
0, redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 
31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, 
40->58], dead: [], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-04 12:20:37.839 UTC [1920053] 031_recovery_conflict.pl FATAL:  
terminating connection due to conflict with recovery
2023-08-04 12:20:37.839 UTC [1920053] 031_recovery_conflict.pl DETAIL:  User 
query might have needed to see row versions that must be removed.
2023-08-04 12:20:37.839 UTC [1920053] 031_recovery_conflict.pl HINT:  In a 
moment you should be able to reconnect to the database and repeat your command.
2023-08-04 12:20:37.844 UTC [1920023] LOG:  recovery finished waiting after 
75.763 ms: recovery conflict on snapshot
2023-08-04 12:20:37.844 UTC [1920023] CONTEXT:  WAL redo at 0/34329A8 for 
Heap2/PRUNE: snapshotConflictHorizon: 738, nredirected: 18, ndead: 0, nunused: 
0, redirected: [23->41, 24->42, 25->43, 26->44, 27->45, 28->46, 29->47, 30->48, 
31->49, 32->50, 33->51, 34->52, 35->53, 36->54, 37->55, 38->56, 39->57, 
40->58], dead: [], unused: []; blkref #0: rel 1663/16385/16386, blk 0
2023-08-04 12:20:37.959 UTC [1920075] 031_recovery_conflict.pl LOG:  statement: 
SELECT confl_snapshot FROM pg_stat_database_conflicts WHERE datname='test_db';
2023-08-04 12:20:37.969 UTC [1920073] 031_recovery_conflict.pl LOG:  statement: 
BEGIN;
2023-08-04 12:20:37.970 UTC [1920073] 031_recovery_conflict.pl LOG:  statement: 
LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
2023-08-04 12:20:37.970 UTC [1920073] 031_recovery_conflict.pl LOG:  statement: 
SELECT 1;
2023-08-04 12:20:37.970 UTC [1920073] 031_recovery_conflict.pl LOG:  statement: 
;
2023-08-04 12:20:38.061 UTC [1920023] LOG:  recovery still waiting after 69.530 
ms: recovery conflict on lock
2023-08-04 12:20:38.061 UTC [1920023] DETAIL:  Conflicting process: 1920073.
2023-08-04 12:20:38.061 UTC [1920023] CONTEXT:  WAL redo at 0/3432B88 for 
Standby/LOCK: xid 739 db 16385 rel 16386 
2023-08-04 12:20:38.061 UTC [1920073] 031_recovery_conflict.pl FATAL:  
terminating connection due to conflict with recovery
2023-08-04 12:20:38.061 UTC [1920073] 031_recovery_conflict.pl DETAIL:  User 
transaction caused buffer deadlock with recovery.
2023-08-04 12:20:38.061 UTC [1920073] 031_recovery_conflict.pl HINT:  In a 
moment you should be able to reconnect to the database and repeat your command.
2023-08-04 12:20:38.111 UTC [1920023] LOG:  recovery finished waiting after 
119.873 ms: recovery conflict on lock
2023-08-04 12:20:38.111 UTC [1920023] CONTEXT:  WAL redo at 0/3432B88 for 
Standby/LOCK: xid 739 db 16385 rel 16386 
2023-08-04 12:23:41.120 UTC [1920024] FATAL:  could not receive data from WAL 
stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2023-08-04 12:23:41.122 UTC [1920023] LOG:  invalid record length at 0/3480320: 
expected at least 24, got 0
2023-08-04 12:23:41.129 UTC [1920090] FATAL:  could not connect to the primary 
server: connection to server on socket "/tmp/GoFBiPMX9Y/.s.PGSQL.55090" failed: 
No such file or directory
                Is the server running locally and accepting connections on that 
socket?
2023-08-04 12:23:41.132 UTC [1920023] LOG:  waiting for WAL to become available 
at 0/3480338
2023-08-04 12:23:41.226 UTC [1920020] LOG:  received immediate shutdown request
2023-08-04 12:23:41.232 UTC [1920020] LOG:  database system is shut down

regress_log_031_recovery_conflict:
# Checking port 55090
# Found port 55090
Name: primary
Data directory: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
Backup directory: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/backup
Archive directory: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/archives
Connection string: port=55090 host=/tmp/GoFBiPMX9Y
Log file: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_primary.log
# Running: initdb -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
 -A trust -N
The files belonging to this database system will be owned by user "myon".
This user must also own the server process.

The database cluster will be initialized with this locale configuration:
  provider:    libc
  LC_COLLATE:  en_US.utf8
  LC_CTYPE:    en_US.utf8
  LC_MESSAGES: C
  LC_MONETARY: en_US.utf8
  LC_NUMERIC:  en_US.utf8
  LC_TIME:     en_US.utf8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
 -l logfile start

# Running: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../../src/test/regress/pg_regress
 --config-auth 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
 -l 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_primary.log
 -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 1920003
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/backup/my_backup
 -h /tmp/GoFBiPMX9Y -p 55090 --checkpoint fast --no-sync
# Backup finished
# Checking port 55091
# Found port 55091
Name: standby
Data directory: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata
Backup directory: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/backup
Archive directory: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/archives
Connection string: port=55091 host=/tmp/GoFBiPMX9Y
Log file: 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log
# Initializing node "standby" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby"
### Starting node "standby"
# Running: pg_ctl -w -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata
 -l 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log
 -o --cluster-name=standby start
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 1920020
Waiting for replication conn standby's replay_lsn to pass 0/3429B90 on primary
done
Waiting for replication conn standby's replay_lsn to pass 0/3429C50 on primary
done
[12:20:37.346](1.941s) # issuing query via background psql: 
#     BEGIN;
#     DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM 
test_recovery_conflict_table1;
#     FETCH FORWARD FROM test_recovery_conflict_cursor;
[12:20:37.355](0.009s) ok 1 - buffer pin conflict: cursor with conflicting pin 
established
Waiting for replication conn standby's replay_lsn to pass 0/3430000 on primary
done
[12:20:37.557](0.202s) ok 2 - buffer pin conflict: logfile contains terminated 
connection due to recovery conflict
[12:20:37.603](0.046s) ok 3 - buffer pin conflict: stats show conflict on 
standby
Waiting for replication conn standby's replay_lsn to pass 0/3432470 on primary
done
[12:20:37.682](0.079s) # issuing query via background psql: 
#         BEGIN;
#         DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM 
test_recovery_conflict_table1;
#         FETCH FORWARD FROM test_recovery_conflict_cursor;
#         
[12:20:37.702](0.019s) ok 4 - snapshot conflict: cursor with conflicting 
snapshot established
Waiting for replication conn standby's replay_lsn to pass 0/34329A8 on primary
done
[12:20:37.875](0.173s) ok 5 - snapshot conflict: logfile contains terminated 
connection due to recovery conflict
[12:20:37.967](0.092s) ok 6 - snapshot conflict: stats show conflict on standby
[12:20:37.967](0.000s) # issuing query via background psql: 
#         BEGIN;
#         LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;
#         SELECT 1;
#         
[12:20:37.970](0.003s) ok 7 - lock conflict: conflicting lock acquired
Waiting for replication conn standby's replay_lsn to pass 0/34330C0 on primary
done
timed out waiting for match: (?^:User was holding a relation lock for too long) 
at t/031_recovery_conflict.pl line 318.
# Postmaster PID for node "primary" is 1920003
### Stopping node "primary" using mode immediate
# Running: pg_ctl -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_primary_data/pgdata
 -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
# Postmaster PID for node "standby" is 1920020
### Stopping node "standby" using mode immediate
# Running: pg_ctl -D 
/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/t_031_recovery_conflict_standby_data/pgdata
 -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby"
[12:23:41.338](183.367s) # Tests were run but no plan was declared and 
done_testing() was not seen.
[12:23:41.338](0.000s) # Looks like your test exited with 255 just after 7.
