pgsql: Add test to prevent premature removal of conflict-relevant data.
Add test to prevent premature removal of conflict-relevant data.

A test has been added to ensure that conflict-relevant data is not
prematurely removed when a concurrent prepared transaction is being
committed on the publisher. This test introduces an injection point that
simulates the presence of a prepared transaction in the commit phase,
validating that the system correctly delays conflict slot advancement
until the transaction is fully committed.

Additionally, the test serves as a safeguard for developers, ensuring
that the acquisition of the commit timestamp does not occur before
marking DELAY_CHKPT_IN_COMMIT in RecordTransactionCommitPrepared.

Reported-by: Robert Haas
Author: Zhijie Hou
Reviewed-by: shveta malik
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/os9pr01mb16913f67856b0da2a9097881294...@os9pr01mb16913.jpnprd01.prod.outlook.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/6456c6e2c4ad1cf9752e09cce37bfcfe2190c5e0

Modified Files
--------------
src/backend/access/transam/twophase.c    |   6 ++
src/test/subscription/Makefile           |   4 +-
src/test/subscription/meson.build        |   5 +-
src/test/subscription/t/035_conflicts.pl | 160 +++
4 files changed, 173 insertions(+), 2 deletions(-)
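The ordering invariant the test guards can be sketched as follows. This
is an illustrative condensation assuming the surrounding function
context in twophase.c, not the committed diff; MyProc->delayChkptFlags,
DELAY_CHKPT_IN_COMMIT, and GetCurrentTimestamp() are real backend
facilities.

    #include "postgres.h"
    #include "storage/proc.h"      /* MyProc, DELAY_CHKPT_IN_COMMIT */
    #include "utils/timestamp.h"   /* GetCurrentTimestamp() */

    /* Condensed sketch of the RecordTransactionCommitPrepared() ordering. */

    /*
     * Advertise that this backend is inside the commit phase of a prepared
     * transaction *before* acquiring the commit timestamp, so that the
     * conflict slot cannot be advanced past this transaction while its
     * commit is still in progress.
     */
    MyProc->delayChkptFlags |= DELAY_CHKPT_IN_COMMIT;

    /* Only now is it safe to take the timestamp used as the commit time. */
    TimestampTz committs = GetCurrentTimestamp();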
pgsql: Fix Coverity issue reported in commit a850be2fe.
Fix Coverity issue reported in commit a850be2fe.

Address a potential SIGSEGV that may occur when the tablesync worker
attempts to locate a deleted row while applying changes. This situation
arises during conflict detection for update-deleted scenarios.

To prevent this crash, ensure that the operation errors out early if the
leader apply worker is unavailable. Since the leader worker maintains
the necessary conflict detection metadata, proceeding without it serves
no purpose and risks reporting an incorrect conflict type.

In passing, improve a nearby comment.

Reported by Tom Lane as per Coverity
Author: shveta malik
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/334468.1757280...@sss.pgh.pa.us

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/5ac3c1ac22cb325844d0bee37f79f2c11931b32e

Modified Files
--------------
src/backend/replication/logical/worker.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
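The guard described above can be sketched as below. The helper name and
error wording are hypothetical, and this is not the literal one-line
fix in worker.c; logicalrep_worker_find(), MyLogicalRepWorker, and
LogicalRepWorkerLock are real facilities, with the signature as of
recent releases.

    #include "postgres.h"
    #include "replication/worker_internal.h"
    #include "storage/lwlock.h"

    /* Hypothetical helper: bail out before touching leader state. */
    static void
    ensure_leader_apply_worker_exists(void)
    {
        LogicalRepWorker *leader;

        /* logicalrep_worker_find() requires LogicalRepWorkerLock. */
        LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
        leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
                                        InvalidOid, true);
        LWLockRelease(LogicalRepWorkerLock);

        /*
         * The leader holds the conflict detection metadata; without it,
         * classifying an update-deleted conflict is impossible, so error
         * out instead of crashing on a NULL dereference.
         */
        if (leader == NULL)
            ereport(ERROR,
                    errmsg("leader apply worker has exited, cannot detect conflict"));
    }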
pgsql: pg_upgrade: Transfer pg_largeobject_metadata's files when possible.
pg_upgrade: Transfer pg_largeobject_metadata's files when possible.

Commit 161a3e8b68 taught pg_upgrade to use COPY for large object
metadata for upgrades from v12 and newer, which is much faster to
restore than the proper large object commands. For upgrades from v16
and newer, we can take this a step further and transfer the large
object metadata files as if they were user tables. We can't transfer
the files from older versions because the aclitem data type (needed by
pg_largeobject_metadata.lomacl) changed its storage format in v16 (see
commit 7b378237aa). Note that this commit is essentially a revert of
commit 12a53c732c.

There are a couple of caveats. First, we still need to COPY the
corresponding pg_shdepend rows for large objects. Second, we need to
COPY anything in pg_largeobject_metadata with a comment or security
label, else restoring those will fail. This means that an upgrade in
which every large object has a comment or security label won't gain
anything from this commit, but it should at least avoid making those
unusual use-cases any worse.

pg_upgrade must also take care to transfer the relfilenodes of
pg_largeobject_metadata and its index, as was done for pg_largeobject
in commits d498e052b4 and bbe08b8869.

Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/aJ3_Gih_XW1_O2HF%40nathan

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/3bcfcd815e1a2d51772ba27e0d034467f0344f15

Modified Files
--------------
src/backend/commands/tablecmds.c           | 12 +++--
src/bin/pg_dump/pg_dump.c                  | 80 +-
src/bin/pg_upgrade/Makefile                |  3 +-
src/bin/pg_upgrade/info.c                  | 11 ++--
src/bin/pg_upgrade/pg_upgrade.c            |  6 +--
src/bin/pg_upgrade/t/006_transfer_modes.pl | 67 +
6 files changed, 154 insertions(+), 25 deletions(-)
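The version gating at work can be sketched as follows. The helper is
hypothetical and only condenses the strategy choice described above;
GET_MAJOR_VERSION and the old_cluster global are real pg_upgrade
facilities.

    #include "pg_upgrade.h"

    /*
     * Hypothetical condensation: the files of pg_largeobject_metadata can
     * be transferred as-is only when the old cluster's aclitem storage
     * format matches the new one's (v16+, commit 7b378237aa); v12-v15
     * fall back to COPY, and anything older uses the slow large object
     * commands.
     */
    static const char *
    largeobject_metadata_strategy(void)
    {
        if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1600)
            return "transfer files";
        if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1200)
            return "COPY";
        return "large object commands";
    }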
pgsql: meson: build checksums with extra optimization flags.
meson: build checksums with extra optimization flags.

Use -funroll-loops and -ftree-vectorize when building checksum.c to
match what autoconf does.

Discussion: https://postgr.es/m/a81f2f7ef34afc24a89c613671ea017e3651329c.ca...@j-davis.com
Reviewed-by: Andres Freund

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/9af672bcb245950e58198119ba6eb17043fd3a6d

Modified Files
--------------
src/backend/storage/page/meson.build | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
pgsql: Remove unneeded VM pin from VM replay
Remove unneeded VM pin from VM replay

Previously, heap_xlog_visible() called visibilitymap_pin() even after
getting a buffer from XLogReadBufferForRedoExtended() -- which returns
a pinned buffer containing the specified block of the visibility map.
This would just have resulted in visibilitymap_pin() returning early
since the specified page was already present and pinned, but it was
confusing, extraneous code, so remove it. It doesn't seem worth
backporting, though. It appears to be an oversight in 2c03216.

While we are at it, remove two VM-related redundant asserts in the COPY
FREEZE code path. visibilitymap_set() already asserts that
PD_ALL_VISIBLE is set on the heap page and checks that the vmbuffer
contains the bits corresponding to the specified heap block, so callers
do not also need to check this.

Author: Melanie Plageman
Reported-by: Melanie Plageman
Reported-by: Kirill Reshke
Reviewed-by: Kirill Reshke
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CALdSSPhu7WZd%2BEfQDha1nz%3DDC93OtY1%3DUFEdWwSZsASka_2eRQ%40mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/3399c265543ec3cdbeff2fa2900e03b326705f63

Modified Files
--------------
src/backend/access/heap/heapam.c      | 3 ---
src/backend/access/heap/heapam_xlog.c | 1 -
2 files changed, 4 deletions(-)
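A condensed, hedged sketch of the redundancy inside heap_xlog_visible();
the surrounding redo-function context is assumed, and the deleted call
is shown commented out rather than as the literal diff.

    /* Inside heap_xlog_visible(), abbreviated: */
    Buffer          vmbuffer = InvalidBuffer;
    XLogRedoAction  action;

    /*
     * Block 0 of this record is the visibility map page; the redo helper
     * hands it back already read *and pinned*.
     */
    action = XLogReadBufferForRedoExtended(record, 0, RBM_ZERO_ON_ERROR,
                                           false, &vmbuffer);

    /*
     * The call removed by this commit: with vmbuffer already pointing at
     * the right VM block, visibilitymap_pin() returned immediately.
     *
     *     visibilitymap_pin(reln, blkno, &vmbuffer);
     */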
pgsql: Remove unused xl_heap_prune member, reason
Remove unused xl_heap_prune member, reason

f83d709760d8 refactored xl_heap_prune and added an unused member,
reason. While the PruneReason is used when constructing this WAL record
to determine which WAL record definition to emit, it doesn't need to be
stored in a separate field in the record. Remove it.

We won't backport this, since modifying an exposed struct definition to
remove an unused field would do more harm than good.

Author: Melanie Plageman
Reported-by: Andres Freund
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/tvvtfoxz5ykpsctxjbzxg3nldnzfc7geplrt2z2s54pmgto27y%40hbijsndifu45

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/4b5f206de2bb9152a99a5c218caf2580cc5a0e9e

Modified Files
--------------
src/include/access/heapam_xlog.h | 1 -
1 file changed, 1 deletion(-)
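A hedged sketch of the struct in src/include/access/heapam_xlog.h; the
member layout beyond the first fields is abbreviated and should not be
read as the exact definition.

    typedef struct xl_heap_prune
    {
        uint8       reason;     /* <- removed by this commit: written at
                                 * record-construction time but never
                                 * consulted during redo, since the prune
                                 * reason is already implied by the WAL
                                 * record type itself */
        uint8       flags;

        /* ... remaining (unchanged) members abbreviated ... */
    } xl_heap_prune;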
pgsql: Fix corruption of pgstats shared hashtable due to OOM failures
Fix corruption of pgstats shared hashtable due to OOM failures

A new pgstats entry is created as a two-step process:

- The entry is looked up in the shared hashtable of pgstats, and is
  inserted if not found.
- When not found and inserted, its fields are then initialized. This
  part includes a DSA chunk allocation for the stats data of the new
  entry.

As currently coded, if the DSA chunk allocation fails due to an
out-of-memory failure, an ERROR is generated, leaving in the pgstats
shared hashtable an inconsistent entry due to the first step, as the
entry has already been inserted in the hashtable. These broken entries
can then be found by other backends, crashing them.

There are only two callers of pgstat_init_entry(), when loading the
pgstats file at startup and when creating a new pgstats entry. This
commit changes pgstat_init_entry() so that it uses
dsa_allocate_extended() with DSA_ALLOC_NO_OOM, making it return NULL on
allocation failure instead of failing. This way, a backend failing an
entry creation can take appropriate cleanup actions in the shared
hashtable before throwing an error. Currently, this means removing the
entry from the shared hashtable before throwing the error for the
allocation failure.

Out-of-memory errors rarely happen in the wild, and we do not usually
bother with back-patches when these are fixed. However, the problem
dealt with here is a degree worse, as it breaks the shared memory state
of pgstats, impacting other processes that may look at an inconsistent
entry that a different process has failed to create.

Author: Mikhail Kot
Discussion: https://postgr.es/m/caai9e7jelo5_-sbenftnc2e8xhw2pkzjwftc3i2y-gmqd2b...@mail.gmail.com
Backpatch-through: 15

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/8191e0c16a0373f851a9f5a8112e3aec105b5276

Modified Files
--------------
src/backend/utils/activity/pgstat.c       | 11 +++
src/backend/utils/activity/pgstat_shmem.c | 28 +++-
2 files changed, 38 insertions(+), 1 deletion(-)
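The failure-safe allocation can be sketched as below.
dsa_allocate_extended(), DSA_ALLOC_NO_OOM, DSA_ALLOC_ZERO, and
DsaPointerIsValid() are real DSA facilities; the function itself is a
hedged condensation of the approach, not the committed diff.

    #include "postgres.h"
    #include "utils/dsa.h"

    /* Sketch: allocate the stats chunk without throwing on OOM. */
    static dsa_pointer
    pgstat_alloc_chunk_sketch(dsa_area *area, size_t shared_size)
    {
        dsa_pointer chunk;

        /*
         * DSA_ALLOC_NO_OOM returns InvalidDsaPointer instead of raising
         * ERROR, so no error can escape while a half-initialized entry is
         * already visible to other backends in the shared hashtable.
         */
        chunk = dsa_allocate_extended(area, shared_size,
                                      DSA_ALLOC_ZERO | DSA_ALLOC_NO_OOM);

        if (!DsaPointerIsValid(chunk))
            return InvalidDsaPointer;   /* caller must first drop the
                                         * hashtable entry, then ereport() */
        return chunk;
    }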
pgsql: Add error codes when vacuum discovers VM corruption
Add error codes when vacuum discovers VM corruption

Commit fd6ec93bf890314a and other previous work established the
principle that when an error is potentially reachable in case of
on-disk corruption but is not expected to be reached otherwise,
ERRCODE_DATA_CORRUPTED should be used. This allows log monitoring
software to search for evidence of corruption by filtering on the
error code.

Enhance the existing log messages emitted when the heap page is found
to be inconsistent with the VM by adding this error code.

Suggested-by: Andrey Borodin
Author: Melanie Plageman
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/87DD95AA-274F-4F4F-BAD9-7738E5B1F905%40yandex-team.ru

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/8ec97e78a7713a1ebf4976b55c19f6c9bc2716d9

Modified Files
--------------
src/backend/access/heap/vacuumlazy.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
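The pattern amounts to attaching errcode(ERRCODE_DATA_CORRUPTED)
(SQLSTATE XX001) to the existing messages so monitoring tools can
filter on it. A hedged illustration; the message wording and the blkno
and relname variables are ours, not vacuumlazy.c's.

    #include "postgres.h"

    /* Inside the heap-vs-VM consistency check, abbreviated: */
    ereport(WARNING,
            (errcode(ERRCODE_DATA_CORRUPTED),  /* the commit's addition */
             errmsg("page %u of relation \"%s\" is marked all-visible in visibility map, but the page itself is not all-visible",
                    blkno, relname)));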