Re: pgsql: Add pg_depend.refobjversion.
This commit assigns OIDs from the reserved range:

+DECLARE_TOAST(pg_depend, , 8889);

Do you want to try the renumber_oids.pl script to fix this?

On 2020-11-02 13:27, Thomas Munro wrote:
> Add pg_depend.refobjversion.
>
> Provide a place for the version of referenced database objects to be
> recorded. A follow-up commit will use this to record dependencies on
> collation versions for indexes, but similar ideas for other kinds of
> objects have also been mooted.
>
> Author: Thomas Munro
> Reviewed-by: Julien Rouhaud
> Reviewed-by: Peter Eisentraut
> Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
>
> Branch
> ------
> master
>
> Details
> -------
> https://git.postgresql.org/pg/commitdiff/cd6f479e79f3a33ef7a919c6b6c0c498c790f154
>
> Modified Files
> --------------
> doc/src/sgml/catalogs.sgml                | 11 +++
> src/backend/catalog/dependency.c          | 14 ++
> src/backend/catalog/pg_depend.c           | 14 ++
> src/include/catalog/catversion.h          |  2 +-
> src/include/catalog/dependency.h          |  1 +
> src/include/catalog/pg_depend.h           |  4
> src/include/catalog/toasting.h            |  1 +
> src/test/regress/expected/misc_sanity.out |  4 ++--
> 8 files changed, 40 insertions(+), 11 deletions(-)
Re: pgsql: Get rid of the dedicated latch for signaling the startup process
On 2020/11/05 6:02, Fujii Masao wrote:
On 2020/11/05 5:36, Heikki Linnakangas wrote:
On 04/11/2020 15:17, Heikki Linnakangas wrote:
On 04/11/2020 14:03, Fujii Masao wrote:
Or ISTM that WakeupRecovery() should set the latch only when the latch
has not been reset to NULL yet.
Got to be careful with race conditions, if the latch is set to NULL at
the same time that WakeupRecovery() is called.
I don't think commit 113d3591b8 got this quite right:
void
WakeupRecovery(void)
{
    if (XLogCtl->recoveryWakeupLatch)
        SetLatch(XLogCtl->recoveryWakeupLatch);
}
If XLogCtl->recoveryWakeupLatch is set to NULL between the if and the SetLatch,
you'll still get a segfault. That's highly unlikely to happen in practice because
the compiler will optimize that into a single load instruction, but could happen
with -O0. I think you'd need to do the access only once, using a volatile pointer,
to make that safe.
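For illustration, here is a minimal sketch of the single-read approach described above, using the volatile-pointer idiom; it is not from any posted patch, and it still assumes that the pointer load itself is atomic, which is exactly the concern raised in the next message:

/*
 * Sketch only, in the context of xlog.c where XLogCtlData and XLogCtl are
 * visible: read the shared latch pointer exactly once into a local variable,
 * then test and use the local copy, so the NULL check and the SetLatch()
 * call cannot observe different values.  This still relies on a Latch *
 * load being atomic on the platform.
 */
void
WakeupRecovery(void)
{
    volatile XLogCtlData *xlogctl = XLogCtl;
    Latch      *latch = xlogctl->recoveryWakeupLatch;

    if (latch)
        SetLatch(latch);
}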
On second thought, since fetching the latch pointer might not be atomic,
maybe we need to use a spinlock like WalRcvForceReply() does. But using
a spinlock in every call of WakeupRecovery() might cause performance
overhead. So I'm thinking of using the spinlock only when it's necessary,
i.e., when the latch may be reset to NULL concurrently. Attached is the
POC patch implementing that. Thoughts?
Maybe it's simpler to just not reset it to NULL, after all.
Yes, you're right. I agree it's simpler to remove the code resetting
the latch to NULL. Also, as the comment for that code explains, it's
basically not necessary to reset it to NULL.
On second thought, your comment upthread, "it seems a bit sloppy to
leave it pointing to a latch that belongs to a random process though,"
seems right. So I'm inclined to avoid this approach for now.
Regards,
--
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index a1078a7cfc..fc8ae318fd 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -7488,11 +7488,15 @@ StartupXLOG(void)
 		ResetUnloggedRelations(UNLOGGED_RELATION_INIT);

 	/*
-	 * We don't need the latch anymore. It's not strictly necessary to reset
-	 * it to NULL, but let's do it for the sake of tidiness.
+	 * Reset recoveryWakeupLatch to NULL to prevent us from leaving it
+	 * pointing to a latch that might belong to another process later.
 	 */
 	if (ArchiveRecoveryRequested)
+	{
+		SpinLockAcquire(&XLogCtl->info_lck);
 		XLogCtl->recoveryWakeupLatch = NULL;
+		SpinLockRelease(&XLogCtl->info_lck);
+	}

 	/*
 	 * We are now done reading the xlog from stream. Turn off streaming
@@ -12669,12 +12673,24 @@ CheckPromoteSignal(void)
 /*
  * Wake up startup process to replay newly arrived WAL, or to notice that
  * failover has been requested.
+ *
+ * use_lock: if true, acquire the spinlock to fetch the latch pointer. This
+ * should be true if walreceiver has already been marked as inactive, because
+ * the startup process might reset the latch to NULL concurrently in that
+ * case.
  */
 void
-WakeupRecovery(void)
+WakeupRecovery(bool use_lock)
 {
-	if (XLogCtl->recoveryWakeupLatch)
-		SetLatch(XLogCtl->recoveryWakeupLatch);
+	Latch	   *latch;
+
+	if (use_lock)
+		SpinLockAcquire(&XLogCtl->info_lck);
+	latch = XLogCtl->recoveryWakeupLatch;
+	if (use_lock)
+		SpinLockRelease(&XLogCtl->info_lck);
+	if (latch)
+		SetLatch(latch);
 }
/*
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index bb1d44ccb7..c89757ec69 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -666,7 +666,7 @@ WalRcvWaitForStartPosition(XLogRecPtr *startpoint, TimeLineID *startpointTLI)
 	 * nudge startup process to notice that we've stopped streaming and are
 	 * now waiting for instructions.
 	 */
-	WakeupRecovery();
+	WakeupRecovery(false);
 	for (;;)
 	{
 		ResetLatch(walrcv->latch);
@@ -803,7 +803,7 @@ WalRcvDie(int code, Datum arg)
 		walrcv_disconnect(wrconn);

 	/* Wake up the startup process to notice promptly that we're gone */
-	WakeupRecovery();
+	WakeupRecovery(true);
 }

 /* SIGHUP: set flag to re-read config file at next convenient time */
@@ -1025,7 +1025,7 @@ XLogWalRcvFlush(bool dying)
 		SpinLockRelease(&walrcv->mutex);

 		/* Signal the startup process and walsender that new WAL has arrived */
-		WakeupRecovery();
+		WakeupRecovery(false);

 		if (AllowCascadeReplication())
 			WalSndWakeup();
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index 221af87e71..43
Re: pgsql: Add pg_depend.refobjversion.
Peter Eisentraut writes:
> This commit assigns OIDs from the reserved range:
> +DECLARE_TOAST(pg_depend, , 8889);

> Do you want to try the renumber_oids.pl script to fix this?

AFAIK the agreed-on process is to do that once per release cycle.
Is there some reason why it needs to be done now?

			regards, tom lane
pgsql: Don't throw an error for LOCK TABLE on a self-referential view.
Don't throw an error for LOCK TABLE on a self-referential view.

LOCK TABLE has complained about "infinite recursion" when applied to a
self-referential view, ever since we made it recurse into views in v11.
However, that breaks pg_dump's new assumption that it's okay to lock
every relation. There doesn't seem to be any good reason to throw an
error: if we just abandon the recursion, we've still satisfied the
requirement of locking every referenced relation.

Per bug #16703 from Andrew Bille (via Alexander Lakhin).

Discussion: https://postgr.es/m/[email protected]

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/7dc18c619dac7ddf8af62b4d33e4bd33c7bfb067

Modified Files
--------------
src/backend/commands/lockcmds.c    | 21 -
src/test/regress/expected/lock.out |  4 +---
src/test/regress/sql/lock.sql      |  2 +-
3 files changed, 14 insertions(+), 13 deletions(-)
pgsql: Don't throw an error for LOCK TABLE on a self-referential view.
Don't throw an error for LOCK TABLE on a self-referential view.

LOCK TABLE has complained about "infinite recursion" when applied to a
self-referential view, ever since we made it recurse into views in v11.
However, that breaks pg_dump's new assumption that it's okay to lock
every relation. There doesn't seem to be any good reason to throw an
error: if we just abandon the recursion, we've still satisfied the
requirement of locking every referenced relation.

Per bug #16703 from Andrew Bille (via Alexander Lakhin).

Discussion: https://postgr.es/m/[email protected]

Branch
------
REL_13_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/44b973b91029cb5aecf09d589bdf3f05cfddaa60

Modified Files
--------------
src/backend/commands/lockcmds.c    | 21 -
src/test/regress/expected/lock.out |  4 +---
src/test/regress/sql/lock.sql      |  2 +-
3 files changed, 14 insertions(+), 13 deletions(-)
pgsql: Don't throw an error for LOCK TABLE on a self-referential view.
Don't throw an error for LOCK TABLE on a self-referential view.

LOCK TABLE has complained about "infinite recursion" when applied to a
self-referential view, ever since we made it recurse into views in v11.
However, that breaks pg_dump's new assumption that it's okay to lock
every relation. There doesn't seem to be any good reason to throw an
error: if we just abandon the recursion, we've still satisfied the
requirement of locking every referenced relation.

Per bug #16703 from Andrew Bille (via Alexander Lakhin).

Discussion: https://postgr.es/m/[email protected]

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/0bdf1ef3d53e42c1c050a9f30b663fe72269a129

Modified Files
--------------
src/backend/commands/lockcmds.c    | 21 -
src/test/regress/expected/lock.out |  4 +---
src/test/regress/sql/lock.sql      |  2 +-
3 files changed, 14 insertions(+), 13 deletions(-)
pgsql: Don't throw an error for LOCK TABLE on a self-referential view.
Don't throw an error for LOCK TABLE on a self-referential view.

LOCK TABLE has complained about "infinite recursion" when applied to a
self-referential view, ever since we made it recurse into views in v11.
However, that breaks pg_dump's new assumption that it's okay to lock
every relation. There doesn't seem to be any good reason to throw an
error: if we just abandon the recursion, we've still satisfied the
requirement of locking every referenced relation.

Per bug #16703 from Andrew Bille (via Alexander Lakhin).

Discussion: https://postgr.es/m/[email protected]

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/5b7bfc39726ff9f6c52dd73e337c34e74e0d1f39

Modified Files
--------------
src/backend/commands/lockcmds.c    | 21 -
src/test/regress/expected/lock.out |  4 +---
src/test/regress/sql/lock.sql      |  2 +-
3 files changed, 14 insertions(+), 13 deletions(-)
Re: pgsql: Add pg_depend.refobjversion.
On 2020-11-05 15:50, Tom Lane wrote:
> Peter Eisentraut writes:
>> This commit assigns OIDs from the reserved range:
>> +DECLARE_TOAST(pg_depend, , 8889);
>> Do you want to try the renumber_oids.pl script to fix this?
>
> AFAIK the agreed-on process is to do that once per release cycle.
> Is there some reason why it needs to be done now?

Ah, okay, then nevermind.
Re: pgsql: Add pg_depend.refobjversion.
On Thu, Nov 5, 2020 at 3:50 PM Tom Lane wrote:
> Peter Eisentraut writes:
>> This commit assigns OIDs from the reserved range:
>> +DECLARE_TOAST(pg_depend, , 8889);
>
>> Do you want to try the renumber_oids.pl script to fix this?
>
> AFAIK the agreed-on process is to do that once per release cycle.
> Is there some reason why it needs to be done now?

Did we ever actually document this process somewhere? I went looking
for it some time ago and failed to find it (and eventually found the
old email thread(s) around it, but that's not really documentation),
but that could definitely be my searching

-- 
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
Re: pgsql: Add pg_depend.refobjversion.
Magnus Hagander writes:
> On Thu, Nov 5, 2020 at 3:50 PM Tom Lane wrote:
>> AFAIK the agreed-on process is to do that once per release cycle.
>> Is there some reason why it needs to be done now?

> Did we ever actually document this process somewhere? I went looking
> for it some time ago and failed to find it (and eventually found the
> old email thread(s) around it, but that's not really documentation),
> but that could definitely be my searching

Yeah, it's documented here:

https://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT

and there is an entry in src/tools/RELEASE_CHANGES to remind us
to actually do the cleanup work.

			regards, tom lane
Re: pgsql: Add pg_depend.refobjversion.
On Thu, Nov 5, 2020 at 8:17 PM Tom Lane wrote:
> Magnus Hagander writes:
>> On Thu, Nov 5, 2020 at 3:50 PM Tom Lane wrote:
>>> AFAIK the agreed-on process is to do that once per release cycle.
>>> Is there some reason why it needs to be done now?
>
>> Did we ever actually document this process somewhere? I went looking
>> for it some time ago and failed to find it (and eventually found the
>> old email thread(s) around it, but that's not really documentation),
>> but that could definitely be my searching
>
> Yeah, it's documented here:
>
> https://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT
>
> and there is an entry in src/tools/RELEASE_CHANGES to remind us
> to actually do the cleanup work.

Ha! Thanks. I did find the RELEASE_CHANGES ones, but for some reason
managed to miss the other one.

-- 
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
pgsql: Fix wal_consistency_checking nbtree bug.
Fix wal_consistency_checking nbtree bug.

wal_consistency_checking indicated an inconsistency in certain cases
involving nbtree page deletion. The underlying issue is that there was a
minor difference between the page image produced after a REDO routine ran
and the corresponding page image following original execution.

This harmless inconsistency has been around forever. We more or less
expect total consistency among even deleted nbtree pages these days,
though, so this won't do anymore. To fix, tweak the REDO routine to match
original execution.

Oversight in commit f47b5e13.

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/efc5dcfd8ad4e1df633025d8a91b64cd44d93f42

Modified Files
--------------
src/backend/access/nbtree/nbtxlog.c | 2 ++
1 file changed, 2 insertions(+)
pgsql: Use strlcpy instead of memcpy for copying the slot name in pgsta
Use strlcpy instead of memcpy for copying the slot name in pgstat.c.

There is no outright bug here but it is better to be consistent with the
usage at other places in the same file. In passing, fix a wrong assertion
in pgstat_recv_replslot.

Author: Kyotaro Horiguchi
Reviewed-by: Sawada Masahiko and Amit Kapila
Discussion: https://postgr.es/m/[email protected]

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/4f841ce3f7f4d429a3a275f82745d63c78cde4b2

Modified Files
--------------
src/backend/postmaster/pgstat.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
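As a side note, here is a small stand-alone sketch of the property the commit relies on; the buffer size, the variable names, and the local strlcpy-style helper are made up for illustration and are not taken from pgstat.c:

#include <stdio.h>
#include <string.h>

#define SLOTNAMELEN 64			/* made-up size, standing in for NAMEDATALEN */

/*
 * Local helper with strlcpy() semantics, so the sketch compiles even where
 * the platform lacks strlcpy() (PostgreSQL ships its own in src/port).
 * It copies at most siz - 1 bytes, always NUL-terminates the destination,
 * and returns strlen(src), i.e. the length it tried to copy.
 */
static size_t
copy_bounded(char *dst, const char *src, size_t siz)
{
    size_t      len = strlen(src);

    if (siz > 0)
    {
        size_t      n = (len < siz - 1) ? len : siz - 1;

        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}

int
main(void)
{
    char        name[SLOTNAMELEN];
    const char *src = "physical_slot_with_a_rather_long_name";

    /*
     * memcpy(name, src, sizeof(name)) would read sizeof(name) bytes from src
     * regardless of where its terminating NUL is, and would not guarantee
     * that name ends up NUL-terminated.  In pgstat.c both source and
     * destination are fixed-size name buffers, so memcpy was not an outright
     * bug (as the commit message notes); the strlcpy-style copy is simply
     * the safer, consistent idiom: it stops at the source terminator and
     * always NUL-terminates, truncating if necessary.
     */
    copy_bounded(name, src, sizeof(name));
    printf("%s\n", name);
    return 0;
}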
