On Mon, Nov 28, 2022 at 11:55 PM Andres Freund <and...@anarazel.de> wrote:
>
> > Is tracking lastImportantPos as pg_atomic_uint64 in XLogCtlInsert any
> > better than an explicit spinlock? I think it's better on platforms
> > where atomics are supported, however, it boils down to using a spin
> > lock on the platforms where atomics aren't supported.
>
> A central atomic in XLogCtlInsert would be better than a spinlock protected
> variable, but still bad. We do *not* want to have more central state that
> needs to be manipulated, we want *less*.

Agreed.

> If we wanted to optimize this - and I haven't seen any evidence it's worth
> doing so - we should just optimize the lock acquisitions in
> GetLastImportantRecPtr() away. *Without* centralizing the state.

Hm. I can think of converting lastImportantAt from XLogRecPtr to
pg_atomic_uint64 while keeping it within the WALInsertLock
structure. This prevents torn reads and also avoids the WAL insertion
lock acquire-release cycles in GetLastImportantRecPtr(). Please see
the attached patch.

If this idea is worth pursuing, I would like to bring this together
with the other thread [1], which converts insertingAt to an atomic and
reworks other WAL insert lock code, and start a new thread for both.
BTW, the patch at [1] seems to show a good benefit for highly
concurrent inserts of small records.

Thoughts?

[1] 
https://www.postgresql.org/message-id/CALj2ACWkWbheFhkPwMw83CUpzHFGXSV_HXTBxG9%2B-PZ3ufHE%3DQ%40mail.gmail.com

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
From 1f37fd240f2d381917595acbbc18d3b3ce714d2c Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Tue, 29 Nov 2022 06:55:27 +0000
Subject: [PATCH v2] Convert lastImportantAt to a 64-bit atomic to avoid torn
 reads

GetLastImportantRecPtr() currently reads lastImportantAt under WAL
insertion lock acquire-release cycles to avoid torn reads.

Converting lastImportantAt from XLogRecPtr to pg_atomic_uint64
inherently prevents torn reads and simplifies
GetLastImportantRecPtr() a bit, as the WAL insertion lock
acquire-release cycles are no longer needed.
---
 src/backend/access/transam/xlog.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index a31fbbff78..831a18d3bb 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -377,7 +377,7 @@ typedef struct
 {
 	LWLock		lock;
 	XLogRecPtr	insertingAt;
-	XLogRecPtr	lastImportantAt;
+	pg_atomic_uint64	lastImportantAt;
 } WALInsertLock;
 
 /*
@@ -873,7 +873,8 @@ XLogInsertRecord(XLogRecData *rdata,
 		{
 			int			lockno = holdingAllLocks ? 0 : MyLockNo;
 
-			WALInsertLocks[lockno].l.lastImportantAt = StartPos;
+			pg_atomic_write_u64(&WALInsertLocks[lockno].l.lastImportantAt,
+								StartPos);
 		}
 	}
 	else
@@ -4603,7 +4604,8 @@ XLOGShmemInit(void)
 	{
 		LWLockInitialize(&WALInsertLocks[i].l.lock, LWTRANCHE_WAL_INSERT);
 		WALInsertLocks[i].l.insertingAt = InvalidXLogRecPtr;
-		WALInsertLocks[i].l.lastImportantAt = InvalidXLogRecPtr;
+		pg_atomic_init_u64(&WALInsertLocks[i].l.lastImportantAt,
+						   InvalidXLogRecPtr);
 	}
 
 	/*
@@ -6124,13 +6126,10 @@ GetLastImportantRecPtr(void)
 		XLogRecPtr	last_important;
 
 		/*
-		 * Need to take a lock to prevent torn reads of the LSN, which are
-		 * possible on some of the supported platforms. WAL insert locks only
-		 * support exclusive mode, so we have to use that.
+		 * lastImportantAt is read atomically, which prevents torn reads,
+		 * so there is no need to take a WAL insert lock here.
 		 */
-		LWLockAcquire(&WALInsertLocks[i].l.lock, LW_EXCLUSIVE);
-		last_important = WALInsertLocks[i].l.lastImportantAt;
-		LWLockRelease(&WALInsertLocks[i].l.lock);
+		last_important = pg_atomic_read_u64(&WALInsertLocks[i].l.lastImportantAt);
 
 		if (res < last_important)
 			res = last_important;
-- 
2.34.1
