Hello.

On Wed, Nov 16, 2022 at 3:44 AM Andres Freund <and...@anarazel.de> wrote:
> Approach 1:

> We could have an atomic variable in ProcArrayStruct that counts the amount of
> wasted effort and have processes update it whenever they've wasted a
> meaningful amount of effort.  Something like counting the skipped elements in
> KnownAssignedXidsGetAndSetXmin in a function local static variable and
> updating the shared counter whenever that reaches

I made a WIP patch for that approach and ran some initial tests. It
seems to work pretty well.
At least it is better than the previous approaches for standbys
without a high read-only load.
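
For reference, this is the shape of the idea as I understood it (a
minimal sketch only; the function name, counter name and batch size
below are illustrative, and the attached WIP patch currently adds to
the shared counter on every snapshot, without per-backend batching yet):

/*
 * Sketch, as it could look inside procarray.c: each backend accumulates
 * the number of skipped (invalid) KnownAssignedXids entries locally and
 * flushes the total to a shared atomic counter once it gets large enough.
 */
#define WASTED_WORK_FLUSH_BATCH 256		/* illustrative batch size */

static pg_atomic_uint32 *WastedSnapshotWork;	/* allocated in shared memory */

static void
ReportWastedSnapshotWork(uint32 skipped)
{
	static uint32 local_wasted = 0;		/* per-backend accumulator */

	local_wasted += skipped;

	/* Flush only once a meaningful amount of wasted effort has piled up. */
	if (local_wasted >= WASTED_WORK_FLUSH_BATCH)
	{
		pg_atomic_fetch_add_u32(WastedSnapshotWork, local_wasted);
		local_wasted = 0;
	}
}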

Both the patch and a graph are attached. The strange-looking numbers
in the graph are the wasted-work limit at which compression is performed.
I used the same testing script and configuration [1] as before
(two 16-CPU machines, a long transaction on the primary starting at the
60th second, simple-update and select-only pgbench workloads).

If this approach looks committable, I could do more careful
performance testing to find the best value for
WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS.
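
(For a rough sense of scale: with the current value of 1111111, a
standby whose snapshots each skip about 1,000 invalid entries would
trigger compression roughly once per ~1,100 snapshots; lowering the
limit makes compression correspondingly more frequent.)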

[1]: https://gist.github.com/michail-nikolaev/e1dfc70bdd7cfd1b902523dbb3db2f28
--
Michail Nikolaev
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
--- a/src/backend/storage/ipc/procarray.c	(revision fb32748e32e2b6b2fcb32220980b93d5436f855e)
+++ b/src/backend/storage/ipc/procarray.c	(date 1668950653116)
@@ -272,6 +272,7 @@
  */
 static TransactionId *KnownAssignedXids;
 static bool *KnownAssignedXidsValid;
+static pg_atomic_uint32 *KnownAssignedXidsWastedSnapshotWork;
 static TransactionId latestObservedXid = InvalidTransactionId;
 
 /*
@@ -451,6 +452,10 @@
 			ShmemInitStruct("KnownAssignedXidsValid",
 							mul_size(sizeof(bool), TOTAL_MAX_CACHED_SUBXIDS),
 							&found);
+		KnownAssignedXidsWastedSnapshotWork = (pg_atomic_uint32 *)
+			ShmemInitStruct("KnownAssignedXidsWastedSnapshotWork",
+							sizeof(pg_atomic_uint32), &found);
+		pg_atomic_init_u32(KnownAssignedXidsWastedSnapshotWork, 0);
 	}
 }
 
@@ -4616,20 +4621,9 @@
 
 	if (!force)
 	{
-		/*
-		 * If we can choose how much to compress, use a heuristic to avoid
-		 * compressing too often or not often enough.
-		 *
-		 * Heuristic is if we have a large enough current spread and less than
-		 * 50% of the elements are currently in use, then compress. This
-		 * should ensure we compress fairly infrequently. We could compress
-		 * less often though the virtual array would spread out more and
-		 * snapshots would become more expensive.
-		 */
-		int			nelements = head - tail;
-
-		if (nelements < 4 * PROCARRAY_MAXPROCS ||
-			nelements < 2 * pArray->numKnownAssignedXids)
+#define WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS 1111111
+		if (pg_atomic_read_u32(KnownAssignedXidsWastedSnapshotWork)
+									< WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS)
 			return;
 	}
 
@@ -4650,6 +4644,8 @@
 
 	pArray->tailKnownAssignedXids = 0;
 	pArray->headKnownAssignedXids = compress_index;
+	/* Reset wasted work counter */
+	pg_atomic_write_u32(KnownAssignedXidsWastedSnapshotWork, 0);
 }
 
 /*
@@ -5031,6 +5027,7 @@
 KnownAssignedXidsGetAndSetXmin(TransactionId *xarray, TransactionId *xmin,
 							   TransactionId xmax)
 {
+	ProcArrayStruct *pArray = procArray;
 	int			count = 0;
 	int			head,
 				tail;
@@ -5078,6 +5075,10 @@
 		}
 	}
 
+	/* Add number of invalid items scanned to wasted work counter */
+	pg_atomic_add_fetch_u32(KnownAssignedXidsWastedSnapshotWork,
+							(head - tail) - pArray->numKnownAssignedXids);
+
 	return count;
 }
 
