Attached is PoC v2, which fully replaces the hash table with a radix
tree and passes regression tests.

The architecture is still one where tidbitmap.c has its own tree of
(fixed-length) PagetableEntries. The eventual goal is for vacuum and
tidbitmap to both use TID store in some form, but there are some
architectural issues to work out first.

0001 temporarily gets rid of the tidbitmap one-page cache so I could
always count on the hash table and radix tree having matching contents
during development, barring bugs. I'm curious how important it is, so
it doesn't yet get put back in this patch set.
0002 temporarily turns off page lossification for the same reason
of matching contents. (Still not reimplemented.)
0003 makes some radix tree APIs more similar to simplehash, in a
backward-compatible way.
0004 is a step towards putting allocation of tree values in the
caller's hands. This will be even more necessary when doing unions and
intersections with variable-length bitmaps (not yet implemented). The
sketch after this list shows the usage pattern 0003 and 0004 enable.
0005 makes it easier to add new tree APIs that aren't used everywhere.
0006 adds RT_BEGIN_ITERATE_AT, similar to simplehash. This is a
prerequisite for 0007, and could also be used for shared iteration.
This implementation is naive and may slow down iteration, but that can
easily be fixed.
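
To illustrate what 0003 and 0004 enable together, the caller-side
pattern comes out roughly like this (a sketch of the local-memory
case, modeled on tbm_get_pageentry in 0008, given a "tbm" and a
"pageno"; under RT_SHMEM the slot would hold a DSA pointer instead):

    PagetableEntry **slot;
    PagetableEntry *page;
    bool        found;

    /* 0003: find or create the slot for this key */
    slot = tbmrt_get_slot(tbm->pagetree, pageno, &found);
    if (!found)
    {
        /* 0004: the caller allocates the leaf and stores it in the slot */
        page = tbmrt_alloc_leaf(tbm->pagetree, sizeof(PagetableEntry));
        MemSet(page, 0, sizeof(PagetableEntry));
        page->blockno = pageno;
        *slot = page;
    }
    else
        page = *slot;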

On Thu, Jul 3, 2025 at 3:43 PM I wrote:
> Second issue: TBM intersection
> 1) Radix tree iteration assumes that no one will delete keys out from
> under it. That includes the caller itself! This is not sufficient for
> intersecting bitmaps. I kluged around it by saving
> blocks-to-be-deleted in a stack array, but a proper solution could be
> to (vigorous handwaving)
> 1) save per-leaf-node keys in the iterator
> 2) delete the keys when iteration leaves that node, possibly with a
> bulk-delete mechanism
> 3) restart iteration from the top, at the next part of the key space

0007 implements the above (but no bulk deletion). It's so far only
used for intersection, but will also be the basis for page
lossification. Because the latter requires inserts as well as
deletes, it could be tricky to do nicely. Compared to the hash table,
retail deletion is likely slower in general, but since deletions are
batched per leaf node it might be okay. 0006 and 0007 were tricky to
get right, and may contain bugs. A sketch of the caller-side pattern
is below.
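
For reference, the caller-side pattern for 0007 looks roughly like
this (a sketch based on tbm_intersect in 0008; counter bookkeeping
elided):

    tbmrt_iter *it = tbmrt_begin_iterate(a->pagetree);
    PagetableEntry *apage;
    uint64      key;

    while ((apage = tbmrt_iterate_next(it, &key)) != NULL)
    {
        if (tbm_intersect_page(a, apage, b))
        {
            /* stage the current key; deletes are batched per leaf node */
            tbmrt_iter_stage_delete(it);
        }
    }
    /* any remaining staged deletions are committed here */
    tbmrt_end_iterate(it);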

0008 is the main event. It's sloppy about memory management and
contains a kluge for the shared memory case: I was unable to quickly
adapt the existing relative-pointer logic, so I gave up and allocated
a new DSA array, since that works and gets the point across.

There are new radix tree tests (separately, some of the existing ones
should be simplified to look more like the new bitmapset testing
module), but no new bitmap heap scan tests.

With some cleanups, restoring lossification, and maybe performance
work, the above could be the basis of a viable commit, since it has a
fairly small blast radius and gets rid of the qsort steps. However, I
believe it won't quite be enough on its own for a major release. For
that, we'd want
* variable-length values to save memory, which would add some complexity
to union and intersection steps
* probably direct iteration over the tree, rather than via reference
arrays, which add memory overhead and are no longer as useful now
that we don't need a separate sorting step
* integration with TID store in some form
* adjustments to costing?

--
John Naylor
Amazon Web Services
From a01739c0989c0876aeed38818e4ca826b056bb17 Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Fri, 19 Sep 2025 16:06:37 +0700
Subject: [PATCH v2 4/8] Decouple RT_ALLOC_LEAF from node types

Previously, leaf pointers were treated as RT_CHILD_PTR.

Change RT_ALLOC_LEAF to return a pointer to an allocation (either
void * or DSA) and add RT_PTR_GET_LOCAL in a similar vein to
RT_PTR_SET_LOCAL.

We may need to expose leaf allocation in the public API, but this
hasn't been done yet.
---
 src/include/lib/radixtree.h | 46 ++++++++++++++++++++++++-------------
 1 file changed, 30 insertions(+), 16 deletions(-)

diff --git a/src/include/lib/radixtree.h b/src/include/lib/radixtree.h
index b0902988ee1..8e8f3bd507a 100644
--- a/src/include/lib/radixtree.h
+++ b/src/include/lib/radixtree.h
@@ -209,6 +209,7 @@
 #define RT_EXTEND_UP RT_MAKE_NAME(extend_up)
 #define RT_EXTEND_DOWN RT_MAKE_NAME(extend_down)
 #define RT_COPY_COMMON RT_MAKE_NAME(copy_common)
+#define RT_PTR_GET_LOCAL RT_MAKE_NAME(ptr_get_local)
 #define RT_PTR_SET_LOCAL RT_MAKE_NAME(ptr_set_local)
 #define RT_NODE_16_SEARCH_EQ RT_MAKE_NAME(node_16_search_eq)
 #define RT_NODE_4_GET_INSERTPOS RT_MAKE_NAME(node_4_get_insertpos)
@@ -405,7 +406,7 @@ typedef struct RT_NODE
 #define RT_INVALID_PTR_ALLOC InvalidDsaPointer
 #define RT_PTR_ALLOC_IS_VALID(ptr) DsaPointerIsValid(ptr)
 #else
-#define RT_PTR_ALLOC RT_NODE *
+#define RT_PTR_ALLOC void *
 #define RT_INVALID_PTR_ALLOC NULL
 #define RT_PTR_ALLOC_IS_VALID(ptr) PointerIsValid(ptr)
 #endif
@@ -772,6 +773,16 @@ RT_PTR_SET_LOCAL(RT_RADIX_TREE * tree, RT_CHILD_PTR * node)
 #endif
 }
 
+static inline void *
+RT_PTR_GET_LOCAL(RT_RADIX_TREE * tree, RT_PTR_ALLOC nodealloc)
+{
+#ifdef RT_SHMEM
+	return dsa_get_address(tree->dsa, nodealloc);
+#else
+	return nodealloc;
+#endif
+}
+
 /* Convenience functions for node48 and node256 */
 
 /* Return true if there is an entry for "chunk" */
@@ -894,23 +905,22 @@ RT_ALLOC_NODE(RT_RADIX_TREE * tree, const uint8 kind, const RT_SIZE_CLASS size_c
 /*
  * Allocate a new leaf.
  */
-static RT_CHILD_PTR
+static RT_PTR_ALLOC
 RT_ALLOC_LEAF(RT_RADIX_TREE * tree, size_t allocsize)
 {
-	RT_CHILD_PTR leaf;
+	RT_PTR_ALLOC leafalloc;
 
 #ifdef RT_SHMEM
-	leaf.alloc = dsa_allocate(tree->dsa, allocsize);
-	RT_PTR_SET_LOCAL(tree, &leaf);
+	leafalloc = dsa_allocate(tree->dsa, allocsize);
 #else
-	leaf.alloc = (RT_PTR_ALLOC) MemoryContextAlloc(tree->leaf_context, allocsize);
+	leafalloc = (RT_PTR_ALLOC) MemoryContextAlloc(tree->leaf_context, allocsize);
 #endif
 
 #ifdef RT_DEBUG
 	tree->ctl->num_leaves++;
 #endif
 
-	return leaf;
+	return leafalloc;
 }
 
 /*
@@ -1784,33 +1794,36 @@ RT_SET_SLOT(RT_RADIX_TREE * tree, void * slot_v, RT_VALUE_TYPE * value_p, bool f
 	}
 	else
 	{
-		RT_CHILD_PTR leaf;
+		RT_PTR_ALLOC leafalloc;
+		RT_VALUE_TYPE *leaf;
 
 		if (found && !RT_CHILDPTR_IS_VALUE(*slot))
 		{
 			Assert(RT_PTR_ALLOC_IS_VALID(*slot));
-			leaf.alloc = *slot;
-			RT_PTR_SET_LOCAL(tree, &leaf);
+			leafalloc = *slot;
+			leaf = (RT_VALUE_TYPE *) RT_PTR_GET_LOCAL(tree, leafalloc);
 
-			if (RT_GET_VALUE_SIZE((RT_VALUE_TYPE *) leaf.local) != value_sz)
+			if (RT_GET_VALUE_SIZE(leaf) != value_sz)
 			{
 				/*
 				 * different sizes, so first free the existing leaf before
 				 * allocating a new one
 				 */
 				RT_FREE_LEAF(tree, *slot);
-				leaf = RT_ALLOC_LEAF(tree, value_sz);
-				*slot = leaf.alloc;
+				leafalloc = RT_ALLOC_LEAF(tree, value_sz);
+				*slot = leafalloc;
+				leaf = (RT_VALUE_TYPE *) RT_PTR_GET_LOCAL(tree, leafalloc);
 			}
 		}
 		else
 		{
 			/* allocate new leaf and store it in the child array */
-			leaf = RT_ALLOC_LEAF(tree, value_sz);
-			*slot = leaf.alloc;
+			leafalloc = RT_ALLOC_LEAF(tree, value_sz);
+			*slot = leafalloc;
+			leaf = (RT_VALUE_TYPE *) RT_PTR_GET_LOCAL(tree, leafalloc);
 		}
 
-		memcpy(leaf.local, value_p, value_sz);
+		memcpy(leaf, value_p, value_sz);
 	}
 
 	return found;
@@ -3021,6 +3034,7 @@ RT_DUMP_NODE(RT_NODE * node)
 #undef RT_EXTEND_UP
 #undef RT_EXTEND_DOWN
 #undef RT_COPY_COMMON
+#undef RT_PTR_GET_LOCAL
 #undef RT_PTR_SET_LOCAL
 #undef RT_PTR_ALLOC_IS_VALID
 #undef RT_NODE_16_SEARCH_EQ
-- 
2.51.0

From c6394b202070aef28d2fa75617e930a6514a614d Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Tue, 2 Sep 2025 15:57:38 +0700
Subject: [PATCH v2 6/8] Add RT_BEGIN_ITERATE_AT

Instead of beginning at the lowest key, start with any given key.
The first call to RT_ITERATE_NEXT will return the start key/value
if it exists, otherwise the next key.

RT_BEGIN_ITERATE is now a thin wrapper around RT_BEGIN_ITERATE_AT.
---
 src/include/lib/radixtree.h                   | 59 +++++++++++++------
 .../modules/test_radixtree/test_radixtree.c   | 55 ++++++++++++++++-
 2 files changed, 92 insertions(+), 22 deletions(-)

diff --git a/src/include/lib/radixtree.h b/src/include/lib/radixtree.h
index 21cd0156fe9..080d5abe8b2 100644
--- a/src/include/lib/radixtree.h
+++ b/src/include/lib/radixtree.h
@@ -182,6 +182,7 @@
 #define RT_SET_SLOT RT_MAKE_NAME(set_slot)
 #define RT_SET RT_MAKE_NAME(set)
 #define RT_BEGIN_ITERATE RT_MAKE_NAME(begin_iterate)
+#define RT_BEGIN_ITERATE_AT RT_MAKE_NAME(begin_iterate_at)
 #define RT_ITERATE_NEXT RT_MAKE_NAME(iterate_next)
 #define RT_END_ITERATE RT_MAKE_NAME(end_iterate)
 #define RT_DELETE RT_MAKE_NAME(delete)
@@ -290,6 +291,7 @@ RT_SCOPE bool RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p);
 RT_SCOPE bool RT_DELETE(RT_RADIX_TREE * tree, uint64 key);
 
 RT_SCOPE	RT_ITER *RT_BEGIN_ITERATE(RT_RADIX_TREE * tree);
+RT_SCOPE	RT_ITER *RT_BEGIN_ITERATE_AT(RT_RADIX_TREE * tree, uint64 key);
 RT_SCOPE	RT_VALUE_TYPE *RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p);
 RT_SCOPE void RT_END_ITERATE(RT_ITER * iter);
 
@@ -726,11 +728,7 @@ typedef struct RT_NODE_ITER
 {
 	RT_CHILD_PTR node;
 
-	/*
-	 * The next index of the chunk array in RT_NODE_KIND_4 and RT_NODE_KIND_16
-	 * nodes, or the next chunk in RT_NODE_KIND_48 and RT_NODE_KIND_256 nodes.
-	 * 0 for the initial value.
-	 */
+	// TODO: add next_chunk so idx can just save the last index
 	int			idx;
 }			RT_NODE_ITER;
 
@@ -2085,6 +2083,12 @@ RT_FREE(RT_RADIX_TREE * tree)
  */
 RT_SCOPE	RT_ITER *
 RT_BEGIN_ITERATE(RT_RADIX_TREE * tree)
+{
+	return RT_BEGIN_ITERATE_AT(tree, 0);
+}
+
+RT_SCOPE	RT_ITER *
+RT_BEGIN_ITERATE_AT(RT_RADIX_TREE * tree, uint64 key)
 {
 	RT_ITER    *iter;
 	RT_CHILD_PTR root;
@@ -2101,7 +2105,12 @@ RT_BEGIN_ITERATE(RT_RADIX_TREE * tree)
 	/* Set the root to start */
 	iter->cur_level = iter->top_level;
 	iter->node_iters[iter->cur_level].node = root;
-	iter->node_iters[iter->cur_level].idx = 0;
+
+	/* pre-fill iter stack for first key */
+	for (int level = 0; level <= iter->top_level; level++)
+	{
+		iter->node_iters[level].idx = RT_GET_KEY_CHUNK(key, level * RT_SPAN);
+	}
 
 	return iter;
 }
@@ -2133,25 +2142,35 @@ RT_NODE_ITERATE_NEXT(RT_ITER * iter, int level)
 			{
 				RT_NODE_4  *n4 = (RT_NODE_4 *) (node.local);
 
-				if (node_iter->idx >= n4->base.count)
-					return NULL;
+				for (int i = 0; i < n4->base.count; i++)
+				{
+					if (n4->chunks[i] >= node_iter->idx)
+					{
+						slot = &n4->children[i];
+						key_chunk = n4->chunks[i];
+						node_iter->idx = key_chunk + 1;
+						goto update;
+					}
+				}
 
-				slot = &n4->children[node_iter->idx];
-				key_chunk = n4->chunks[node_iter->idx];
-				node_iter->idx++;
-				break;
+				return NULL;
 			}
 		case RT_NODE_KIND_16:
 			{
 				RT_NODE_16 *n16 = (RT_NODE_16 *) (node.local);
 
-				if (node_iter->idx >= n16->base.count)
-					return NULL;
+				for (int i = 0; i < n16->base.count; i++)
+				{
+					if (n16->chunks[i] >= node_iter->idx)
+					{
+						slot = &n16->children[i];
+						key_chunk = n16->chunks[i];
+						node_iter->idx = key_chunk + 1;
+						goto update;
+					}
+				}
 
-				slot = &n16->children[node_iter->idx];
-				key_chunk = n16->chunks[node_iter->idx];
-				node_iter->idx++;
-				break;
+				return NULL;
 			}
 		case RT_NODE_KIND_48:
 			{
@@ -2195,6 +2214,7 @@ RT_NODE_ITERATE_NEXT(RT_ITER * iter, int level)
 			}
 	}
 
+update:
 	/* Update the key */
 	iter->key &= ~(((uint64) RT_CHUNK_MASK) << (level * RT_SPAN));
 	iter->key |= (((uint64) key_chunk) << (level * RT_SPAN));
@@ -2240,11 +2260,11 @@ RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p)
 
 			iter->cur_level--;
 			iter->node_iters[iter->cur_level].node = node;
-			iter->node_iters[iter->cur_level].idx = 0;
 		}
 		else
 		{
 			/* Not found the child slot, move up the tree */
+			iter->node_iters[iter->cur_level].idx = 0;
 			iter->cur_level++;
 		}
 	}
@@ -2998,6 +3018,7 @@ RT_DUMP_NODE(RT_NODE * node)
 #undef RT_SET_SLOT
 #undef RT_SET
 #undef RT_BEGIN_ITERATE
+#undef RT_BEGIN_ITERATE_AT
 #undef RT_ITERATE_NEXT
 #undef RT_END_ITERATE
 #undef RT_DELETE
diff --git a/src/test/modules/test_radixtree/test_radixtree.c b/src/test/modules/test_radixtree/test_radixtree.c
index 787162c8793..a5465a3a2a6 100644
--- a/src/test/modules/test_radixtree/test_radixtree.c
+++ b/src/test/modules/test_radixtree/test_radixtree.c
@@ -150,6 +150,10 @@ test_empty(void)
 	EXPECT_TRUE(rt_iterate_next(iter, &key) == NULL);
 	rt_end_iterate(iter);
 
+	iter = rt_begin_iterate_at(radixtree, 17);
+	EXPECT_TRUE(rt_iterate_next(iter, &key) == NULL);
+	rt_end_iterate(iter);
+
 	rt_free(radixtree);
 
 #ifdef TEST_SHARED_RT
@@ -157,7 +161,7 @@ test_empty(void)
 #endif
 }
 
-/* Basic set, find, and delete tests */
+/* Basic set, find, delete, and iteration tests */
 static void
 test_basic(rt_node_class_test_elem *test_info, int shift, bool asc)
 {
@@ -265,13 +269,58 @@ test_basic(rt_node_class_test_elem *test_info, int shift, bool asc)
 
 	rt_end_iterate(iter);
 
+	/* test iteration beginning at the largest key */
+	{
+		uint64		largest;
+		uint64		iterkey;
+		TestValueType *iterval;
+		uint64 maxkey = ~UINT64CONST(0);
+
+		/* iteration is ordered by key, so adjust expected value accordingly */
+		if (asc)
+			largest = keys[children - 1];
+		else
+			largest = keys[0];
+
+		iter = rt_begin_iterate_at(radixtree, largest);
+
+		/* expect the value we started at */
+		iterval = rt_iterate_next(iter, &iterkey);
+		EXPECT_TRUE(iterval != NULL);
+		EXPECT_EQ_U64(iterkey, largest);
+		EXPECT_EQ_U64(*iterval, largest);
+
+		/* ...and no more */
+		iterval = rt_iterate_next(iter, &iterkey);
+		EXPECT_TRUE(iterval == NULL);
+
+		rt_end_iterate(iter);
+
+		/* insert maximum key */
+		EXPECT_FALSE(rt_set(radixtree, maxkey, (TestValueType *) &maxkey));
+		/* begin between original largest and max we just inserted */
+		Assert(largest + 1 < maxkey);
+		iter = rt_begin_iterate_at(radixtree, largest + 1);
+		iterval = rt_iterate_next(iter, &iterkey);
+		EXPECT_TRUE(iterval != NULL);
+		EXPECT_EQ_U64(iterkey, maxkey);
+		EXPECT_EQ_U64(*iterval, maxkey);
+		rt_end_iterate(iter);
+
+		/* delete maximum key */
+		EXPECT_TRUE(rt_delete(radixtree, maxkey));
+		iter = rt_begin_iterate_at(radixtree, largest + 1);
+		iterval = rt_iterate_next(iter, &iterkey);
+		EXPECT_TRUE(iterval == NULL);
+		rt_end_iterate(iter);
+	}
+
 	/* delete all keys again */
 	for (int i = 0; i < children; i++)
 		EXPECT_TRUE(rt_delete(radixtree, keys[i]));
 
 	/* test that all keys were deleted */
-	for (int i = 0; i < children; i++)
-		EXPECT_TRUE(rt_find(radixtree, keys[i]) == NULL);
+	EXPECT_TRUE(rt_num_entries(radixtree) == 0);
 
 	rt_stats(radixtree);
 
-- 
2.51.0

From a06b8b7d6d53575dc4345bc8b437c4199c3f3f85 Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Tue, 2 Sep 2025 13:25:21 +0700
Subject: [PATCH v2 5/8] Get rid of RT_USE_DELETE

We are adding more functions, so it's not sustainable to
play whack-a-mole with compiler warnings. Instead, make
TID store declare its static functions as "unused".
---
 src/backend/access/common/tidstore.c |  4 ++--
 src/include/lib/radixtree.h          | 15 ---------------
 2 files changed, 2 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/common/tidstore.c b/src/backend/access/common/tidstore.c
index 5bd75fb499c..2d0d2d1b64d 100644
--- a/src/backend/access/common/tidstore.c
+++ b/src/backend/access/common/tidstore.c
@@ -88,7 +88,7 @@ typedef struct BlocktableEntry
 		(sizeof(bitmapword) * WORDS_PER_PAGE(MAX_OFFSET_IN_BITMAP))
 
 #define RT_PREFIX local_ts
-#define RT_SCOPE static
+#define RT_SCOPE static pg_attribute_unused()
 #define RT_DECLARE
 #define RT_DEFINE
 #define RT_VALUE_TYPE BlocktableEntry
@@ -100,7 +100,7 @@ typedef struct BlocktableEntry
 
 #define RT_PREFIX shared_ts
 #define RT_SHMEM
-#define RT_SCOPE static
+#define RT_SCOPE static pg_attribute_unused()
 #define RT_DECLARE
 #define RT_DEFINE
 #define RT_VALUE_TYPE BlocktableEntry
diff --git a/src/include/lib/radixtree.h b/src/include/lib/radixtree.h
index 8e8f3bd507a..21cd0156fe9 100644
--- a/src/include/lib/radixtree.h
+++ b/src/include/lib/radixtree.h
@@ -137,11 +137,6 @@
  * RT_UNLOCK		- Unlock the radix tree
  * RT_GET_HANDLE	- Return the handle of the radix tree
  *
- * Optional Interface
- * ---------
- *
- * RT_DELETE		- Delete a key-value pair. Declared/defined if RT_USE_DELETE is defined
- *
  *
  * Copyright (c) 2024-2025, PostgreSQL Global Development Group
  *
@@ -189,9 +184,7 @@
 #define RT_BEGIN_ITERATE RT_MAKE_NAME(begin_iterate)
 #define RT_ITERATE_NEXT RT_MAKE_NAME(iterate_next)
 #define RT_END_ITERATE RT_MAKE_NAME(end_iterate)
-#ifdef RT_USE_DELETE
 #define RT_DELETE RT_MAKE_NAME(delete)
-#endif
 #define RT_MEMORY_USAGE RT_MAKE_NAME(memory_usage)
 #define RT_DUMP_NODE RT_MAKE_NAME(dump_node)
 #define RT_STATS RT_MAKE_NAME(stats)
@@ -294,10 +287,7 @@ RT_SCOPE	RT_VALUE_TYPE *RT_FIND(RT_RADIX_TREE * tree, uint64 key);
 RT_SCOPE void* RT_GET_SLOT(RT_RADIX_TREE * tree, uint64 key, bool* found);
 RT_SCOPE bool RT_SET_SLOT(RT_RADIX_TREE * tree, void* slot_v, RT_VALUE_TYPE * value_p, bool found);
 RT_SCOPE bool RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p);
-
-#ifdef RT_USE_DELETE
 RT_SCOPE bool RT_DELETE(RT_RADIX_TREE * tree, uint64 key);
-#endif
 
 RT_SCOPE	RT_ITER *RT_BEGIN_ITERATE(RT_RADIX_TREE * tree);
 RT_SCOPE	RT_VALUE_TYPE *RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p);
@@ -2274,8 +2264,6 @@ RT_END_ITERATE(RT_ITER * iter)
 
 /***************** DELETION *****************/
 
-#ifdef RT_USE_DELETE
-
 /* Delete the element at "deletepos" */
 static inline void
 RT_SHIFT_ARRAYS_AND_DELETE(uint8 *chunks, RT_PTR_ALLOC * children, int count, int deletepos)
@@ -2676,8 +2664,6 @@ RT_DELETE(RT_RADIX_TREE * tree, uint64 key)
 	return deleted;
 }
 
-#endif							/* RT_USE_DELETE */
-
 /***************** UTILITY FUNCTIONS *****************/
 
 /*
@@ -2939,7 +2925,6 @@ RT_DUMP_NODE(RT_NODE * node)
 #undef RT_VARLEN_VALUE_SIZE
 #undef RT_RUNTIME_EMBEDDABLE_VALUE
 #undef RT_SHMEM
-#undef RT_USE_DELETE
 #undef RT_DEBUG
 
 /* locally declared macros */
-- 
2.51.0

From 571d1fc63ec4c239015fbb2de98c2ea515da3326 Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Thu, 4 Sep 2025 15:22:23 +0700
Subject: [PATCH v2 7/8] RT_ITER_STAGE_DELETE

Allow staging the current key for deletion. It will be deleted
when iteration moves past the current leaf node, or when ending
iteration.

WIP: This could be a callback facility. We'll need something
to help lossify pages, which requires both deletion and insertion.
---
 src/include/lib/radixtree.h                   | 41 ++++++++
 .../expected/test_radixtree.out               |  2 +
 .../modules/test_radixtree/test_radixtree.c   | 98 +++++++++++++++++++
 3 files changed, 141 insertions(+)

diff --git a/src/include/lib/radixtree.h b/src/include/lib/radixtree.h
index 080d5abe8b2..9db4d9a3615 100644
--- a/src/include/lib/radixtree.h
+++ b/src/include/lib/radixtree.h
@@ -184,6 +184,7 @@
 #define RT_BEGIN_ITERATE RT_MAKE_NAME(begin_iterate)
 #define RT_BEGIN_ITERATE_AT RT_MAKE_NAME(begin_iterate_at)
 #define RT_ITERATE_NEXT RT_MAKE_NAME(iterate_next)
+#define RT_ITER_STAGE_DELETE RT_MAKE_NAME(iter_stage_delete)
 #define RT_END_ITERATE RT_MAKE_NAME(end_iterate)
 #define RT_DELETE RT_MAKE_NAME(delete)
 #define RT_MEMORY_USAGE RT_MAKE_NAME(memory_usage)
@@ -293,6 +294,7 @@ RT_SCOPE bool RT_DELETE(RT_RADIX_TREE * tree, uint64 key);
 RT_SCOPE	RT_ITER *RT_BEGIN_ITERATE(RT_RADIX_TREE * tree);
 RT_SCOPE	RT_ITER *RT_BEGIN_ITERATE_AT(RT_RADIX_TREE * tree, uint64 key);
 RT_SCOPE	RT_VALUE_TYPE *RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p);
+RT_SCOPE void RT_ITER_STAGE_DELETE(RT_ITER * iter);
 RT_SCOPE void RT_END_ITERATE(RT_ITER * iter);
 
 RT_SCOPE uint64 RT_MEMORY_USAGE(RT_RADIX_TREE * tree);
@@ -747,6 +749,10 @@ struct RT_ITER
 
 	/* The key constructed during iteration */
 	uint64		key;
+
+	/* state for deletion that happens during iteration */
+	uint64		to_delete[RT_NODE_MAX_SLOTS];
+	int			num_to_delete;
 };
 
 
@@ -2112,6 +2118,7 @@ RT_BEGIN_ITERATE_AT(RT_RADIX_TREE * tree, uint64 key)
 		iter->node_iters[level].idx = RT_GET_KEY_CHUNK(key, level * RT_SPAN);
 	}
 
+	iter->key = key;
 	return iter;
 }
 
@@ -2230,6 +2237,10 @@ RT_SCOPE	RT_VALUE_TYPE *
 RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p)
 {
 	RT_PTR_ALLOC *slot = NULL;
+restart:
+	/* guard against starting iteration larger than max val */
+	if (iter->key > iter->tree->ctl->max_val)
+		return NULL;
 
 	while (iter->cur_level <= iter->top_level)
 	{
@@ -2261,6 +2272,23 @@ RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p)
 			iter->cur_level--;
 			iter->node_iters[iter->cur_level].node = node;
 		}
+		else if (iter->num_to_delete > 0)
+		{
+			// WIP test overflow
+			/* zero out lowest byte */
+			uint64 leaf_min_key = iter->key & ~((uint64) RT_CHUNK_MASK);
+			RT_RADIX_TREE *save_tree = iter->tree;
+
+			Assert(iter->cur_level == 0);
+
+			RT_END_ITERATE(iter);
+			if (leaf_min_key >= (PG_UINT64_MAX & ~((uint64) RT_CHUNK_MASK)))
+				return NULL;	/* past the last possible leaf, so done */
+
+			/* restart at lowest possible key for next iteration */
+			iter = RT_BEGIN_ITERATE_AT(save_tree, leaf_min_key + RT_NODE_MAX_SLOTS);
+			goto restart;
+		}
 		else
 		{
 			/* Not found the child slot, move up the tree */
@@ -2273,12 +2301,24 @@ RT_ITERATE_NEXT(RT_ITER * iter, uint64 *key_p)
 	return NULL;
 }
 
+RT_SCOPE void
+RT_ITER_STAGE_DELETE(RT_ITER * iter)
+{
+	iter->to_delete[iter->num_to_delete++] = iter->key;
+}
+
 /*
  * Terminate the iteration. The caller is responsible for releasing any locks.
  */
 RT_SCOPE void
 RT_END_ITERATE(RT_ITER * iter)
 {
+	for (int i = 0; i < iter->num_to_delete; i++)
+	{
+		bool deleted = RT_DELETE(iter->tree, iter->to_delete[i]);
+		Assert(deleted);
+	}
+
 	pfree(iter);
 }
 
@@ -3020,6 +3060,7 @@ RT_DUMP_NODE(RT_NODE * node)
 #undef RT_BEGIN_ITERATE
 #undef RT_BEGIN_ITERATE_AT
 #undef RT_ITERATE_NEXT
+#undef RT_ITER_STAGE_DELETE
 #undef RT_END_ITERATE
 #undef RT_DELETE
 #undef RT_MEMORY_USAGE
diff --git a/src/test/modules/test_radixtree/expected/test_radixtree.out b/src/test/modules/test_radixtree/expected/test_radixtree.out
index 14aceecfede..ea65e64f371 100644
--- a/src/test/modules/test_radixtree/expected/test_radixtree.out
+++ b/src/test/modules/test_radixtree/expected/test_radixtree.out
@@ -34,6 +34,8 @@ NOTICE:  testing node node-256 with shift 8 and ascending keys
 NOTICE:  testing node node-256 with shift 8 and descending keys
 NOTICE:  testing node node-256 with shift 56 and ascending keys
 NOTICE:  testing node node-256 with shift 56 and descending keys
+NOTICE:  testing iteration with delete at shift 0
+NOTICE:  testing iteration with delete at shift 8
  test_radixtree 
 ----------------
  
diff --git a/src/test/modules/test_radixtree/test_radixtree.c b/src/test/modules/test_radixtree/test_radixtree.c
index a5465a3a2a6..b2a433e8023 100644
--- a/src/test/modules/test_radixtree/test_radixtree.c
+++ b/src/test/modules/test_radixtree/test_radixtree.c
@@ -332,6 +332,101 @@ test_basic(rt_node_class_test_elem *test_info, int shift, bool asc)
 #endif
 }
 
+static void
+test_iter_delete(int shift)
+{
+	rt_radix_tree *radixtree;
+	rt_iter    *iter;
+	uint64		iterkey;
+	TestValueType *iterval;
+
+	/* limit memory usage by limiting the key space */
+	//int			num_keys = 2;
+	uint64	   keys[2] = {7 << shift, 17 << shift};
+#ifdef TEST_SHARED_RT
+	int			tranche_id = LWLockNewTrancheId("test_radix_tree");
+	dsa_area   *dsa;
+
+	dsa = dsa_create(tranche_id);
+	radixtree = rt_create(dsa, tranche_id);
+#else
+	MemoryContext radixtree_ctx;
+
+	radixtree_ctx = SlabContextCreate(CurrentMemoryContext,
+									  "test_radix_tree",
+									  SLAB_DEFAULT_BLOCK_SIZE,
+									  sizeof(TestValueType));
+	radixtree = rt_create(radixtree_ctx);
+#endif
+
+	elog(NOTICE, "testing iteration with delete at shift %d",
+		 shift);
+
+	rt_set(radixtree, keys[0], &keys[0]);
+	rt_set(radixtree, keys[1], &keys[1]);
+	rt_stats(radixtree);
+
+	/* begin at last key */
+	iter = rt_begin_iterate_at(radixtree, keys[1]);
+
+	/* expect the value we started at */
+	iterval = rt_iterate_next(iter, &iterkey);
+	EXPECT_TRUE(iterval != NULL);
+	EXPECT_EQ_U64(iterkey, keys[1]);
+	EXPECT_EQ_U64(*iterval, keys[1]);
+
+	/* ...and no more */
+	iterval = rt_iterate_next(iter, &iterkey);
+	EXPECT_TRUE(iterval == NULL);
+
+	/* begin lower than first key */
+	Assert(keys[0] > 0);
+	iter = rt_begin_iterate_at(radixtree, 0);
+
+	/* advance to first key */
+	iterval = rt_iterate_next(iter, &iterkey);
+	EXPECT_TRUE(iterval != NULL);
+	EXPECT_EQ_U64(iterkey, keys[0]);
+	EXPECT_EQ_U64(*iterval, keys[0]);
+
+	/* stage it for deletion */
+	rt_iter_stage_delete(iter);
+
+	/* get next key/value, should be the last one */
+	iterval = rt_iterate_next(iter, &iterkey);
+	EXPECT_TRUE(iterval != NULL);
+	EXPECT_EQ_U64(iterkey, keys[1]);
+	EXPECT_EQ_U64(*iterval, keys[1]);
+
+	/* commit deletion */
+	rt_end_iterate(iter);
+	EXPECT_TRUE(rt_num_entries(radixtree) == 1);
+
+	rt_stats(radixtree);
+
+	/* begin after last key, making sure root node has lowest possible chunk */
+
+	Assert(keys[1] < 65536 + 1);
+	iter = rt_begin_iterate_at(radixtree, 65536 + 1);
+	/* should find nothing */
+	iterval = rt_iterate_next(iter, &iterkey);
+	EXPECT_TRUE(iterval == NULL);
+
+	/* delete everything through the iterator */
+
+	for (TestValueType i = 1000; i < 2000; i++)
+		rt_set(radixtree, i, &i);
+
+	iter = rt_begin_iterate(radixtree);
+	while ((iterval = rt_iterate_next(iter, &iterkey)) != NULL)
+	{
+		Assert(*iterval == iterkey);
+		rt_iter_stage_delete(iter);
+	}
+	rt_end_iterate(iter);
+	EXPECT_TRUE(rt_num_entries(radixtree) == 0);
+}
+
 static int
 key_cmp(const void *a, const void *b)
 {
@@ -503,5 +598,8 @@ test_radixtree(PG_FUNCTION_ARGS)
 
 	test_random();
 
+	test_iter_delete(0);
+	test_iter_delete(8);
+
 	PG_RETURN_VOID();
 }
-- 
2.51.0

From a08fe0966deabdbc6b08c6b30af443307847df1d Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Tue, 2 Sep 2025 12:41:27 +0700
Subject: [PATCH v2 8/8] Replace tidbitmap.c's hash table with a radix tree.

It still has page and chunk arrays, but they are populated in
sorted order by iterating over the radix tree, skipping the
qsort step.

FIXME for shared memory it allocates a new page table array
for expediency

FIXME memory management
---
 src/backend/nodes/tidbitmap.c | 234 +++++++++++-----------------------
 1 file changed, 77 insertions(+), 157 deletions(-)

diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c
index 4a5c0e6ad1b..ff209eec395 100644
--- a/src/backend/nodes/tidbitmap.c
+++ b/src/backend/nodes/tidbitmap.c
@@ -116,6 +116,15 @@ typedef enum
 	TBM_ITERATING_SHARED,		/* converted to shared page and chunk array */
 } TBMIteratingState;
 
+// WIP: this should eventually go away in favor of the same tree as TID store
+#define RT_PREFIX tbmrt
+#define RT_SCOPE
+#define RT_DECLARE
+#define RT_DEFINE
+#define RT_VALUE_TYPE PagetableEntry
+#define RT_USE_DELETE
+#include "lib/radixtree.h"
+
 /*
  * Here is the representation for a whole TIDBitMap:
  */
@@ -123,7 +132,8 @@ struct TIDBitmap
 {
 	NodeTag		type;			/* to make it a valid Node */
 	MemoryContext mcxt;			/* memory context containing me */
-	struct pagetable_hash *pagetable;	/* hash table of PagetableEntry's */
+	MemoryContext rt_cxt;		/* radix tree memory context WIP would be nice not to need this */
+	tbmrt_radix_tree *pagetree;	/* radix tree of PagetableEntry's */
 	int			nentries;		/* number of entries in pagetable */
 	int			maxentries;		/* limit on same to meet maxbytes */
 	int			npages;			/* number of exact entries in pagetable */
@@ -134,7 +144,6 @@ struct TIDBitmap
 	PagetableEntry **spages;	/* sorted exact-page list, or NULL */
 	PagetableEntry **schunks;	/* sorted lossy-chunk list, or NULL */
 	dsa_pointer dsapagetable;	/* dsa_pointer to the element array */
-	dsa_pointer dsapagetableold;	/* dsa_pointer to the old element array */
 	dsa_pointer ptpages;		/* dsa_pointer to the page array */
 	dsa_pointer ptchunks;		/* dsa_pointer to the chunk array */
 	dsa_area   *dsa;			/* reference to per-query dsa area */
@@ -204,23 +213,6 @@ static PagetableEntry *tbm_get_pageentry(TIDBitmap *tbm, BlockNumber pageno);
 static bool tbm_page_is_lossy(const TIDBitmap *tbm, BlockNumber pageno);
 static void tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno);
 static void tbm_lossify(TIDBitmap *tbm);
-static int	tbm_comparator(const void *left, const void *right);
-static int	tbm_shared_comparator(const void *left, const void *right,
-								  void *arg);
-
-/* define hashtable mapping block numbers to PagetableEntry's */
-#define SH_USE_NONDEFAULT_ALLOCATOR
-#define SH_PREFIX pagetable
-#define SH_ELEMENT_TYPE PagetableEntry
-#define SH_KEY_TYPE BlockNumber
-#define SH_KEY blockno
-#define SH_HASH_KEY(tb, key) murmurhash32(key)
-#define SH_EQUAL(tb, a, b) a == b
-#define SH_SCOPE static inline
-#define SH_DEFINE
-#define SH_DECLARE
-#include "lib/simplehash.h"
-
 
 /*
  * tbm_create - create an initially-empty bitmap
@@ -240,14 +232,15 @@ tbm_create(Size maxbytes, dsa_area *dsa)
 	tbm = makeNode(TIDBitmap);
 
 	tbm->mcxt = CurrentMemoryContext;
-	Assert(tbm->pagetable == NULL);
-	tbm->pagetable = pagetable_create(tbm->mcxt, 128, tbm);
+	tbm->rt_cxt = AllocSetContextCreate(tbm->mcxt,
+										"tidbitmap radix tree",
+										ALLOCSET_DEFAULT_SIZES);
+	tbm->pagetree = tbmrt_create(tbm->rt_cxt);
 
 	tbm->maxentries = tbm_calculate_entries(maxbytes);
 	tbm->lossify_start = 0;
 	tbm->dsa = dsa;
 	tbm->dsapagetable = InvalidDsaPointer;
-	tbm->dsapagetableold = InvalidDsaPointer;
 	tbm->ptpages = InvalidDsaPointer;
 	tbm->ptchunks = InvalidDsaPointer;
 
@@ -260,8 +253,8 @@ tbm_create(Size maxbytes, dsa_area *dsa)
 void
 tbm_free(TIDBitmap *tbm)
 {
-	if (tbm->pagetable)
-		pagetable_destroy(tbm->pagetable);
+	if (tbm->pagetree)
+		tbmrt_free(tbm->pagetree);
 	if (tbm->spages)
 		pfree(tbm->spages);
 	if (tbm->schunks)
@@ -396,18 +389,21 @@ tbm_add_page(TIDBitmap *tbm, BlockNumber pageno)
 void
 tbm_union(TIDBitmap *a, const TIDBitmap *b)
 {
+elog(NOTICE, "UNTESTED: UNION");
 	Assert(!a->iterating);
 	/* Nothing to do if b is empty */
 	if (b->nentries == 0)
 		return;
 	/* Scan through chunks and pages in b, merge into a */
 	{
-		pagetable_iterator i;
+		tbmrt_iter *i;
 		PagetableEntry *bpage;
+		uint64 key;
 
-		pagetable_start_iterate(b->pagetable, &i);
-		while ((bpage = pagetable_iterate(b->pagetable, &i)) != NULL)
+		i = tbmrt_begin_iterate(b->pagetree);
+		while ((bpage = tbmrt_iterate_next(i, &key)) != NULL)
 			tbm_union_page(a, bpage);
+		tbmrt_end_iterate(i);
 	}
 }
 
@@ -480,24 +476,27 @@ tbm_intersect(TIDBitmap *a, const TIDBitmap *b)
 		return;
 	/* Scan through chunks and pages in a, try to match to b */
 	{
-		pagetable_iterator i;
+		tbmrt_iter *i;
 		PagetableEntry *apage;
+		uint64 key;
 
-		pagetable_start_iterate(a->pagetable, &i);
-		while ((apage = pagetable_iterate(a->pagetable, &i)) != NULL)
+		i = tbmrt_begin_iterate(a->pagetree);
+		while ((apage = tbmrt_iterate_next(i, &key)) != NULL)
 		{
 			if (tbm_intersect_page(a, apage, b))
 			{
+				Assert(key == apage->blockno);
+
 				/* Page or chunk is now empty, remove it from a */
 				if (apage->ischunk)
 					a->nchunks--;
 				else
 					a->npages--;
 				a->nentries--;
-				if (!pagetable_delete(a->pagetable, apage->blockno))
-					elog(ERROR, "hash table corrupted");
+				tbmrt_iter_stage_delete(i);
 			}
 		}
+		tbmrt_end_iterate(i);
 	}
 }
 
@@ -517,6 +516,8 @@ tbm_intersect_page(TIDBitmap *a, PagetableEntry *apage, const TIDBitmap *b)
 		/* Scan each bit in chunk, try to clear */
 		bool		candelete = true;
 
+elog(NOTICE, "UNTESTED: PAGE CHUNK a %u", apage->blockno);
+
 		for (wordnum = 0; wordnum < WORDS_PER_CHUNK; wordnum++)
 		{
 			bitmapword	w = apage->words[wordnum];
@@ -549,10 +550,12 @@ tbm_intersect_page(TIDBitmap *a, PagetableEntry *apage, const TIDBitmap *b)
 					candelete = false;
 			}
 		}
+
 		return candelete;
 	}
 	else if (tbm_page_is_lossy(b, apage->blockno))
 	{
+elog(NOTICE, "UNTESTED: TREE FOUND LOSSY b %u", apage->blockno);
 		/*
 		 * Some of the tuples in 'a' might not satisfy the quals for 'b', but
 		 * because the page 'b' is lossy, we don't know which ones. Therefore
@@ -579,6 +582,7 @@ tbm_intersect_page(TIDBitmap *a, PagetableEntry *apage, const TIDBitmap *b)
 			}
 			apage->recheck |= bpage->recheck;
 		}
+
 		/* If there is no matching b page, we can just delete the a page */
 		return candelete;
 	}
@@ -635,8 +639,9 @@ tbm_begin_private_iterate(TIDBitmap *tbm)
 	 */
 	if (tbm->iterating == TBM_NOT_ITERATING)
 	{
-		pagetable_iterator i;
+		tbmrt_iter *iter = NULL;
 		PagetableEntry *page;
+		uint64 key;
 		int			npages;
 		int			nchunks;
 
@@ -650,22 +655,17 @@ tbm_begin_private_iterate(TIDBitmap *tbm)
 								   tbm->nchunks * sizeof(PagetableEntry *));
 
 		npages = nchunks = 0;
-		pagetable_start_iterate(tbm->pagetable, &i);
-		while ((page = pagetable_iterate(tbm->pagetable, &i)) != NULL)
+		iter = tbmrt_begin_iterate(tbm->pagetree);
+		while ((page = tbmrt_iterate_next(iter, &key)) != NULL)
 		{
 			if (page->ischunk)
 				tbm->schunks[nchunks++] = page;
 			else
 				tbm->spages[npages++] = page;
 		}
+		tbmrt_end_iterate(iter);
 		Assert(npages == tbm->npages);
 		Assert(nchunks == tbm->nchunks);
-		if (npages > 1)
-			qsort(tbm->spages, npages, sizeof(PagetableEntry *),
-				  tbm_comparator);
-		if (nchunks > 1)
-			qsort(tbm->schunks, nchunks, sizeof(PagetableEntry *),
-				  tbm_comparator);
 	}
 
 	tbm->iterating = TBM_ITERATING_PRIVATE;
@@ -679,7 +679,7 @@ tbm_begin_private_iterate(TIDBitmap *tbm)
  * The necessary shared state will be allocated from the DSA passed to
  * tbm_create, so that multiple processes can attach to it and iterate jointly.
  *
- * This will convert the pagetable hash into page and chunk array of the index
+ * This will convert the page radix tree into page and chunk array of the index
  * into pagetable array.
  */
 dsa_pointer
@@ -708,11 +708,13 @@ tbm_prepare_shared_iterate(TIDBitmap *tbm)
 	 */
 	if (tbm->iterating == TBM_NOT_ITERATING)
 	{
-		pagetable_iterator i;
+		tbmrt_iter *i;
 		PagetableEntry *page;
-		int			idx;
+		int			idx = 0;
+		int			nentries;
 		int			npages;
 		int			nchunks;
+		uint64 key;
 
 		/*
 		 * Allocate the page and chunk array memory from the DSA to share
@@ -734,35 +736,37 @@ tbm_prepare_shared_iterate(TIDBitmap *tbm)
 		}
 
 		/*
-		 * iterate over the pagetable and
+		 * iterate over the pagetree and
 		 * convert it to page and chunk arrays.
 		 */
-		npages = nchunks = 0;
+		nentries = npages = nchunks = 0;
 		{
+			tbm->dsapagetable = dsa_allocate(tbm->dsa, sizeof(PTEntryArray) +
+											 sizeof(PagetableEntry) * tbm->nentries);
 			ptbase = dsa_get_address(tbm->dsa, tbm->dsapagetable);
 
-			pagetable_start_iterate(tbm->pagetable, &i);
-			while ((page = pagetable_iterate(tbm->pagetable, &i)) != NULL)
+			i = tbmrt_begin_iterate(tbm->pagetree);
+			while ((page = tbmrt_iterate_next(i, &key)) != NULL)
 			{
-				idx = page - ptbase->ptentry;
+				/* copy page to separate array */
+				memcpy(&ptbase->ptentry[idx], page, sizeof(PagetableEntry));
+
 				if (page->ischunk)
 					ptchunks->index[nchunks++] = idx;
 				else
 					ptpages->index[npages++] = idx;
+				nentries++;
+				idx++;
 			}
+			tbmrt_end_iterate(i);
 
+			Assert(nentries == tbm->nentries);
 			Assert(npages == tbm->npages);
 			Assert(nchunks == tbm->nchunks);
 		}
 
 		if (ptbase != NULL)
 			pg_atomic_init_u32(&ptbase->refcount, 0);
-		if (npages > 1)
-			qsort_arg(ptpages->index, npages, sizeof(int),
-					  tbm_shared_comparator, ptbase->ptentry);
-		if (nchunks > 1)
-			qsort_arg(ptchunks->index, nchunks, sizeof(int),
-					  tbm_shared_comparator, ptbase->ptentry);
 	}
 
 	/*
@@ -1086,10 +1090,7 @@ tbm_find_pageentry(const TIDBitmap *tbm, BlockNumber pageno)
 {
 	const PagetableEntry *page;
 
-	if (tbm->nentries == 0)		/* in case pagetable doesn't exist */
-		return NULL;
-
-	page = pagetable_lookup(tbm->pagetable, pageno);
+	page = tbmrt_find(tbm->pagetree, pageno);
 	if (page == NULL)
 		return NULL;
 	if (page->ischunk)
@@ -1110,24 +1111,24 @@ tbm_get_pageentry(TIDBitmap *tbm, BlockNumber pageno)
 {
 	PagetableEntry *page;
 	bool		found;
+	PagetableEntry **slot;
 
-	{
 		/* Look up or create an entry */
-		page = pagetable_insert(tbm->pagetable, pageno, &found);
-	}
+		slot = tbmrt_get_slot(tbm->pagetree, pageno, &found);
 
 	/* Initialize it if not present before */
 	if (!found)
 	{
-		char		oldstatus = page->status;
-
+		page = tbmrt_alloc_leaf(tbm->pagetree, sizeof(PagetableEntry));
 		MemSet(page, 0, sizeof(PagetableEntry));
-		page->status = oldstatus;
 		page->blockno = pageno;
+		*slot = page;
 		/* must count it too */
 		tbm->nentries++;
 		tbm->npages++;
 	}
+	else
+		page = *slot;
 
 	return page;
 }
@@ -1149,7 +1150,7 @@ tbm_page_is_lossy(const TIDBitmap *tbm, BlockNumber pageno)
 	bitno = pageno % PAGES_PER_CHUNK;
 	chunk_pageno = pageno - bitno;
 
-	page = pagetable_lookup(tbm->pagetable, chunk_pageno);
+	page = tbmrt_find(tbm->pagetree, chunk_pageno);
 
 	if (page != NULL && page->ischunk)
 	{
@@ -1171,6 +1172,7 @@ tbm_page_is_lossy(const TIDBitmap *tbm, BlockNumber pageno)
 static void
 tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 {
+	PagetableEntry **slot;
 	PagetableEntry *page;
 	bool		found;
 	BlockNumber chunk_pageno;
@@ -1187,7 +1189,7 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 	 */
 	if (bitno != 0)
 	{
-		if (pagetable_delete(tbm->pagetable, pageno))
+		if (tbmrt_delete(tbm->pagetree, pageno))
 		{
 			/* It was present, so adjust counts */
 			tbm->nentries--;
@@ -1195,16 +1197,17 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 		}
 	}
 
-	/* Look up or create entry for chunk-header page */
-	page = pagetable_insert(tbm->pagetable, chunk_pageno, &found);
+	slot = tbmrt_get_slot(tbm->pagetree, chunk_pageno, &found);
+	if (found)
+		page = *slot;
 
 	/* Initialize it if not present before */
 	if (!found)
 	{
-		char		oldstatus = page->status;
-
+		page = tbmrt_alloc_leaf(tbm->pagetree, sizeof(PagetableEntry));
 		MemSet(page, 0, sizeof(PagetableEntry));
-		page->status = oldstatus;
+		*slot = page;
+
 		page->blockno = chunk_pageno;
 		page->ischunk = true;
 		/* must count it too */
@@ -1213,11 +1216,8 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 	}
 	else if (!page->ischunk)
 	{
-		char		oldstatus = page->status;
-
 		/* chunk header page was formerly non-lossy, make it lossy */
 		MemSet(page, 0, sizeof(PagetableEntry));
-		page->status = oldstatus;
 		page->blockno = chunk_pageno;
 		page->ischunk = true;
 		/* we assume it had some tuple bit(s) set, so mark it lossy */
@@ -1255,8 +1255,8 @@ tbm_lossify(TIDBitmap *tbm)
 	 */
 	Assert(tbm->iterating == TBM_NOT_ITERATING);
 
-	pagetable_start_iterate_at(tbm->pagetable, &i, tbm->lossify_start);
-	while ((page = pagetable_iterate(tbm->pagetable, &i)) != NULL)
+	pagetable_start_iterate_at(tbm->pagetree, &i, tbm->lossify_start);
+	while ((page = pagetable_iterate(tbm->pagetree, &i)) != NULL)
 	{
 		if (page->ischunk)
 			continue;			/* already a chunk header */
@@ -1304,37 +1304,6 @@ tbm_lossify(TIDBitmap *tbm)
 #endif
 }
 
-/*
- * qsort comparator to handle PagetableEntry pointers.
- */
-static int
-tbm_comparator(const void *left, const void *right)
-{
-	BlockNumber l = (*((PagetableEntry *const *) left))->blockno;
-	BlockNumber r = (*((PagetableEntry *const *) right))->blockno;
-
-	return pg_cmp_u32(l, r);
-}
-
-/*
- * As above, but this will get index into PagetableEntry array.  Therefore,
- * it needs to get actual PagetableEntry using the index before comparing the
- * blockno.
- */
-static int
-tbm_shared_comparator(const void *left, const void *right, void *arg)
-{
-	PagetableEntry *base = (PagetableEntry *) arg;
-	PagetableEntry *lpage = &base[*(int *) left];
-	PagetableEntry *rpage = &base[*(int *) right];
-
-	if (lpage->blockno < rpage->blockno)
-		return -1;
-	else if (lpage->blockno > rpage->blockno)
-		return 1;
-	return 0;
-}
-
 /*
  *	tbm_attach_shared_iterate
  *
@@ -1370,55 +1339,6 @@ tbm_attach_shared_iterate(dsa_area *dsa, dsa_pointer dp)
 	return iterator;
 }
 
-/*
- * pagetable_allocate
- *
- * Callback function for allocating the memory for hashtable elements.
- * Allocate memory for hashtable elements, using DSA if available.
- */
-static inline void *
-pagetable_allocate(pagetable_hash *pagetable, Size size)
-{
-	TIDBitmap  *tbm = (TIDBitmap *) pagetable->private_data;
-	PTEntryArray *ptbase;
-
-	if (tbm->dsa == NULL)
-		return MemoryContextAllocExtended(pagetable->ctx, size,
-										  MCXT_ALLOC_HUGE | MCXT_ALLOC_ZERO);
-
-	/*
-	 * Save the dsapagetable reference in dsapagetableold before allocating
-	 * new memory so that pagetable_free can free the old entry.
-	 */
-	tbm->dsapagetableold = tbm->dsapagetable;
-	tbm->dsapagetable = dsa_allocate_extended(tbm->dsa,
-											  sizeof(PTEntryArray) + size,
-											  DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);
-	ptbase = dsa_get_address(tbm->dsa, tbm->dsapagetable);
-
-	return ptbase->ptentry;
-}
-
-/*
- * pagetable_free
- *
- * Callback function for freeing hash table elements.
- */
-static inline void
-pagetable_free(pagetable_hash *pagetable, void *pointer)
-{
-	TIDBitmap  *tbm = (TIDBitmap *) pagetable->private_data;
-
-	/* pfree the input pointer if DSA is not available */
-	if (tbm->dsa == NULL)
-		pfree(pointer);
-	else if (DsaPointerIsValid(tbm->dsapagetableold))
-	{
-		dsa_free(tbm->dsa, tbm->dsapagetableold);
-		tbm->dsapagetableold = InvalidDsaPointer;
-	}
-}
-
 /*
  * tbm_calculate_entries
  *
-- 
2.51.0

From 288f8bc4f0dfc9ffb5bf66b30ffb015ae0bc23bf Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Fri, 14 Feb 2025 14:30:14 +0700
Subject: [PATCH v2 2/8] TEMP disable tbm_lossify()

Lossifying a radix tree is not implemented yet, so remove
one source of difference between that and the hash table.
---
 src/backend/nodes/tidbitmap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c
index 112f16c2424..4a5c0e6ad1b 100644
--- a/src/backend/nodes/tidbitmap.c
+++ b/src/backend/nodes/tidbitmap.c
@@ -1239,6 +1239,8 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 static void
 tbm_lossify(TIDBitmap *tbm)
 {
+
+#if 0
 	pagetable_iterator i;
 	PagetableEntry *page;
 
@@ -1299,6 +1301,7 @@ tbm_lossify(TIDBitmap *tbm)
 	 */
 	if (tbm->nentries > tbm->maxentries / 2)
 		tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;
+#endif
 }
 
 /*
-- 
2.51.0

From d3668d0bfa25d2662ab8bf4e41a40026d5de0ed1 Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Fri, 14 Feb 2025 12:49:54 +0700
Subject: [PATCH v2 1/8] TEMP Get rid of TBMStatus

This is for simplification. It's useful to force everything to
the hash table for comparing its contents with the equivalent
radix tree logic.

Also, some one-page paths don't have coverage in regression tests,
so it would be nice not to have to worry about them.

TODO: Measure for regression
If creating a radix tree for a single page has too much overhead,
we can create its memory contexts lazily, but that's left for
future work. We can always put the one-page logic back if
necessary.
---
 src/backend/nodes/tidbitmap.c | 130 ++--------------------------------
 1 file changed, 5 insertions(+), 125 deletions(-)

diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c
index 41031aa8f2f..112f16c2424 100644
--- a/src/backend/nodes/tidbitmap.c
+++ b/src/backend/nodes/tidbitmap.c
@@ -106,24 +106,6 @@ typedef struct PTEntryArray
 	PagetableEntry ptentry[FLEXIBLE_ARRAY_MEMBER];
 } PTEntryArray;
 
-/*
- * We want to avoid the overhead of creating the hashtable, which is
- * comparatively large, when not necessary. Particularly when we are using a
- * bitmap scan on the inside of a nestloop join: a bitmap may well live only
- * long enough to accumulate one entry in such cases.  We therefore avoid
- * creating an actual hashtable until we need two pagetable entries.  When
- * just one pagetable entry is needed, we store it in a fixed field of
- * TIDBitMap.  (NOTE: we don't get rid of the hashtable if the bitmap later
- * shrinks down to zero or one page again.  So, status can be TBM_HASH even
- * when nentries is zero or one.)
- */
-typedef enum
-{
-	TBM_EMPTY,					/* no hashtable, nentries == 0 */
-	TBM_ONE_PAGE,				/* entry1 contains the single entry */
-	TBM_HASH,					/* pagetable is valid, entry1 is not */
-} TBMStatus;
-
 /*
  * Current iterating state of the TBM.
  */
@@ -141,7 +123,6 @@ struct TIDBitmap
 {
 	NodeTag		type;			/* to make it a valid Node */
 	MemoryContext mcxt;			/* memory context containing me */
-	TBMStatus	status;			/* see codes above */
 	struct pagetable_hash *pagetable;	/* hash table of PagetableEntry's */
 	int			nentries;		/* number of entries in pagetable */
 	int			maxentries;		/* limit on same to meet maxbytes */
@@ -149,7 +130,6 @@ struct TIDBitmap
 	int			nchunks;		/* number of lossy entries in pagetable */
 	TBMIteratingState iterating;	/* tbm_begin_iterate called? */
 	uint32		lossify_start;	/* offset to start lossifying hashtable at */
-	PagetableEntry entry1;		/* used when status == TBM_ONE_PAGE */
 	/* these are valid when iterating is true: */
 	PagetableEntry **spages;	/* sorted exact-page list, or NULL */
 	PagetableEntry **schunks;	/* sorted lossy-chunk list, or NULL */
@@ -260,7 +240,8 @@ tbm_create(Size maxbytes, dsa_area *dsa)
 	tbm = makeNode(TIDBitmap);
 
 	tbm->mcxt = CurrentMemoryContext;
-	tbm->status = TBM_EMPTY;
+	Assert(tbm->pagetable == NULL);
+	tbm->pagetable = pagetable_create(tbm->mcxt, 128, tbm);
 
 	tbm->maxentries = tbm_calculate_entries(maxbytes);
 	tbm->lossify_start = 0;
@@ -273,37 +254,6 @@ tbm_create(Size maxbytes, dsa_area *dsa)
 	return tbm;
 }
 
-/*
- * Actually create the hashtable.  Since this is a moderately expensive
- * proposition, we don't do it until we have to.
- */
-static void
-tbm_create_pagetable(TIDBitmap *tbm)
-{
-	Assert(tbm->status != TBM_HASH);
-	Assert(tbm->pagetable == NULL);
-
-	tbm->pagetable = pagetable_create(tbm->mcxt, 128, tbm);
-
-	/* If entry1 is valid, push it into the hashtable */
-	if (tbm->status == TBM_ONE_PAGE)
-	{
-		PagetableEntry *page;
-		bool		found;
-		char		oldstatus;
-
-		page = pagetable_insert(tbm->pagetable,
-								tbm->entry1.blockno,
-								&found);
-		Assert(!found);
-		oldstatus = page->status;
-		memcpy(page, &tbm->entry1, sizeof(PagetableEntry));
-		page->status = oldstatus;
-	}
-
-	tbm->status = TBM_HASH;
-}
-
 /*
  * tbm_free - free a TIDBitmap
  */
@@ -451,14 +401,10 @@ tbm_union(TIDBitmap *a, const TIDBitmap *b)
 	if (b->nentries == 0)
 		return;
 	/* Scan through chunks and pages in b, merge into a */
-	if (b->status == TBM_ONE_PAGE)
-		tbm_union_page(a, &b->entry1);
-	else
 	{
 		pagetable_iterator i;
 		PagetableEntry *bpage;
 
-		Assert(b->status == TBM_HASH);
 		pagetable_start_iterate(b->pagetable, &i);
 		while ((bpage = pagetable_iterate(b->pagetable, &i)) != NULL)
 			tbm_union_page(a, bpage);
@@ -533,24 +479,10 @@ tbm_intersect(TIDBitmap *a, const TIDBitmap *b)
 	if (a->nentries == 0)
 		return;
 	/* Scan through chunks and pages in a, try to match to b */
-	if (a->status == TBM_ONE_PAGE)
-	{
-		if (tbm_intersect_page(a, &a->entry1, b))
-		{
-			/* Page is now empty, remove it from a */
-			Assert(!a->entry1.ischunk);
-			a->npages--;
-			a->nentries--;
-			Assert(a->nentries == 0);
-			a->status = TBM_EMPTY;
-		}
-	}
-	else
 	{
 		pagetable_iterator i;
 		PagetableEntry *apage;
 
-		Assert(a->status == TBM_HASH);
 		pagetable_start_iterate(a->pagetable, &i);
 		while ((apage = pagetable_iterate(a->pagetable, &i)) != NULL)
 		{
@@ -701,7 +633,7 @@ tbm_begin_private_iterate(TIDBitmap *tbm)
 	 * attached to the bitmap not the iterator, so they can be used by more
 	 * than one iterator.
 	 */
-	if (tbm->status == TBM_HASH && tbm->iterating == TBM_NOT_ITERATING)
+	if (tbm->iterating == TBM_NOT_ITERATING)
 	{
 		pagetable_iterator i;
 		PagetableEntry *page;
@@ -802,13 +734,10 @@ tbm_prepare_shared_iterate(TIDBitmap *tbm)
 		}
 
 		/*
-		 * If TBM status is TBM_HASH then iterate over the pagetable and
-		 * convert it to page and chunk arrays.  But if it's in the
-		 * TBM_ONE_PAGE mode then directly allocate the space for one entry
-		 * from the DSA.
+		 * iterate over the pagetable and
+		 * convert it to page and chunk arrays.
 		 */
 		npages = nchunks = 0;
-		if (tbm->status == TBM_HASH)
 		{
 			ptbase = dsa_get_address(tbm->dsa, tbm->dsapagetable);
 
@@ -825,19 +754,6 @@ tbm_prepare_shared_iterate(TIDBitmap *tbm)
 			Assert(npages == tbm->npages);
 			Assert(nchunks == tbm->nchunks);
 		}
-		else if (tbm->status == TBM_ONE_PAGE)
-		{
-			/*
-			 * In one page mode allocate the space for one pagetable entry,
-			 * initialize it, and directly store its index (i.e. 0) in the
-			 * page array.
-			 */
-			tbm->dsapagetable = dsa_allocate(tbm->dsa, sizeof(PTEntryArray) +
-											 sizeof(PagetableEntry));
-			ptbase = dsa_get_address(tbm->dsa, tbm->dsapagetable);
-			memcpy(ptbase->ptentry, &tbm->entry1, sizeof(PagetableEntry));
-			ptpages->index[0] = 0;
-		}
 
 		if (ptbase != NULL)
 			pg_atomic_init_u32(&ptbase->refcount, 0);
@@ -1027,10 +943,6 @@ tbm_private_iterate(TBMPrivateIterator *iterator, TBMIterateResult *tbmres)
 	{
 		PagetableEntry *page;
 
-		/* In TBM_ONE_PAGE state, we don't allocate an spages[] array */
-		if (tbm->status == TBM_ONE_PAGE)
-			page = &tbm->entry1;
-		else
 			page = tbm->spages[iterator->spageptr];
 
 		tbmres->internal_page = page;
@@ -1177,15 +1089,6 @@ tbm_find_pageentry(const TIDBitmap *tbm, BlockNumber pageno)
 	if (tbm->nentries == 0)		/* in case pagetable doesn't exist */
 		return NULL;
 
-	if (tbm->status == TBM_ONE_PAGE)
-	{
-		page = &tbm->entry1;
-		if (page->blockno != pageno)
-			return NULL;
-		Assert(!page->ischunk);
-		return page;
-	}
-
 	page = pagetable_lookup(tbm->pagetable, pageno);
 	if (page == NULL)
 		return NULL;
@@ -1208,24 +1111,7 @@ tbm_get_pageentry(TIDBitmap *tbm, BlockNumber pageno)
 	PagetableEntry *page;
 	bool		found;
 
-	if (tbm->status == TBM_EMPTY)
 	{
-		/* Use the fixed slot */
-		page = &tbm->entry1;
-		found = false;
-		tbm->status = TBM_ONE_PAGE;
-	}
-	else
-	{
-		if (tbm->status == TBM_ONE_PAGE)
-		{
-			page = &tbm->entry1;
-			if (page->blockno == pageno)
-				return page;
-			/* Time to switch from one page to a hashtable */
-			tbm_create_pagetable(tbm);
-		}
-
 		/* Look up or create an entry */
 		page = pagetable_insert(tbm->pagetable, pageno, &found);
 	}
@@ -1259,7 +1145,6 @@ tbm_page_is_lossy(const TIDBitmap *tbm, BlockNumber pageno)
 	/* we can skip the lookup if there are no lossy chunks */
 	if (tbm->nchunks == 0)
 		return false;
-	Assert(tbm->status == TBM_HASH);
 
 	bitno = pageno % PAGES_PER_CHUNK;
 	chunk_pageno = pageno - bitno;
@@ -1293,10 +1178,6 @@ tbm_mark_page_lossy(TIDBitmap *tbm, BlockNumber pageno)
 	int			wordnum;
 	int			bitnum;
 
-	/* We force the bitmap into hashtable mode whenever it's lossy */
-	if (tbm->status != TBM_HASH)
-		tbm_create_pagetable(tbm);
-
 	bitno = pageno % PAGES_PER_CHUNK;
 	chunk_pageno = pageno - bitno;
 
@@ -1371,7 +1252,6 @@ tbm_lossify(TIDBitmap *tbm)
 	 * just end up doing this again very soon.  We shoot for maxentries/2.
 	 */
 	Assert(tbm->iterating == TBM_NOT_ITERATING);
-	Assert(tbm->status == TBM_HASH);
 
 	pagetable_start_iterate_at(tbm->pagetable, &i, tbm->lossify_start);
 	while ((page = pagetable_iterate(tbm->pagetable, &i)) != NULL)
-- 
2.51.0

From 75a29b286320317f60be5bc49506ef1c76918399 Mon Sep 17 00:00:00 2001
From: John Naylor <[email protected]>
Date: Fri, 21 Feb 2025 14:14:07 +0700
Subject: [PATCH v2 3/8] Split RT_SET

Turn RT_SET into a wrapper that calls:

RT_GET_SLOT: helper function for finding or creating a value slot
in the tree, similar to simplehash's "insert" method (whose name
we should use in a later version).

RT_SET_SLOT: store a value into a given slot, whether new or existing

Put both of these in scope for now.
---
 src/include/lib/radixtree.h | 61 ++++++++++++++++++++++++++++---------
 1 file changed, 46 insertions(+), 15 deletions(-)

diff --git a/src/include/lib/radixtree.h b/src/include/lib/radixtree.h
index a75b77270c4..b0902988ee1 100644
--- a/src/include/lib/radixtree.h
+++ b/src/include/lib/radixtree.h
@@ -183,6 +183,8 @@
 #define RT_LOCK_SHARE RT_MAKE_NAME(lock_share)
 #define RT_UNLOCK RT_MAKE_NAME(unlock)
 #endif
+#define RT_GET_SLOT RT_MAKE_NAME(get_slot)
+#define RT_SET_SLOT RT_MAKE_NAME(set_slot)
 #define RT_SET RT_MAKE_NAME(set)
 #define RT_BEGIN_ITERATE RT_MAKE_NAME(begin_iterate)
 #define RT_ITERATE_NEXT RT_MAKE_NAME(iterate_next)
@@ -288,6 +290,8 @@ RT_SCOPE	RT_RADIX_TREE *RT_CREATE(MemoryContext ctx);
 RT_SCOPE void RT_FREE(RT_RADIX_TREE * tree);
 
 RT_SCOPE	RT_VALUE_TYPE *RT_FIND(RT_RADIX_TREE * tree, uint64 key);
+RT_SCOPE void* RT_GET_SLOT(RT_RADIX_TREE * tree, uint64 key, bool* found);
+RT_SCOPE bool RT_SET_SLOT(RT_RADIX_TREE * tree, void* slot_v, RT_VALUE_TYPE * value_p, bool found);
 RT_SCOPE bool RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p);
 
 #ifdef RT_USE_DELETE
@@ -1692,18 +1696,10 @@ RT_GET_SLOT_RECURSIVE(RT_RADIX_TREE * tree, RT_PTR_ALLOC * parent_slot, uint64 k
 	}
 }
 
-/*
- * Set key to value that value_p points to. If the entry already exists, we
- * update its value and return true. Returns false if entry doesn't yet exist.
- *
- * Taking a lock in exclusive mode is the caller's responsibility.
- */
-RT_SCOPE bool
-RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p)
+RT_SCOPE void *
+RT_GET_SLOT(RT_RADIX_TREE * tree, uint64 key, bool * found)
 {
-	bool		found;
 	RT_PTR_ALLOC *slot;
-	size_t		value_sz = RT_GET_VALUE_SIZE(value_p);
 
 #ifdef RT_SHMEM
 	Assert(tree->ctl->magic == RT_RADIX_TREE_MAGIC);
@@ -1733,7 +1729,8 @@ RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p)
 			n4->chunks[0] = RT_GET_KEY_CHUNK(key, start_shift);
 
 			slot = RT_EXTEND_DOWN(tree, &n4->children[0], key, start_shift);
-			found = false;
+			*found = false;
+
 			tree->ctl->start_shift = start_shift;
 			tree->ctl->max_val = RT_SHIFT_GET_MAX_VAL(start_shift);
 			goto have_slot;
@@ -1743,11 +1740,30 @@ RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p)
 	}
 
 	slot = RT_GET_SLOT_RECURSIVE(tree, &tree->ctl->root,
-								 key, tree->ctl->start_shift, &found);
+								 key, tree->ctl->start_shift, found);
 
 have_slot:
 	Assert(slot != NULL);
 
+	/* Update the statistics */
+	if (!*found)
+		tree->ctl->num_keys++;
+
+	return slot;
+}
+
+/*
+ * Store the value that value_p points to into the given slot, as obtained
+ * from RT_GET_SLOT along with "found". Returns the passed-in "found".
+ *
+ * Taking a lock in exclusive mode is the caller's responsibility.
+ */
+RT_SCOPE bool
+RT_SET_SLOT(RT_RADIX_TREE * tree, void * slot_v, RT_VALUE_TYPE * value_p, bool found)
+{
+	RT_PTR_ALLOC *slot = (RT_PTR_ALLOC *) slot_v;
+	size_t		value_sz = RT_GET_VALUE_SIZE(value_p);
+
 	if (RT_VALUE_IS_EMBEDDABLE(value_p))
 	{
 		/* free the existing leaf */
@@ -1797,10 +1813,23 @@ have_slot:
 		memcpy(leaf.local, value_p, value_sz);
 	}
 
-	/* Update the statistics */
-	if (!found)
-		tree->ctl->num_keys++;
+	return found;
+}
+
+/*
+ * Set key to value that value_p points to. If the entry already exists, we
+ * update its value and return true. Returns false if entry doesn't yet exist.
+ *
+ * Taking a lock in exclusive mode is the caller's responsibility.
+ */
+RT_SCOPE bool
+RT_SET(RT_RADIX_TREE * tree, uint64 key, RT_VALUE_TYPE * value_p)
+{
+	bool		found;
+	void	   *slot;
 
+	slot = RT_GET_SLOT(tree, key, &found);
+	RT_SET_SLOT(tree, slot, value_p, found);
 	return found;
 }
 
@@ -2967,6 +2996,8 @@ RT_DUMP_NODE(RT_NODE * node)
 #undef RT_UNLOCK
 #undef RT_GET_HANDLE
 #undef RT_FIND
+#undef RT_GET_SLOT
+#undef RT_SET_SLOT
 #undef RT_SET
 #undef RT_BEGIN_ITERATE
 #undef RT_ITERATE_NEXT
-- 
2.51.0
