Hi,

Here are two patches to deal with this open item. 0001 is a trivial fix
of typos and wording; I moved it into a separate commit for clarity.
0002 does the actual fix, adding the missing index_insert_cleanup()
call. It's 99% the patch Alvaro shared some time ago, with only minor
formatting tweaks by me.

I've also revisited this comment from Alvaro:

> Lastly, I kinda disagree with the notion that only some of the callers
> of aminsert should call aminsertcleanup, even though btree doesn't
> have an aminsertcleanup and thus it can't affect TOAST or catalogs.

which was a reaction to my earlier statement about places calling
index_insert():

> There's a call in toast_internals.c, but that seems OK because that
> only deals with btree indexes (and those don't need any cleanup). The
> same logic applies to unique_key_recheck(). The rest goes through
> execIndexing.c, which should do the cleanup in ExecCloseIndices().

I think Alvaro is right, so I went through all the index_insert()
callers and checked which of them need the cleanup. Luckily there
aren't that many; only 5 places call index_insert() directly:

1) toast_save_datum (src/backend/access/common/toast_internals.c)

  This is safe, because the index_insert() call passes indexInfo=NULL,
  so there can't possibly be any cache. If we ever decide to pass a
  valid indexInfo, we can add the cleanup then; for now it seems
  pointless.

  Note: If someone created a BRIN index on a TOAST table, that would
  already crash, because BRIN blindly dereferences the indexInfo. Maybe
  that should be fixed, but we don't support CREATE INDEX on TOAST
  tables.

2) heapam_index_validate_scan (src/backend/access/heap/heapam_handler.c)

  Covered by the committed fix, adding cleanup to validate_index.

3) CatalogIndexInsert (src/backend/catalog/indexing.c)

  Covered, because all callers also call CatalogCloseIndexes, which in
  turn calls ExecCloseIndices, which does the cleanup.

4) unique_key_recheck (src/backend/commands/constraint.c)

  This seems like the only place missing the cleanup call.

5) ExecInsertIndexTuples (src/backend/executor/execIndexing.c)

  Should be covered by ExecCloseIndices, called after the insertions.


So it seems only (4) unique_key_recheck needs the extra call (the
cleanup can't really happen at a higher level, because the indexInfo
is a local variable). The 0002 patch adds that call.

The patch also adds a test for this (or rather tweaks an existing one).


It's a bit too late for me to push this now, I'll do so early tomorrow.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From 0f89677b99b4b70ddfcc6c2cd08f433584bf65aa Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.von...@postgresql.org>
Date: Thu, 18 Apr 2024 22:22:37 +0200
Subject: [PATCH 1/2] Fix a couple typos in BRIN code

Typos introduced by commits c1ec02be1d79, b43757171470 and dae761a87eda.

Author: Alvaro Herrera
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/202401091043.e3nrqiad6gb7@alvherre.pgsql
---
 doc/src/sgml/indexam.sgml      |  2 +-
 src/backend/access/brin/brin.c | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml
index 76ac0fcddd7..18cf23296f2 100644
--- a/doc/src/sgml/indexam.sgml
+++ b/doc/src/sgml/indexam.sgml
@@ -370,7 +370,7 @@ aminsert (Relation indexRelation,
    initially). After the index insertions complete, the memory will be freed
    automatically. If additional cleanup is required (e.g. if the cache contains
    buffers and tuple descriptors), the AM may define an optional function
-   <literal>indexinsertcleanup</literal>, called before the memory is released.
+   <literal>aminsertcleanup</literal>, called before the memory is released.
   </para>
 
   <para>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 32722f0961b..4f708bba658 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -222,7 +222,7 @@ static bool add_values_to_range(Relation idxRel, BrinDesc *bdesc,
 								BrinMemTuple *dtup, const Datum *values, const bool *nulls);
 static bool check_null_keys(BrinValues *bval, ScanKey *nullkeys, int nnullkeys);
 static void brin_fill_empty_ranges(BrinBuildState *state,
-								   BlockNumber prevRange, BlockNumber maxRange);
+								   BlockNumber prevRange, BlockNumber nextRange);
 
 /* parallel index builds */
 static void _brin_begin_parallel(BrinBuildState *buildstate, Relation heap, Relation index,
@@ -1151,8 +1151,8 @@ brinbuild(Relation heap, Relation index, IndexInfo *indexInfo)
 	 *
 	 * XXX plan_create_index_workers makes the number of workers dependent on
 	 * maintenance_work_mem, requiring 32MB for each worker. That makes sense
-	 * for btree, but not for BRIN, which can do away with much less memory.
-	 * So maybe make that somehow less strict, optionally?
+	 * for btree, but not for BRIN, which can do with much less memory. So
+	 * maybe make that somehow less strict, optionally?
 	 */
 	if (indexInfo->ii_ParallelWorkers > 0)
 		_brin_begin_parallel(state, heap, index, indexInfo->ii_Concurrent,
@@ -2621,7 +2621,7 @@ _brin_parallel_merge(BrinBuildState *state)
 
 	/*
 	 * Initialize BrinMemTuple we'll use to union summaries from workers (in
-	 * case they happened to produce parts of the same paga range).
+	 * case they happened to produce parts of the same page range).
 	 */
 	memtuple = brin_new_memtuple(state->bs_bdesc);
 
@@ -2932,7 +2932,7 @@ _brin_parallel_build_main(dsm_segment *seg, shm_toc *toc)
  * specified block number. The empty tuple is initialized only once, when it's
  * needed for the first time, stored in the memory context bs_context to ensure
  * proper life span, and reused on following calls. All empty tuples are
- * exactly the same except for the bs_blkno field, which is set to the value
+ * exactly the same except for the bt_blkno field, which is set to the value
  * in blkno parameter.
  */
 static void
-- 
2.44.0

From b9f93f05fb0060135eddb7f68f754793d61fe08b Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.von...@postgresql.org>
Date: Thu, 18 Apr 2024 23:58:30 +0200
Subject: [PATCH 2/2] Add index_insert_cleanup call to validate_index

The optimization for inserts into BRIN indexes added by commit
c1ec02be1d79 relies on a cache that needs to be explicitly released
after calling index_insert(). The commit however failed to call the
cleanup in validate_index(), as it calls index_insert() indirectly
through table_index_validate_scan().

Fixed by adding the missing call to validate_index(), and also to
unique_key_recheck().

The commit makes two additional improvements. It adds the index as the
first argument to aminsertcleanup(), to make it more like the other AM
callbacks. And it calls aminsertcleanup() always, even if ii_AmCache
is NULL, so it's up to the AM callback to decide whether cleanup is
needed.

Author: Alvaro Herrera, Tomas Vondra
Reported-by: Alexander Lakhin
---
 doc/src/sgml/indexam.sgml          | 14 +++++++-------
 src/backend/access/brin/brin.c     |  6 ++++--
 src/backend/access/index/indexam.c |  5 ++---
 src/backend/catalog/index.c        |  3 +++
 src/backend/commands/constraint.c  |  2 ++
 src/include/access/amapi.h         |  3 ++-
 src/include/access/brin_internal.h |  2 +-
 src/test/regress/expected/brin.out |  3 ++-
 src/test/regress/sql/brin.sql      |  3 ++-
 9 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml
index 18cf23296f2..e3c1539a1e3 100644
--- a/doc/src/sgml/indexam.sgml
+++ b/doc/src/sgml/indexam.sgml
@@ -367,21 +367,21 @@ aminsert (Relation indexRelation,
    within an SQL statement, it can allocate space
    in <literal>indexInfo-&gt;ii_Context</literal> and store a pointer to the
    data in <literal>indexInfo-&gt;ii_AmCache</literal> (which will be NULL
-   initially). After the index insertions complete, the memory will be freed
-   automatically. If additional cleanup is required (e.g. if the cache contains
-   buffers and tuple descriptors), the AM may define an optional function
-   <literal>aminsertcleanup</literal>, called before the memory is released.
+   initially).  If resources other than memory have to be released after
+   index insertions, <function>aminsertcleanup</function> may be provided,
+   which will be called before the memory is released.
   </para>
 
   <para>
 <programlisting>
 void
-aminsertcleanup (IndexInfo *indexInfo);
+aminsertcleanup (Relation indexRelation,
+                 IndexInfo *indexInfo);
 </programlisting>
    Clean up state that was maintained across successive inserts in
    <literal>indexInfo-&gt;ii_AmCache</literal>. This is useful if the data
-   requires additional cleanup steps, and simply releasing the memory is not
-   sufficient.
+   requires additional cleanup steps (e.g., releasing pinned buffers), and
+   simply releasing the memory is not sufficient.
   </para>
 
   <para>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 4f708bba658..bf28400dd84 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -500,11 +500,13 @@ brininsert(Relation idxRel, Datum *values, bool *nulls,
  * Callback to clean up the BrinInsertState once all tuple inserts are done.
  */
 void
-brininsertcleanup(IndexInfo *indexInfo)
+brininsertcleanup(Relation index, IndexInfo *indexInfo)
 {
 	BrinInsertState *bistate = (BrinInsertState *) indexInfo->ii_AmCache;
 
-	Assert(bistate);
+	/* bail out if cache not initialized */
+	if (indexInfo->ii_AmCache == NULL)
+		return;
 
 	/*
 	 * Clean up the revmap. Note that the brinDesc has already been cleaned up
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 7510159fc8d..dcd04b813d8 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -242,10 +242,9 @@ index_insert_cleanup(Relation indexRelation,
 					 IndexInfo *indexInfo)
 {
 	RELATION_CHECKS;
-	Assert(indexInfo);
 
-	if (indexRelation->rd_indam->aminsertcleanup && indexInfo->ii_AmCache)
-		indexRelation->rd_indam->aminsertcleanup(indexInfo);
+	if (indexRelation->rd_indam->aminsertcleanup)
+		indexRelation->rd_indam->aminsertcleanup(indexRelation, indexInfo);
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 9b7ef71d6fe..5a8568c55c9 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -3402,6 +3402,9 @@ validate_index(Oid heapId, Oid indexId, Snapshot snapshot)
 	/* Done with tuplesort object */
 	tuplesort_end(state.tuplesort);
 
+	/* Make sure to release resources cached in indexInfo (if needed). */
+	index_insert_cleanup(indexRelation, indexInfo);
+
 	elog(DEBUG2,
 		 "validate_index found %.0f heap tuples, %.0f index tuples; inserted %.0f missing tuples",
 		 state.htups, state.itups, state.tups_inserted);
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 94d491b7541..00880446c49 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -197,6 +197,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 
 	ExecDropSingleTupleTableSlot(slot);
 
+	index_insert_cleanup(indexRel, indexInfo);
+
 	index_close(indexRel, RowExclusiveLock);
 
 	return PointerGetDatum(NULL);
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 00300dd720e..f25c9d58a7d 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -114,7 +114,8 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 								   struct IndexInfo *indexInfo);
 
 /* cleanup after insert */
-typedef void (*aminsertcleanup_function) (struct IndexInfo *indexInfo);
+typedef void (*aminsertcleanup_function) (Relation indexRelation,
+										  struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
diff --git a/src/include/access/brin_internal.h b/src/include/access/brin_internal.h
index 1c7eabe6041..a5a9772621c 100644
--- a/src/include/access/brin_internal.h
+++ b/src/include/access/brin_internal.h
@@ -96,7 +96,7 @@ extern bool brininsert(Relation idxRel, Datum *values, bool *nulls,
 					   IndexUniqueCheck checkUnique,
 					   bool indexUnchanged,
 					   struct IndexInfo *indexInfo);
-extern void brininsertcleanup(struct IndexInfo *indexInfo);
+extern void brininsertcleanup(Relation index, struct IndexInfo *indexInfo);
 extern IndexScanDesc brinbeginscan(Relation r, int nkeys, int norderbys);
 extern int64 bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm);
 extern void brinrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
diff --git a/src/test/regress/expected/brin.out b/src/test/regress/expected/brin.out
index 2662bb6ed43..d6779d8c7d2 100644
--- a/src/test/regress/expected/brin.out
+++ b/src/test/regress/expected/brin.out
@@ -575,6 +575,7 @@ DROP TABLE brintest_unlogged;
 -- test that the insert optimization works if no rows end up inserted
 CREATE TABLE brin_insert_optimization (a int);
 INSERT INTO brin_insert_optimization VALUES (1);
-CREATE INDEX ON brin_insert_optimization USING brin (a);
+CREATE INDEX brin_insert_optimization_idx ON brin_insert_optimization USING brin (a);
 UPDATE brin_insert_optimization SET a = a;
+REINDEX INDEX CONCURRENTLY brin_insert_optimization_idx;
 DROP TABLE brin_insert_optimization;
diff --git a/src/test/regress/sql/brin.sql b/src/test/regress/sql/brin.sql
index 0d3beabb3d7..695cfad4bea 100644
--- a/src/test/regress/sql/brin.sql
+++ b/src/test/regress/sql/brin.sql
@@ -519,6 +519,7 @@ DROP TABLE brintest_unlogged;
 -- test that the insert optimization works if no rows end up inserted
 CREATE TABLE brin_insert_optimization (a int);
 INSERT INTO brin_insert_optimization VALUES (1);
-CREATE INDEX ON brin_insert_optimization USING brin (a);
+CREATE INDEX brin_insert_optimization_idx ON brin_insert_optimization USING brin (a);
 UPDATE brin_insert_optimization SET a = a;
+REINDEX INDEX CONCURRENTLY brin_insert_optimization_idx;
 DROP TABLE brin_insert_optimization;
-- 
2.44.0
