On 2021-Jun-23, houzj.f...@fujitsu.com wrote:

> Consider a multi-level partition setup: table 'A' is a partition of table
> 'B', and 'B' is in turn a partition of table 'C'. After 'ALTER TABLE C
> DETACH PARTITION B', I expected the partition constraint of 'C' to no
> longer matter when inserting into table 'A'. But it looks like the
> relcache of 'A' is not invalidated after detaching 'B', so
> relcache::rd_partcheck still includes the partition constraint of table
> 'C'. Note that if I invalidate table 'A''s relcache manually, then the
> next time rd_partcheck is rebuilt it contains the expected constraint,
> without the partition constraint check of table 'C'.
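
[Editor's note: if I read the scenario above correctly, it can be reproduced
with something like the following; table names and bounds are illustrative,
and this sketch has not been run against a server:]

    -- C is the top-level partitioned table, B a partitioned partition of C,
    -- and A a leaf partition of B.
    CREATE TABLE c (x int) PARTITION BY RANGE (x);
    CREATE TABLE b PARTITION OF c FOR VALUES FROM (1) TO (100)
        PARTITION BY RANGE (x);
    CREATE TABLE a PARTITION OF b FOR VALUES FROM (1) TO (50);

    ALTER TABLE c DETACH PARTITION b;

    -- After the DETACH, only B's constraint should apply to A, but A's
    -- cached rd_partcheck may still include C's partition constraint.
    INSERT INTO a VALUES (10);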

Hmm, if I understand correctly, this means we need to invalidate the
relcache of all partitions of the partition being detached.  Maybe like
in the attached WIP ("XXX VERY CRUDE XXX DANGER EATS DATA") patch, which
fixes what you complained about, but I didn't run any other tests.
(Also, in the concurrent case I think this should be done during the
first transaction, so this patch is wrong for it.)

Did you have a misbehaving test for the ATTACH case?

-- 
Álvaro Herrera           39°49'30"S 73°17'W  —  https://www.EnterpriseDB.com/
"I dream about dreams about dreams", sang the nightingale
under the pale moon (Sandman)
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 03dfd2e7fa..b24fa05e42 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -18107,6 +18107,20 @@ DetachPartitionFinalize(Relation rel, Relation partRel, bool concurrent,
 	 * included in its partition descriptor.
 	 */
 	CacheInvalidateRelcache(rel);
+
+	/*
+	 * If the partition is itself partitioned, invalidate relcache for its
+	 * partitions too, so rd_partcheck loses the detached ancestor's constraint.
+	 */
+	if (partRel->rd_rel->relispartition)
+	{
+		PartitionDesc	partdesc = RelationGetPartitionDesc(partRel, true);
+
+		for (int i = 0; i < partdesc->nparts; i++)
+		{
+			CacheInvalidateRelcacheByRelid(partdesc->oids[i]);
+		}
+	}
 }
 
 /*
