On Fri, 23 Feb 2024 at 00:12, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> So after studying this for awhile, I see that the planner is emitting
> a PlanRowMark that presumes that the UNION ALL subquery will be
> scanned as though it's a base relation; but since we've converted it
> to an appendrel, the executor just ignores that rowmark, and the wrong
> things happen.  I think the really right fix would be to teach the
> executor to honor such PlanRowMarks, by getting nodeAppend.c and
> nodeMergeAppend.c to perform EPQ row substitution.

Yes, I agree that's a much better solution, if it can be made to work,
though I have been really struggling to see how.
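
To recap the failing scenario (this is the same setup as the isolation
spec below): one session runs a plain MERGE into tgt and, before it
commits, a second session runs a MERGE whose source is a UNION ALL
subquery; the second MERGE blocks, and then goes through an EPQ
recheck when the first session commits. As standalone SQL:

  CREATE TABLE src (id int PRIMARY KEY, val int);
  CREATE TABLE src2 (id int PRIMARY KEY, val int);
  CREATE TABLE tgt (id int PRIMARY KEY, val int);
  INSERT INTO src SELECT x, x*10 FROM generate_series(1,3) g(x);
  INSERT INTO tgt SELECT x, x FROM generate_series(1,3) g(x);

  -- session 2, while session 1 has an uncommitted MERGE INTO tgt:
  MERGE INTO tgt USING (SELECT * FROM src
                        UNION ALL
                        SELECT * FROM src2) src ON tgt.id = src.id
    WHEN MATCHED THEN UPDATE SET val = src.val
    WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);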


> the planner produces targetlists like
>
>             Output: src_1.val, src_1.id, ROW(src_1.id, src_1.val)
>
> and as you can see the order of the columns doesn't match.
> I can see three ways we might attack that:
>
> 1. Persuade the planner to build output tlists that always match
> the row identity Var.
>
> 2. Change generation of the ROW() expression so that it lists only
> the values we're going to output, in the order we're going to
> output them.
>
> 3. Fix the executor to remap what it gets out of the ROW() into the
> order of the subquery tlists.  This is probably do-able but I'm
> not certain; it may be that the executor hasn't enough info.
> We might need to teach the planner to produce a mapping projection
> and attach it to the Append node, which carries some ABI risk (but
> in the past we've gotten away with adding new fields to the ends
> of plan nodes in the back branches).  Another objection is that
> adding cycles to execution rather than planning might be a poor
> tradeoff --- although if we only do the work when EPQ is invoked,
> maybe it'd be the best way.
>

Of those, option 3 feels like the best one, though I'm really not
sure. I played around with it and initially convinced myself that the
executor doesn't have the information it needs to make it work, but it
turns out that all it needs is the Append node's original targetlist,
as it is just before set_dummy_tlist_references() rewrites the
attribute numbers sequentially. In the original targetlist, all the
Vars still have the right attribute numbers, so it can be used to
build the required projection (I think).
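
To make that concrete, using the plan from the new test case below:
the Append's targetlist before the rewrite is effectively

    src_1.val, src_1.id, ROW(src_1.id, src_1.val)

where the Vars still carry the appendrel's attribute numbers (val = 2,
id = 1), while the saved wholerow value has its columns in (id, val)
order. So evaluating that original targetlist with ecxt_scantuple
pointing at the deconstructed wholerow value produces the output
columns in the right order, whereas after the rewrite the Vars have
simply been renumbered 1..3 and that mapping is lost. (At least,
that's my reading of it.)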

Attached is a very rough patch. It seemed better to build the
projection in the executor rather than the planner, since then the
extra work can be avoided entirely if EPQ is never invoked.
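
(Concretely, in the attached, the tupdesc lookup and the
ExecBuildProjectionInfo() call are deferred until the first EPQ fetch
in ExecAppend() / ExecMergeAppend(), and the results are cached in the
node state, so a query that never goes through an EPQ recheck pays
nothing beyond carrying the saved targetlist pointer.)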

It seems to work (it passes the isolation tests, and I couldn't break
it in ad hoc testing), but it definitely needs tidying up, and it's
hard to be sure that it's not overlooking something.

Regards,
Dean
diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c
index c7059e7..ccd994c 100644
--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -275,6 +275,8 @@ ExecInitAppend(Append *node, EState *est
 	/* For parallel query, this will be overridden later. */
 	appendstate->choose_next_subplan = choose_next_subplan_locally;
 
+	appendstate->as_epq_tupdesc = NULL;
+
 	return appendstate;
 }
 
@@ -288,8 +290,107 @@ static TupleTableSlot *
 ExecAppend(PlanState *pstate)
 {
 	AppendState *node = castNode(AppendState, pstate);
+	EState	   *estate = node->ps.state;
 	TupleTableSlot *result;
 
+	if (estate->es_epq_active != NULL)
+	{
+		/*
+		 * We are inside an EvalPlanQual recheck.  If there is a relevant
+		 * rowmark for the append relation, return the test tuple if one is
+		 * available.
+		 */
+		EPQState   *epqstate = estate->es_epq_active;
+		int			scanrelid;
+
+		if (bms_get_singleton_member(castNode(Append, node->ps.plan)->apprelids,
+									 &scanrelid))
+		{
+			if (epqstate->relsubs_done[scanrelid - 1])
+			{
+				/*
+				 * Return empty slot, as either there is no EPQ tuple for this
+				 * rel or we already returned it.
+				 */
+				TupleTableSlot *slot = node->ps.ps_ResultTupleSlot;
+
+				return ExecClearTuple(slot);
+			}
+			else if (epqstate->relsubs_slot[scanrelid - 1] != NULL)
+			{
+				/*
+				 * Return replacement tuple provided by the EPQ caller.
+				 */
+				TupleTableSlot *slot = epqstate->relsubs_slot[scanrelid - 1];
+
+				Assert(epqstate->relsubs_rowmark[scanrelid - 1] == NULL);
+
+				/* Mark to remember that we shouldn't return it again */
+				epqstate->relsubs_done[scanrelid - 1] = true;
+
+				return slot;
+			}
+			else if (epqstate->relsubs_rowmark[scanrelid - 1] != NULL)
+			{
+				/*
+				 * Fetch and return replacement tuple using a non-locking
+				 * rowmark.
+				 */
+				ExecAuxRowMark *earm = epqstate->relsubs_rowmark[scanrelid - 1];
+				ExecRowMark *erm = earm->rowmark;
+				Datum		datum;
+				bool		isNull;
+				TupleTableSlot *slot;
+
+				Assert(erm->markType == ROW_MARK_COPY);
+
+				datum = ExecGetJunkAttribute(epqstate->origslot,
+											 earm->wholeAttNo,
+											 &isNull);
+				if (isNull)
+					return NULL;
+
+				if (node->as_epq_tupdesc == NULL)
+				{
+					HeapTupleHeader tuple;
+					Oid			tupType;
+					int32		tupTypmod;
+					MemoryContext oldcontext;
+
+					tuple = DatumGetHeapTupleHeader(datum);
+					tupType = HeapTupleHeaderGetTypeId(tuple);
+					tupTypmod = HeapTupleHeaderGetTypMod(tuple);
+
+					oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);
+
+					node->as_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);
+
+					ExecAssignExprContext(estate, &node->ps);
+
+					node->ps.ps_ProjInfo =
+						ExecBuildProjectionInfo(castNode(Append, node->ps.plan)->epq_targetlist,
+												node->ps.ps_ExprContext,
+												node->ps.ps_ResultTupleSlot,
+												&node->ps,
+												NULL);
+
+					MemoryContextSwitchTo(oldcontext);
+				}
+
+				slot = MakeTupleTableSlot(node->as_epq_tupdesc,
+										  &TTSOpsVirtual);
+				ExecStoreHeapTupleDatum(datum, slot);
+
+				/* Mark to remember that we shouldn't return more */
+				epqstate->relsubs_done[scanrelid - 1] = true;
+
+				node->ps.ps_ExprContext->ecxt_scantuple = slot;
+
+				return ExecProject(node->ps.ps_ProjInfo);
+			}
+		}
+	}
+
 	/*
 	 * If this is the first call after Init or ReScan, we need to do the
 	 * initialization work.
@@ -405,6 +506,7 @@ ExecEndAppend(AppendState *node)
 void
 ExecReScanAppend(AppendState *node)
 {
+	EState	   *estate = node->ps.state;
 	int			nasyncplans = node->as_nasyncplans;
 	int			i;
 
@@ -443,6 +545,23 @@ ExecReScanAppend(AppendState *node)
 			ExecReScan(subnode);
 	}
 
+	/*
+	 * Rescan EvalPlanQual tuple(s) if we're inside an EvalPlanQual recheck.
+	 * But don't lose the "blocked" status of blocked target relations.
+	 */
+	if (estate->es_epq_active != NULL)
+	{
+		EPQState   *epqstate = estate->es_epq_active;
+		int			scanrelid;
+
+		if (bms_get_singleton_member(castNode(Append, node->ps.plan)->apprelids,
+									 &scanrelid))
+		{
+			epqstate->relsubs_done[scanrelid - 1] =
+				epqstate->relsubs_blocked[scanrelid - 1];
+		}
+	}
+
 	/* Reset async state */
 	if (nasyncplans > 0)
 	{
diff --git a/src/backend/executor/nodeMergeAppend.c b/src/backend/executor/nodeMergeAppend.c
index 0817868..2c0caee 100644
--- a/src/backend/executor/nodeMergeAppend.c
+++ b/src/backend/executor/nodeMergeAppend.c
@@ -187,6 +187,8 @@ ExecInitMergeAppend(MergeAppend *node, E
 	 */
 	mergestate->ms_initialized = false;
 
+	mergestate->ms_epq_tupdesc = NULL;
+
 	return mergestate;
 }
 
@@ -200,11 +202,110 @@ static TupleTableSlot *
 ExecMergeAppend(PlanState *pstate)
 {
 	MergeAppendState *node = castNode(MergeAppendState, pstate);
+	EState	   *estate = node->ps.state;
 	TupleTableSlot *result;
 	SlotNumber	i;
 
 	CHECK_FOR_INTERRUPTS();
 
+	if (estate->es_epq_active != NULL)
+	{
+		/*
+		 * We are inside an EvalPlanQual recheck.  If there is a relevant
+		 * rowmark for the append relation, return the test tuple if one is
+		 * available.
+		 */
+		EPQState   *epqstate = estate->es_epq_active;
+		int			scanrelid;
+
+		if (bms_get_singleton_member(castNode(MergeAppend, node->ps.plan)->apprelids,
+									 &scanrelid))
+		{
+			if (epqstate->relsubs_done[scanrelid - 1])
+			{
+				/*
+				 * Return empty slot, as either there is no EPQ tuple for this
+				 * rel or we already returned it.
+				 */
+				TupleTableSlot *slot = node->ps.ps_ResultTupleSlot;
+
+				return ExecClearTuple(slot);
+			}
+			else if (epqstate->relsubs_slot[scanrelid - 1] != NULL)
+			{
+				/*
+				 * Return replacement tuple provided by the EPQ caller.
+				 */
+				TupleTableSlot *slot = epqstate->relsubs_slot[scanrelid - 1];
+
+				Assert(epqstate->relsubs_rowmark[scanrelid - 1] == NULL);
+
+				/* Mark to remember that we shouldn't return it again */
+				epqstate->relsubs_done[scanrelid - 1] = true;
+
+				return slot;
+			}
+			else if (epqstate->relsubs_rowmark[scanrelid - 1] != NULL)
+			{
+				/*
+				 * Fetch and return replacement tuple using a non-locking
+				 * rowmark.
+				 */
+				ExecAuxRowMark *earm = epqstate->relsubs_rowmark[scanrelid - 1];
+				ExecRowMark *erm = earm->rowmark;
+				Datum		datum;
+				bool		isNull;
+				TupleTableSlot *slot;
+
+				Assert(erm->markType == ROW_MARK_COPY);
+
+				datum = ExecGetJunkAttribute(epqstate->origslot,
+											 earm->wholeAttNo,
+											 &isNull);
+				if (isNull)
+					return NULL;
+
+				if (node->ms_epq_tupdesc == NULL)
+				{
+					HeapTupleHeader tuple;
+					Oid			tupType;
+					int32		tupTypmod;
+					MemoryContext oldcontext;
+
+					tuple = DatumGetHeapTupleHeader(datum);
+					tupType = HeapTupleHeaderGetTypeId(tuple);
+					tupTypmod = HeapTupleHeaderGetTypMod(tuple);
+
+					oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);
+
+					node->ms_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);
+
+					ExecAssignExprContext(estate, &node->ps);
+
+					node->ps.ps_ProjInfo =
+						ExecBuildProjectionInfo(castNode(MergeAppend, node->ps.plan)->epq_targetlist,
+												node->ps.ps_ExprContext,
+												node->ps.ps_ResultTupleSlot,
+												&node->ps,
+												NULL);
+
+					MemoryContextSwitchTo(oldcontext);
+				}
+
+				slot = MakeTupleTableSlot(node->ms_epq_tupdesc,
+										  &TTSOpsVirtual);
+				ExecStoreHeapTupleDatum(datum, slot);
+
+				/* Mark to remember that we shouldn't return more */
+				epqstate->relsubs_done[scanrelid - 1] = true;
+
+				node->ps.ps_ExprContext->ecxt_scantuple = slot;
+
+				return ExecProject(node->ps.ps_ProjInfo);
+			}
+		}
+	}
+
 	if (!node->ms_initialized)
 	{
 		/* Nothing to do if all subplans were pruned */
@@ -339,6 +440,7 @@ ExecEndMergeAppend(MergeAppendState *nod
 void
 ExecReScanMergeAppend(MergeAppendState *node)
 {
+	EState	   *estate = node->ps.state;
 	int			i;
 
 	/*
@@ -372,6 +474,24 @@ ExecReScanMergeAppend(MergeAppendState *
 		if (subnode->chgParam == NULL)
 			ExecReScan(subnode);
 	}
+
+	/*
+	 * Rescan EvalPlanQual tuple(s) if we're inside an EvalPlanQual recheck.
+	 * But don't lose the "blocked" status of blocked target relations.
+	 */
+	if (estate->es_epq_active != NULL)
+	{
+		EPQState   *epqstate = estate->es_epq_active;
+		int			scanrelid;
+
+		if (bms_get_singleton_member(castNode(MergeAppend, node->ps.plan)->apprelids,
+									 &scanrelid))
+		{
+			epqstate->relsubs_done[scanrelid - 1] =
+				epqstate->relsubs_blocked[scanrelid - 1];
+		}
+	}
+
 	binaryheap_reset(node->ms_heap);
 	node->ms_initialized = false;
 }
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 22a1fa2..bb07ca4 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -1761,8 +1761,9 @@ set_append_references(PlannerInfo *root,
 	/*
 	 * Otherwise, clean up the Append as needed.  It's okay to do this after
 	 * recursing to the children, because set_dummy_tlist_references doesn't
-	 * look at those.
+	 * look at those.  Save the original targetlist for EPQ checking.
 	 */
+	aplan->epq_targetlist = aplan->plan.targetlist;
 	set_dummy_tlist_references((Plan *) aplan, rtoffset);
 
 	aplan->apprelids = offset_relid_set(aplan->apprelids, rtoffset);
@@ -1837,8 +1838,9 @@ set_mergeappend_references(PlannerInfo *
 	/*
 	 * Otherwise, clean up the MergeAppend as needed.  It's okay to do this
 	 * after recursing to the children, because set_dummy_tlist_references
-	 * doesn't look at those.
+	 * doesn't look at those.  Save the original targetlist for EPQ checking.
 	 */
+	mplan->epq_targetlist = mplan->plan.targetlist;
 	set_dummy_tlist_references((Plan *) mplan, rtoffset);
 
 	mplan->apprelids = offset_relid_set(mplan->apprelids, rtoffset);
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 444a5f0..6c3ca6e 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1377,6 +1377,7 @@ struct AppendState
 	Bitmapset  *as_valid_subplans;
 	Bitmapset  *as_valid_asyncplans;	/* valid asynchronous plans indexes */
 	bool		(*choose_next_subplan) (AppendState *);
+	TupleDesc	as_epq_tupdesc; /* wholerow tuple descriptor for EPQ check */
 };
 
 /* ----------------
@@ -1406,6 +1407,7 @@ typedef struct MergeAppendState
 	bool		ms_initialized; /* are subplans started? */
 	struct PartitionPruneState *ms_prune_state;
 	Bitmapset  *ms_valid_subplans;
+	TupleDesc	ms_epq_tupdesc; /* wholerow tuple descriptor for EPQ check */
 } MergeAppendState;
 
 /* ----------------
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index b4ef6bc..e178175 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -275,6 +275,8 @@ typedef struct Append
 
 	/* Info for run-time subplan pruning; NULL if we're not doing that */
 	struct PartitionPruneInfo *part_prune_info;
+
+	List	   *epq_targetlist; /* Targetlist for EPQ checking */
 } Append;
 
 /* ----------------
@@ -310,6 +312,8 @@ typedef struct MergeAppend
 
 	/* Info for run-time subplan pruning; NULL if we're not doing that */
 	struct PartitionPruneInfo *part_prune_info;
+
+	List	   *epq_targetlist; /* Targetlist for EPQ checking */
 } MergeAppend;
 
 /* ----------------
diff --git a/src/test/isolation/expected/merge-join.out b/src/test/isolation/expected/merge-join.out
index 57f048c..a9b49a7 100644
--- a/src/test/isolation/expected/merge-join.out
+++ b/src/test/isolation/expected/merge-join.out
@@ -146,3 +146,144 @@ id|val
  3| 30
 (3 rows)
 
+
+starting permutation: b1 b2 m1 hj exu m2u c1 c2 s1
+step b1: BEGIN ISOLATION LEVEL READ COMMITTED;
+step b2: BEGIN ISOLATION LEVEL READ COMMITTED;
+step m1: MERGE INTO tgt USING src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
+step hj: SET LOCAL enable_mergejoin = off; SET LOCAL enable_nestloop = off;
+step exu: EXPLAIN (verbose, costs off)
+           MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
+QUERY PLAN                                                               
+-------------------------------------------------------------------------
+Merge on public.tgt                                                      
+  ->  Hash Left Join                                                     
+        Output: tgt.ctid, src_1.val, src_1.id, (ROW(src_1.id, src_1.val))
+        Inner Unique: true                                               
+        Hash Cond: (src_1.id = tgt.id)                                   
+        ->  Append                                                       
+              ->  Seq Scan on public.src src_1                           
+                    Output: src_1.val, src_1.id, ROW(src_1.id, src_1.val)
+              ->  Seq Scan on public.src2                                
+                    Output: src2.val, src2.id, ROW(src2.id, src2.val)    
+        ->  Hash                                                         
+              Output: tgt.ctid, tgt.id                                   
+              ->  Seq Scan on public.tgt                                 
+                    Output: tgt.ctid, tgt.id                             
+(14 rows)
+
+step m2u: MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val); <waiting ...>
+step c1: COMMIT;
+step m2u: <... completed>
+step c2: COMMIT;
+step s1: SELECT * FROM tgt;
+id|val
+--+---
+ 1| 10
+ 2| 20
+ 3| 30
+(3 rows)
+
+
+starting permutation: b1 b2 m1 mj exu m2u c1 c2 s1
+step b1: BEGIN ISOLATION LEVEL READ COMMITTED;
+step b2: BEGIN ISOLATION LEVEL READ COMMITTED;
+step m1: MERGE INTO tgt USING src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
+step mj: SET LOCAL enable_hashjoin = off; SET LOCAL enable_nestloop = off;
+step exu: EXPLAIN (verbose, costs off)
+           MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
+QUERY PLAN                                                               
+-------------------------------------------------------------------------
+Merge on public.tgt                                                      
+  ->  Merge Left Join                                                    
+        Output: tgt.ctid, src_1.val, src_1.id, (ROW(src_1.id, src_1.val))
+        Inner Unique: true                                               
+        Merge Cond: (src_1.id = tgt.id)                                  
+        ->  Merge Append                                                 
+              Sort Key: src_1.id                                         
+              ->  Index Scan using src_pkey on public.src src_1          
+                    Output: src_1.val, src_1.id, ROW(src_1.id, src_1.val)
+              ->  Index Scan using src2_pkey on public.src2              
+                    Output: src2.val, src2.id, ROW(src2.id, src2.val)    
+        ->  Index Scan using tgt_pkey on public.tgt                      
+              Output: tgt.ctid, tgt.id                                   
+(13 rows)
+
+step m2u: MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val); <waiting ...>
+step c1: COMMIT;
+step m2u: <... completed>
+step c2: COMMIT;
+step s1: SELECT * FROM tgt;
+id|val
+--+---
+ 1| 10
+ 2| 20
+ 3| 30
+(3 rows)
+
+
+starting permutation: b1 b2 m1 nl exu m2u c1 c2 s1
+step b1: BEGIN ISOLATION LEVEL READ COMMITTED;
+step b2: BEGIN ISOLATION LEVEL READ COMMITTED;
+step m1: MERGE INTO tgt USING src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
+step nl: SET LOCAL enable_hashjoin = off; SET LOCAL enable_mergejoin = off;
+step exu: EXPLAIN (verbose, costs off)
+           MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
+QUERY PLAN                                                               
+-------------------------------------------------------------------------
+Merge on public.tgt                                                      
+  ->  Nested Loop Left Join                                              
+        Output: tgt.ctid, src_1.val, src_1.id, (ROW(src_1.id, src_1.val))
+        Inner Unique: true                                               
+        ->  Append                                                       
+              ->  Seq Scan on public.src src_1                           
+                    Output: src_1.val, src_1.id, ROW(src_1.id, src_1.val)
+              ->  Seq Scan on public.src2                                
+                    Output: src2.val, src2.id, ROW(src2.id, src2.val)    
+        ->  Index Scan using tgt_pkey on public.tgt                      
+              Output: tgt.ctid, tgt.id                                   
+              Index Cond: (tgt.id = src_1.id)                            
+(12 rows)
+
+step m2u: MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val); <waiting ...>
+step c1: COMMIT;
+step m2u: <... completed>
+step c2: COMMIT;
+step s1: SELECT * FROM tgt;
+id|val
+--+---
+ 1| 10
+ 2| 20
+ 3| 30
+(3 rows)
+
diff --git a/src/test/isolation/specs/merge-join.spec b/src/test/isolation/specs/merge-join.spec
index e33a02c..564702b 100644
--- a/src/test/isolation/specs/merge-join.spec
+++ b/src/test/isolation/specs/merge-join.spec
@@ -6,6 +6,7 @@
 setup
 {
   CREATE TABLE src (id int PRIMARY KEY, val int);
+  CREATE TABLE src2 (id int PRIMARY KEY, val int);
   CREATE TABLE tgt (id int PRIMARY KEY, val int);
   INSERT INTO src SELECT x, x*10 FROM generate_series(1,3) g(x);
   INSERT INTO tgt SELECT x, x FROM generate_series(1,3) g(x);
@@ -13,7 +14,7 @@ setup
 
 teardown
 {
-  DROP TABLE src, tgt;
+  DROP TABLE src, src2, tgt;
 }
 
 session s1
@@ -36,6 +37,17 @@ step ex  { EXPLAIN (verbose, costs off)
 step m2  { MERGE INTO tgt USING src ON tgt.id = src.id
              WHEN MATCHED THEN UPDATE SET val = src.val
              WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val); }
+step exu { EXPLAIN (verbose, costs off)
+           MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val); }
+step m2u { MERGE INTO tgt USING (SELECT * FROM src
+                                 UNION ALL
+                                 SELECT * FROM src2) src ON tgt.id = src.id
+             WHEN MATCHED THEN UPDATE SET val = src.val
+             WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val); }
 step s2  { SELECT * FROM tgt; }
 step c2  { COMMIT; }
 
@@ -43,3 +55,6 @@ permutation b1 m1 s1 c1 b2 m2 s2 c2
 permutation b1 b2 m1 hj ex m2 c1 c2 s1
 permutation b1 b2 m1 mj ex m2 c1 c2 s1
 permutation b1 b2 m1 nl ex m2 c1 c2 s1
+permutation b1 b2 m1 hj exu m2u c1 c2 s1
+permutation b1 b2 m1 mj exu m2u c1 c2 s1
+permutation b1 b2 m1 nl exu m2u c1 c2 s1
