On 2012-12-20 21:17:04 +0100, Andres Freund wrote:
> On 2012-12-20 12:40:44 +0000, no...@nix.hu wrote:
> > The following bug has been logged on the website:
> >
> > Bug reference:      7763
> > Logged by:          Norbert Buchmuller
> > Email address:      no...@nix.hu
> > PostgreSQL version: 9.2.2
> > Operating system:   Linux 2.6.32, i386, Debian GNU/Linux 6.0.5
> > Description:
> >
> > There's a table that has a B-Tree index on a composite type expression. When
> > attempting to create another table just like the first table and with the
> > indexes also "copied" using the "CREATE TABLE ... (LIKE ... INCLUDING
> > INDEXES ...)" statement, it throws an error (see below) and the table is not
> > created.
> >
> > I believe it's a bug; from the documentation I assumed that it should create
> > the table with a similar index, no matter that the index is on a composite
> > type expression.
> >
> > postgres@vger:~$ cat
> > create_table_like_including_indexes-and-index_on_composite_type.sql
> > \set VERBOSITY verbose
> > \set ECHO all
> > SELECT version();
> > CREATE TYPE type1 AS (x int, y int);
> > CREATE TABLE table1 (a int, b int);
> > CREATE INDEX index1 ON table1 ( ( (a, b)::type1 ) );
> > CREATE TABLE table2 ( LIKE table1 INCLUDING INDEXES );
> > \d table2
> > postgres@vger:~$ dropdb test1; createdb test1 && psql --no-align --tuples -d
> > test1 -f create_table_like_including_indexes-and-index_on_composite_type.sql
> >
> > SELECT version();
> > PostgreSQL 9.2.2 on i686-pc-linux-gnu, compiled by gcc-4.4.real (Debian
> > 4.4.5-8) 4.4.5, 32-bit
> > CREATE TYPE type1 AS (x int, y int);
> > CREATE TYPE
> > CREATE TABLE table1 (a int, b int);
> > CREATE TABLE
> > CREATE INDEX index1 ON table1 ( ( (a, b)::type1 ) );
> > CREATE INDEX
> > CREATE TABLE table2 ( LIKE table1 INCLUDING INDEXES );
> > psql:create_table_like_including_indexes-and-index_on_composite_type.sql:7:
> > ERROR:  42P16: column "" has pseudo-type record
> > LOCATION:  CheckAttributeType, heap.c:496
> > \d table2
> > Did not find any relation named "table2".
>
> Concretely that seems to be transformRowExpr's fault. It overwrites
> row_typeid even though it's marked as COERCE_EXPLICIT_CAST.
>
> Now the original problem seems to be that we are transforming an already
> transformed expression. generateClonedIndexStmt gets the expression from
> the old index via nodeToString, remaps some attnos, but that's about
> it.
> ISTM IndexElem grow a cooked_expr member.

(that should read: "ISTM IndexElem *should* grow a cooked_expr member.")
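
To make the double transformation concrete: what generateClonedIndexStmt
reads back from the old index is already the cooked expression tree, with
the cast to type1 applied. That is roughly what pg_get_indexdef() shows for
the index from the report (illustration only; the exact deparsed output may
differ slightly):

SELECT pg_get_indexdef('index1'::regclass);
-- roughly: CREATE INDEX index1 ON table1 USING btree (((ROW(a, b))::type1))

Running that already-transformed tree through transformExpr() a second time
is what ends up clobbering the cast's row_typeid.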

Ok, here are two patches:
* Add a cooked_expr member to IndexElem and use it for transformed
  expressions, including filling it directly in generateClonedIndexStmt.

* Follow the pattern set by other routines in parse_expr.c and don't
  transformRowExpr the same expression twice.

While the first one fixes the above bug (and I think it's the right
approach not to analyze the expression twice), the second one seems like
a good idea anyway, because as the comment in transformExpr() says:
 *      1. At least one construct (BETWEEN/AND) puts the same nodes
 *      into two branches of the parse tree; hence, some nodes
 *      are transformed twice.
 *      2. Another way it can happen is that coercion of an operator or
 *      function argument to the required type (via coerce_type())
 *      can apply transformExpr to an already-transformed subexpression.
 *      An example here is "SELECT count(*) + 1.0 FROM table".

Unfortunately there is not sufficient padding in IndexElem to add the new
member without changing the struct's size. Not sure whether we consider that
a big problem for the back branches; it's nothing user code should be relying
on, but ...
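
For reference, with both patches applied the reproduction from the report
should go through; roughly (expected behaviour, the exact psql output and
the generated index name will differ):

CREATE TYPE type1 AS (x int, y int);
CREATE TABLE table1 (a int, b int);
CREATE INDEX index1 ON table1 ( ( (a, b)::type1 ) );
CREATE TABLE table2 ( LIKE table1 INCLUDING INDEXES );  -- no longer errors
\d table2
-- table2 now exists and carries a btree expression index on ((a, b)::type1)
-- equivalent to index1, instead of failing with
--   ERROR:  column "" has pseudo-type record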

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 8aa997a790d7ef8bbcf9c9f3ec186f08fa4324ea Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Thu, 20 Dec 2012 21:59:33 +0100
Subject: [PATCH 1/2] Treat transformed index expressions differently from raw
 ones during index creation

When doing CREATE TABLE ... (LIKE ... INCLUDING INDEXES), index expressions
would get transformed twice: the old index's pg_index.indexprs is used to
build a new IndexStmt, which would then get transformed again.

To avoid that, introduce a separate cooked_expr member in IndexElem.

This fixes bug #7763, reported by Norbert Buchmuller, where required casts
were lost due to transforming the expression twice.
---
 src/backend/bootstrap/bootparse.y  |  1 +
 src/backend/commands/indexcmds.c   |  3 +-
 src/backend/nodes/copyfuncs.c      |  1 +
 src/backend/nodes/equalfuncs.c     |  1 +
 src/backend/nodes/outfuncs.c       |  1 +
 src/backend/parser/gram.y          |  3 ++
 src/backend/parser/parse_utilcmd.c | 15 ++++---
 src/backend/utils/cache/catcache.c | 87 +++++++++++++++++++++-----------------
 src/include/access/valid.h         | 54 +++++++++++++++++++++++
 src/include/nodes/parsenodes.h     |  8 +++-
 10 files changed, 127 insertions(+), 47 deletions(-)

diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index ec7786a..0f94763 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -378,6 +378,7 @@ boot_index_param:
 					IndexElem *n = makeNode(IndexElem);
 					n->name = $1;
 					n->expr = NULL;
+					n->cooked_expr = NULL;
 					n->indexcolname = NULL;
 					n->collation = NIL;
 					n->opclass = list_make1(makeString($2));
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 75f9ff1..7311222 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -1007,8 +1007,9 @@ ComputeIndexAttrs(IndexInfo *indexInfo,
 		else
 		{
 			/* Index expression */
-			Node	   *expr = attribute->expr;
+			Node	   *expr = attribute->cooked_expr;
 
+			Assert(attribute->expr == NULL);
 			Assert(expr != NULL);
 			atttype = exprType(expr);
 			attcollation = exprCollation(expr);
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 9387ee9..8b33e9a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2316,6 +2316,7 @@ _copyIndexElem(const IndexElem *from)
 
 	COPY_STRING_FIELD(name);
 	COPY_NODE_FIELD(expr);
+	COPY_NODE_FIELD(cooked_expr);
 	COPY_STRING_FIELD(indexcolname);
 	COPY_NODE_FIELD(collation);
 	COPY_NODE_FIELD(opclass);
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 95a95f4..0b26f0d 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2134,6 +2134,7 @@ _equalIndexElem(const IndexElem *a, const IndexElem *b)
 {
 	COMPARE_STRING_FIELD(name);
 	COMPARE_NODE_FIELD(expr);
+	COMPARE_NODE_FIELD(cooked_expr);
 	COMPARE_STRING_FIELD(indexcolname);
 	COMPARE_NODE_FIELD(collation);
 	COMPARE_NODE_FIELD(opclass);
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 35c6287..cf53cdd 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2184,6 +2184,7 @@ _outIndexElem(StringInfo str, const IndexElem *node)
 
 	WRITE_STRING_FIELD(name);
 	WRITE_NODE_FIELD(expr);
+	WRITE_NODE_FIELD(cooked_expr);
 	WRITE_STRING_FIELD(indexcolname);
 	WRITE_NODE_FIELD(collation);
 	WRITE_NODE_FIELD(opclass);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ad98b36..b991d80 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -6044,6 +6044,7 @@ index_elem:	ColId opt_collate opt_class opt_asc_desc opt_nulls_order
 					$$ = makeNode(IndexElem);
 					$$->name = $1;
 					$$->expr = NULL;
+					$$->cooked_expr = NULL;
 					$$->indexcolname = NULL;
 					$$->collation = $2;
 					$$->opclass = $3;
@@ -6055,6 +6056,7 @@ index_elem:	ColId opt_collate opt_class opt_asc_desc opt_nulls_order
 					$$ = makeNode(IndexElem);
 					$$->name = NULL;
 					$$->expr = $1;
+					$$->cooked_expr = NULL;
 					$$->indexcolname = NULL;
 					$$->collation = $2;
 					$$->opclass = $3;
@@ -6066,6 +6068,7 @@ index_elem:	ColId opt_collate opt_class opt_asc_desc opt_nulls_order
 					$$ = makeNode(IndexElem);
 					$$->name = NULL;
 					$$->expr = $2;
+					$$->cooked_expr = NULL;
 					$$->indexcolname = NULL;
 					$$->collation = $4;
 					$$->opclass = $5;
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index 086cc75..edb353e 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -1124,6 +1124,7 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx,
 		int16		opt = source_idx->rd_indoption[keyno];
 
 		iparam = makeNode(IndexElem);
+		iparam->expr = NULL;
 
 		if (AttributeNumberIsValid(attnum))
 		{
@@ -1134,7 +1135,7 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx,
 			keycoltype = get_atttype(indrelid, attnum);
 
 			iparam->name = attname;
-			iparam->expr = NULL;
+			iparam->cooked_expr = NULL;
 		}
 		else
 		{
@@ -1162,7 +1163,7 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx,
 								   RelationGetRelationName(source_idx))));
 
 			iparam->name = NULL;
-			iparam->expr = indexkey;
+			iparam->cooked_expr = indexkey;
 
 			keycoltype = exprType(indexkey);
 		}
@@ -1787,6 +1788,7 @@ transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt)
 		iparam = makeNode(IndexElem);
 		iparam->name = pstrdup(key);
 		iparam->expr = NULL;
+		iparam->cooked_expr = NULL;
 		iparam->indexcolname = NULL;
 		iparam->collation = NIL;
 		iparam->opclass = NIL;
@@ -1930,11 +1932,12 @@ transformIndexStmt(IndexStmt *stmt, const char *queryString)
 				ielem->indexcolname = FigureIndexColname(ielem->expr);
 
 			/* Now do parse transformation of the expression */
-			ielem->expr = transformExpr(pstate, ielem->expr,
-										EXPR_KIND_INDEX_EXPRESSION);
+			ielem->cooked_expr = transformExpr(pstate, ielem->expr,
+											   EXPR_KIND_INDEX_EXPRESSION);
+			ielem->expr = NULL;
 
 			/* We have to fix its collations too */
-			assign_expr_collations(pstate, ielem->expr);
+			assign_expr_collations(pstate, ielem->cooked_expr);
 
 			/*
 			 * transformExpr() should have already rejected subqueries,
@@ -1945,7 +1948,7 @@ transformIndexStmt(IndexStmt *stmt, const char *queryString)
 			 * with what transformWhereClause() checks for the predicate.
 			 * DefineIndex() will make more checks.
 			 */
-			if (expression_returns_set(ielem->expr))
+			if (expression_returns_set(ielem->cooked_expr))
 				ereport(ERROR,
 						(errcode(ERRCODE_DATATYPE_MISMATCH),
 						 errmsg("index expression cannot return a set")));
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index 9ae1432..bee6f3d 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -73,7 +73,7 @@ static CatCacheHeader *CacheHdr = NULL;
 
 
 static uint32 CatalogCacheComputeHashValue(CatCache *cache, int nkeys,
-							 ScanKey cur_skey);
+										   ScanKey cur_skey, Datum *argument);
 static uint32 CatalogCacheComputeTupleHashValue(CatCache *cache,
 								  HeapTuple tuple);
 
@@ -173,7 +173,7 @@ GetCCHashEqFuncs(Oid keytype, PGFunction *hashfunc, RegProcedure *eqfunc)
  * Compute the hash value associated with a given set of lookup keys
  */
 static uint32
-CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey)
+CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey, Datum *argument)
 {
 	uint32		hashValue = 0;
 	uint32		oneHash;
@@ -188,28 +188,28 @@ CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey)
 		case 4:
 			oneHash =
 				DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[3],
-												   cur_skey[3].sk_argument));
+												   argument[3]));
 			hashValue ^= oneHash << 24;
 			hashValue ^= oneHash >> 8;
 			/* FALLTHROUGH */
 		case 3:
 			oneHash =
 				DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[2],
-												   cur_skey[2].sk_argument));
+												   argument[2]));
 			hashValue ^= oneHash << 16;
 			hashValue ^= oneHash >> 16;
 			/* FALLTHROUGH */
 		case 2:
 			oneHash =
 				DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[1],
-												   cur_skey[1].sk_argument));
+												   argument[1]));
 			hashValue ^= oneHash << 8;
 			hashValue ^= oneHash >> 24;
 			/* FALLTHROUGH */
 		case 1:
 			oneHash =
 				DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[0],
-												   cur_skey[0].sk_argument));
+												   argument[0]));
 			hashValue ^= oneHash;
 			break;
 		default:
@@ -228,17 +228,14 @@ CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey)
 static uint32
 CatalogCacheComputeTupleHashValue(CatCache *cache, HeapTuple tuple)
 {
-	ScanKeyData cur_skey[CATCACHE_MAXKEYS];
+	Datum arguments[CATCACHE_MAXKEYS];
 	bool		isNull = false;
 
-	/* Copy pre-initialized overhead data for scankey */
-	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
-
 	/* Now extract key fields from tuple, insert into scankey */
 	switch (cache->cc_nkeys)
 	{
 		case 4:
-			cur_skey[3].sk_argument =
+			arguments[3] =
 				(cache->cc_key[3] == ObjectIdAttributeNumber)
 				? ObjectIdGetDatum(HeapTupleGetOid(tuple))
 				: fastgetattr(tuple,
@@ -248,7 +245,7 @@ CatalogCacheComputeTupleHashValue(CatCache *cache, HeapTuple tuple)
 			Assert(!isNull);
 			/* FALLTHROUGH */
 		case 3:
-			cur_skey[2].sk_argument =
+			arguments[2] =
 				(cache->cc_key[2] == ObjectIdAttributeNumber)
 				? ObjectIdGetDatum(HeapTupleGetOid(tuple))
 				: fastgetattr(tuple,
@@ -258,7 +255,7 @@ CatalogCacheComputeTupleHashValue(CatCache *cache, HeapTuple tuple)
 			Assert(!isNull);
 			/* FALLTHROUGH */
 		case 2:
-			cur_skey[1].sk_argument =
+			arguments[1] =
 				(cache->cc_key[1] == ObjectIdAttributeNumber)
 				? ObjectIdGetDatum(HeapTupleGetOid(tuple))
 				: fastgetattr(tuple,
@@ -268,7 +265,7 @@ CatalogCacheComputeTupleHashValue(CatCache *cache, HeapTuple tuple)
 			Assert(!isNull);
 			/* FALLTHROUGH */
 		case 1:
-			cur_skey[0].sk_argument =
+			arguments[0] =
 				(cache->cc_key[0] == ObjectIdAttributeNumber)
 				? ObjectIdGetDatum(HeapTupleGetOid(tuple))
 				: fastgetattr(tuple,
@@ -282,7 +279,7 @@ CatalogCacheComputeTupleHashValue(CatCache *cache, HeapTuple tuple)
 			break;
 	}
 
-	return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey);
+	return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cache->cc_skey, arguments);
 }
 
 
@@ -1058,6 +1055,7 @@ SearchCatCache(CatCache *cache,
 			   Datum v4)
 {
 	ScanKeyData cur_skey[CATCACHE_MAXKEYS];
+	Datum arguments[CATCACHE_MAXKEYS];
 	uint32		hashValue;
 	Index		hashIndex;
 	dlist_iter	iter;
@@ -1080,16 +1078,15 @@ SearchCatCache(CatCache *cache,
 	/*
 	 * initialize the search key information
 	 */
-	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
-	cur_skey[0].sk_argument = v1;
-	cur_skey[1].sk_argument = v2;
-	cur_skey[2].sk_argument = v3;
-	cur_skey[3].sk_argument = v4;
+	arguments[0] = v1;
+	arguments[1] = v2;
+	arguments[2] = v3;
+	arguments[3] = v4;
 
 	/*
 	 * find the hash bucket in which to look for the tuple
 	 */
-	hashValue = CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey);
+	hashValue = CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cache->cc_skey, arguments);
 	hashIndex = HASH_INDEX(hashValue, cache->cc_nbuckets);
 
 	/*
@@ -1114,10 +1111,11 @@ SearchCatCache(CatCache *cache,
 		/*
 		 * see if the cached tuple matches our key.
 		 */
-		HeapKeyTest(&ct->tuple,
+		HeapKeyTestArg(&ct->tuple,
 					cache->cc_tupdesc,
 					cache->cc_nkeys,
-					cur_skey,
+					cache->cc_skey,
+					arguments,
 					res);
 		if (!res)
 			continue;
@@ -1162,6 +1160,12 @@ SearchCatCache(CatCache *cache,
 		}
 	}
 
+	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
+	cur_skey[0].sk_argument = v1;
+	cur_skey[1].sk_argument = v2;
+	cur_skey[2].sk_argument = v3;
+	cur_skey[3].sk_argument = v4;
+
 	/*
 	 * Tuple was not found in cache, so we have to try to retrieve it directly
 	 * from the relation.  If found, we will add it to the cache; if not
@@ -1300,7 +1304,7 @@ GetCatCacheHashValue(CatCache *cache,
 					 Datum v3,
 					 Datum v4)
 {
-	ScanKeyData cur_skey[CATCACHE_MAXKEYS];
+	Datum arguments[CATCACHE_MAXKEYS];
 
 	/*
 	 * one-time startup overhead for each cache
@@ -1311,16 +1315,15 @@ GetCatCacheHashValue(CatCache *cache,
 	/*
 	 * initialize the search key information
 	 */
-	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
-	cur_skey[0].sk_argument = v1;
-	cur_skey[1].sk_argument = v2;
-	cur_skey[2].sk_argument = v3;
-	cur_skey[3].sk_argument = v4;
+	arguments[0] = v1;
+	arguments[1] = v2;
+	arguments[2] = v3;
+	arguments[3] = v4;
 
 	/*
 	 * calculate the hash value
 	 */
-	return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey);
+	return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cache->cc_skey, arguments);
 }
 
 
@@ -1342,6 +1345,7 @@ SearchCatCacheList(CatCache *cache,
 				   Datum v4)
 {
 	ScanKeyData cur_skey[CATCACHE_MAXKEYS];
+	Datum arguments[CATCACHE_MAXKEYS];
 	uint32		lHashValue;
 	dlist_iter  iter;
 	CatCList   *cl;
@@ -1369,18 +1373,18 @@ SearchCatCacheList(CatCache *cache,
 	/*
 	 * initialize the search key information
 	 */
-	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
-	cur_skey[0].sk_argument = v1;
-	cur_skey[1].sk_argument = v2;
-	cur_skey[2].sk_argument = v3;
-	cur_skey[3].sk_argument = v4;
+
+	arguments[0] = v1;
+	arguments[1] = v2;
+	arguments[2] = v3;
+	arguments[3] = v4;
 
 	/*
 	 * compute a hash value of the given keys for faster search.  We don't
 	 * presently divide the CatCList items into buckets, but this still lets
 	 * us skip non-matching items quickly most of the time.
 	 */
-	lHashValue = CatalogCacheComputeHashValue(cache, nkeys, cur_skey);
+	lHashValue = CatalogCacheComputeHashValue(cache, nkeys, cache->cc_skey, arguments);
 
 	/*
 	 * scan the items until we find a match or exhaust our list
@@ -1405,10 +1409,11 @@ SearchCatCacheList(CatCache *cache,
 		 */
 		if (cl->nkeys != nkeys)
 			continue;
-		HeapKeyTest(&cl->tuple,
+		HeapKeyTestArg(&cl->tuple,
 					cache->cc_tupdesc,
 					nkeys,
-					cur_skey,
+					cache->cc_skey,
+					arguments,
 					res);
 		if (!res)
 			continue;
@@ -1451,6 +1456,12 @@ SearchCatCacheList(CatCache *cache,
 
 	ctlist = NIL;
 
+	memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey));
+	cur_skey[0].sk_argument = v1;
+	cur_skey[1].sk_argument = v2;
+	cur_skey[2].sk_argument = v3;
+	cur_skey[3].sk_argument = v4;
+
 	PG_TRY();
 	{
 		Relation	relation;
diff --git a/src/include/access/valid.h b/src/include/access/valid.h
index ce970e4..283623d 100644
--- a/src/include/access/valid.h
+++ b/src/include/access/valid.h
@@ -66,4 +66,58 @@ do \
 	} \
 } while (0)
 
+/*
+ *		HeapKeyTestArg
+ *
+ *		Like HeapKeyTest, but takes the comparison values from a separate Datum array.
+ */
+#define HeapKeyTestArg(tuple, \
+					tupdesc, \
+					nkeys, \
+					keys, \
+					arguments, \
+					result) \
+do \
+{ \
+	/* Use underscores to protect the variables passed in as parameters */ \
+	int			__cur_nkeys = (nkeys); \
+	ScanKey		__cur_keys = (keys); \
+	Datum	   *__cur_args = (arguments); \
+ \
+	(result) = true; /* may change */ \
+	for (; __cur_nkeys--; __cur_keys++, __cur_args++)		\
+	{ \
+		Datum	__atp; \
+		bool	__isnull; \
+		Datum	__test; \
+ \
+		if (__cur_keys->sk_flags & SK_ISNULL) \
+		{ \
+			(result) = false; \
+			break; \
+		} \
+ \
+		__atp = heap_getattr((tuple), \
+							 __cur_keys->sk_attno, \
+							 (tupdesc), \
+							 &__isnull); \
+ \
+		if (__isnull) \
+		{ \
+			(result) = false; \
+			break; \
+		} \
+ \
+		__test = FunctionCall2Coll(&__cur_keys->sk_func, \
+								   __cur_keys->sk_collation, \
+								   __atp, *__cur_args); \
+ \
+		if (!DatumGetBool(__test)) \
+		{ \
+			(result) = false; \
+			break; \
+		} \
+	} \
+} while (0)
+
 #endif   /* VALID_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 8834499..8dddb09 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -530,13 +530,17 @@ typedef enum TableLikeOption
  *
  * For a plain index attribute, 'name' is the name of the table column to
  * index, and 'expr' is NULL.  For an index expression, 'name' is NULL and
- * 'expr' is the expression tree.
+ * 'expr' is the expression tree.  After parse analysis the cooked (transformed)
+ * expression is stored in 'cooked_expr' instead; it is also filled in directly
+ * when an IndexElem is built to clone an existing index.  Only one of 'expr'
+ * and 'cooked_expr' should be set.
  */
 typedef struct IndexElem
 {
 	NodeTag		type;
 	char	   *name;			/* name of attribute to index, or NULL */
-	Node	   *expr;			/* expression to index, or NULL */
+	Node	   *expr;			/* untransformed index expression, or NULL */
+	Node	   *cooked_expr;	/* transformed index expression, or NULL */
 	char	   *indexcolname;	/* name for index column; NULL = default */
 	List	   *collation;		/* name of collation; NIL = default */
 	List	   *opclass;		/* name of desired opclass; NIL = default */
-- 
1.7.12.289.g0ce9864.dirty

From c6e5cf906e14136c478dc579131aceed364a234e Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Thu, 20 Dec 2012 22:41:32 +0100
Subject: [PATCH 2/2] Don't lose information when transforming a RowExpr
 twice

---
 src/backend/parser/parse_expr.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index e9267c5..fdcd1a1 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -1775,11 +1775,17 @@ transformArrayExpr(ParseState *pstate, A_ArrayExpr *a,
 static Node *
 transformRowExpr(ParseState *pstate, RowExpr *r)
 {
-	RowExpr    *newr = makeNode(RowExpr);
+	RowExpr    *newr;
 	char		fname[16];
 	int			fnum;
 	ListCell   *lc;
 
+	/* If we already transformed this node, do nothing */
+	if (OidIsValid(r->row_typeid))
+		return (Node*) r;
+
+	newr = makeNode(RowExpr);
+
 	/* Transform the field expressions */
 	newr->args = transformExpressionList(pstate, r->args, pstate->p_expr_kind);
 
-- 
1.7.12.289.g0ce9864.dirty

