On Mon, 2004-11-15 at 20:53 -0500, Tom Lane wrote:
> I think the SELECT limit should be MaxTupleAttributeNumber not
> MaxHeapAttributeNumber.

Ah, true -- I forgot about the distinction...
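
(For reference, here is a rough sketch of how the two limits are
defined in src/include/access/htup.h -- values as of this era, so
check the header itself for the authoritative definitions and
comments:)

/*
 * MaxTupleAttributeNumber bounds the attributes in any tuple we pass
 * around at runtime (e.g. intermediate sort or targetlist results);
 * MaxHeapAttributeNumber bounds the columns in an on-disk table and
 * is kept somewhat smaller so that working tuples carrying extra
 * resjunk columns (e.g. CTID during UPDATE) still fit.
 */
#define MaxTupleAttributeNumber 1664	/* 8 * 208 */
#define MaxHeapAttributeNumber	1600	/* 8 * 200 */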

> What I think needs to happen is to check p_next_resno at some point
> after the complete tlist has been built.

Attached is a revised patch -- I just did the check at the end of
transformStmt(), since otherwise we'd need to duplicate the check in
the various places where resnos are assigned or incremented (set
operation statements, normal selects, updates, and so on). This is somewhat
fragile in that we usually assign p_next_resno to an AttrNumber and only
check for overflow at the end of the analysis phase, but it seems safe
for the moment...
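
To make the fragility concrete: p_next_resno is a plain int in
ParseState, but resnos eventually land in AttrNumber (int16) fields,
so an unchecked assignment would just truncate silently. A standalone
sketch of that failure mode (illustration only, not the actual parser
code):

#include <stdio.h>
#include <stdint.h>

typedef int16_t AttrNumber;		/* mirrors the typedef in attnum.h */

int
main(void)
{
	int			next_resno = 70000;	/* hypothetical runaway counter */
	AttrNumber	attno = next_resno;	/* silently wraps on typical
									 * platforms: 70000 - 65536 = 4464 */

	printf("attno = %d\n", attno);	/* prints 4464, not 70000 */
	return 0;
}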

BTW I figure this should be backpatched to REL7_4_STABLE. Barring any
objections I will do that (and apply to HEAD) this evening.

-Neil

--- src/backend/commands/tablecmds.c
+++ src/backend/commands/tablecmds.c
@@ -681,6 +681,23 @@
 	int			child_attno;
 
 	/*
+	 * Check for and reject tables with too many columns. We perform
+	 * this check relatively early for two reasons: (a) we don't run
+	 * the risk of overflowing an AttrNumber in subsequent code; (b) an
+	 * O(n^2) algorithm is okay if we're processing <= 1600 columns,
+	 * but could take minutes to execute if the user attempts to
+	 * create a table with hundreds of thousands of columns.
+	 *
+	 * Note that we also need to check that we do not exceed this
+	 * figure after including columns from inherited relations.
+	 */
+	if (list_length(schema) > MaxHeapAttributeNumber)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_COLUMNS),
+				 errmsg("tables can have at most %d columns",
+						MaxHeapAttributeNumber)));
+
+	/*
 	 * Check for duplicate names in the explicit list of attributes.
 	 *
 	 * Although we might consider merging such entries in the same way that
@@ -979,6 +996,16 @@
 		}
 
 		schema = inhSchema;
+
+		/*
+		 * Check that we haven't exceeded the legal # of columns after
+		 * merging in inherited columns.
+		 */
+		if (list_length(schema) > MaxHeapAttributeNumber)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("tables can have at most %d columns",
+							MaxHeapAttributeNumber)));
 	}
 
 	/*
--- src/backend/parser/analyze.c
+++ src/backend/parser/analyze.c
@@ -396,6 +396,18 @@
 	result->querySource = QSRC_ORIGINAL;
 	result->canSetTag = true;
 
+	/*
+	 * Check that we did not produce too many resnos; at the very
+	 * least we cannot allow more than 2^15, since that would exceed
+	 * the range of an AttrNumber (an int16). It seems safest to use
+	 * MaxTupleAttributeNumber.
+	 */
+	if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("target lists can have at most %d entries",
+						MaxTupleAttributeNumber)));
+
 	return result;
 }
 