Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> On second thought, expanding AttrNumber to int32, wholesale, might not
>> be a good idea,

> No, it wouldn't.  For one thing it'd be a protocol break --- column
> numbers are int16 ---

I wasn't planning to change that.
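
For reference, the relevant definitions (quoting roughly from
src/include/access/attnum.h and src/include/access/htup.h; the comments
are paraphrased):

    /* access/attnum.h: attribute numbers are int16, matching the int16
     * column numbers and field counts in the frontend/backend protocol */
    typedef int16 AttrNumber;

    #define InvalidAttrNumber   0
    #define MaxAttrNumber       32767

    /* access/htup.h: MaxTupleAttributeNumber is bounded by the tuple
     * header layout; the null bitmap (1 bit per column) plus the fixed
     * overhead must fit in t_hoff, which is a uint8 */
    #define MaxTupleAttributeNumber 1664    /* 8 * 208 */
    #define MaxHeapAttributeNumber  1600    /* 8 * 200 */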

> and for another, we'd have terrible performance
> problems with such wide rows.

Yes, we probably would :-). Though if there's any nasty O(n^2) behavior left in there, we should look into optimizing it anyway, to speed up more reasonably sized queries in the range of a few hundred columns.

> Actually rows are supposed to be limited
> to ~1600 columns, anyway, because of HeapTupleHeader limitations.

The trick is that that limitation doesn't apply to the intermediate virtual tuples we move around in the executor. Those are just arrays of Datums, and can have more than MaxTupleAttributeNumber attributes, as long as enough of them are projected away, bringing the count below the limit, before the tuple is returned to the client or materialized into a HeapTuple or MinimalTuple in the executor.
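
To illustrate, a virtual tuple in a TupleTableSlot is just a pair of
parallel arrays, with no null bitmap or t_hoff field that could
overflow; the limit only bites when the slot is materialized. Abridged
from executor/tuptable.h and heap_form_tuple() in
access/common/heaptuple.c:

    typedef struct TupleTableSlot
    {
        ...
        Datum      *tts_values;     /* current per-attribute values */
        bool       *tts_isnull;     /* current per-attribute isnull flags */
        ...
    } TupleTableSlot;

    /* in heap_form_tuple(), at materialization time: */
    if (numberOfAttributes > MaxTupleAttributeNumber)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_COLUMNS),
                 errmsg("number of columns (%d) exceeds limit (%d)",
                        numberOfAttributes, MaxTupleAttributeNumber)));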

> Apparently you've found a path where that restriction isn't enforced
> correctly, but I haven't seen the referenced message yet ...

Enforcing the limit for virtual tuples as well, and checking for it in the planner, is one option, but it would cripple the ability to join extremely wide tables. For example, if you had 10 tables with 200 columns each, you couldn't join them together even for the purposes of a COUNT(*). Granted, that's not a very common thing to do (this is the first time this bug has been reported, after all), but I'd prefer to keep the capability if possible.
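
Just to make that option concrete, the planner-side check would look
something like this (a hypothetical sketch; there is no
check_joinrel_width function today, and I'm using reltargetlist as a
stand-in for whatever set of columns the join relation has to emit):

    /* Hypothetical: reject any join relation that must carry more
     * attributes than a materialized tuple could hold, even if they
     * would all be projected away before reaching a HeapTuple. */
    static void
    check_joinrel_width(PlannerInfo *root, RelOptInfo *joinrel)
    {
        if (list_length(joinrel->reltargetlist) > MaxTupleAttributeNumber)
            ereport(ERROR,
                    (errcode(ERRCODE_TOO_MANY_COLUMNS),
                     errmsg("joins can have at most %d columns",
                            MaxTupleAttributeNumber)));
    }

With 10 tables of 200 columns each, that's 2000 attributes in the join
relation, so this check would reject the query outright, even though the
COUNT(*) never materializes any of them.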

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
