"Dann Corbit" <[EMAIL PROTECTED]> writes:
> Why not waste a bit of memory and make the row buffer the maximum
> possible length?
> E.g. for varchar(2000) allocate 2000 characters + size element and point
> to the start of that thing.

Surely you're not proposing that we store data on disk that way.

The real issue here is avoiding overhead while extracting columns out of
a stored tuple.  We could perhaps use a different, less space-efficient
format for temporary tuples in memory than we do on disk, but I don't
think that will help a lot.  The nature of O(N^2) bottlenecks is that
you have to kill them all --- for example, if we fix printtup and don't do
anything with ExecEvalVar, we can't do more than double the speed of
Steve's example, so it'll still be slow.  So we must have a solution for
the case where we are disassembling a stored tuple, anyway.
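
To make the cost concrete, here is a minimal sketch of why that happens.
This is a toy packed-row format with made-up names, not the real on-disk
HeapTuple layout or the real getattr code --- just enough to show that a
column's offset can't be known without walking everything before it:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy packed row: each column is a 4-byte length followed by its bytes,
     * so a column's offset is only known after skipping the columns before it. */
    typedef struct
    {
        int         natts;
        char       *data;
    } ToyTuple;

    /* O(j): skip columns 0..j-1 to reach column j, as an uncached lookup must. */
    static const char *
    toy_getattr(const ToyTuple *tup, int j, int *len)
    {
        const char *p = tup->data;

        for (int i = 0; i < j; i++)
        {
            int     l;

            memcpy(&l, p, sizeof(l));
            p += sizeof(l) + l;
        }
        memcpy(len, p, sizeof(*len));
        return p + sizeof(*len);
    }

    /* O(N): one left-to-right pass that records every column's position. */
    static void
    toy_deform(const ToyTuple *tup, const char **values, int *lens)
    {
        const char *p = tup->data;

        for (int i = 0; i < tup->natts; i++)
        {
            memcpy(&lens[i], p, sizeof(int));
            values[i] = p + sizeof(int);
            p += sizeof(int) + lens[i];
        }
    }

    int
    main(void)
    {
        const char *cols[] = {"alpha", "bravo", "charlie"};
        ToyTuple    tup = {3, malloc(64)};
        char       *p = tup.data;
        const char *values[3];
        int         lens[3];

        /* Pack the three columns into the toy row. */
        for (int i = 0; i < tup.natts; i++)
        {
            int     l = (int) strlen(cols[i]);

            memcpy(p, &l, sizeof(l));
            memcpy(p + sizeof(l), cols[i], l);
            p += sizeof(l) + l;
        }

        /* Per-column extraction: every call re-walks the prefix. */
        for (int j = 0; j < tup.natts; j++)
        {
            int         len;
            const char *val = toy_getattr(&tup, j, &len);

            printf("attr %d = %.*s\n", j, len, val);
        }

        /* Single deforming pass, then plain array indexing. */
        toy_deform(&tup, values, lens);
        printf("deformed attr 2 = %.*s\n", lens[2], values[2]);

        free(tup.data);
        return 0;
    }

Fetching all N columns one call at a time costs N(N-1)/2 skips in total,
versus N for the single deforming pass --- which is the quadratic term we
keep running into.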

I have been sitting here toying with a related idea, which is to use the
heap_deformtuple code I suggested before to form an array of pointers to
Datums in a specific tuple (we could probably use the TupleTableSlot
mechanisms to manage the memory for these).  Then subsequent accesses to
individual columns would just need an array-index operation, not a
nocachegetattr call.  The trick with that would be that if only a few
columns are needed out of a row, it might be a net loss to compute the
Datum values for all columns.  How could we avoid slowing that case down
while making the wide-tuple case faster?
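
One way to split the difference --- offered purely as a sketch in the same
toy format, with made-up names, not a claim about how the real slot code
would have to look --- is to deform lazily: remember how many leading
columns have been located so far, extend that prefix only up to the
highest column actually requested, and serve repeat requests straight
from the cached array.  A narrow access then pays only for the columns up
to the one it touches, while a fully-scanned wide tuple degrades into the
single O(N) pass:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Same toy packed-row format as above: 4-byte length, then the bytes. */
    typedef struct
    {
        int         natts;
        char       *data;
    } ToyTuple;

    /* Slot caching deformed columns.  Columns below nvalid are array reads. */
    typedef struct
    {
        const ToyTuple *tuple;
        int         nvalid;         /* leading columns already deformed */
        const char *next;           /* resume point in the packed data */
        const char **values;
        int        *lens;
    } ToySlot;

    static void
    toy_slot_init(ToySlot *slot, const ToyTuple *tup)
    {
        slot->tuple = tup;
        slot->nvalid = 0;
        slot->next = tup->data;
        slot->values = malloc(tup->natts * sizeof(const char *));
        slot->lens = malloc(tup->natts * sizeof(int));
    }

    /* Fetch column j, deforming only as far as needed, caching the results. */
    static const char *
    toy_slot_getattr(ToySlot *slot, int j, int *len)
    {
        while (slot->nvalid <= j)
        {
            int     i = slot->nvalid;
            int     l;

            memcpy(&l, slot->next, sizeof(l));
            slot->lens[i] = l;
            slot->values[i] = slot->next + sizeof(l);
            slot->next += sizeof(l) + l;
            slot->nvalid++;
        }
        *len = slot->lens[j];
        return slot->values[j];
    }

    int
    main(void)
    {
        const char *cols[] = {"42", "some wide text we never look at"};
        ToyTuple    tup = {2, malloc(64)};
        char       *p = tup.data;
        ToySlot     slot;
        const char *val;
        int         len;

        /* Pack the two columns into the toy row. */
        for (int i = 0; i < tup.natts; i++)
        {
            int     l = (int) strlen(cols[i]);

            memcpy(p, &l, sizeof(l));
            memcpy(p + sizeof(l), cols[i], l);
            p += sizeof(l) + l;
        }

        toy_slot_init(&slot, &tup);

        /* Only column 0 is asked for, so column 1 is never deformed. */
        val = toy_slot_getattr(&slot, 0, &len);
        printf("attr 0 = %.*s, deformed %d of %d columns\n",
               len, val, slot.nvalid, tup.natts);

        free(slot.values);
        free(slot.lens);
        free(tup.data);
        return 0;
    }

The obvious weakness is an access pattern that jumps straight to a late
column: it still pays for the whole prefix, so this only helps when the
requested columns cluster toward the front or when the same columns are
hit repeatedly.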

                        regards, tom lane
