Calling this feature creep is quite a leap. It's true that the real
creep is having the payload at all, rather than not having it.
Not having the payload at all is like Santa showing up without his bag
of toys. Instead, you have to drive/fly to the North Pole, where he just
came from, to get them.
One person described stuffing the payload with the primary key of the
record being invalidated. That means the requirements have just gone
from holding at most a small, fixed number of entries, bounded by the
number of tables or other shared data structures, to holding a large
number of entries bounded only by the number of records in those
tables, which is usually much, much larger.
Now you're talking about making the payloads variable-size, which
means you need to do free space management within shared pages to
track how much space is free and available for reuse.
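Roughly the kind of per-page bookkeeping I mean; the structs and field
names below are invented for illustration, not anything in the tree:

/* Illustrative only -- not actual PostgreSQL structs. */
#include <stdint.h>

typedef struct PayloadPageHeader
{
    uint16_t    pd_lower;       /* end of the slot pointer array */
    uint16_t    pd_upper;       /* start of the payload data area */
    uint16_t    nslots;         /* number of slot pointers on the page */
    uint16_t    free_bytes;     /* space currently available for reuse */
} PayloadPageHeader;

typedef struct PayloadSlot
{
    uint16_t    offset;         /* byte offset of the payload in the page */
    uint16_t    length;         /* payload length; 0 means the slot is free */
} PayloadSlot;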
So we've gone from a simple hash table of fixed-size entries
containing an OID or "name" datum, where we expect the hash table to
fit in memory and a simple LRU can handle old pages that aren't part
of the working set, to something that's going to look a lot like a
database table: it has to handle reusing space in collections of
variable-size data and scale up to millions of entries.
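For contrast, the fixed-size case needs no free space management at
all, because every entry has the same shape. Something along these
lines (field names made up for illustration):

/* Illustrative only -- the simple fixed-size entry the simpler design assumes. */
#include <stdint.h>

#define NAMEDATALEN 64          /* default length of a "name" datum */

typedef uint32_t Oid;

typedef struct FixedEntry
{
    Oid     relid;                  /* table the message refers to */
    char    relname[NAMEDATALEN];   /* or a "name" datum */
} FixedEntry;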
And I note someone else in the thread was suggesting it needed ACID
properties, which makes space reuse even more complex and would need
something like vacuum to implement.
I think the OP was close. The structure can still be fixed length,
but maybe we can bump it to 8k (BLCKSZ)?
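As a sketch of what I have in mind (sizes and layout are my
assumption, not a worked-out proposal):

/* Illustrative only: a fixed-length entry padded out to one block. */
#include <stdint.h>

#define BLCKSZ      8192        /* default PostgreSQL block size */
#define NAMEDATALEN 64

typedef struct NotifyEntry
{
    int32_t     srcPid;         /* backend that queued the message */
    char        relname[NAMEDATALEN];
    char        payload[BLCKSZ - NAMEDATALEN - sizeof(int32_t)];
} NotifyEntry;                  /* roughly BLCKSZ per entry, modulo padding */

That keeps the queue simple (fixed-size slots, simple LRU), at the cost
of burning a full block per message whether the payload needs it or not.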
--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/