On 06/24/2013 01:50 PM, Tom Lane wrote:
> The point of what I was suggesting isn't to conserve storage, but to
> reduce downtime during a schema change.  Remember that a rewriting ALTER
> TABLE locks everyone out of that table for a long time.

Right, but I'm worried about the "surprise!" factor.  That is, if we
make the first default "free" by using a magic value, then a later SET
DEFAULT on that column is going to have some very surprising results,
since suddenly the whole table needs to be rewritten to materialize the
old default.  In many use cases this would still be a net win, since 80%
of the time users don't change defaults after column creation.  But we'd
have to make it much less surprising somehow.  Also, for the reason Tom
pointed out, the optimization would only work with NOT NULL columns ...
leading to another potential surprise when the column is later marked
NULLable.
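
To sketch the scenario (hypothetical table and behavior, assuming the
magic-value optimization under discussion were implemented):

```sql
-- Fast path (proposed): the first default is stored as a magic value in
-- the catalog, so existing rows are not touched -- no table rewrite.
ALTER TABLE accounts ADD COLUMN status text NOT NULL DEFAULT 'active';

-- The surprise: changing the default later would force the old default
-- to be written into every existing row, rewriting the whole table
-- under an exclusive lock.
ALTER TABLE accounts ALTER COLUMN status SET DEFAULT 'pending';
```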

> So unless we consider that many-hundreds-of-columns is a design center
> for general purpose use of Postgres, we should be evaluating this patch
> strictly on its usefulness for more typical table widths.  And my take
> on that is that (1) lots of columns isn't our design center (for the
> reasons you mentioned among others), and (2) the case for the patch
> looks pretty weak otherwise.

Well, actually, hundreds of columns is reasonably common for a certain
class of users (ERP, CRM, etc.).  If we could handle that use case very
efficiently, it would win us some users, since other RDBMSes don't.
However, there are multiple issues with having hundreds of columns, of
which storage optimization is only one ... and probably the smallest one
at that.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
