On 2014-01-06 12:40:25 -0500, Robert Haas wrote:
> On Mon, Jan 6, 2014 at 11:47 AM, Andres Freund <and...@2ndquadrant.com> wrote:
> > On 2014-01-06 11:08:41 -0500, Robert Haas wrote:
> > Yea. But at least it would fail reliably instead of just under
> > concurrency and other strange circumstances - and there'd be a safe way
> > out. Currently there seem to be all sorts of odd behaviour possible.
> >
> > I simply don't have a better idea :(
> 
> Is "forcibly detoast everything" a complete no-go?  I realize there
> are performance concerns with that approach, but I'm not sure how
> realistic a worry it actually is.

The scenario I am primarily worried about is turning a record assignment
that previously took up to BLOCK_SIZE plus some slop of memory into
something that can take up to a gigabyte. That's a pretty damn hefty
change. And there's no good way of preventing it, short of declaring a
separate variable for each column that is actually needed, which IMNSHO
isn't really a solution.
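
To make that concrete, here is a minimal sketch of the case; the table
big_t, its payload column, and the function f() are hypothetical names
for illustration, not anything from an actual patch:

CREATE TABLE big_t (
    id      int PRIMARY KEY,
    payload text   -- large values are stored out of line via TOAST
);

CREATE FUNCTION f() RETURNS int AS $$
DECLARE
    rec big_t%ROWTYPE;
BEGIN
    -- Today the assignment just copies the toast pointer, so memory
    -- use stays around BLOCK_SIZE plus slop.  If the assignment
    -- forcibly detoasted everything, payload (up to ~1GB) would be
    -- expanded here even though it is never read.
    SELECT * INTO rec FROM big_t WHERE id = 1;
    RETURN rec.id;
END;
$$ LANGUAGE plpgsql;

The only way around that would be one variable per column you actually
want, e.g. SELECT id INTO v_id FROM big_t WHERE id = 1; - which is
exactly the non-solution mentioned above.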

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

