"Stephen R. van den Berg" <s...@cuci.nl> writes:
> What seems to be hurting the most is the 1MB upper limit.  What is the
> rationale behind that limit?

The argument was that compressing/decompressing such large chunks would
require a lot of CPU effort; also it would defeat attempts to fetch
subsections of a large string.  In the past we've required people to
explicitly "ALTER TABLE SET STORAGE external" if they wanted to make
use of the substring-fetch optimization, but it was argued that the
1MB limit would make that optimization more likely to work
automatically.
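
For context, a minimal sketch of the two pieces involved, using a
hypothetical table "docs" with a text column "body" (the names are
illustrative only, not from this thread):

    -- Store the column uncompressed and out of line, so substring
    -- fetches only have to read the needed TOAST chunks.
    ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;

    -- With EXTERNAL storage this can fetch just the leading chunks
    -- instead of detoasting and decompressing the whole value.
    SELECT substr(body, 1, 100) FROM docs WHERE id = 1;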

I'm not entirely convinced by Alex's analysis anyway; the only way
those 39 large values explain the size difference is if they are
*tremendously* compressible, like almost all zeroes.  The toast
compressor isn't so bright that it's likely to get 10X compression
on typical data.
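
One way to test that would be to compare stored size against logical
length for the suspect values; a sketch, again against the
hypothetical "docs" table (pg_column_size() reports the on-disk,
possibly compressed size, octet_length() the uncompressed length):

    -- A stored-to-logical ratio near 1:10 would support the
    -- "almost all zeroes" theory.
    SELECT id,
           pg_column_size(body) AS stored_bytes,
           octet_length(body)   AS logical_bytes
    FROM   docs
    ORDER  BY octet_length(body) DESC
    LIMIT  39;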

                        regards, tom lane
