Guaranteed compression of large data fields is the client's responsibility. The database should feel free to compress behind the scenes if that turns out to be desirable, but expecting it to compress is, in my opinion, wrong.

That said, I'm wondering why compression has to be a problem here, or why >1 Mbyte is a reasonable compromise? I missed the original thread that led to the 8.4 change. Transparent file system compression doesn't seem to have limits like "files must be less than 1 Mbyte to be compressed", and those file systems don't exhibit poor performance. I remember back in the 386/486 days I would always DriveSpace-compress everything, because hard disks were so slow then that DriveSpace actually increased performance.

The TOAST tables already give a sort of block-addressable scheme. Compression could be done per block or per set of blocks, allowing a seek to land in the right block, and if compression doesn't seem to be working for the first few blocks, the later blocks could be stored uncompressed. Or is that too complicated compared to what we have now? :-)
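
To make the idea concrete, here is a rough sketch of what I mean. This is not the existing TOAST code; zlib is used purely for illustration, and the block size, the give-up threshold, and all of the struct and function names are invented for the example:

/*
 * Rough sketch only: per-block compression with a raw fallback.
 * zlib is used for illustration; BLOCK_SIZE, GIVE_UP_AFTER, StoredBlock
 * and the function names are all invented for this example.
 */
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define BLOCK_SIZE    8192   /* assumed logical block size */
#define GIVE_UP_AFTER 4      /* assumed: stop trying after 4 incompressible blocks */

typedef struct
{
    size_t          raw_len;        /* uncompressed length of this block */
    size_t          stored_len;     /* bytes actually stored */
    int             is_compressed;  /* 0 = stored raw, 1 = zlib-compressed */
    unsigned char  *data;
} StoredBlock;

/* Compress one block if that saves space; otherwise keep the raw bytes. */
static int
store_block(StoredBlock *blk, const unsigned char *src, size_t len)
{
    uLongf          clen = compressBound(len);
    unsigned char  *cbuf = malloc(clen);

    if (cbuf == NULL)
        return -1;
    if (compress(cbuf, &clen, src, len) == Z_OK && clen < len)
    {
        blk->data = cbuf;
        blk->stored_len = clen;
        blk->is_compressed = 1;
    }
    else
    {
        free(cbuf);             /* compression failed or did not help */
        if ((blk->data = malloc(len)) == NULL)
            return -1;
        memcpy(blk->data, src, len);
        blk->stored_len = len;
        blk->is_compressed = 0;
    }
    blk->raw_len = len;
    return 0;
}

/* Random access: each block decompresses independently of the others. */
static int
fetch_block(const StoredBlock *blk, unsigned char *dst)
{
    if (blk->is_compressed)
    {
        uLongf      dlen = blk->raw_len;

        return (uncompress(dst, &dlen, blk->data, blk->stored_len) == Z_OK) ? 0 : -1;
    }
    memcpy(dst, blk->data, blk->raw_len);
    return 0;
}

/* Store a whole value; after a few incompressible blocks, store the rest raw. */
static int
store_value(StoredBlock *blocks, const unsigned char *src, size_t total)
{
    size_t      nblocks = (total + BLOCK_SIZE - 1) / BLOCK_SIZE;
    size_t      i;
    int         poor = 0;

    for (i = 0; i < nblocks; i++)
    {
        size_t      len = (i == nblocks - 1) ? total - i * BLOCK_SIZE : BLOCK_SIZE;

        if (poor >= GIVE_UP_AFTER)
        {
            /* gave up: keep all remaining blocks uncompressed */
            if ((blocks[i].data = malloc(len)) == NULL)
                return -1;
            memcpy(blocks[i].data, src + i * BLOCK_SIZE, len);
            blocks[i].raw_len = blocks[i].stored_len = len;
            blocks[i].is_compressed = 0;
        }
        else
        {
            if (store_block(&blocks[i], src + i * BLOCK_SIZE, len) != 0)
                return -1;
            if (!blocks[i].is_compressed)
                poor++;
        }
    }
    return 0;
}

The point is only that a small amount of per-block bookkeeping (a length plus a compressed/raw flag) keeps random access cheap and lets the compressor bail out early on data that isn't compressing.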

Cheers,
mark

--
Mark Mielke <m...@mielke.cc>

