On 12/08/2014 09:21 PM, Andres Freund wrote:
I still think that just compressing the whole record if it's above a
certain size is going to be better than compressing individual
parts. Michael argued that that'd be complicated because of the varying
size of the required 'scratch space'. I don't buy that argument
though. It's easy enough to simply compress all the data in some fixed
chunk size. I.e. always compress 64kb in one go. If there's more,
compress that independently.

Doing it in fixed-size chunks doesn't help: you still have to hold onto all of the compressed data until it's written to the WAL buffers.

But you could just allocate a "large enough" scratch buffer, and give up if the result doesn't fit. If the compressed data doesn't fit in e.g. 3 * 8 kB, it didn't compress very well, so there's probably no point in compressing it anyway. Now, an exception to that might be a record that contains something other than page data, like a commit record with millions of subxids, but I think we could live with not compressing those, even though it would be beneficial to do so.
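To illustrate, here is a minimal sketch of that "bounded scratch buffer" idea. It uses zlib's compress2() purely as a stand-in for whatever compressor the patch actually uses (pglz), and the names SCRATCH_SIZE and try_compress_record() are made up for this example; the point is only the give-up-on-overflow logic.

    #include <stdbool.h>
    #include <stddef.h>
    #include <zlib.h>

    #define BLCKSZ        8192
    #define SCRATCH_SIZE  (3 * BLCKSZ)   /* give up beyond 3 * 8 kB */

    static char scratch[SCRATCH_SIZE];   /* fixed, preallocated scratch space */

    /*
     * Try to compress a whole WAL record payload into the scratch buffer.
     * Returns true and sets *complen on success; returns false if the
     * result would not fit (or would not shrink), in which case the caller
     * simply writes the record uncompressed.
     */
    static bool
    try_compress_record(const char *data, size_t len, size_t *complen)
    {
        uLongf destlen = SCRATCH_SIZE;

        if (compress2((Bytef *) scratch, &destlen,
                      (const Bytef *) data, (uLong) len,
                      Z_DEFAULT_COMPRESSION) != Z_OK)
            return false;        /* e.g. Z_BUF_ERROR: didn't fit in 3 * 8 kB */

        if (destlen >= len)
            return false;        /* no gain, not worth decompressing on replay */

        *complen = destlen;
        return true;
    }

The scratch buffer stays a fixed size, so there's no per-record allocation to size, and the rare record that compresses to more than 3 * 8 kB just goes out uncompressed.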

- Heikki


