On Fri, Dec 12, 2014 at 10:04 AM, Andres Freund <and...@anarazel.de> wrote:
>> Note that autovacuum and fsync are off.
>> =# select phase, user_diff, system_diff,
>> pg_size_pretty(pre_update - pre_insert),
>> pg_size_pretty(post_update - pre_update) from results;
>>        phase        | user_diff | system_diff | pg_size_pretty | pg_size_pretty
>> --------------------+-----------+-------------+----------------+----------------
>>  Compression FPW    | 42.990799 |    0.868179 | 429 MB         | 567 MB
>>  No compression     | 25.688731 |    1.236551 | 429 MB         | 727 MB
>>  Compression record | 56.376750 |    0.769603 | 429 MB         | 566 MB
>> (3 rows)
>> If we do record-level compression, we'll need to be very careful in
>> defining a lower bound so that we don't eat unnecessary CPU resources,
>> perhaps something that should be controlled with a GUC. I presume the
>> same holds true for the upper bound.
>
> Record level compression pretty obviously would need a lower boundary
> for when to use compression. It won't be useful for small heapam/btree
> records, but it'll be rather useful for large multi_insert, clean or
> similar records...

Unless I'm missing something, this test is showing that FPW
compression saves 160MB of WAL (727MB vs. 567MB) for 17.3 seconds of
CPU time, as against master.  And compressing the whole record saves a
further 1MB of WAL for a further 13.39 seconds of CPU time.  That
makes compressing the whole record sound like a pretty terrible idea -
even if you get more benefit by reducing the lower boundary, you're
still burning a ton of extra CPU time for almost no gain on the larger
records.  Ouch!

(Of course, I'm assuming that Michael's patch is reasonably efficient,
which might not be true.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

