On 11/10/13 19:06, Andres Freund wrote:
On 2013-10-11 09:22:50 +0530, Amit Kapila wrote:
I think it will be difficult to prove, for any compression
algorithm, that it compresses in most scenarios.
In many cases the WAL will not be reduced at all, and tps can
even come down if the data is incompressible, because any
compression algorithm will have to try to compress the data and
will burn some CPU for that, which in turn reduces tps.
Then those concepts maybe aren't such a good idea after all. Storing
lots of compressible data in an uncompressed fashion isn't all that
common a use case. I most certainly don't want postgres to optimize for
blank-padded data, especially if it can hurt other scenarios. Just not
enough benefit.
That said, I actually have relatively high hopes for compressing full
page writes. There is often enough repetitiveness between rows on the
same page that it should be useful outside of such strange scenarios.
But maybe pglz is just not a good fit for this; it really isn't a very
good algorithm in this day and age.

Hm. There is a clear benefit for compressible data and clearly
no benefit for incompressible data.
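
For the incompressible case the obvious escape hatch is to fall back
to the raw page whenever compression doesn't actually save anything.
A minimal sketch of that path, with zlib standing in for pglz purely
for illustration (BLCKSZ and the calling convention here are assumed,
not the real WAL plumbing):

#include <stddef.h>
#include <zlib.h>

#define BLCKSZ 8192                     /* assumed default page size */

/*
 * Try to compress a full-page image before it goes into a WAL record.
 * Returns the compressed length, or 0 when the page should be stored
 * raw (compression failed or didn't actually shrink the page).
 * "out" must hold at least compressBound(BLCKSZ) bytes.
 */
static size_t
compress_fpi(const char *page, char *out, size_t outsz)
{
    uLongf      clen = (uLongf) outsz;

    if (compress2((Bytef *) out, &clen,
                  (const Bytef *) page, BLCKSZ, Z_BEST_SPEED) != Z_OK)
        return 0;               /* treat any failure as incompressible */

    return (clen < BLCKSZ) ? (size_t) clen : 0;
}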

How about letting autovacuum "taste" the compressibility of
pages on a per relation/index basis and set a flag that triggers
this functionality where it provides a benefit?

Not hugely more magical than figuring out whether the data ends up
in the heap or in a TOAST table, as is done now.
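
The "tasting" itself could be pretty cheap. A rough sketch, again
with zlib in place of pglz, and with a sample size and threshold
that are just made-up numbers:

#include <stdbool.h>
#include <stddef.h>
#include <zlib.h>

#define BLCKSZ          8192    /* assumed page size */
#define TASTE_PAGES       16    /* hypothetical sample size per relation */
#define TASTE_RATIO     0.75    /* flag only if we'd save at least 25% */

/*
 * Compress a small sample of a relation's pages and decide whether
 * the relation looks worth compressing at all.  Autovacuum could run
 * this and record the verdict in a per-relation flag.
 */
static bool
relation_looks_compressible(const char (*pages)[BLCKSZ], int npages)
{
    size_t      raw = 0;
    size_t      packed = 0;
    char        out[BLCKSZ + 1024];     /* >= compressBound(BLCKSZ) */
    int         i;

    for (i = 0; i < npages && i < TASTE_PAGES; i++)
    {
        uLongf  clen = sizeof(out);

        raw += BLCKSZ;
        if (compress2((Bytef *) out, &clen,
                      (const Bytef *) pages[i], BLCKSZ,
                      Z_BEST_SPEED) == Z_OK && clen < BLCKSZ)
            packed += clen;             /* page got smaller */
        else
            packed += BLCKSZ;           /* count incompressible pages raw */
    }

    return raw > 0 && (double) packed / (double) raw <= TASTE_RATIO;
}

The sample size and threshold would presumably want to be reloptions
or GUCs, but that's a detail.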

--
Jesper

