On Sat, Feb 15, 2014 at 8:21 PM, Andres Freund <and...@2ndquadrant.com> wrote:
> Hi,
>
>
> Some quick review comments:
Thanks for the review. I shall address/reply to the comments along with the
updated version, in which I am planning to fix a bug in this code (right now
I am preparing a test to reproduce it).
Bug:
A tag can encode a match of at most 273 bytes, but this patch is not
accounting for that limit.
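For reference, the 273-byte limit comes from the pglz-style tag format: the
4-bit length code covers matches of 3..17 bytes, and the 0x0f code selects a
third byte that extends the length by up to 255, so the longest match one tag
can encode is 18 + 255 = 273 (PGLZ_MAX_MATCH). Below is a minimal,
self-contained sketch of how an encoder could split a longer match into
several tags; put_tag/put_match and the splitting policy are only my
illustration, not the patch code:

#include <assert.h>

#define PGLZ_MIN_MATCH    3     /* shortest match the tag format can encode */
#define PGLZ_MAX_MATCH  273     /* 18 + 255: longest match one tag can encode */
#define PGLZ_MAX_OFFSET 0x0fff  /* offset field is 12 bits */

/*
 * Write one pglz-style tag: byte 0 holds the high 4 offset bits plus a
 * 4-bit length code, byte 1 the low 8 offset bits, and an optional byte 2
 * extends the length by up to 255.  Control-bit bookkeeping is omitted.
 */
static unsigned char *
put_tag(unsigned char *dst, int off, int len)
{
    assert(len >= PGLZ_MIN_MATCH && len <= PGLZ_MAX_MATCH);
    assert(off > 0 && off <= PGLZ_MAX_OFFSET);

    if (len > 17)
    {
        *dst++ = (unsigned char) (((off & 0xf00) >> 4) | 0x0f);
        *dst++ = (unsigned char) (off & 0xff);
        *dst++ = (unsigned char) (len - 18);
    }
    else
    {
        *dst++ = (unsigned char) (((off & 0xf00) >> 4) | (len - 3));
        *dst++ = (unsigned char) (off & 0xff);
    }
    return dst;
}

/*
 * Split a match longer than PGLZ_MAX_MATCH into several tags, making sure
 * the final piece never drops below PGLZ_MIN_MATCH.
 */
static unsigned char *
put_match(unsigned char *dst, int off, int len)
{
    while (len > PGLZ_MAX_MATCH)
    {
        int     chunk = PGLZ_MAX_MATCH;

        if (len - chunk < PGLZ_MIN_MATCH)
            chunk = len - PGLZ_MIN_MATCH;
        dst = put_tag(dst, off, chunk);
        len -= chunk;
    }
    return put_tag(dst, off, len);
}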

> I have to admit, I have serious doubts about this approach. I have a
> very hard time believing this won't cause performance regression in many
> common cases...

So far I have mainly been focusing on the worst case (i.e. at the boundary of
the compression ratio), assuming that most other cases would do well. However,
I shall produce data for more common cases as well.
Please let me know if you have any specific case in mind where this approach
will not work well.

>More importantly I don't think doing the compression on
> this level is that interesting. I know Heikki argued for it, but I think
> extending the bitmap that's computed for HOT to cover all columns and
> doing this on a column level sounds much more sensible to me.

We previously tried doing this at column boundaries, but the main problem
turned out to be the worst cases, where we spend time extracting values from
the tuples based on column boundaries and only later find that the data is
not compressible.
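
(Purely as an illustration of how that worst case can be bounded, rather than
anything in the current patch: the kind of guard I would consider is an early
bailout similar in spirit to pglz's first_success_by strategy parameter. The
function name and the 1/4 threshold below are assumptions:)

#include <stdbool.h>

/*
 * Illustrative only: give up on delta encoding if a sizable prefix of the
 * new tuple has been scanned without a single match against the old tuple,
 * so the incompressible worst case pays only a bounded cost before we fall
 * back to logging the whole tuple.
 */
static bool
worth_continuing(int bytes_scanned, int input_len, int matches_found)
{
    if (matches_found == 0 && bytes_scanned >= input_len / 4)
        return false;           /* looks incompressible: bail out early */
    return true;
}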

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

