On Tue, Feb 4, 2014 at 12:39 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> Now there is approximately 1.4~5% CPU gain for
> "hundred tiny fields, half nulled" case

I don't want to advocate too strongly for this patch because, number
one, Amit is a colleague and, number two and more importantly, I
can't claim to be an expert on compression.  That said, I
think these numbers are starting to look awfully good.  The only
remaining regressions are in the cases where a large fraction of the
tuple turns over, and they're not that big even then.  The two *worst*
tests now seem to be "hundred tiny fields, all changed" and "hundred
tiny fields, half changed".  For the "all changed" case, the median
unpatched time is 16.32 and the median patched time is 16.93, a loss
of under 4%; for the "half changed" case, the median unpatched time
is 16.58 and the median patched time is 17.05, a loss of under 3%.
Both cases show minimal change in WAL volume.
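
(Spelling out the arithmetic from the medians: (16.93 - 16.32) /
16.32 ≈ 3.8%, and (17.05 - 16.58) / 16.58 ≈ 2.8%.)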

Meanwhile, in friendlier cases, like "one short and one long field, no
change", we're seeing big improvements.  That particular case shows a
speedup of 21% and a WAL reduction of 36%.  That's a pretty big deal,
and I think not unrepresentative of many real-world workloads.  Some
might well do better, having either more or longer unchanged fields.
Assuming the logic isn't buggy (a point in need of further study),
I'm starting to feel like we want to have this.  And I might even be
tempted to remove the table-level off switch.
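
To make the intuition concrete, here is a toy sketch of the general
idea (my own illustration, not code from the patch, and the names and
tuple-image layout are made up; the patch's actual encoding may
differ): compute how many leading and trailing bytes the old and new
tuple images share, and log only the bytes in between.  Long unchanged
fields then cost almost nothing to log:

#include <stdio.h>

/* Compute the byte ranges that must be WAL-logged for an update. */
static void
delta_ranges(const char *old_img, size_t old_len,
             const char *new_img, size_t new_len,
             size_t *prefix, size_t *suffix)
{
    size_t max_common = old_len < new_len ? old_len : new_len;
    size_t p = 0;
    size_t s = 0;

    /* Count matching bytes from the front... */
    while (p < max_common && old_img[p] == new_img[p])
        p++;
    /* ...and from the back, without overlapping the prefix. */
    while (s < max_common - p &&
           old_img[old_len - 1 - s] == new_img[new_len - 1 - s])
        s++;

    *prefix = p;
    *suffix = s;
}

int
main(void)
{
    /*
     * Mimics "one short and one long field, no change": only a few
     * header bytes differ between the old and new tuple images.
     */
    const char old_img[] = "HDR1|short|aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    const char new_img[] = "HDR2|short|aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    size_t new_len = sizeof(new_img) - 1;
    size_t prefix;
    size_t suffix;

    delta_ranges(old_img, sizeof(old_img) - 1, new_img, new_len,
                 &prefix, &suffix);

    /* Only the middle (new_len - prefix - suffix) bytes need logging. */
    printf("log %zu of %zu bytes (prefix %zu, suffix %zu unchanged)\n",
           new_len - prefix - suffix, new_len, prefix, suffix);
    return 0;
}

On an image like the one above this reports logging 1 of 43 bytes,
which is the shape of the win in the friendly cases.  When most of the
tuple turns over, the matched prefix and suffix collapse to near zero
and you pay the comparison cost for nothing, which is presumably where
the small regressions come from.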

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

