Hi,

On 04/29/15 23:54, Robert Haas wrote:
> On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>> Sure, it's not an ultimate solution, but it might help a bit. I do have
>> other ideas how to optimize this, but in the planner every millisecond
>> counts. Looking at 'perf top', I'm seeing pglz_decompress() in the top 3.
>
> I suggested years ago that we should not compress data in
> pg_statistic.  Tom shot that down, but I don't understand why.  It
> seems to me that when we know data is extremely frequently accessed,
> storing it uncompressed makes sense.

I'm not convinced that not compressing the data is a good idea - I suspect it would only move the time to TOAST and increase memory pressure (both in general and in shared buffers). But I think that using a more efficient compression algorithm would help a lot.

For example, when profiling the multivariate stats patch (with multiple quite large histograms), pglz_decompress is #1 in the profile, occupying more than 30% of the time. After replacing it with LZ4, the data are a bit larger, but decompression drops to ~0.25% in the profile, and planning time drops proportionally.

It's not a silver bullet, but it would help a lot in those cases.
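
For illustration, here's a minimal sketch of what the change looks like at the call level, using the plain liblz4 API (LZ4_compress_default / LZ4_decompress_safe). The payload and buffer handling are made up for the demo - this is not backend code, just the round-trip a pglz replacement would have to wire in:

/*
 * Minimal LZ4 round-trip, assuming liblz4 (link with -llz4).  The payload
 * is made up; this only demonstrates the two calls that would stand in
 * for pglz_compress()/pglz_decompress().
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lz4.h>

int main(void)
{
    const char *src = "some large, repetitive statistics payload ...";
    int src_len = (int) strlen(src) + 1;

    /* LZ4_compressBound() gives the worst-case compressed size. */
    int max_dst = LZ4_compressBound(src_len);
    char *compressed = malloc(max_dst);
    char *decompressed = malloc(src_len);

    int comp_len = LZ4_compress_default(src, compressed, src_len, max_dst);
    if (comp_len <= 0)
        return 1;

    /* Decompression is the hot path in the planner profile above. */
    int dec_len = LZ4_decompress_safe(compressed, decompressed, comp_len, src_len);
    if (dec_len != src_len || memcmp(src, decompressed, src_len) != 0)
        return 1;

    printf("round-trip ok: %d bytes -> %d bytes compressed\n", src_len, comp_len);
    free(compressed);
    free(decompressed);
    return 0;
}

The decompression side is where the win is - LZ4_decompress_safe is far cheaper per byte than pglz_decompress, which matches what the profile shows.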


--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

