On Sun, May 11, 2014 at 7:30 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 30 August 2013 04:55, Fujii Masao <masao.fu...@gmail.com> wrote:
>
>> My idea is very simple: just compress FPWs, because FPWs are a big
>> part of WAL. I used pglz_compress() as the compression method, but
>> you might think another method is better; we can add something like
>> an FPW-compression hook for that later. The patch adds a new GUC
>> parameter, but I'm thinking of merging it into the full_page_writes
>> parameter to avoid increasing the number of GUCs. That is, I'm
>> thinking of changing full_page_writes so that it can accept the new
>> value 'compress'.
>
>> * Result
>> [tps]
>> 1386.8 (compress_backup_block = off)
>> 1627.7 (compress_backup_block = on)
>
>> [the amount of WAL generated during the pgbench run]
>> 4302 MB (compress_backup_block = off)
>> 1521 MB (compress_backup_block = on)
>
> Compressing FPWs definitely makes sense for bulk actions.
>
> I'm worried about the loss of performance from greatly elongated
> transaction response times immediately after a checkpoint, which were
> already a problem. I'd be interested to look at the response time
> curves there.
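
To spell out the mechanism: the core of the patch is presumably a
per-block compress-or-fall-back decision, i.e. store the compressed
image only when it is actually smaller than the raw page, and flag the
record so that redo knows which form it got. Below is a minimal
standalone sketch of that decision, with zlib's compress2() standing in
for the server-internal pglz_compress(); the type and function names
are invented for illustration and are not from the actual patch.

    /* Sketch: compress a full-page image, falling back to the raw
     * page when compression does not save space.  zlib stands in
     * for pglz; all names here are illustrative. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    #define BLCKSZ 8192            /* PostgreSQL's default block size */

    typedef struct
    {
        int    is_compressed;      /* which form the image is stored in */
        size_t len;                /* bytes actually stored */
        char   data[BLCKSZ];
    } BackupBlockImage;

    static void
    try_compress_block(const char *page, BackupBlockImage *img)
    {
        uLongf clen = sizeof(img->data);

        if (compress2((Bytef *) img->data, &clen,
                      (const Bytef *) page, BLCKSZ,
                      Z_BEST_SPEED) == Z_OK && clen < BLCKSZ)
        {
            img->is_compressed = 1;
            img->len = clen;
        }
        else
        {
            /* incompressible page: store it verbatim, as today */
            img->is_compressed = 0;
            img->len = BLCKSZ;
            memcpy(img->data, page, BLCKSZ);
        }
    }

    int
    main(void)
    {
        char             page[BLCKSZ];
        BackupBlockImage img;

        memset(page, 0, sizeof(page));  /* mostly-empty pages compress well */
        try_compress_block(page, &img);
        printf("stored %zu of %d bytes (%scompressed)\n",
               img.len, BLCKSZ, img.is_compressed ? "" : "un");
        return 0;
    }

On the redo side, the flag would tell recovery whether to decompress
(with pglz, or zlib's uncompress() in this sketch) before restoring the
page. Note that in the figures above the total WAL volume drops from
4302 MB to 1521 MB, roughly a 65% reduction on that workload.
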
Yep, I agree that we should check how FPW compression affects response
times, especially just after a checkpoint starts.

> I was thinking about this and about our previous thoughts about double
> buffering. FPWs are made in the foreground, so will always slow down
> transaction rates. If we could move to double buffering we could avoid
> FPWs altogether. Thoughts?

If I understand double buffering correctly, it would eliminate the need
for FPWs entirely. But I'm not sure how easy it would be to implement.
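
One way to capture the response-time curves would be pgbench's
per-transaction latency log; for example (the client count, duration,
and database name "bench" here are placeholders, not a run anyone has
done):

    $ pgbench -c 16 -j 8 -T 900 -l bench

With log_checkpoints = on in postgresql.conf, the latencies that -l
writes out can then be lined up against the checkpoint start timestamps
in the server log, once with FPW compression enabled and once without.

Regards,

-- 
Fujii Masao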