On Wed, Aug 24, 2016 at 6:31 PM, Robert Haas <robertmh...@gmail.com> wrote:
> 3. archive_timeout is no longer a frequently used option.  Obviously,
> if you are frequently archiving partial segments, you don't want the
> segment size to be too large, because if it is, each forced segment
> switch potentially wastes a large amount of space (and bandwidth).
> But given streaming replication and pg_receivexlog, the use case for
> archiving partial segments is, at least according to my understanding,
> a lot narrower than it used to be.  So, I think we don't have to worry
> as much about keeping forced segment switches cheap as we did during
> the 8.x series.

Heroku uses archive_timeout. It is considered important, because S3
archives are more reliable than EBS storage. We want to cap, at least
to some degree, how much time can pass before WAL is shipped to S3.
It's weird to talk about degrees of durability, since we tend to
assume that durability is either/or, but distinctions like that start
to matter when you have an enormous number of databases. S3 has an
extremely good track record, reliability-wise.
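
To make that concrete, here is roughly what the relevant settings
look like (a hypothetical sketch -- the timeout value and the wal-e
invocation are illustrative, not our actual configuration):

    # postgresql.conf (illustrative values)
    archive_mode = on
    archive_timeout = 60                  # force a segment switch at
                                          # least once a minute
    archive_command = 'wal-e wal-push %p' # ship the completed segment
                                          # to S3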

We're not too concerned about the overhead of all of this, I think,
because when archive_timeout forces a segment switch, the unused
remainder of the segment consists of zeroes (at least from 9.4 on).
We compress the WAL segments before shipping them, and long runs of
zeroes compress very well.
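
For illustration, a toy sketch (Python, with made-up sizes; not our
actual archiving code) of why those padded segments cost so little
once compressed:

    import gzip
    import os

    # Stand-in for a force-switched 16MB WAL segment: a prefix of
    # incompressible "real" records, then zeroes for the unused tail.
    SEGMENT_SIZE = 16 * 1024 * 1024
    real_records = os.urandom(256 * 1024)  # pretend 256kB of actual WAL
    segment = real_records + b"\x00" * (SEGMENT_SIZE - len(real_records))

    compressed = gzip.compress(segment)
    print("raw:       ", len(segment), "bytes")
    print("compressed:", len(compressed), "bytes")
    # The zero-filled tail collapses to almost nothing; the compressed
    # size is dominated by the incompressible prefix.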

I admit that I haven't looked at it in much detail, but that is my
current understanding.

-- 
Peter Geoghegan

