On 2016-09-20 16:18:02 -0400, Robert Haas wrote:
> On Tue, Sep 20, 2016 at 4:09 PM, Andres Freund <and...@anarazel.de> wrote:
> > That sounds way too big to me. WAL file allocation would trigger pretty
> > massive IO storms during zeroing, max_wal_size is going to be hard to
> > tune, and the amount of dirty data during bulk loads is going to be
> > very hard to control.  If somebody wants to do something like this,
> > they'd better be well-informed enough to override a #define.
> 
> EnterpriseDB has customers generating multiple TB of WAL per day.

Sure, that's kind of common.


> Even with a 1GB segment size, some of them will fill multiple files
> per minute.  At the current limit of 64MB, a few of them would still
> fill more than one file per second.  That is not sane.

I doubt generating much larger files actually helps a lot there. I bet
you a patch review that 1GB files are going to regress in pretty much
every situation, especially once latency is taken into account; the
allocation-time zeroing alone would cause noticeable stalls.
I think what's actually needed for that is:
- make it easier to implement archiving via streaming WAL, i.e. make
  pg_receivexlog actually usable (a sketch of that follows below)
- make archiving parallel
- decouple the WAL write & fsync granularity from the segment size
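
To make the first point concrete, here's a minimal, purely illustrative
supervisor that keeps pg_receivexlog streaming into an archive
directory and restarts it whenever it exits. The hostname, directory,
and slot name are invented for the example; a real tool would need
backoff, logging, and end-position handling:

/*
 * Toy supervisor for streaming-WAL archiving: keep pg_receivexlog
 * running against a replication slot so no segment is missed.
 * Hostname, target directory, and slot name are placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int
main(void)
{
	for (;;)
	{
		pid_t	pid;
		int		status;

		pid = fork();
		if (pid < 0)
		{
			perror("fork");
			return 1;
		}
		if (pid == 0)
		{
			/* child: stream WAL from the primary into the archive */
			execlp("pg_receivexlog", "pg_receivexlog",
				   "-h", "primary.example.com",
				   "-D", "/var/lib/pgarchive",
				   "-S", "archive_slot",
				   (char *) NULL);
			perror("execlp");	/* only reached if exec failed */
			_exit(1);
		}

		/* parent: wait for the child, then restart after a pause */
		waitpid(pid, &status, 0);
		fprintf(stderr, "pg_receivexlog exited, restarting in 5s\n");
		sleep(5);
	}
}

The slot (-S) is what makes this safe as an archiving mechanism: the
server then retains each segment until it has actually been streamed.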

Requiring a non-default compile-time or even just cluster-creation-time
option for tuning isn't something worth expending energy on, IMO.
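
For the record, the knob in question (as of the 9.x tree; the exact
spelling may differ in other versions) is a constant that configure
bakes into pg_config.h, roughly:

/*
 * Set by configure --with-wal-segsize=N, with N a power of two
 * between 1 and 64 (in MB).  Segment file naming, recycling, and
 * allocation are all derived from it, so changing it means both a
 * rebuild and a re-initdb.
 */
#define XLOG_SEG_SIZE (16 * 1024 * 1024)	/* default: 16MB */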

Andres

