On Tue, May 27, 2014 at 1:19 PM, Fujii Masao <masao.fu...@gmail.com> wrote:
> On Tue, May 27, 2014 at 3:57 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> > The requirements we were discussing were around
> >
> > A) reducing WAL volume
> > B) reducing foreground overhead of writing FPWs - which spikes badly
> > after checkpoint and the overhead is paid by the user processes
> > themselves
> > C) need for FPWs during base backup
> >
> > So that gives us a few approaches
> >
> > * Compressing FPWs gives A
> > * Background FPWs gives us B
> > which look like we can combine both ideas
> >
> > * Double-buffering would give us A and B, but not C
> > and would be incompatible with other two ideas
>
> Double-buffering would allow us to disable FPW safely but which would make
> a recovery slow.
Is that because, during recovery, the contents of the double-write buffer must
be checked for consistency against the pages at their original locations, or
is there something else that would make recovery slow?

Won't DBW (double-buffer write) reduce the number of pages that need to be
read from disk compared to FPW, which should offset any performance impact
from other causes? IIUC, in the DBW mechanism we need a temporary sequential
log file of fixed size, to which data is written before it is written to its
actual location in the tablespace. Because that temporary log file is of
fixed size, the number of pages that need to be read during recovery should
be smaller than with FPW, where recovery has to read all the pages written to
the WAL since the last successful checkpoint.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com