On Wed, Jan 7, 2015 at 12:33:20PM -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
>
> > Have you done any measurements to determine how much backup can be
> > skipped using this method for a typical workload, i.e. how many 16MB
> > page ranges are not modified in a typical span between incremental
> > backups?
>
> That seems entirely dependent on the specific workload.
Well, obviously. Is that worth even stating? My question is whether
there are enough workloads for this to be generally useful, particularly
considering the recording granularity, hint bits, and freezing. Do we
have cases where 16MB granularity helps compared to file- or table-level
granularity? How would we even measure the benefits? How would the
administrator know they are benefiting from incremental backups vs.
complete backups, considering the complexity of incremental restores?

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
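[Editor's note: one rough way to answer "how would we even measure the benefits?" is to hash fixed-size chunks of two snapshots of the same relation files and report what fraction of 16MB chunks actually differ. The sketch below is illustrative only, not part of the thread or any proposed patch; the function names and the idea of comparing byte images of files are assumptions for the example.]

```python
import hashlib

CHUNK = 16 * 1024 * 1024  # 16MB, matching the granularity under discussion


def chunk_digests(data: bytes, chunk: int = CHUNK):
    """Hash each fixed-size chunk of a file image."""
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]


def changed_fraction(old: bytes, new: bytes, chunk: int = CHUNK) -> float:
    """Fraction of chunks in `new` that differ from (or are absent in) `old`.

    This approximates how much data an incremental backup at `chunk`
    granularity would still have to copy for this file.
    """
    old_d = chunk_digests(old, chunk)
    new_d = chunk_digests(new, chunk)
    if not new_d:
        return 0.0
    changed = sum(1 for i, d in enumerate(new_d)
                  if i >= len(old_d) or old_d[i] != d)
    return changed / len(new_d)
```

Running something like this over each relation file between two base backups and aggregating would give a workload-specific answer: a fraction near 1.0 means 16MB granularity saves almost nothing over a full copy, while a small fraction suggests the incremental approach pays off for that workload.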