On Wed, Jan  7, 2015 at 10:50:38AM +0100, Marco Nenciarini wrote:
> Implementation
> --------------
> 
> We create an additional fork which contains a raw stream of LSNs. To
> limit the space used, every entry represents the maximum LSN of a group
> of blocks of a fixed size. I arbitrarily chose a group size of 2048
> blocks, equivalent to 16MB of heap data, so 64k entries are enough to
> track one terabyte of heap.
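
Just to confirm I follow the layout, here is a rough sketch of how I
read that mapping; the array, sizes, and function names below are my
own illustration, not what your patch actually does:

    /*
     * Illustrative sketch only: map a heap block to its entry in the
     * per-relation LSN fork and keep the highest LSN seen for that
     * group.  BLOCKS_PER_ENTRY and lsn_map[] are assumptions.
     */
    #include <stdint.h>

    typedef uint32_t BlockNumber;
    typedef uint64_t XLogRecPtr;       /* an LSN */

    #define BLOCKS_PER_ENTRY 2048      /* 2048 * 8kB blocks = 16MB of heap */

    /* hypothetical in-memory view of the fork: one max LSN per 16MB group */
    static XLogRecPtr lsn_map[65536];  /* 64k entries cover 1TB of heap */

    static void
    lsn_map_update(BlockNumber blkno, XLogRecPtr page_lsn)
    {
        uint32_t entry = blkno / BLOCKS_PER_ENTRY;

        if (page_lsn > lsn_map[entry])
            lsn_map[entry] = page_lsn; /* remember the newest change */
    }

    /* at incremental backup time: skip the whole 16MB range if unchanged */
    static int
    range_needs_backup(uint32_t entry, XLogRecPtr prev_backup_lsn)
    {
        return lsn_map[entry] > prev_backup_lsn;
    }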

I like the idea of summarizing the LSNs to keep the fork's size
reasonable.  Have you done any measurements to determine how much of a
backup can be skipped using this method for a typical workload, i.e. how
many 16MB page ranges are not modified in a typical span between
incremental backups?

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +

