On Jul 21, 2006, at 9:03 AM, Tom Lane wrote:
>> One possibility is that early freeze is at 1B transactions and we
>> push forced-freeze back to 1.5B transactions (the current
>> forced-freeze at 1B transactions seems rather aggressive anyway, now
>> that the server will refuse to issue new commands rather than lose
>> data due to wraparound).
> No, the freeze-at-1B rule is the maximum safe delay. Read the docs.
> But we could do early freeze at 0.5B and forced freeze at 1B and
> probably still get the effect you want.
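
Just to see how much headroom those numbers actually leave, here's the
rough XID arithmetic as I understand it (32-bit transaction IDs, so
roughly 2B of visible history before wraparound; the constants below
are only illustrative, not the server's actual settings):

/* Rough sketch of the XID headroom math. Illustrative constants only.
 * With 32-bit transaction IDs compared modulo 2^31, any XID can only
 * "see" about 2 billion XIDs into the past, so unfrozen tuples must be
 * frozen before their age reaches that horizon. */
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    const uint32_t wraparound_horizon = 0x7FFFFFFFu;  /* ~2.1B: max visible age */
    const uint32_t forced_freeze_age  = 1000000000u;  /* 1B: forced freeze      */
    const uint32_t early_freeze_age   = 500000000u;   /* 0.5B: early freeze     */

    printf("headroom left at forced freeze: %" PRIu32 " XIDs\n",
           wraparound_horizon - forced_freeze_age);   /* ~1.1B */
    printf("early-to-forced freeze window:  %" PRIu32 " XIDs\n",
           forced_freeze_age - early_freeze_age);     /* 0.5B  */
    return 0;
}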
> However, I remain unconvinced that this is a good idea. You'll be
> adding very real cycles to regular vacuum processing (to re-scan
> tuples already examined) in hopes of obtaining a later savings that is
> really pretty hypothetical. Where is your evidence that writes caused
> solely by tuple freezing are a performance issue?
I didn't think vacuum would be a CPU-bound process, but is there any
way to gather that evidence right now?
What about adding some verbiage to vacuum verbose that reports how
many pages were dirtied to freeze tuples? It seems like useful info
to have, and it would help establish whether this is worth worrying
about.
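
Something along these lines, say; a minimal standalone sketch of the
bookkeeping I mean, where all of the names are made up for illustration
and don't correspond to anything in the actual vacuum code:

/* Hypothetical sketch: count pages that vacuum dirties *only* because
 * tuples on them were frozen, so vacuum verbose could report it. */
#include <stdbool.h>
#include <stdio.h>

typedef struct FreezeStats
{
    unsigned long pages_scanned;
    unsigned long pages_dirtied_by_freeze; /* dirtied solely to freeze */
} FreezeStats;

/* Called once per heap page after it has been processed. */
static void
record_page(FreezeStats *stats, bool page_already_dirty, bool froze_tuples)
{
    stats->pages_scanned++;
    /* Only count pages we would not have written anyway. */
    if (froze_tuples && !page_already_dirty)
        stats->pages_dirtied_by_freeze++;
}

/* Would feed into the vacuum verbose output. */
static void
report(const FreezeStats *stats)
{
    printf("pages scanned: %lu, dirtied only to freeze tuples: %lu\n",
           stats->pages_scanned, stats->pages_dirtied_by_freeze);
}

int main(void)
{
    FreezeStats stats = {0, 0};
    record_page(&stats, false, true);   /* clean page, frozen -> counted     */
    record_page(&stats, true,  true);   /* already dirty     -> not counted  */
    record_page(&stats, false, false);  /* nothing frozen    -> not counted  */
    report(&stats);
    return 0;
}

The point being to count only pages that vacuum would not have written
anyway, so the number reflects the write cost attributable purely to
freezing.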
--
Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED]
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461