On Mon, 2 Jul 2007, Tom Lane wrote:

>> # wal_buffers = 1MB
> Is there really evidence in favor of such a high setting for this,
> either?

I noticed consistent improvements in pgbench throughput with lots of clients when going from the default up to 256KB, flatlining above that; 256KB seemed sufficiently large for any system I've used. I've taken to using 1MB anyway nowadays because others suggested that number, and it seemed to be well beyond the useful range and thus never likely to throttle anything. Is there any downside to it being larger than necessary, beyond what seems like a trivial amount of additional RAM?
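
For anyone who wants to try reproducing that result, a minimal sketch of the sort of test involved; the scale factor and client count below are illustrative values, not the exact ones I used:

  # in postgresql.conf (takes effect after a server restart):
  wal_buffers = 1MB

  # build a test database, then run a write-heavy test with many clients:
  createdb pgbench
  pgbench -i -s 100 pgbench
  pgbench -c 50 -t 1000 pgbench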

>> # checkpoint_segments = 8 to 16 if you have the disk space (0.3 to 0.6 GB)
> This seems definitely too small --- for write-intensive databases I like
> to set it to 30 or so, which should eat about a GB if I did the
> arithmetic right.

You did. I approximate larger values in my head by calling it 1GB at 30 segments and scaling up from there. But don't forget this is impacted by the LDC (load distributed checkpoints) change; the number of segments expected to be active is now

(2 + checkpoint_completion_target) * checkpoint_segments + 1

so with a default install, setting checkpoint_segments to 30 will creep that up closer to a 1.2GB footprint.
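
Spelling that arithmetic out, with the default checkpoint_completion_target of 0.5 and the standard 16MB segment size:

  (2 + 0.5) * 30 + 1 = 76 segments
  76 * 16MB = 1216MB, or roughly 1.2GB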

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
