On Wed, 3 Dec 2008, Gregory Stark wrote:

> It sure seems strange to me to have initdb which presumably is targeting a
> "mixed" system -- where it doesn't know for sure what workload will be run --
> produce a different set of values than the tuner on the same machine.

It's been a long time since the output from initdb was targeting anything but a minimal system with an untuned kernel and limited resources. If you invert the normal tuning recommendations, treating its initial configuration as if it were the output of typical practice, it would be aiming at a system with approximately 128MB of RAM. That feels about right to me; when I had 128MB of RAM in my high-end P2-300 server running PG 7.0, 32MB of shared_buffers was huge and 3 checkpoint segments were plenty. I don't recall regularly dirtying things fast enough to see "checkpoints occurring too fast" warnings then, the way you trivially can nowadays. Here in 2008, I push checkpoint_segments up to 10 even for the most trivial apps, lest the logs fill with those buggers the first time I run an update on a table.

Right now, my program doesn't fiddle with any memory settings if you've got less than 256MB of RAM. Were someone to champion the idea that *nothing* should be fiddled with in those cases, that wouldn't be an unreasonable position. I'm not terribly concerned myself that the guy who thinks he's running a DW on a system with 64MB of RAM might get bad settings, but it's a fair criticism to point that out as a problem.
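To be concrete about what "doesn't fiddle" means, here's a minimal sketch of that guard in Python. This is illustrative pseudo-logic only, not the tool's actual code; the function name and the shape of the suggestions dictionary are made up for the example:

# Sketch of the small-system guard described above: below 256MB of RAM,
# leave the memory-related settings at their initdb defaults rather than
# guessing. Names are illustrative, not the tool's real API.
SMALL_SYSTEM_THRESHOLD_MB = 256

def memory_overrides(total_ram_mb, workload_suggestions):
    """Return memory-related overrides, or none at all on small systems."""
    if total_ram_mb < SMALL_SYSTEM_THRESHOLD_MB:
        return {}  # keep initdb's conservative defaults untouched
    return workload_suggestions

# Example: a 192MB box gets no memory changes regardless of workload type.
print(memory_overrides(192, {"shared_buffers": "64MB"}))  # -> {}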

> In fact, really it would be nice if we allowed units of space (MB, GB,
> etc) for checkpoint_segments.

That's a good way to think about this; let's run with it for a minute. The values I'm throwing in there work out like this:

              Completion   Max        Max
Type     Segs   Target     Segments   Usage
web         8    0.7          23      368MB
oltp       16    0.9          47      752MB
dw         64    0.9         187        3GB
mixed      16    0.9          47      752MB
desktop     3    0.5           9      144MB
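For anyone who wants to check my math, those last two columns come straight from the usual bound on the number of WAL files, (2 + checkpoint_completion_target) * checkpoint_segments + 1, times the 16MB default segment size. A quick Python sketch of the arithmetic (exactly where you round to a whole segment can shift the count by one; this version matches the table above):

# Where the "Max Segments" and "Max Usage" columns come from: the number of
# WAL files normally stays below
#   (2 + checkpoint_completion_target) * checkpoint_segments + 1
# and each segment file is 16MB by default.
WAL_SEGMENT_MB = 16

def max_wal(segs, completion_target):
    # round the product to a whole segment count, then add the extra file
    max_segments = int((2 + completion_target) * segs + 0.5) + 1
    return max_segments, max_segments * WAL_SEGMENT_MB

for workload, segs, target in [("web", 8, 0.7), ("oltp", 16, 0.9),
                               ("dw", 64, 0.9), ("mixed", 16, 0.9),
                               ("desktop", 3, 0.5)]:
    max_segments, mb = max_wal(segs, target)
    print("%-8s %4d segments  %5d MB" % (workload, max_segments, mb))

That prints 23 segments/368MB for web, 47/752MB for oltp and mixed, 187/2992MB (call it 3GB) for dw, and 9/144MB for desktop, matching the table.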

Is 368MB of overhead unreasonable for a web application database today, when you can get a mirrored pair of disks for under $1/GB? It's only the DW case that even starts to leave trivial territory. Your example of somebody who thinks the overhead is too high on their 10MB database is already blown past even at the default of 3 segments (assuming that data has enough churn on it to ever cycle through that many segments; if it doesn't, then the maximum doesn't matter anyway).

The reality here is that it's the recovery playback time that's the real bear. If I were trying to argue against myself, what would be more persuasive is some tests showing how long it takes to sort through, clean up, and replay the appropriate portions of as many as 47 segments' worth of WAL after an unclean shutdown with checkpoint_segments=16. It's certainly possible a modern system takes a while to process that much WAL volume. But it's pretty rare I run into that (usually only after I do something abusive), whereas complaints about the logs filling with checkpoint warnings on systems set to the default seem to pop up all the time.
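Short of running that test, the only thing to offer is back-of-the-envelope arithmetic; the replay rate below is a placeholder you'd have to replace with a measurement from your own hardware, which is exactly why real numbers would be more persuasive:

# Back-of-the-envelope only: worst-case WAL volume divided by a replay rate
# measured on your own hardware. Nothing here is a benchmark result.
WAL_SEGMENT_MB = 16

def worst_case_recovery_seconds(max_segments, measured_replay_mb_per_sec):
    return (max_segments * WAL_SEGMENT_MB) / measured_replay_mb_per_sec

# e.g. the oltp case above: up to 47 segments of WAL on disk; if your system
# happened to replay it at 10MB/s (purely a placeholder figure), that's:
print("%.0f seconds" % worst_case_recovery_seconds(47, 10.0))  # 75 seconds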

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

