Greg Smith <[EMAIL PROTECTED]> writes:
> On Wed, 20 Jun 2007, Campbell, Lance wrote:
>> If everything I said is correct then I agree "Why have
>> effective_cache_size?"  Why not just go down the approach that Oracle
>> has taken and require people to rely more on shared_buffers and the
>> general memory driven approach?  Why rely on the disk caching of the OS?
> [ reasons why snipped ]

There's another reason for not setting shared_buffers huge, beyond the
good ones Greg listed: the kernel may or may not consider a large
shared-memory segment as potentially swappable.  If the kernel starts
swapping out low-usage areas of the shared-buffer arena, you lose badly:
accessing a supposedly "in cache" page takes just as long as fetching it
from the disk file would've, and if a dirty page gets swapped out,
you'll have to swap it back in before you can write it, making a total
of *three* I/Os expended to get it down to where it should have been,
not one.  So unless you can lock the shared memory segment in RAM, it's
best to keep it small enough that all the buffers are heavily used (a
rough sketch of what such locking involves appears at the end of this
message).  Marginal-use pages will be handled much more effectively in
the O/S cache.

I'd also like to re-emphasize the point about "don't be a pig if you
don't have to".  It would be very bad if Postgres automatically operated
on the assumption that it should try to consume all available resources.
Personally, I run half a dozen postmasters (of varying vintages) on one
not-especially-impressive development machine.  I can do this exactly
because the default configuration doesn't try to eat the whole machine.

To get back to the comparison to Oracle: Oracle can assume that it's
running on a dedicated machine, because their license fees are more than
the price of the machine anyway.  We shouldn't make that assumption, at
least not in the out-of-the-box configuration.

			regards, tom lane
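
For illustration, here is a minimal C sketch of what "locking the
shared memory segment in RAM" means at the system-call level, using
shmctl() with SHM_LOCK on a System V segment.  This is not the actual
Postgres shared-memory code: the 64MB segment size is an arbitrary
stand-in for the buffer arena, SHM_LOCK is Linux-specific, and the
lock will fail with EPERM unless the process has CAP_IPC_LOCK (or a
sufficient RLIMIT_MEMLOCK).

/*
 * Minimal sketch (not Postgres source): create a SysV shared memory
 * segment and pin it in RAM with SHM_LOCK, so the kernel won't swap
 * out low-usage parts of the arena.  Linux-specific.
 */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    size_t size = 64UL * 1024 * 1024;   /* arbitrary 64MB "arena" */

    int shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    /* Pin the segment: locked pages stay resident, so touching a
     * "cached" buffer can never cost a swap-in. */
    if (shmctl(shmid, SHM_LOCK, NULL) < 0)
        perror("shmctl(SHM_LOCK)");     /* typically EPERM if unprivileged */

    char *arena = shmat(shmid, NULL, 0);
    if (arena == (char *) -1) { perror("shmat"); return 1; }

    /* ... a real server would carve this region up into buffers ... */

    shmdt(arena);
    shmctl(shmid, IPC_RMID, NULL);      /* remove the segment */
    return 0;
}

If the SHM_LOCK call fails, the sketch simply carries on with an
unlocked (swappable) segment, which is exactly the situation where the
keep-shared_buffers-modest advice above applies.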