David,
For example, if you have 1 GB of RAM on the box, you can't configure a cache of 900 MB and expect things to work well. This is because the OS and the other stuff running on the box will use ~300 MB, so the system will page as a result.
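The arithmetic above can be sketched in a few lines. This is only an illustration of the point, not a real sizing formula; the 300 MB overhead is the figure assumed in the example, and `max_safe_cache_mb` is a hypothetical helper, not part of PostgreSQL:

```python
OS_OVERHEAD_MB = 300  # assumed footprint of the OS and other services


def max_safe_cache_mb(total_ram_mb, overhead_mb=OS_OVERHEAD_MB):
    """Largest cache that still leaves headroom for the OS.

    Anything above this and the box starts paging.
    """
    return max(total_ram_mb - overhead_mb, 0)


# 1 GB box: a 900 MB cache overshoots the ~724 MB that is actually free.
print(max_safe_cache_mb(1024))
```

With 1024 MB of RAM this leaves roughly 724 MB, well under the 900 MB cache from the example.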
Overcommitting memory leads to thrashing, yes, that is also my experience.
The only sure-fire way I know of to find the absolute maximum cache size that can safely be configured is to experiment with larger and larger sizes until paging occurs, then back off a bit.
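That "grow until paging, then back off" procedure can be sketched as a small loop. This is purely illustrative: `paging_at_mb` stands in for a real paging check (on Windows that would mean watching page-fault counters in Performance Monitor), and all sizes and step widths are assumptions:

```python
def find_max_cache(paging_at_mb, start_mb=128, step_mb=64, backoff_mb=64):
    """Grow the cache size until paging is observed, then back off a bit.

    `paging_at_mb(size)` is a stand-in predicate that reports whether the
    system pages with a cache of `size` MB; here it is simulated.
    """
    size = start_mb
    while not paging_at_mb(size):
        size += step_mb
    return size - backoff_mb


# Simulated box that starts paging once the cache exceeds 700 MB:
print(find_max_cache(lambda mb: mb > 700))  # settles at 640
```

The simulated run grows 128 → 704 MB, sees paging at 704, and backs off one step to 640 MB.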
Yeah, I know the trial-and-error method. But I also learned that reading the manuals and documentation often helps. So after skimming the various PostgreSQL tuning materials, I came across formulas to calculate a fine starting point for shared memory size, and the recommendation to verify with shared-memory inspection tools that the chosen size is okay.

And THAT is exactly the challenge of this thread: I am searching for tools to check shared memory usage on Windows. ipcs is not available, and neither Magnus nor Dave, both main contributors to the win32 port of PostgreSQL and both far wiser about Windows internals than me, know of any :(

The challenge behind that: I maintain a win32 PostgreSQL server which gets slow every 3-4 weeks. After a restart it runs perfectly, again for 3-4 weeks. The Oracle guys at the same customer solved a similar problem by simply restarting Oracle every night. But that would not be good enough for my sense of honour :)

Thanks for your thoughts,

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607

Python: the only language with more web frameworks than keywords.

---------------------------(end of broadcast)---------------------------
TIP 2: Don't 'kill -9' the postmaster