Anton Maksimenkov wrote:
>>> 2010/2/5 Ted Unangst <ted.unan...@gmail.com>:
>>>> On Thu, Feb 4, 2010 at 2:21 PM, Jeff Ross <jr...@openvistas.net> wrote:
>>>>> kern.shminfo.shmall=512000
>>>>> kern.shminfo.shmmax=768000000
>>>>
>>>> Oh, when I said it was safe to crank shmmax I didn't know you'd be
>>>> setting the bufcache to huge numbers too.  ;)
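
(An aside for anyone keeping score: as far as I understand these
sysctls, shmall is counted in pages while shmmax is in bytes, so the
settings above work out to roughly this in /etc/sysctl.conf:

    kern.shminfo.shmall=512000      # pages; x 4KB/page = 2000MB total SysV shm
    kern.shminfo.shmmax=768000000   # bytes; ~732MB max for a single segment

If I have the units wrong, someone please correct me.)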

>>> Furthermore, the postgres documentation recommends not setting
>>> shared_buffers to big values, because postgres itself relies on the
>>> OS buffer cache (it assumes the buffer cache is large).


>> The docs I've read do not say that.  Here's a snip from
>>
>> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
>>
>> "If you have a system with 1GB or more of RAM, a reasonable starting
>> value for shared_buffers is 1/4 of the memory in your system. If you
>> have less ram you'll have to account more carefully for how much RAM
>> the OS is taking up, closer to 15% is more typical there. There are
>> some workloads where even larger settings for shared_buffers are
>> effective, but given the way PostgreSQL also relies on the operating
>> system cache it's unlikely you'll find using more than 40% of RAM to
>> work better than a smaller amount."

>> When I set shared_buffers to slightly less than 1/4 of my system
>> RAM, I started this whole round of system panics and reports: to get
>> postgres to start I then had to crank kern.shminfo.shmmax to 1GB,
>> and that would trigger a panic very quickly under load.
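
(The arithmetic behind that, roughly: a quarter of 4GB is 1024MB of
shared_buffers, and the shared memory segment postgres actually asks
for is somewhat larger than shared_buffers alone, hence shmmax at 1GB:

    4096MB RAM / 4                 ~= 1024MB shared_buffers
    segment = buffers + overhead   => kern.shminfo.shmmax ~= 1GB or more

The overhead part is my rough understanding, not a measured number.)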

> Jeff, since you can set the buffer cache to about 90% of RAM, you can
> set shared_buffers to something not so big... say, 256MB or so (I'm
> just showing the approach, not exact values), and decrease
> shmall/shmmax. Then set the postgres parameter effective_cache_size
> to the estimated size of your buffer cache.
> After that postgres should actively use your buffer cache, I suppose.
> And then you will show some results to us, right?
> It will be interesting to see whether all this confirms what the
> documentation says :-).
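
For anyone who hasn't used it: effective_cache_size allocates nothing,
it just tells the planner how much cache it can assume is available,
so I read Anton's suggestion as something like this in postgresql.conf
(the numbers are mine, not his):

    shared_buffers = 256MB          # really allocated, in SysV shared memory
    effective_cache_size = 2816MB   # planner hint only; my bufcache estimate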

With postgres's shared_buffers set to 256MB, I have to set
kern.shminfo.shmmax to 270MB to get the server to start. I then set
postgres's effective_cache_size to 2816MB (a value suggested by the
pgtune program, equal to 68% of my 4GB of RAM) and set
kern.bufcachepercent to 70.
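
In sysctl.conf terms that works out to something like this (the exact
byte value I used may have differed slightly):

    kern.bufcachepercent=70         # ~2867MB of 4GB for the buffer cache
    kern.shminfo.shmmax=283115520   # 270 * 1024 * 1024 bytes; headroom
                                    # over the 256MB of shared_buffers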

This drops my TPS in a select-only test by about 25%, and it panicked
with "panic: pmap_enter: no pv entries available" when scale hit 80.
Database size at scale 80 is slightly over 1100MB, with 80 simulated
client connections each running 20000 select transactions. At no time
does free RAM ever drop below 900MB; most of the time it is over 1GB.
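
For reference, these runs are pgbench, along the lines of the
following (flags from memory, and "bench" is just the database name,
so the details may be slightly off):

    pgbench -i -s 80 bench            # initialize at scale 80, ~1100MB database
    pgbench -S -c 80 -t 20000 bench   # select-only, 80 clients, 20000 xacts each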

I am now running the same series of scale tests, but instead of the
select-only test I'm running the select/insert/update test that more
closely matches TPC-B. At scale 30 the "uvm_mapent_alloc: out of
static map entries" message appeared on the console.
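
Same shape of command, just without -S, so pgbench runs its default
TPC-B-like select/insert/update mix:

    pgbench -c 80 -t 20000 bench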

This test run is going to take a while to either complete or panic, so
here's an "off on a tangent" type question.

Could I avoid all of this messing around if I had a server that could
run amd64? How would a dual-processor 1.8GHz Opteron 244 w/4GB RAM
compare to this 2.4GHz dual Xeon w/4GB RAM? Bog knows I don't need
another server, but...

Thanks to all,

Jeff
