We're in the process of upgrading our database server's memory from 2GB to 8GB to improve access performance. This is a dedicated dual-Xeon database server not running any significant non-database processes. Our database is ~11GB on disk, and we expect it to grow to ~20GB. Much of this data is inactive and seldom accessed. Read performance is our primary concern: our data is normalized, so retrieving a complete product requires many reads across associated tables and indexes to put it all together.

Our larger tables have 10-20 rows per disk block, so I assume most blocks will hold a mix of frequently accessed and inactive rows. Since we wouldn't want to double-cache, my inclination is to either give most of the memory to shared_buffers, or to leave it small and let the kernel (Linux 2.6.x) do the buffering.
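For concreteness, the two configurations I'm weighing would look roughly like this in postgresql.conf (the buffer counts here are placeholders I haven't tested, assuming the default 8k block size):

```ini
# Option A: keep shared_buffers small, rely on the kernel page cache
shared_buffers = 8192        # 8192 x 8k blocks = 64MB for PostgreSQL itself

# Option B: give most of the new RAM to PostgreSQL's own cache
# shared_buffers = 500000    # 500000 x 8k blocks = roughly 4GB
```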

I have no idea about the inner workings of PostgreSQL's shared cache, but what I'd like to find out is whether it is table-row-based or disk-block-based. If it's disk-block-based, my inclination would be to let the kernel do the buffering. If it's table-row-based, I'd expect it to be much more space-efficient, and I'd be inclined to give the memory to PostgreSQL. In that case, is it feasible to set shared_buffers to something like 500000 x 8k blocks? We make extensive use of indexes on the larger tables and would seldom, if ever, do sequential scans.
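To spell out the arithmetic behind that figure (my own back-of-envelope numbers, assuming the default 8kB block size):

```python
# Rough memory footprint of shared_buffers = 500000 at the default 8kB block size.
block_size_kb = 8
n_buffers = 500000

total_mb = n_buffers * block_size_kb / 1024
print(total_mb)  # 3906.25 MB, i.e. roughly 4GB of the new 8GB
```

So that setting would dedicate about half of the upgraded memory to PostgreSQL's cache, leaving the rest for the kernel and everything else.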

Any comments or advice would be great!

Michael.


