Thank you for all your suggestions - I will attempt to make the changes as recommended, one at a time, and will post back the results.
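For reference, this is roughly where I plan to start. The values below are only my own first guesses based on your pointers (nothing here came from you and we have not sized anything against our RAM yet), so corrections are welcome:

    # postgresql.conf - starting points only, values are guesses for our box
    shared_buffers = 4GB            # raised toward the recommended setting; needs a restart
    effective_cache_size = 12GB     # to be set to roughly total RAM minus 1-2GB, as suggested
    seq_page_cost = 1.0             # left alone for now (assuming only partial caching)
    random_page_cost = 2.0          # down from the default 4.0 to favour index plans
    work_mem = 16MB                 # kept modest until we get the connection count down
    wal_buffers = 32MB              # needs a restart

We will also look at pgbouncer so we can bring the connection count well below the current 1200.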
Regards,
Lawrence Cohan.

-----Original Message-----
From: pgsql-bugs-ow...@postgresql.org [mailto:pgsql-bugs-ow...@postgresql.org] On Behalf Of Kevin Grittner
Sent: March-30-11 4:12 PM
To: pgsql-bugs@postgresql.org; Lawrence Cohan
Subject: Re: [BUGS] Postgres not using indexes

Lawrence Cohan <lco...@web.com> wrote:

> looks like we will need to change at least the two values below
> and maybe play with work_mem to see if it solves our issues.

You will probably get better throughput by bumping up shared_buffers to the recommended setting, but beware of "stalls" in query processing at checkpoint time. If that happens you want to make the background writer more aggressive and/or back off on shared_buffers, so that there isn't such a glut of dirty pages to write during a checkpoint.

I think even the recommended setting for effective_cache_size is on the low side. This one affects how expensive the optimizer thinks index usage will be, so given your current problem this is probably important to raise. I add up shared_buffers and what free tells me is cached space after PostgreSQL has been running a while. That usually winds up being 1GB to 2GB less than total memory on our machines, so actually, I usually just start there.

We usually need to reduce random_page_cost to get good plans. For a fully-cached database you may want to reduce both seq_page_cost and random_page_cost to equal numbers around 0.05. With partial caching, we often leave seq_page_cost alone and reduce random_page_cost to 2. YMMV.

The setting for work_mem can be tricky, especially with 1200 connections configured. Each connection may be using one or more allocations of work_mem at the same time, which leads to the question of why you have 1200 connections configured. You are almost always better off using a connection pooler to limit this to something on the order of twice your CPU cores plus your effective spindle count. Tomcat has a very good connection pooler built in, as do many other products. There are also good external poolers, like pgpool and pgbouncer.

With a reasonable amount of RAM you're almost always better off bumping wal_buffers to 32MB.

> The only issue is that we are running a 24/7 web site against the
> db and if we need to restart PG for the changes to take place we
> will need to wait for a downtime before any changes can be made.

Some of these can be set per user with ALTER ROLE. New connections would then start using the new settings with no down time.

-Kevin
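PS: if I understand the ALTER ROLE suggestion correctly, something like the following would let us change the per-connection settings for our application role with no restart (the role name "webapp" and the values are just placeholders for illustration, not our real setup):

    -- takes effect for new connections made by this role; no restart needed
    ALTER ROLE webapp SET work_mem = '16MB';
    ALTER ROLE webapp SET random_page_cost = 2.0;
    -- shared_buffers and wal_buffers still require a restart, so those wait for a window

That would at least let us try the planner-related changes right away.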