I'm just a lowly end user, but bumping the default statistics target or using ALTER TABLE SET STATISTICS has made a large difference in query performance on large tables for me. The query planner has to estimate row counts from the collected statistics, and on large tables those estimates can be poor when the statistical sample is small.
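For what it's worth, here's a minimal sketch of both approaches; the table and column names are made up for illustration:

    -- Raise the sample size for one column the planner misestimates,
    -- then re-collect statistics (names are hypothetical):
    ALTER TABLE big_table ALTER COLUMN customer_id SET STATISTICS 500;
    ANALYZE big_table;

    -- Or raise the cluster-wide default in postgresql.conf and re-analyze:
    --   default_statistics_target = 200
    ANALYZE;

Either way, the planner only sees the larger sample after ANALYZE has run.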

On 3/30/2011 12:05 PM, Lawrence Cohan wrote:
I think you are right (my bad). Please see the results below, plus a little
more info about the environment; sorry I missed that before. I had been told
the server was tuned as well as it could be for our needs, but it looks like
we will need to change at least the two values below, and maybe experiment
with work_mem to see if that solves our issues.
The only complication is that we run a 24/7 web site against the database, so
if PostgreSQL has to be restarted for the changes to take effect we will need
to wait for a downtime window before making them.

'shared_buffers';'500MB' - shared_buffers should be 10% to 25% of available RAM -> change it to 2GB
'effective_cache_size';'2GB' - effective_cache_size should be 75% of available RAM -> change it to 10GB
'work_mem';'1MB' - try 8MB, 32MB, 256MB, or 1GB and check whether results improve.
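A rough sketch of how those changes could be applied on 9.0 is below; the values are just the suggestions above, not something verified for this workload. Of the three, only shared_buffers needs a full restart; effective_cache_size and work_mem take effect on a reload, and work_mem can also be raised per session, so only the shared_buffers change has to wait for a downtime window.

    # postgresql.conf (assumed values, adjust to taste)
    shared_buffers = 2GB              # restart required
    effective_cache_size = 10GB       # planner hint only; reload is enough
    work_mem = 8MB                    # per sort/hash operation; reload is enough

    -- reload the configuration without a restart (run as superuser):
    SELECT pg_reload_conf();

    -- or experiment with a larger work_mem for a single session first:
    SET work_mem = '32MB';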


PostgreSQL 9.0.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20080704 (Red Hat 4.1.2-48), 64-bit
2 x Intel(R) Xeon(R) CPU E5345  @ 2.33GHz
4 x 4GB = 16GB RAM

--query results below:
'version';'PostgreSQL 9.0.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc 
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit'
'archive_command';'cp %p /pglog/wal_export/%f'
'archive_mode';'on'
'archive_timeout';'3min'
'autovacuum_analyze_threshold';'1000'
'autovacuum_vacuum_threshold';'1000'
'bytea_output';'escape'
'checkpoint_segments';'64'
'checkpoint_warning';'1min'
'client_encoding';'UNICODE'
'effective_cache_size';'2GB'
'escape_string_warning';'off'
'lc_collate';'en_US.UTF-8'
'lc_ctype';'en_US.UTF-8'
'listen_addresses';'xxx.xxx.xxx.xxx'
'log_autovacuum_min_duration';'2s'
'log_checkpoints';'on'
'log_destination';'syslog'
'log_line_prefix';'user=%u,db=%d'
'log_min_duration_statement';'1s'
'maintenance_work_mem';'256MB'
'max_connections';'1200'
'max_stack_depth';'2MB'
'port';'5432'
'server_encoding';'UTF8'
'shared_buffers';'500MB'
'syslog_facility';'local0'
'syslog_ident';'postgres'
'TimeZone';'Canada/Eastern'
'vacuum_cost_delay';'10ms'
'wal_buffers';'4MB'
'wal_level';'hot_standby'


-----Original Message-----
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: March-30-11 1:33 PM
To: pgsql-bugs@postgresql.org; Lawrence Cohan
Subject: RE: [BUGS] Postgres not using indexes

Lawrence Cohan<lco...@web.com>  wrote:
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
[configuration advice]
If, after reading the above-cited page and tuning your server, you
still have performance problems, pick one query to work on first
and follow the steps outlined here:

http://wiki.postgresql.org/wiki/SlowQueryQuestions
We thank you for the links, which have a lot of info, and please
note that we tuned our servers as recommended by Enterprise DB
experts while they were in house for our hardware/software
migrations; the settings you mentioned are already in place.
Then the next step would be to provide enough information on one of
the slow queries for people to be able to offer useful advice.  Your
other post showed the query and the EXPLAIN ANALYZE output, but the
other information listed in the above-cited page is useful when
trying to understand a problem.  I'm particularly curious about the
data types of the id columns and the specifics of the index
definitions.
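
For example, something along these lines would capture that; the table
name "orders" below is just a placeholder for whichever table the slow
query touches:

    -- in psql, \d shows column types plus all index definitions:
    \d orders

    -- or pull the index definitions from the catalogs directly:
    SELECT indexdef FROM pg_indexes WHERE tablename = 'orders';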

-Kevin

