Hi,

I have an application that maintains 150 open connections to a Postgres DB 
server. The application works fine without a problem most of the time. 

The problem seems to arise when a SELECT that returns a lot of rows is 
executed, or when the SELECT is run against a large object. These SELECTs are 
issued from time to time by a separate process whose purpose is to generate 
reports from the db data.
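
For illustration, the reporting queries have this general shape (the table 
and column names here are invented, not the real ones):

    -- Invented names, purely to show the shape of the reporting queries;
    -- a typical run pulls a month of rows in a single SELECT.
    SELECT order_id, customer_id, total
    FROM   report_orders
    WHERE  created_at >= now() - interval '30 days';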

The problem is that when the SELECTs are run, the main application starts 
running out of available connections, which means that Postgres is not 
returning the query results fast enough. What I find a little bit strange is 
that the report engine's SELECTs operate on a different set of tables than 
the ones the main application is using. Also, the db box is hardly breaking a 
sweat: CPU and memory utilization are ridiculously low, and I/O waits are 
typically less than 10%.
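
In case it helps with diagnosis, a query like the one below against 
pg_stat_activity should show what each connection is doing when this happens 
(the column names are from the 8.x view; newer releases use query, state and 
wait_event instead of current_query and waiting):

    -- Sketch of a diagnostic query against the 8.x-era pg_stat_activity
    -- view; lists each backend, whether it is blocked waiting on a lock,
    -- and its current query, oldest first.
    SELECT datname, procpid, usename, waiting, query_start, current_query
    FROM   pg_stat_activity
    ORDER  BY query_start;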

Has anyone experienced this? Are there any settings I can change to improve 
throughput?  Any help will be greatly appreciated.


Thanks,
val

