FYI - We have implemented a number of changes...

a) some query and application optimizations
b) a connection pool on the cheap: we capped the max number of clients
on the Postgres server and wrote a blocking wrapper around pg_pconnect
that retries until it gets a connection (rough sketch just below this
list)
c) moved the application server to a separate box
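
For reference, the wrapper in (b) is roughly like this.  It's just a
sketch; the function name, retry interval, and timeout are illustrative
rather than the exact production code:

<?php
// Blocking wrapper around pg_pconnect: instead of failing right away
// when the server is at its max client limit, keep retrying until a
// slot frees up or we hit a timeout.
function pg_pconnect_blocking($conninfo, $timeout_secs = 30) {
    $deadline = time() + $timeout_secs;
    do {
        // Suppress the "too many clients already" warning that
        // pg_pconnect emits when the server refuses the connection.
        $conn = @pg_pconnect($conninfo);
        if ($conn !== false) {
            return $conn;     // got a (possibly reused) persistent connection
        }
        usleep(100000);       // wait 100 ms before retrying
    } while (time() < $deadline);
    return false;             // gave up after $timeout_secs
}
?>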

And, we pretty much doubled our capacity... from approx 40 "requests"
per second to approx 80.

The problem with our "cheap" connection pool is that the persistent
connections don't seem to become available again immediately after the
previous process releases them.  pg_close doesn't seem to help.  We
understand that pg_close doesn't really close a persistent connection,
but we were hoping it would cleanly release the connection for another
client to use.  Curious.
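
In case it helps to see it, the end-of-request pattern is roughly this
(again just a sketch; the connection string is made up, and
pg_pconnect_blocking is the wrapper sketched above):

<?php
$conn = pg_pconnect_blocking('host=dbhost dbname=app user=app');
// ... run the queries for this request ...
// pg_close() on a persistent connection only drops this script's
// handle; the backend stays open for reuse, but it doesn't appear to
// become available to the next process right away.
pg_close($conn);
?>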

We've also tried third-party connection poolers, but they didn't seem
to be especially fast either.

Thanks for all of your input.  We really appreciate it.

Bob
