Hello Kevin

Getting back to this.

> -----Original Message-----
> From: Kevin Grittner [mailto:kgri...@gmail.com]
> Sent: Mittwoch, 23. November 2016 17:04
> To: Charles Clavadetscher <clavadetsc...@swisspug.org>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] max_connections limit violation not showing in 
> pg_stat_activity
> 
> On Wed, Nov 23, 2016 at 4:43 AM, Charles Clavadetscher 
> <clavadetsc...@swisspug.org> wrote:
> > From: Kevin Grittner [mailto:kgri...@gmail.com]
> 
> >> Is it possible to upgrade?  You are missing over a year's worth of
> >> fixes for serious bugs and security vulnerabilities.
> >
> > Yes. Actually it is foreseen to move to 9.6 (and RedHat 7.x).
> >
> >> https://www.postgresql.org/support/versioning/
> 
> An upgrade from 9.3.x to 9.6.x is a major release upgrade, which gets you new 
> features and usually gets you
> improvements in performance and scalability.  The 9.3 major release will be 
> supported for almost 2 more years, so I
> wasn't so concerned about that as being on 9.3.10 when the latest bug fix 
> version of 9.3 is 9.3.15.  To avoid
> hitting bugs that others have already hit and reported, with fixes published, 
> it is wise to try to upgrade to the
> latest minor release fairly quickly.  If the minor release fixes a serious 
> security vulnerability, I think it is a
> good idea to update within a day or two of release.
> 
> > I will have to check with our hosting people how many cores we have or
> > can have on the new environment.  I have seen that there is pgBouncer
> > and pgPool. Would you recommend one of those?
> 
> pgBouncer is more lightweight, so if you don't need any of the features 
> present only in pgPool, I would go with
> pgBouncer.  Depending on your application software environment, it may be 
> even better to use a pool built into the
> application development framework.  There are several very good pools 
> available to Java environments.  I can
> personally attest to the quality of Apache dbcp, but others have mentioned 
> that they like other Java connection
> pools even better.
> You might want to search the archives, or maybe someone will mention the 
> others again on this thread.

Finally I set up pgBouncer, and a first test with a somewhat heavy load 
(1000 users over 5 minutes, corresponding to a total of 12000 HTTP requests 
and 5000 DB requests) shows an impressive improvement in performance. Without 
the pooler, roughly a fifth of the calls died in a timeout, and the remaining 
ones showed an average response time of more than 8 seconds. With the pooler, 
all requests went through without any error whatsoever, and the mean response 
time dropped to 23 ms (with a maximum of 193 ms). At first I thought I had an 
error somewhere in my simulation, but all 5000 DB requests were performed 
correctly, as I could verify in a log table that I had prepared for that 
specific purpose. There is definitely no doubt about the beneficial effect of 
the connection pooler.

Now I will take some time to better understand the configuration settings.

One question on that, perhaps. I have the following configuration entries:

pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 60
min_pool_size = 5
max_db_connections = 60
max_user_connections = 60
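
For completeness, the surrounding pgbouncer.ini looks roughly like this. The 
database name, host, and auth file path below are placeholders, not our actual 
values:

```ini
; Sketch of the pgbouncer.ini in use; [databases] values are placeholders.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 60
min_pool_size = 5
max_db_connections = 60
max_user_connections = 60
```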

The pooler serves only a single application and has a single connection 
string, with the same user for all calls (the application user defined for 
that specific web application). Our current DB has a max_connections of 100, 
and I assumed that max_client_conn should match this entry. With 
max_db_connections and max_user_connections I try to make sure that calls 
from this application won't take up all resources (as happened without the 
pooler). This seems to work correctly: during the simulation, other 
applications that also use the database but don't go through the pooler work 
normally, i.e. there are no noticeable effects.

If you have any suggestions, or see potential problems in these settings that 
I am not aware of, please feel free to point them out.
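
To keep an eye on the pool behaviour during further tests, I plan to use 
pgBouncer's admin console. The port and admin user below are assumptions from 
my setup, not something confirmed earlier in this thread:

```sql
-- Run via: psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;  -- active/waiting clients and server connections per pool
SHOW STATS;  -- cumulative request counts and query timings per database
```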

Thank you very much again for the excellent tips.
Bye
Charles

> >>> SELECT count(*) FROM pg_stat_activity; \watch 1
> >>
> >> At the times when the resources are overloaded by more connections
> >> than the resources can efficiently service -- well that's precisely
> >> the time that a sleeping "monitoring" process is least likely to be
> >> given a time slice to run. If you can manage to get pgbadger to run
> >> on your environment, and you turn on logging of connections and
> >> disconnections, you will be able to get far more accurate
> >> information.
> >
> > Yes, that sounds reasonable. I assumed that this kind of measurement
> > has a higher priority or reserved slots for it.
> > In those occasions is when they are most needed.
> 
> There is no such feature in PostgreSQL.  It might be worthwhile, although how 
> that would be implemented is not
> obvious, short of a sysadmin looking for the monitoring backend process and 
> running "nice" against it.
> 
> > And thank you for the hint to pgbadger. I will take a look into it,
> > but an installation on the server completely depends on our hosting
> > service people. I am not sure this is feasible, but I can imagine an
> > alternative scenario, using a local installation for tuning and then
> > pass the parameters to the hosters for implementation.
> 
> As long as you can control the PostgreSQL configuration (to set the right 
> logging options) and can retrieve the log
> files, you should be able to use it.
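
These are the postgresql.conf logging settings I would ask our hosting people 
to set for pgbadger; the log_line_prefix format is just my assumption, the 
pgbadger documentation should be checked for the exact recommended value:

```ini
# Logging options for connection/disconnection analysis with pgbadger
log_connections = on
log_disconnections = on
log_min_duration_statement = 0
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
```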
> 
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


