Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-30 Thread Frank Lanitz
On 30.10.2012 02:06, rodr...@paripassu.com.br wrote:
 BTW, 200 seems awfully high unless it's a *really* high-end machine. You
 may have fewer timeouts if you avoid swamping the server with a
 thundering herd of requests.

I was maintaining a setup which had 1000 connections on a not very
high-end server (12GB of RAM). It was just that most of the connections were
idling most of the time: Tomcat held a high number of persistent
connections for some reason, and end-user stand-alone clients
established a database connection on startup and kept it until
shutdown.

Cheers,
Frank





Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-30 Thread John R Pierce

On 10/30/12 12:32 AM, Frank Lanitz wrote:

I was maintaining a setup which had 1000 connections on a not very
high-end server (12GB of RAM). It was just that most of the connections were
idling most of the time: Tomcat held a high number of persistent
connections for some reason, and end-user stand-alone clients
established a database connection on startup and kept it until
shutdown.


A better configuration for that is for your web applications to grab a
connection from a connection pool, do a transaction, and then release
the connection as soon as you are done. Keep the pool size modest,
maybe 2-4X the number of CPU cores you have. If you have some things
that do long-running queries, like calculating reports, create a
separate, smaller pool for those, as there are likely only 1-2 of them at
a time anyway.
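
A minimal sketch of that layout, assuming Apache Commons DBCP 1.x (one of
the pools already mentioned in this thread); the driver URL, credentials,
and pool sizes below are placeholders to adjust for your own hardware:

    import org.apache.commons.dbcp.BasicDataSource;

    public class Pools {
        // Main pool for short web transactions: modest, roughly 2-4x CPU cores.
        static final BasicDataSource WEB = makePool(16);

        // Separate, smaller pool for long-running report queries so they
        // can't starve the web pool.
        static final BasicDataSource REPORTS = makePool(2);

        private static BasicDataSource makePool(int maxActive) {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("org.postgresql.Driver");
            ds.setUrl("jdbc:postgresql://dbhost:5432/appdb"); // placeholder
            ds.setUsername("app");                            // placeholder
            ds.setPassword("secret");                         // placeholder
            ds.setMaxActive(maxActive); // hard cap on concurrent connections
            ds.setMaxIdle(maxActive);
            ds.setMaxWait(5000);        // ms to wait for a free connection
            return ds;
        }
    }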




--
john r pierce                          N 37, W 122
santa cruz ca                      mid-left coast





Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-30 Thread Rodrigo Pereira da Silva

  
  
Hi Kevin,

  I think you are right. We need to use a connection pool for
all clients, so we can limit the number of active database
connections.
  We have some legacy applications that are hard to change, but
we will have to do it.

Thanks a lot, 

Rodrigo
 

On 10/30/2012 2:10 AM, Kevin Grittner wrote:

  rodr...@paripassu.com.br wrote:

  150 maxActive setting, but we have some low-usage applications
  getting a direct db connection (out of the pool).

You will probably be better off with a connection pool somewhere
between 10 and 20, as long as it is transaction-based, the client
side has low latency to the database, and the transaction isn't
waiting for other things (like human beings or slow connections to
somewhere else) to get its work done.

  The interesting thing is that the server suddenly gets a lot of
  connections, reaches the limit, and then releases them.

That is the pattern you will tend to see if you have too many
configured. The "thundering herd" syndrome. They charge in, drag
down performance while they over-stress the hardware, and then move
on.

  I'm willing to change to the Tomcat 7 connection pool. We are using
  c3p0 and the Apache ones; maybe that's a problem.

I have seen both the Tomcat pool and DBCP work well, although I think
the DBCP pool has more features. I don't have enough experience with
c3p0 to comment either way. I definitely would not bother to move
from Apache DBCP to the Tomcat pool. If I remember right I didn't
find much worth changing from defaults in the DBCP package, but it's
been a while. I would recommend either of those over an external
product like pgbouncer or pgpool; those are what you go to when you
don't have a quality connection pooler in your application software,
IMO.

The point is to configure whatever pool you use to limit the number
of active database transactions such that the active connections are
able to keep all available resources on the database server (cores,
spindles, network) busy, but no more. That should improve both
latency and throughput. There is no substitute for testing different
sizes with your actual load against your actual hardware; any advice
on size should be taken as the starting point for testing incremental
changes.

-Kevin




-- 
Rodrigo Pereira da Silva
rodr...@paripassu.com.br
http://www.paripassu.com.br
Rua Coronel Luis Caldeira, nº 67, Bloco 1, Sala 3A
88034-110 • Florianópolis • SC
Tel. (48) 3207-5755



[GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-29 Thread Rodrigo Pereira da Silva

  
  
Hi Guys,

We have about 10 web applications deployed in a Tomcat 7.0 accessing a
PostgreSQL 9.1 database. We do use connection pooling.
We have the max_connections parameter set to 200 connections; however,
we are reaching the max connections sporadically (about 4 times/day).
It's interesting, because we didn't have this problem with PostgreSQL 8.3.
I don't know if that can be caused by any PostgreSQL internal task,
like autovacuum.
Does someone know if there is anything new in PostgreSQL 9.1 that could
be consuming connections? Is there any standard approach to this kind of
problem? I'm concerned that increasing the max connections would not
resolve the problem, or would decrease performance.

Thanks in advance,

-- 
Rodrigo Pereira da Silva
rodr...@paripassu.com.br
http://www.paripassu.com.br
Rua Coronel Luis Caldeira, nº 67, Bloco 1, Sala 3A
88034-110 • Florianópolis • SC
Tel. (48) 3207-5755



Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-29 Thread Kevin Grittner
Rodrigo Pereira da Silva wrote:

 We have about 10 web applications deployed in a Tomcat 7.0
 accessing a PostgreSQL 9.1 database. We do use connection pooling.
 We have the max_connections parameter set to 200 connections;
 however, we are reaching the max connections sporadically (about 4
 times/day).

What is the pooler's maxActive setting? (It should be less than the
PostgreSQL max_connections setting.)

BTW, 200 seems awfully high unless it's a *really* high-end machine. You
may have fewer timeouts if you avoid swamping the server with a
thundering herd of requests.

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
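
If I remember right, the page above suggests starting somewhere around
((core_count * 2) + effective_spindle_count) active connections. A
throwaway sketch of that rule of thumb (the spindle count is an assumption
you would replace with your own number, and the result is only a starting
point for testing):

    public class PoolSizeEstimate {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            int spindles = 4; // assumption: effective spindle count of the DB server

            // Rule of thumb from the wiki page above; tune from here under
            // your real workload rather than treating it as a final answer.
            int startingPoolSize = cores * 2 + spindles;
            System.out.println("Suggested starting pool size: " + startingPoolSize);
        }
    }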

-Kevin




Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-29 Thread rodrigo
150 maxActive setting, but we have some low-usage applications getting a
direct db connection (out of the pool).

The interesting thing is that the server suddenly gets a lot of connections,
reaches the limit, and then releases them.

I'm willing to change to the Tomcat 7 connection pool. We are using c3p0
and the Apache ones; maybe that's a problem.

Thank you,

Well,

 Rodrigo Pereira da Silva wrote:

 We have about 10 web applications deployed in a Tomcat 7.0
 accessing a PostgreSQL 9.1 database. We do use connection pooling.
 We have the max_connections parameter set to 200 connections;
 however, we are reaching the max connections sporadically (about 4
 times/day).

 What is the pooler's maxActive setting? (It should be less than the
 PostgreSQL max_connections setting.)

 BTW, 200 seems awfully high unless it's a *really* high-end machine. You
 may have fewer timeouts if you avoid swamping the server with a
 thundering herd of requests.

 http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

 -Kevin









Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-29 Thread Xiong He
If you are using a connection pool, did you use pgbouncer to reduce the
maximum number of physical connections?
You said, "but we have some low usage application getting direct db
connection (out of the pool)". That means some connections are not
controlled by the connection pool. This can vary in the real
environment.

--
Thanks & Regards,
Xiong He





 




-- Original --
From: rodrigo <rodr...@paripassu.com.br>;
Date: Tue, Oct 30, 2012 09:06 AM
To: Kevin Grittner <kgri...@mail.com>;
Cc: pgsql-general <pgsql-general@postgresql.org>;
Subject:  Re: [GENERAL] Too much clients connected to the PostgreSQL Database



150 maxActive setting, but we have some low-usage applications getting a
direct db connection (out of the pool).

The interesting thing is that the server suddenly gets a lot of connections,
reaches the limit, and then releases them.

I'm willing to change to the Tomcat 7 connection pool. We are using c3p0
and the Apache ones; maybe that's a problem.

Thank you,

Well,

 Rodrigo Pereira da Silva wrote:

 We have about 10 web applications deployed in a Tomcat 7.0
 accessing a PostgreSQL 9.1 database. We do use connection pooling.
 We have the max_connections parameter set to 200 connections;
 however, we are reaching the max connections sporadically (about 4
 times/day).

 What is the pooler's maxActive setting? (It should be less than the
 PostgreSQL max_connections setting.)

 BTW, 200 seems awfully high unless it's a *really* high-end machine. You
 may have fewer timeouts if you avoid swamping the server with a
 thundering herd of requests.

 http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

 -Kevin








Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-29 Thread Kevin Grittner
rodr...@paripassu.com.br wrote:

 150 maxActive setting, but we have some low-usage applications
 getting a direct db connection (out of the pool).

You will probably be better off with a connection pool somewhere
between 10 and 20, as long as it is transaction-based, the client
side has low latency to the database, and the transaction isn't
waiting for other things (like human beings or slow connections to
somewhere else) to get its work done.

 The interesting thing is that the server suddenly gets a lot of
 connections, reaches the limit, and then releases them.

That is the pattern you will tend to see if you have too many
configured. The "thundering herd" syndrome. They charge in, drag
down performance while they over-stress the hardware, and then move
on.

 I'm willing to change to the Tomcat 7 connection pool. We are using
 c3p0 and the Apache ones; maybe that's a problem.

I have seen both the Tomcat pool and DBCP work well, although I think
the DBCP pool has more features. I don't have enough experience with
c3p0 to comment either way. I definitely would not bother to move
from Apache DBCP to the Tomcat pool. If I remember right I didn't
find much worth changing from defaults in the DBCP package, but it's
been a while. I would recommend either of those over an external
product like pgbouncer or pgpool; those are what you go to when you
don't have a quality connection pooler in your application software,
IMO.

The point is to configure whatever pool you use to limit the number
of active database transactions such that the active connections are
able to keep all available resources on the database server (cores,
spindles, network) busy, but no more. That should improve both
latency and throughput. There is no substitute for testing different
sizes with your actual load against your actual hardware; any advice
on size should be taken as the starting point for testing incremental
changes.
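
A rough sketch of the transaction-scoped usage pattern described above,
written against whatever DataSource your pool exposes (Java 7
try-with-resources; the table and column names here are made up for
illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class OrderDao {
        private final DataSource pool; // the connection pool's DataSource

        public OrderDao(DataSource pool) { this.pool = pool; }

        // Borrow a connection, run exactly one transaction, and hand the
        // connection straight back to the pool; never hold it across user
        // think time or calls to other slow services.
        public void recordOrder(long customerId, long amountCents) throws SQLException {
            try (Connection con = pool.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO orders (customer_id, amount_cents) VALUES (?, ?)")) {
                    ps.setLong(1, customerId);
                    ps.setLong(2, amountCents);
                    ps.executeUpdate();
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            } // close() here returns the connection to the pool
        }
    }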

-Kevin

