I'm coming back to this thread because today I ran into the same problem: 
PostgreSQL reaching the max_connections limit and, therefore, some of my 
websites throwing intermittent HTTP 500 errors (because web2py couldn't 
connect to the database).

As a reminder, we are talking about a VPS with multiple instances of web2py 
running, all of them serving the same web2py app, each one connecting to a 
different PostgreSQL database (although the database structure is the same 
across all the databases). Each web2py instance is served by a lighttpd 
virtual host through FastCGI. Each virtual host (that is, each web2py 
instance) receives a different volume of traffic (which is obvious: they are 
different websites with different audiences). 

The original problem (the one that led me to post this question in the first 
place) was that the PostgreSQL database server was reaching the 
"max_connections" limit and, as a consequence, some of the websites were 
throwing intermittent HTTP 500 errors (web2py couldn't connect to the database).

Then, another user pointed me to the "pool_size" parameter of the DAL 
constructor. Thanks again! 
I've been reading the web2py documentation about pooling [1] and I noticed 
that it says: "When the next http request arrives, web2py tries to 
recycle a connection from the pool and use that for the new transaction. If 
there are no available connections in the pool, a new connection is 
established".
So, if I'm not mistaken, I deduce that web2py's pooling mechanism alone 
can't keep me under the PostgreSQL "max_connections" limit. That is because, 
no matter the size of the pool, if the pool is exhausted and the website is 
receiving a lot of requests, new connections will be created, and eventually 
the database server will reach the "max_connections" limit.
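To make that concrete, here is a tiny hypothetical sketch (not web2py's 
actual code, just the behavior the book describes): a request recycles an 
idle connection from the pool when one is available, opens a brand-new 
connection otherwise, and at most pool_size connections are kept idle for 
reuse. Nothing caps the total number of simultaneously open connections:

```python
# Hypothetical sketch of the pooling behavior described in the web2py book.
# The class and names are mine, for illustration only.

class Pool:
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.idle = []          # connections waiting to be recycled
        self.total_open = 0     # every connection currently alive

    def acquire(self):
        if self.idle:
            return self.idle.pop()  # recycle an idle connection
        self.total_open += 1        # pool empty -> open a NEW connection
        return object()             # stand-in for a real db connection

    def release(self, conn):
        if len(self.idle) < self.pool_size:
            self.idle.append(conn)  # keep at most pool_size idle
        else:
            self.total_open -= 1    # surplus connection is closed

pool = Pool(pool_size=5)
# 20 simultaneous requests, none finished yet: 20 connections are open,
# far beyond pool_size=5 -- this is what exhausts max_connections.
conns = [pool.acquire() for _ in range(20)]
print(pool.total_open)  # 20
```

So pool_size only bounds the number of *idle* connections kept around 
between requests, not the peak number of open connections under load.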

So, I read about PgBouncer [2], with special attention to the configuration 
parameter "max_client_conn" [3]. I've also found these two posts [4] [5] 
explaining that this parameter can be set to **any** number, independently 
of the "max_connections" PostgreSQL configuration.
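For reference, I understand the setup would look something like the 
following pgbouncer.ini sketch (database names, ports, and pool sizes are 
placeholder assumptions on my part), with each web2py DAL pointed at port 
6432 instead of 5432:

```ini
[databases]
; placeholder entries -- one per website's database
site1_db = host=127.0.0.1 port=5432 dbname=site1_db
site2_db = host=127.0.0.1 port=5432 dbname=site2_db

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000   ; may exceed postgres max_connections
default_pool_size = 20   ; actual server connections per database/user pair
```

If I read the docs right, max_client_conn bounds only the client side, 
while default_pool_size bounds the real server connections PgBouncer opens 
against PostgreSQL.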

Therefore, is it correct to say that web2py's pooling mechanism **won't** 
let you overcome the PostgreSQL max_connections limit, but that, on the 
other hand, you **will** be able to overcome that limit using PgBouncer? 
Thanks in advance. 


[1] http://www.web2py.com/books/default/chapter/29/06/the-database-abstraction-layer#Connection-pooling
[2] https://wiki.postgresql.org/wiki/PgBouncer
[3] http://pgbouncer.projects.pgfoundry.org/doc/config.html
[4] http://stackoverflow.com/questions/18061541/pgbouncer-on-large-client-connections
[5] http://stackoverflow.com/questions/26844635/multi-process-pgbouncer-pooling-out-of-available-connections



On Tuesday, October 28, 2014 at 4:55:42 PM UTC-3, Niphlod wrote:
>
> theoretically a db = DAL(...., pool_size=5) will create AT MOST 5 
> connections to that db. You have 20, so any app instance will create AT 
> MOST 20 connections to the db. If your postgres accepts AT MOST 50 
> connections, you'll reach the cap at two apps and a half. As for the RAM 
> consumed by postgres, that's a setting too. Of course, if you have 4 GB of 
> RAM, 1.5 assigned to postgres seems normal (if not too conservative). Most 
> db engines (postgresql included) benefit in every operation if more RAM is 
> available.
>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
