2012/5/11 Bruce Wade <bruce.w...@gmail.com>:
> Maybe in some places of the code but not everywhere. The problem is that when
> there is a large load all 3 servers get very slow on every page. I think it
> is the DB layer, as we have 90 tables in one database and 45 in another. I am
> also using connection pooling, which I think is causing problems. Because the
> DAL is loaded with every request, wouldn't that mean the pool is opened for
> each request? Or should there only ever be 10 connections open even if I
> have 1000 concurrent connections?
Connections = (pool_size) * (number of web2py processes)

So if you have 1 process with 10 threads and pool_size = 4:
1 * 4 = 4 connections

If you have 10 processes (each with 6 threads):
10 * 4 = 40 connections

As you can see the number of threads is not a term of the computation.
You must count the number of concurrent processes; the number of
threads does not count, and neither does the number of requests waiting
in the nginx queue.
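As a reference, here is a minimal sketch of where pool_size is set; the
connection string and the value 4 are placeholders, adjust to your setup:

# models/db.py -- each web2py process that runs this keeps its own pool
# of at most pool_size connections, so PostgreSQL sees roughly
# pool_size * number_of_processes connections in total
from gluon.dal import DAL   # not needed inside web2py models, shown for clarity

db = DAL('postgres://user:pass@localhost/mydb',  # placeholder URI
         pool_size=4)  # per process: with 10 processes -> up to 40 connections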

If the db seems to be locked you can run this on the db server host:

ps ax | grep -i "idle in transaction"

You should see many postgres processes stuck "idle in transaction".  That
is a symptom of web2py taking a long time to commit the transaction.
If you do not use the db in some complex view you can try to put a
db.rollback() at the beginning of the controller.
Are you using any web2py scripts (cron or the like)?  Check that they do
not keep a transaction open if the process runs for a long time.  Always
call db.commit()!
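A minimal sketch of both ideas; the table name, the controller and the
per-record function are made up for illustration:

# in a controller: discard any leftover transaction state before
# doing read-only work
def index():
    db.rollback()
    rows = db(db.mytable).select()   # hypothetical table
    return dict(rows=rows)

# in a cron / command-line script: commit often so you never hold a
# single transaction open for the whole run
def nightly_job():
    for row in db(db.mytable).select():   # hypothetical table
        do_work(row)                      # hypothetical per-record work
        db.commit()    # release locks now instead of at the end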

mic
