> your s.close() is not being reached

I've checked and found that every thread closes its connection. This
traceback:
"QueuePool limit of size 5 overflow 10 reached, connection timed out"

is most probably caused by a long-running thread that walks over a list
of ~35,000 filenames. It renames the files and causes the app to start
another thread.
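For reference, a minimal sketch of how I understand the pool limits and the close-in-finally pattern are supposed to interact (SQLite stands in for the real Postgres URL here; the pool_size/max_overflow values just mirror the numbers in the error message, and do_work is a made-up placeholder for the real per-thread job):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

# pool_size=5 / max_overflow=10 correspond to the
# "QueuePool limit of size 5 overflow 10" in the traceback.
engine = create_engine("sqlite://", poolclass=QueuePool,
                       pool_size=5, max_overflow=10, pool_timeout=30)

def do_work(engine):
    conn = engine.connect()
    try:
        # ... real work would go here ...
        return conn.execute(text("SELECT 1")).scalar()
    finally:
        conn.close()  # runs even if the work raises, so the
                      # connection is returned to the pool, not leaked

result = do_work(engine)
```

If a thread holds its connection for the whole 35,000-file walk while other threads keep starting, the 5+10 checkouts are exhausted regardless of whether close() is eventually reached.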

> if you have a deadlock situation occurring, that can also quickly
> cause many connections to remain opened, all waiting on the same
> lock.   if you're locking rows and such, you have to be careful that
> other transactions don't try to lock the same rows in a different
> order.  by "different transactions" this could be a trans in the same
> thread, or in a different thread, and you'll have to narrow down which.

As far as I understand, a 'SHARE ROW EXCLUSIVE' lock (
http://www.postgresql.org/docs/8.1/static/sql-lock.html ) is
necessary here, but I was unable to find out how to issue it from
sqlalchemy.

With the proper lock chosen in postgres, the need to touch the
"threadframe" mentioned above disappears, right?
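In case it helps anyone else: since I found no dedicated SQLAlchemy API for table-level locks, my assumption is that the LOCK statement can simply be executed as raw SQL at the start of the transaction. A sketch (the table name "files" is hypothetical):

```python
from sqlalchemy import text

def lock_stmt(table_name):
    # LOCK TABLE takes no bind parameters for the table name, so the
    # (trusted, hard-coded) identifier is interpolated directly.
    return text("LOCK TABLE %s IN SHARE ROW EXCLUSIVE MODE" % table_name)

# Usage sketch, assuming a Postgres engine:
#
# conn = engine.connect()
# trans = conn.begin()
# conn.execute(lock_stmt("files"))  # lock is held until commit/rollback
# ... rename/update rows ...
# trans.commit()
```

The lock is released automatically at transaction end, so every thread taking the same table lock before touching the rows would serialize the conflicting work.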

> if you're still doing that thing where you delete a row and then
> immediately re-insert it, i will ask again that you consider using an
> UPDATE instead.

Yes, the code was deleting every record from the table and inserting
another one in its place.
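If I switch to the suggested UPDATE, I believe it would look roughly like this (raw SQL for brevity; SQLite in place of Postgres, and the "files" table/columns are hypothetical):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO files (id, name) VALUES (1, 'old.txt')"))

    # Instead of DELETE + immediate re-INSERT of the same primary key
    # (two row locks, and a window where the row is gone), modify the
    # existing row in place:
    conn.execute(text("UPDATE files SET name = :new WHERE id = :id"),
                 {"new": "new.txt", "id": 1})
```

A single UPDATE holds one row lock for the duration of the transaction, which should also reduce the chances of the deadlock scenario described above.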

P.S. SQLAlchemy revision 3942 was used.

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---