On 6/25/06, Michael Bayer <[EMAIL PROTECTED]> wrote:
you should try 1657 to see if it solves your problems right now. I am also posting on the Python list to see what they think of formally allowing the Lock to be configurable, or at least making it an RLock (someone else also wanted this feature: http://mail.python.org/pipermail/python-list/2006-
On 6/25/06, Michael Bayer <[EMAIL PROTECTED]> wrote:
Actually, closer inspection of the stack trace reveals something interesting: there is one thread where the __del__ method on the ConnectionFairy is getting called within the get() method of Queue, and creating a reentrant lockup since __del__ tries to return the connection to the Pool, ca
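A minimal sketch of why that is a reentrant lockup (simplified, not SQLAlchemy's actual pool code): a plain Lock cannot be acquired twice by the thread that already holds it, which is exactly the situation when __del__ fires inside Queue.get() while the Queue's mutex is held. An RLock tracks its owner and a recursion count, so the nested acquire succeeds:

```python
import threading

# Sketch (assumption: simplified stand-in for the Queue mutex).
# A plain Lock is not reentrant: the holding thread blocks (or
# fails a non-blocking attempt) on a second acquire.
lock = threading.Lock()
lock.acquire()
print(lock.acquire(False))   # False: same thread cannot re-enter

# An RLock counts nested acquires by the owning thread, so the
# __del__-inside-get() pattern would not deadlock.
rlock = threading.RLock()
rlock.acquire()
print(rlock.acquire(False))  # True: nested acquire is allowed
rlock.release()
rlock.release()
lock.release()
```

This is why making the pool's internal lock an RLock (as proposed above) sidesteps the hang without changing the pool's logic.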
On 6/25/06, Michael Bayer <[EMAIL PROTECTED]> wrote:
here's the gentoo version: http://bugs.gentoo.org/show_bug.cgi?id=54603
if you google "nptl" "glibc" "hang" you get a lot of hits all over the place across a lot of distros, generally
Just a note about versions, I found the references to the n
On Jun 25, 2006, at 10:24 PM, Dennis Muhlestein wrote: Ok, The servers are running on Xen virtual machines (3.0.1).
it looks totally like Queue.Queue, but I can't find any mention of people having a problem like this with Queue.Queue, except one that pointed to that redhat bug. Looking at your stacktrace, it is almost certainly a problem with the threading library; every single thread is stuck on acquiring one of
Ok, The servers are running on Xen virtual machines (3.0.1). The OS for both the host and the guest is Gentoo. They both use nptl for threading (glibc 2.3.6). I couldn't tell from the Redhat bug report whether they were using similar glibc and nptl versions to mine. If this problem persists, I could poss
also what OS/server/etc. are you running on? this is a strange problem you're having, I'm wondering if you're on a platform with threading problems, as I have found this (red hat 9 threading busted): https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=86416 In other news, the "timeout" parameter in
OK, this deadlock is entirely occurring as a result of the way the Queue.Queue module is being used, and it's not related to a database-level deadlock as far as I can tell. Which is a good thing! Because those things are hard to fix. Now, the pool should be timing out when no more connections are
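The timeout behavior described here can be sketched with a toy pool built on the same Queue pattern (hypothetical names — TinyPool is an illustration, not the real QueuePool): when the pool is exhausted, get() raises after the timeout instead of hanging forever.

```python
import queue

# Toy sketch (assumption: not SQLAlchemy's actual pool code) of a
# connection pool whose checkout times out rather than blocking forever.
class TinyPool:
    def __init__(self, creator, size=2, timeout=1.0):
        self._q = queue.Queue(maxsize=size)
        self.timeout = timeout
        for _ in range(size):
            self._q.put(creator())

    def get(self):
        try:
            # block=True with a timeout: waits, then gives up cleanly
            return self._q.get(block=True, timeout=self.timeout)
        except queue.Empty:
            raise TimeoutError("pool exhausted; no connection within timeout")

    def put_back(self, conn):
        self._q.put(conn)

pool = TinyPool(creator=lambda: object(), size=1, timeout=0.1)
conn = pool.get()
try:
    pool.get()           # pool is empty; times out instead of hanging
except TimeoutError:
    print("timed out")
pool.put_back(conn)
```

The point is that a bounded wait surfaces exhaustion as an error you can log, instead of the silent hang the stack traces above show.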
Here are the stack traces of my locked threads:
The server is still in this state (I have two operating now), so if anyone can suggest debugging commands I can type to figure out what is happening, I'm available to chat on MSN or skype.
Thanks,
Dennis
So, I've been reading the source
assuming you're using postgres, when it hangs you should take a look at the pg_locks table: http://www.postgresql.org/docs/8.1/interactive/monitoring-locks.html and also you should install the threadframe module: http://www.majid.info/mylos/stories/2004/06/10/threadframe.html and create an additional
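As an aside, later Pythons grew a stdlib equivalent of the threadframe module: sys._current_frames() (available since Python 2.5) maps each live thread id to its current frame, so every thread's stack can be dumped without a third-party extension. A minimal sketch:

```python
import sys
import threading
import traceback

# Sketch: dump the stack of every live thread -- the same idea the
# threadframe module provided before sys._current_frames() existed.
def dump_all_stacks():
    out = []
    for tid, frame in sys._current_frames().items():
        out.append("--- Thread %d ---\n" % tid)
        out.extend(traceback.format_stack(frame))
    return "".join(out)

# A blocked worker thread shows up in the dump with its stack,
# which is exactly what you want when diagnosing a hang like this.
evt = threading.Event()
worker = threading.Thread(target=evt.wait, daemon=True)
worker.start()
print("--- Thread" in dump_all_stacks())  # True
evt.set()
worker.join()
```

Wiring dump_all_stacks() to a signal handler or a debug URL gives the same stack traces posted earlier in this thread, on demand.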
Ok, so I upgraded mydrawings.com to SQLAlchemy 0.2.3 a few days ago with the hope that the database lockup problems on 0.1.x were alleviated. I'm still having lockups, though.
I installed httprepl into CherryPy (see this thread for details: http://groups.google.com/group/turbogears/browse_