Connor Wolf added the comment:

> IMNSHO, working on "fixes" for this issue while ignoring the larger 
> application design flaw elephant in the room doesn't make a lot of sense.

I understand the desire for a canonically "correct" fix, but it seems the insistence 
on fixing it "correctly" has led to the /actual/ implementation being broken 
for at least 6 years now.

As it is, my options are:
A. Rewrite the many, many libraries I use that internally spawn threads.
B. Not use multiprocessing.

(A) is prohibitive from a time perspective (I don't even know how many 
libraries I'd have to rewrite!), and (B) means I'd get 1/24th of my VM's 
performance, so it's also effectively prohibitive.

At the moment, I've thrown together a horrible, horrible fix where I reach into 
the logging library (which is where I'm seeing deadlocks) and manually iterate 
over all attached log handlers, resetting each one's lock as soon as the child 
process spawns. 
This is, I think it can be agreed, a horrible, horrible hack, but in my 
particular case it works (the worst-case result is garbled console output for a 
line or two). 
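
Roughly, the hack looks like this (a minimal sketch, not the exact code I run; it 
assumes the only locks that matter are logging's module-level lock and each 
handler's lock):

    import logging
    import threading

    def reset_logging_locks():
        # Give the logging machinery fresh locks in the child, so a lock that
        # happened to be held by a parent thread at fork() time can't deadlock
        # us. Worst case: a line or two of garbled output.
        logging._lock = threading.RLock()    # module-level lock behind _acquireLock()
        loggers = [logging.getLogger()]      # root logger
        loggers += list(logging.Logger.manager.loggerDict.values())
        for logger in loggers:
            for handler in getattr(logger, "handlers", []):  # skip PlaceHolder entries
                handler.createLock()         # replaces handler.lock with a new RLock

This gets called at the very top of each child process's entry point, before 
anything tries to log.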

---

If a canonical fix is not possible, at least add a facility to multiprocessing's 
fork()-based process creation that lets the user decide what to do. In my case, 
the program is wedging in the logging system, and I am entirely OK with 
transiently garbled logs if it means I don't wind up deadlocking and having to 
force-kill the interpreter (which is, I think, a far /more/ destructive outcome).

If I could simply write `multiprocessing.Process(*args, **kwargs, 
_clear_locks=True)`, that would be entirely sufficient, and it would not change 
existing behaviour at all.
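
In the meantime, the effect can be approximated by subclassing Process (again 
just a sketch, reusing the hypothetical reset_logging_locks() helper from 
above):

    import multiprocessing

    class LockClearingProcess(multiprocessing.Process):
        # Approximation of the proposed _clear_locks=True flag: reset known
        # locks in the child before the target function runs.
        def run(self):
            reset_logging_locks()
            multiprocessing.Process.run(self)

Used as `LockClearingProcess(target=worker).start()`; run() executes in the 
forked child, so the locks are replaced before any user code touches logging.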

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6721>
_______________________________________