Alexey Izbyshev <[email protected]> added the comment:
(Restored test.py attachment)
The issue is caused by incorrect usage of `multiprocessing.Pool` in the attached test.py:
```
# Set up multiprocessing pool, initialising logging in each subprocess
with multiprocessing.Pool(initializer=process_setup,
                          initargs=(get_queue(),)) as pl:
    # 100 seems to work fine, 500 fails most of the time.
    # If you're having trouble reproducing the error, try bumping this
    # number up to 1000
    pl.map(do_work, range(10000))

if _globalListener is not None:
    # Stop the listener and join the thread it runs on.
    # If we don't do this, we may lose log messages when we exit.
    _globalListener.stop()
```
Leaving the `with` statement causes `pl.terminate()` to be called [1, 2].
Since multiprocessing simply sends SIGTERM to all workers, a worker may be
killed while it holds the cross-process lock guarding `_globalQueue`. In that
case, `_globalListener.stop()` blocks forever trying to acquire that lock (in
order to add a sentinel to `_globalQueue` that tells the background thread to
stop monitoring it).
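For reference, the context manager exit is roughly equivalent to the following
(see [1]); the helper names are taken from the attached test.py:
```
# Rough equivalent of the `with` block above: __exit__ calls terminate(),
# which stops the workers immediately even if a task is still running.
pl = multiprocessing.Pool(initializer=process_setup, initargs=(get_queue(),))
try:
    pl.map(do_work, range(10000))
finally:
    pl.terminate()  # workers may be killed while holding the queue's lock
```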
Consider using `Pool.close()` and `Pool.join()` to properly wait for task
completion.
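A minimal sketch of the suggested fix, reusing the helper names from the
attached test.py (process_setup, get_queue, do_work, _globalListener):
```
# Sketch only: same setup as test.py, but shut the pool down cleanly so that
# no worker is killed while holding the lock guarding _globalQueue.
pl = multiprocessing.Pool(initializer=process_setup, initargs=(get_queue(),))
try:
    pl.map(do_work, range(10000))
finally:
    pl.close()  # no new tasks will be submitted
    pl.join()   # wait for the workers to exit on their own

if _globalListener is not None:
    # Safe now: no worker can still hold the queue's lock.
    _globalListener.stop()
```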
[1] https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing.pool.Pool.terminate
[2] https://docs.python.org/3.9/library/multiprocessing.html#programming-guidelines
----------
nosy: +izbyshev
resolution: -> not a bug
stage: -> resolved
status: open -> closed
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue42097>
_______________________________________