Antoine Pitrou <pit...@free.fr> added the comment:

The problem is you're joining the child processes before draining the queue in 
the parent.
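
For illustration, a minimal sketch of that fix (the worker function and the item counts here are made up, not your exact code): drain the queue first, then join:

    import multiprocessing as mp

    NUM_WORKERS = 4
    ITEMS_PER_WORKER = 1000

    def worker(q):
        # Each child pushes its results onto the shared queue.
        for i in range(ITEMS_PER_WORKER):
            q.put(i)

    if __name__ == '__main__':
        q = mp.Queue()
        procs = [mp.Process(target=worker, args=(q,)) for _ in range(NUM_WORKERS)]
        for p in procs:
            p.start()

        # Drain the queue *before* joining: a child does not exit until its
        # buffered items have been flushed to the pipe, so joining first can
        # deadlock once the pipe buffer fills up.
        results = [q.get() for _ in range(NUM_WORKERS * ITEMS_PER_WORKER)]

        for p in procs:
            p.join()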

Generally, instead of building your own synchronization scheme like this, I 
would recommend using the higher-level abstractions provided by 
multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor.
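
With concurrent.futures, for example, the executor takes care of shipping results back to the parent, so there is no queue to drain (again only a sketch, with a placeholder worker):

    from concurrent.futures import ProcessPoolExecutor

    def square(i):
        # Placeholder work; the return value is sent back to the parent
        # automatically, no explicit queue management needed.
        return i * i

    if __name__ == '__main__':
        with ProcessPoolExecutor(max_workers=4) as executor:
            results = list(executor.map(square, range(1000)))
        print(len(results))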

By the way, this exact issue is called out in the documentation:

"""
As mentioned above, if a child process has put items on a queue (and it has not 
used JoinableQueue.cancel_join_thread), then that process will not terminate 
until all buffered items have been flushed to the pipe.

This means that if you try joining that process you may get a deadlock unless 
you are sure that all items which have been put on the queue have been 
consumed. Similarly, if the child process is non-daemonic then the parent 
process may hang on exit when it tries to join all its non-daemonic children.
"""

(from https://docs.python.org/3/library/multiprocessing.html#pipes-and-queues)
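
For reference, this is the kind of pattern that warning is about; with a payload large enough to overflow the pipe buffer, the join below can block forever (a deliberately minimal sketch, not your exact code):

    import multiprocessing as mp

    def worker(q):
        # Put more data than the underlying pipe can buffer.
        q.put('x' * 10_000_000)

    if __name__ == '__main__':
        q = mp.Queue()
        p = mp.Process(target=worker, args=(q,))
        p.start()
        p.join()         # may hang: the child is still flushing to the pipe
        print(len(q.get()))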

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue34140>
_______________________________________