[issue35122] Process not exiting on unhandled exception when using multiprocessing module

2018-11-01 Thread akhi singhania


akhi singhania added the comment:

Thank you very much for the reply and the link.  It seems I missed that bit 
in the documentation; my apologies.  I can confirm that calling 
cancel_join_thread() removes the need to explicitly call queue.close().

May I ask for some more clarification, if you do not mind?  My understanding 
now is that there are two scenarios to consider when a process using queues 
tries to exit:

- The default behaviour is that the process flushes the queue (waits for the 
feeder thread to write all buffered data to the pipe) before it exits.  This 
ensures that none of the queued data is lost, which matters in many 
circumstances.

- The alternate behaviour (enabled by calling cancel_join_thread()) is for 
when you do not care about losing the data still in the queue and just want 
to exit.  This is useful when losing data is acceptable and flushing the 
queue might take a long time (see the sketch after this list).
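
To check my understanding, here is a minimal sketch of the second behaviour 
(illustrative only, not the attached file; the amount of data is an arbitrary 
guess, just large enough to fill the pipe buffer):

import multiprocessing as mp

def main():
    q = mp.Queue()
    # Enqueue more data than the pipe can buffer, so the feeder thread
    # still holds unflushed data when the process exits.
    for _ in range(10):
        q.put(b'x' * (1 << 20))
    # Opt out of the default flush-on-exit behaviour: at exit the process
    # no longer waits for the feeder thread, and unflushed data is lost.
    q.cancel_join_thread()
    raise Exception('foo')

if __name__ == '__main__':
    main()

Without the cancel_join_thread() call this hangs at exit exactly as in my 
original report; with it, the process exits.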


Does the above sound about right?  Thank you very much for your explanation and 
sorry again for the noise.

--

___
Python tracker <https://bugs.python.org/issue35122>
___



[issue35122] Process not exiting on unhandled exception when using multiprocessing module

2018-10-31 Thread akhi singhania


New submission from akhi singhania:

I am not sure if this is an implementation bug or just a documentation bug.  
When using the multiprocessing module, I have come across a scenario where 
the process fails to exit after throwing an unhandled exception because it 
waits forever for the feeder thread to join.  Sending SIGINT does not cause 
the process to exit either, but sending SIGTERM does.

I have attached a simple reproducer.
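
In case the attachment does not come through in the archive, the reproducer 
is essentially the following (the data size and sleep duration here are 
placeholders; the attached file may differ in details):

import logging
import multiprocessing as mp
import time

def child(q):
    # Read a single item and return; the rest of the queued data is never
    # drained, which leaves the parent's feeder thread blocked on the pipe.
    q.get()

def main():
    mp.log_to_stderr(logging.DEBUG)

    print(' creating queue')
    q = mp.Queue()
    print(' created queue')

    print(' creating process')
    p = mp.Process(target=child, args=(q,))
    print(' starting process')
    p.start()
    print(' started process')

    print(' starting enqueue')
    for _ in range(10):
        q.put(b'x' * (1 << 20))  # enough data to fill the pipe buffer
    print(' done enqueue')

    print(' starting sleep')
    time.sleep(1)
    print(' done sleep')

    raise Exception('foo')

if __name__ == '__main__':
    main()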

When main() raises the unhandled exception, the process does not exit.  
However, if the amount of enqueued data is reduced, or the child process 
closes the queue before exiting, then the process exits fine.
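
The child-side variant that exits fine is simply (relative to the reproducer 
above):

def child(q):
    q.get()
    # Explicitly close this process's copy of the queue on the way out;
    # with this line present, the parent no longer hangs at exit.
    q.close()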

In the scenario where the process exits successfully, I see the following 
output:

 creating queue
[DEBUG/MainProcess] created semlock with handle 140197742751744
[DEBUG/MainProcess] created semlock with handle 140197742747648
[DEBUG/MainProcess] created semlock with handle 140197742743552
[DEBUG/MainProcess] Queue._after_fork()
 created queue
 creating process
 starting process
 started process
 starting enqueue
[DEBUG/MainProcess] Queue._start_thread()
[DEBUG/MainProcess] doing self._thread.start()
[DEBUG/Process-1] Queue._after_fork()
[INFO/Process-1] child process calling self.run()
[DEBUG/MainProcess] starting thread to feed data to pipe
[DEBUG/MainProcess] ... done self._thread.start()
 done enqueue
 starting sleep
 done sleep
Traceback (most recent call last):
  File "example.py", line 58, in <module>
    main()
  File "example.py", line 54, in main
    raise Exception('foo')
Exception: foo
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] telling queue thread to quit
[DEBUG/MainProcess] running the remaining "atexit" finalizers
[DEBUG/MainProcess] joining queue thread
[DEBUG/MainProcess] feeder thread got sentinel -- exiting
[DEBUG/MainProcess] ... queue thread joined


In the scenario where the process does not exit successfully, I see the 
following output:

 creating queue
[DEBUG/MainProcess] created semlock with handle 139683574689792
[DEBUG/MainProcess] created semlock with handle 139683574685696
[DEBUG/MainProcess] created semlock with handle 139683574681600
[DEBUG/MainProcess] Queue._after_fork()
 created queue
 creating process
 starting process
 started process
 starting enqueue
[DEBUG/MainProcess] Queue._start_thread()
[DEBUG/MainProcess] doing self._thread.start()
[DEBUG/Process-1] Queue._after_fork()
[INFO/Process-1] child process calling self.run()
[DEBUG/MainProcess] starting thread to feed data to pipe
[DEBUG/MainProcess] ... done self._thread.start()
 done enqueue
 starting sleep
 done sleep
Traceback (most recent call last):
  File "example.py", line 58, in <module>
    main()
  File "example.py", line 54, in main
    raise Exception('foo')
Exception: foo
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] telling queue thread to quit
[DEBUG/MainProcess] running the remaining "atexit" finalizers
[DEBUG/MainProcess] joining queue thread
<<<< Process hangs here >>>>



I found the "solution" of closing the queue in the child by trial and error and 
looking through the code.  The current documentation suggests that 
multiprocessing.Queue.close() and multiprocessing.Queue.join_thread() are 
"usually unnecessary for most code".  I am not sure if the attached code can be 
classified as normal code.  I believe that at the very least, the documentation 
should be updated or maybe it should be investigated if some code changes can 
address this.
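
For comparison, the explicit pattern that the documentation does describe is 
(a sketch of my reading of the docs, not code from the attachment):

import multiprocessing

q = multiprocessing.Queue()
q.put('obj')
q.close()        # no more data will be put on this queue by this process
q.join_thread()  # block until the feeder thread has flushed the pipe

Nothing there suggests that a *consumer* process also needs to call close() 
so that the producing process can exit.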

--
components: Extension Modules
files: example.py
messages: 328983
nosy: akhi singhania
priority: normal
severity: normal
status: open
title: Process not exiting on unhandled exception when using multiprocessing module
type: behavior
versions: Python 3.6
Added file: https://bugs.python.org/file47896/example.py

___
Python tracker <https://bugs.python.org/issue35122>
___