I raised this issue and question on Stack Overflow and in #python (Freenode)
and have received little or no feedback. I fear that the only answer
will lie in profiling the Python interpreter itself, which is beyond the
scope of my capabilities at present.
The original question can be found here:
http://stackoverflow.com/questions/38637282/multiprocessing-queue-seems-to-go-away-os-pipe-destruction-vs-python
Cliffs: after about one to two hours of processing, the consumer
processes reading the queue terminate on timeout during queue.get(). The
thread writing objects to the queue receives no exception and continues
writing to the queue. That thread keeps track of each child process in a
simple local list, and each time an object is added to the queue, the
process objects (consumers) are checked with .is_alive(). When the child
processes terminate themselves on timeout, is_alive() continues to
return True, so they are neither garbage collected nor replaced with new
processes. (A minimal sketch of this setup follows below.)
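For concreteness, here is a minimal sketch of the setup described
above. This is not the original code; the names (consumer, handle,
GET_TIMEOUT), the worker count, and the timeout value are all
placeholders chosen for illustration:

import multiprocessing as mp
import queue
import time

# Hypothetical timeout; the original post does not state the value used.
GET_TIMEOUT = 10


def handle(item):
    """Placeholder for the real per-item work."""
    time.sleep(0.1)


def consumer(work_queue):
    """Child process: pull items until queue.get() times out, then exit."""
    while True:
        try:
            item = work_queue.get(timeout=GET_TIMEOUT)
        except queue.Empty:
            return  # timed out waiting for work; the consumer terminates
        handle(item)


def main():
    work_queue = mp.Queue()
    workers = [mp.Process(target=consumer, args=(work_queue,))
               for _ in range(4)]
    for w in workers:
        w.start()

    for item in range(10000):
        # Reported symptom: put() raises no exception even after the
        # consumers have timed out and exited.
        work_queue.put(item)
        for i, w in enumerate(workers):
            if not w.is_alive():
                # Expected path: drop the dead worker (so it can be
                # garbage collected) and start a replacement. Reported
                # bug: is_alive() keeps returning True after the child
                # exits, so this branch is never taken.
                workers[i] = mp.Process(target=consumer,
                                        args=(work_queue,))
                workers[i].start()


if __name__ == "__main__":
    main()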
I sincerely apologize if my own understanding is at fault, but to the
best of my knowledge this code should work according to the
documentation.