New submission from David Decotigny <[EMAIL PROTECTED]>:

With the attached script, calling demo() with, for example,
datasize=40*1024*1024 and timeout=1 deadlocks: the program never
terminates.
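
(The attached c.py is not reproduced in this message; the following is
a minimal sketch of the scenario as described, with function and
variable names guessed rather than taken from the actual script.)

    import multiprocessing
    from Queue import Empty   # multiprocessing.Queue raises Queue.Empty

    def f(q, datasize):
        # put() returns immediately; a background "_feed" thread pickles
        # the object and streams it through the queue's underlying pipe.
        q.put('x' * datasize)

    def demo(datasize=40*1024*1024, timeout=1):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=f, args=(q, datasize))
        p.start()
        try:
            # With a short timeout this can give up before any data has
            # arrived (timing-dependent: the fork plus building and
            # pickling 40 MB of data takes a while).
            q.get(timeout=timeout)
        except Empty:
            pass
        p.join()   # waitpid: never returns, see the analysis below

    if __name__ == '__main__':
        demo()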

The bug appears on Linux (RHEL4) / Intel x86 with the "multiprocessing"
module shipped with Python 2.6b3, and I think it can easily be
reproduced on other Unices. It also appears with Python 2.5 and the
standalone processing package 0.52
(https://developer.berlios.de/bugs/?func=detailbug&bug_id=14453&group_id=9001).

After a quick investigation, it seems to be a deadlock between
waitpid() in the parent process and the send() on the pipe in the
"_feed" thread of the child process. The "_feed" thread is still
sending data (the data is large) through the pipe while the parent
process has already called waitpid() (because of the "short" timeout):
the pipe fills up because no consumer is reading from it (the consumer
is already blocked in waitpid()), so the "_feed" thread in the child
blocks forever. Since the child process joins the "_feed" thread before
exiting (after function f), it never exits, and hence the waitpid() in
the parent process never returns.
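
The pipe-filling part can be seen in isolation, without Queue at all: a
writer blocked in send() keeps the child alive past any join(). Here is
a standalone sketch (the ~64 KiB buffer size is the usual Linux pipe
default, not something measured in this report):

    import multiprocessing

    def writer(conn, nbytes):
        # Blocks as soon as the OS pipe buffer (~64 KiB on Linux) is
        # full and nobody reads from the other end.
        conn.send('x' * nbytes)
        conn.close()

    if __name__ == '__main__':
        parent, child = multiprocessing.Pipe()
        p = multiprocessing.Process(target=writer,
                                    args=(child, 40*1024*1024))
        p.start()
        p.join(5)
        print 'alive after join(5):', p.is_alive()   # True: send() is stuck
        parent.recv()   # reading drains the pipe and unblocks the writer
        p.join()        # now the child exits normally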

This doesn't happen if I use timeout=None or a larger timeout (e.g. 10
seconds), because in both cases waitpid() is called /after/ the "_feed"
thread in the child process has sent all of its data through the pipe.
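
As a workaround, draining the queue with a blocking get() before
joining avoids the deadlock; calling q.cancel_join_thread() in the
child (at the cost of possibly losing buffered data) is the other
escape hatch the library provides. A sketch of the consumer-side fix:

    import multiprocessing

    def f(q, datasize):
        q.put('x' * datasize)

    def demo_fixed(datasize=40*1024*1024):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=f, args=(q, datasize))
        p.start()
        obj = q.get()   # blocking get drains the pipe, the feeder finishes
        p.join()        # returns promptly: nothing left for the child to flush
        return len(obj)

    if __name__ == '__main__':
        print demo_fixed()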

----------
components: Library (Lib)
files: c.py
messages: 72640
nosy: DavidDecotigny
severity: normal
status: open
title: multiprocessing deadlocks when sending large data through Queue with timeout
versions: Python 2.6
Added file: http://bugs.python.org/file11401/c.py

_______________________________________
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3789>
_______________________________________