Re: subprocess module and long-lived subprocesses
On Fri, 20 Jan 2012 08:42:16 -0600, skip wrote:

> The library documentation doesn't talk a lot about long-lived subprocesses
> other than the possibility of deadlock when using Popen.wait(). Ideally, I
> would write to the subprocess's stdin, check for output on stdout and
> stderr, then lather, rinse, repeat. Is it safe to assume that if the stdout
> and/or stderr pipes have nothing for me the reads on those file objects (I'm
> using PIPE for all three std* files) will return immediately with an empty
> string for output? They won't block, will they?

They will. You need to use either threads, select(), or non-blocking I/O
in order to avoid deadlock. See the definitions of Popen._communicate() in
the subprocess module (there's one version for Windows which uses threads,
and another for Unix which uses select()).

> Will a broken pipe IOError get raised as for os.popen()

IOError(EPIPE) will be raised if you write to the stdin pipe when there are
no readers.

> or do I have to call Popen.poll() even in error situations?

Once you're finished with the process, you should close .stdin, then consume
all output from .stdout and .stderr until both report EOF, then call .wait().
That should cover any possible child behaviour (e.g. if the child explicitly
close()s its stdin, getting EPIPE doesn't mean that you can forget about the
process or that .wait() won't deadlock).

--
http://mail.python.org/mailman/listinfo/python-list
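[Not part of the original thread: a minimal sketch of the select()-based
approach described above, Unix only. The child command is illustrative;
os.read() is used on the raw descriptors so a partial read returns
immediately instead of blocking, and the shutdown follows the advice above:
close stdin, drain both pipes to EOF, then wait().]

```python
import os
import select
import subprocess

proc = subprocess.Popen(
    ["sh", "-c", "echo out; echo err >&2"],  # illustrative child
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
proc.stdin.close()  # we send no input; signal EOF to the child

chunks = {proc.stdout.fileno(): [], proc.stderr.fileno(): []}
open_fds = set(chunks)
while open_fds:
    # select() blocks until at least one pipe has data (or has hit EOF)
    readable, _, _ = select.select(list(open_fds), [], [])
    for fd in readable:
        data = os.read(fd, 4096)
        if data:
            chunks[fd].append(data)
        else:
            open_fds.discard(fd)  # empty read means EOF on this pipe

out = b"".join(chunks[proc.stdout.fileno()])
err = b"".join(chunks[proc.stderr.fileno()])
proc.wait()  # safe now: both pipes are fully drained
```

Because the loop always services whichever pipe is ready, the child can
never wedge itself by filling the stderr buffer while we wait on stdout.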
Re: subprocess module and long-lived subprocesses
(Apologies for the non-threaded reply. My subscription to the list is
currently set to no-mail and I can't get to gmane.org, so I have no clean
way to reply...)

Mike Fletcher wrote:

> Definitely *will* block, you have to explicitly set them non-blocking to
> have non-blocking behaviour: ...

> I think everyone winds up with their own wrapper around subprocess
> after they use it for more than a short period...

Thanks, that saves me much head scratching.

Skip
Re: subprocess module and long-lived subprocesses
On 12-01-20 09:42 AM, s...@pobox.com wrote:

> I'm converting some os.popen calls to use subprocess.Popen. I had
> previously been ignoring stdout and stderr when using os.popen. The primary
> motivation to switch to subprocess.Popen now is that I now want to check
> stderr, so would have to make code changes to use os.popen[34] anyway.
> Might as well go whole hog and switch to the new API.
>
> The library documentation doesn't talk a lot about long-lived subprocesses
> other than the possibility of deadlock when using Popen.wait(). Ideally, I
> would write to the subprocess's stdin, check for output on stdout and
> stderr, then lather, rinse, repeat. Is it safe to assume that if the stdout
> and/or stderr pipes have nothing for me the reads on those file objects (I'm
> using PIPE for all three std* files) will return immediately with an empty
> string for output? They won't block, will they? Will a broken pipe IOError
> get raised as for os.popen() or do I have to call Popen.poll() even in error
> situations?
>
> Thanks,

Definitely *will* block; you have to explicitly set them non-blocking to
have non-blocking behaviour:

    def set_nb(fh):
        """Set the non-blocking flag on the given file handle or descriptor."""
        if isinstance(fh, int) or hasattr(fh, 'fileno'):
            flags = fcntl.fcntl(fh, fcntl.F_GETFL)
            fcntl.fcntl(fh, fcntl.F_SETFL, flags | os.O_NONBLOCK)

on each of the three pipes. Then you need to attempt a read/write on each of
them periodically, catching the EWOULDBLOCK errors, to prevent deadlocks
where the buffers have filled up (e.g. because the subprocess is printing
errors on stderr, or because it is generating output, or because for some
reason the process isn't reading your input fast enough).

I think everyone winds up with their own wrapper around subprocess after
they use it for more than a short period...

HTH,
Mike

--
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
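[Not part of the original thread: a runnable version of Mike's helper with
its imports filled in, plus a small demonstration that a non-blocking read
on an empty pipe returns immediately instead of hanging. Unix only; the
child command is illustrative, and the EWOULDBLOCK condition surfaces as
BlockingIOError on Python 3.]

```python
import fcntl
import os
import subprocess

def set_nb(fh):
    """Set the non-blocking flag on the given file handle or descriptor."""
    if hasattr(fh, 'fileno'):
        fh = fh.fileno()
    flags = fcntl.fcntl(fh, fcntl.F_GETFL)
    fcntl.fcntl(fh, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# A child that produces no output for a while (illustrative command)
proc = subprocess.Popen(["sleep", "1"], stdout=subprocess.PIPE)
set_nb(proc.stdout)

# With nothing in the pipe, a blocking read would hang here; a
# non-blocking read fails immediately with EWOULDBLOCK/EAGAIN instead.
try:
    data = os.read(proc.stdout.fileno(), 4096)
except BlockingIOError:
    data = None  # nothing available yet; try again later

proc.kill()
proc.wait()
```

In a real wrapper the `except` branch is where you would go service the
other pipes (or sleep briefly) before retrying, which is exactly the
periodic-attempt loop described above.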