On 23 May 2005 13:22:04 -0700, Simon Percivall <[EMAIL PROTECTED]> wrote:
>Okay, so the reason that what you're trying to do doesn't work is that
>the readahead buffer used by the file iterator is 8192 bytes, which
>clearly might be too much. It also might be because the output from the
>application you're running is buffered, so you might have to do
>something about that as well.
>
>Anyway, if the output from the child application is unbuffered, writing
>a generator like this would work:
>
>def iterread(fobj):
>    stdout = fobj.stdout.read(1)  # or what you like
>    data = ""
>    while stdout:
>        data += stdout
>        while "\n" in data:
>            line, data = data.split("\n", 1)
>            yield line
>        stdout = fobj.stdout.read(1)
>    if data:
>        yield data
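[Editor's note: a minimal, self-contained sketch of the byte-at-a-time generator above, run against a child Python interpreter. The `-u` flag and the two-line child program are assumptions added for the demonstration; `-u` forces the child to run unbuffered, addressing the second caveat Simon raises.]

```python
import subprocess
import sys

def iterread(proc):
    # Read one character at a time so we never wait on the iterator's
    # 8192-byte readahead buffer; yield each complete line as it arrives.
    data = ""
    chunk = proc.stdout.read(1)
    while chunk:
        data += chunk
        while "\n" in data:
            line, data = data.split("\n", 1)
            yield line
        chunk = proc.stdout.read(1)
    if data:
        yield data

# Hypothetical child: an unbuffered (-u) Python process printing two lines.
proc = subprocess.Popen(
    [sys.executable, "-u", "-c", "print('one'); print('two')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # text mode, so read(1) returns str
)
lines = list(iterread(proc))
proc.wait()
```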
Or, doing the same thing, but with less code:

def iterread(fobj):
    return iter(fobj.stdout.readline, '')

Haven't tried this on subprocess's pipes, but I assume they behave much
the same way other file objects do (at least in this regard).

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list
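[Editor's note: a quick sketch, not from the original thread, confirming that the two-argument `iter(callable, sentinel)` form does work on a subprocess pipe: `readline` is called repeatedly until it returns the sentinel `""` at end-of-file. The three-line child program is an invented example.]

```python
import subprocess
import sys

# Hypothetical child process printing three lines.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('alpha'); print('beta'); print('gamma')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # text mode, so readline() returns str
)

# iter(f, sentinel) keeps calling f() and yields each result
# until it equals the sentinel -- here, "" at EOF on the pipe.
lines = [line.rstrip("\n") for line in iter(proc.stdout.readline, "")]
proc.wait()
```

Note that this form still blocks inside `readline()` until a full line (or EOF) arrives, so it only streams promptly if the child's output is line-buffered or unbuffered.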