Piet van Oostrum wrote:
norseman <norse...@hughes.net> (n) wrote:
n> Piet van Oostrum wrote:
norseman <norse...@hughes.net> (n) wrote:
n> I have tried both and Popen2.popen2().
n> os.popen runs both way, contrary to docs.
What do you mean `os.popen runs both way'?
n> It reads from child while console writes directly to child - thus
n> eliminating the problem of coding a pass through from master.
Yes, but that is not `both way': popen connects the parent to the child
through a pipe. The pipe works one way: from the child to the parent
with 'r' (default), from the parent to the child with 'w'. You can't
communicate the other way through the pipe. So the communication from
the parent process to the child through the popen is ONE WAY. If you
want TWO WAY communication you can use popen2, or better use
subprocess.Popen.
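A minimal two-way sketch with subprocess.Popen (written for modern Python; the child code and the input string are made up for illustration):

```python
import subprocess
import sys

# A round trip through the child: the parent writes to the child's stdin
# and reads the reply from the child's stdout. The one-liner child here
# is a stand-in for a real program.
child_code = "import sys; sys.stdout.write(sys.stdin.readline().upper())"

p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(b"hello\n")  # send input, collect output
print(out)
```

communicate() handles the writing, reading, and closing in one call, which avoids the deadlocks you can get juggling both pipe ends by hand.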
We are actually saying the same thing - you do say it better.
A child process always inherits stdin, stdout and stderr from the parent
unless you change that (e.g. by redirecting to a pipe, like popen does
for one of them). It doesn't matter whether you use os.popen,
subprocess.Popen, os.system, or os.fork to create the child process. So
in your case if the parent inputs from the console, so does the child.
But note: this is not communication from the parent process to the
child, but from YOU to the child. So the parent-child communication is
ONE WAY.
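To see the inheritance in action, here is a sketch (modern Python, illustrative child code) where only stdout is redirected: stderr is not mentioned, so the child inherits it and writes straight to the parent's console.

```python
import subprocess
import sys

# Only stdout goes through a pipe; stderr stays inherited from the parent.
code = "import sys; sys.stdout.write('to-pipe'); sys.stderr.write('inherited')"

p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out)  # only what the child wrote to stdout arrives via the pipe
```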
n> "...
doesn't work as the iterator for a file, including pipes, does a
read ahead (see the doc on file.next()) and therefore is not suitable
for interactive use.
n> ..."
n> If I understand you the above can be stated as:
n> The above does not work as an iterator for any file type, including
n> pipes, but it does do read aheads .... and therefore is not suitable for
n> interactive use.
For files in general it is no problem because the contents of the file
are not interactively generated. Read ahead on a file works as long as
you don't use readline() on the file in between the iterator actions.
For a socket it could be the same problem if the other side generates
the output interactively.
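The safe pattern for interactive pipe output is an explicit readline() loop rather than the file iterator. A sketch (modern Python; the slow child is illustrative):

```python
import subprocess
import sys

# The child produces lines one at a time; readline() hands each line over
# as soon as it arrives, instead of buffering ahead the way the Python 2
# file iterator could.
code = ("import sys\n"
        "for i in range(3):\n"
        "    sys.stdout.write('line %d\\n' % i)\n"
        "    sys.stdout.flush()\n")

p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE)
lines = []
while True:
    line = p.stdout.readline()  # blocks only until one full line is there
    if not line:                # empty result means EOF: the pipe closed
        break
    lines.append(line)
p.wait()
print(lines)
```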
n> If that is correct then read ahead is simply buffered file reads (grab a
n> chunk, parcel it out on demand) - yes?
I don't know; I have looked into the source code, but it isn't clear to
me. I noticed that iteration didn't work and then looked up the
documentation. It talks about read ahead for efficiency.
n> As for "... not suitable for interactive ..." Really? Except for
n> special purpose use the current interactive components are all buffered
n> for read ahead use. Check the actual code for your keyboard, your mouse
n> and so forth. It's the read ahead that allows faster time to completion.
n> It's why C-code has the putch function.
Yes, but they only read what is available. The iterator apparently tries
to read more and then has to wait.
In actuality the read ahead does 'run off the end' and then waits.
Specifics are coder dependent. But I think I understand what you mean.
It may not be treating the incoming buffer as circular. That could
explain a few things I'm seeing.
n> Yes - Sync IS the bigger hammer! If that is what is needed - so be it.
n> All character readers (byte at a time) should obey a flush(). Depending
n> on type, code for the reader controls whether or not it flushes
n> incomplete "lines" in the in-buffer(s). Proper implementation limits lost
n> data on system crash.
I don't understand what you say here. As I said before, it has nothing
to do with sync(). Also, for reading there is no flush(); the flush() is
done on the other side of the pipe. Yes, sync() has to do with system
crashes, but that is not what we are talking about in this thread.
The line with Sync is just a comment. Sync'ing the whole system just to
force a single flush is not a good way to proceed. The comment is not
actually connected to the comments on readers.
n> In trying to use flush at the master side I keep getting messages
n> indicating strings (completed or not) are not flushable. Strange practice.
If you print to the console in the master the flush is done
automatically.
The symptoms have been otherwise. Going back to your comments on
iterators: not proceeding with 'in-line' processing but rather holding
onto the bytes until later would give the effect of flush not working.
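On the "strings are not flushable" message: flush() is a method of file-like objects, not of strings, which is what those errors are saying. A small sketch (io.StringIO stands in for any writable stream):

```python
import io

# flush() lives on the file object; calling it on a string fails,
# which matches the "not flushable" complaints.
buf = io.StringIO()
buf.write("partial line")
buf.flush()  # fine: buf is a file-like object

print(hasattr("some text", "flush"), hasattr(buf, "flush"))
```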
n> ---
n> from subprocess import Popen, PIPE
n> xx = Popen(["z6.py"], stdout=PIPE).stdout
n> while True:
n> line = xx.readline()
n> if not line: break
n> print "\t" + line,
n> ---
n> DOES WORK on Python 2.5.2 on Slackware 10.2 - THANK YOU VERY MUCH!!!
n> Isn't working on Windows. error message comes as one of two forms.
n> 1- %1 not found #as shown above
n> 2- file not found #as ...["python z6.py"]...
n> same #as #2 even with full paths given
That should be Popen(["python", "z6.py"], stdout=PIPE).stdout
And then with both python and z6.py given as full paths.
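One portable way to sidestep the PATH problem is sys.executable, the absolute path of the running interpreter. A sketch (the `-c` one-liner stands in for the real script path, e.g. a full path to z6.py):

```python
import subprocess
import sys
from subprocess import PIPE

# sys.executable works even when "python" is not on PATH, as is common
# on Windows.
p = subprocess.Popen([sys.executable, "-c", "print('hello from child')"],
                     stdout=PIPE)
out, _ = p.communicate()
print(out)
```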
n> I get the impression subprocess ignores system things on Windows.
n> The routines it purports to replace do use them. At any rate, subprocess
n> is NOT consistent across platforms.
subprocess is, but Windows isn't. On Unix-like systems the python
command is usually in your PATH, so just giving "python" works. On
Windows the PATH is underused, and commands like python often are not in
it, unless you as a user have adapted the PATH, or maybe there is an
installation option to adapt it. The reason probably is that on Windows
almost nobody uses the command line; everybody just clicks, so the PATH
is not relevant.
"The reason" most certainly "is ...." Yep!!
Technically - the path is passed as part of the click, but since it is
transparent it is easy to forget its presence.
n> Some questions:
n> 1) "...], stdout=PIPE).stdout
n> ^ ^ why the double use?
It is not a double use. Popen(["z6.py"], stdout=PIPE) gives you a Popen
object, not a file object. Adding .stdout gets you the stdout attribute
of that Popen object, which you asked just before to be a pipe. So the
stdout=PIPE parameter makes Popen create a pipe, and the .stdout returns
you that pipe.
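The distinction is easy to see in code (modern Python; the `-c` child is illustrative):

```python
import subprocess
import sys
from subprocess import Popen, PIPE

p = Popen([sys.executable, "-c", "print('ok')"], stdout=PIPE)
print(type(p))   # a Popen object, not a file

xx = p.stdout    # the pipe that stdout=PIPE asked Popen to create
data = xx.read()
p.wait()
print(data)
```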
I rather thought it might be something like the military:
Company - Dress Right - Dress
Get the attention, state what is to be done, order it done. :)
Python goes to great lengths to "be helpful" but drops the ball on the
obvious. stdout=PIPE means the user wants stdout piped back, so it
should do the obvious rather than make the user be redundant.
Subprocess - run file - redirect stdout - run redirected
^ redundant :)
Subprocess - run I/O redirected file - (closing ')' means) run
1 percent of 100 characters typed is 1 error.
1 percent of 10,000 characters typed is 100 errors (minimum, not
factoring in fatigue).
The more the typing the more the errors and the longer to completion.
The less specific the verb, the more the research.
n> 2) "if not line: break" what tells what to look for EOL
n> or is the use of 'line' misleading?
n> is it a byte at a time output?
No, line is a line, as indicated by the readline() call. The
"if not line" is a test for end of file, in this case the end of the
child process (pipe closed).
Got it - that helps!
n> how much CPU usage does the loop use?
Not much. It is mainly waiting for input, and waiting doesn't consume CPU.
n> is there something in that loop that
n> uses the system pipe triggers to
n> reduce excessive CPU waste or does it
n> constantly poll? If so where does
n> what code get put to signal (or set)
n> the courtesy semaphores so it does
n> not hog the system?
It doesn't poll or use semaphores. The beauty of a pipe is that it
automatically synchronises the reader and writer processes. If you do a
read on a pipe and it is empty the OS just blocks you. When the other
end writes something in the pipe, the OS will unblock you.
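That blocking behaviour can be demonstrated with a bare os.pipe(): the read call simply sleeps inside the OS until the writer delivers data. A sketch (the thread and the 0.2 s delay are illustrative):

```python
import os
import threading
import time

r, w = os.pipe()               # a raw OS pipe: read end, write end

def writer():
    time.sleep(0.2)            # while we sleep, the reader is blocked
    os.write(w, b"ping")
    os.close(w)

t = threading.Thread(target=writer)
t.start()

start = time.time()
data = os.read(r, 4)           # blocks until the writer fills the pipe
elapsed = time.time() - start

t.join()
os.close(r)
print(data, elapsed)
```

No polling loop anywhere: the reader consumes no CPU while it waits.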
Unless the OS is using interrupts to signal things - looping is
expensive. Yes - interrupts are NOT a good idea for general computing
software. The use of interrupts is especially bad in multi-user,
multi-tasking environments.
The way you stated the process indicates a loop and thus the program is
removed from the processing list until the unblock re-instates processing.
Some clarity- Interrupts halt all processing until released. They
effectively knock the system unconscious until released.
Process polling creates an entry in the circular queue for
switching the program pointer (and associated registers
and maybe memory paging) to the next 'running' process so
it gets its slice of execution time. The effect of
multi-processing is realized but the CPU only executes
one program at a time, in intermixed pieces. The time
slice is the gain. Different processes can have
different lengths of run time. Instructions are ignored
except during their time slice. Then the current is put
on hold and the next in list is run for its time slice.
So if process polling is used and the program is frozen at the
readline, that would explain why the program apparently freezes. In fact
it does. Master has readline, readline is frozen, so master is frozen.
Keyboard instructions to the child are not passed on because the master
is frozen. Thus the keyboard 'continue' isn't transmitted, but CTRL-C
(tell the OS to kill the program) is intercepted by the OS. The GUI
interface is separate, so mouse clicks still work, and when the program
is killed the buffers are dumped. The facts all fit.
Solution to problem:
1) read byte at a time from pipe, compositing my own string
2) send self/test for EOD to exit loop, preventing endless loop.
Old School - but it works unless Python screws up byte by byte reads.
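That byte-at-a-time plan can be sketched like this (modern Python; the child code and the choice of '\x04' (EOT) as the end-of-data sentinel are illustrative):

```python
import subprocess
import sys

# The child sends its text a character at a time and finishes with an
# explicit end-of-data sentinel.
code = ("import sys\n"
        "for ch in 'done':\n"
        "    sys.stdout.write(ch)\n"
        "    sys.stdout.flush()\n"
        "sys.stdout.write('\\x04')\n"
        "sys.stdout.flush()\n")

p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE)
chunks = []
while True:
    b = p.stdout.read(1)          # one byte at a time, composited below
    if not b or b == b"\x04":     # EOF or the EOD sentinel: leave the loop
        break
    chunks.append(b)
p.wait()

message = b"".join(chunks)
print(message)
```

The sentinel test guards against the endless loop even if the child dies without sending it, because read(1) returns empty bytes at EOF.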
Piet - Thanks for the help.
Steve
--
http://mail.python.org/mailman/listinfo/python-list