> On Fri, 26 May 2000, Frank da Cruz wrote:
> 
> > The only other clue I can offer is that Kermit uses select() to
> > multiplex between the port and keyboard.  I don't know what Minicom
> > does -- maybe it uses forks and blocking reads like Kermit used to
> > in previous versions.
> 
> I am curious about the rationalization for using select() instead of
> the VTIME/VMIN approach.
> 
C-Kermit is targeted at *every* post-V6 Unix version, not just Linux.

> In my programming HOWTO, I recommend against using select() solely on
> the observation that it is designed to work with sockets and must be
> more complex than is necessary.
> 
> I have never read anything about select() and when or how it should be 
> used.
> 
As noted previously, Kermit's "terminal emulator" was originally coded using
two forks: one to read (blocking) from the port and write to the screen, the
other to read (blocking) from the keyboard and write to the port.  This method
is highly portable, at least within the Unix world (other OSes that have C
compilers and runtime systems, such as VMS, AOS/VS, etc., don't have fork()).

As the need grew for the two forks to share information, e.g. the results of
Telnet negotiations, Kermit outgrew this simple and portable model; it had to
run in a single process/address space so variables & buffers could be shared.

What to replace the old model with?  Threads were out of the question --
highly unportable and not available on many of the target platforms.  Ditto
for VTIME/VMIN.  That leaves select(), which as Jeff pointed out, works with
any file descriptor in Unix, not just TCP ones.

Of course select() is not totally portable either, but at least we know it's
there in any OS that has a sockets library, and the API is fairly uniform.

The Unix C-Kermit 7.0 CONNECT ("terminal emulation") module was cloned.  One
version (ckucns.c) uses select(); the original (ckucon.c) uses fork().  The
Unix makefile has hundreds of targets; most of them use ckucns.c, but many of
the older ones still use ckucon.c.  We use ckucns.c wherever we can because
it works more smoothly (no need to send signals between the forks and push
state info thru pipes), and of course we use ckucns.c in Linux.

I don't find anything particularly complex about select(); it's perfect for
this job.  I also never had any problem with it, and still don't as far as I
know.  It's not slow -- when C-Kermit is a Telnet or Rlogin client, it keeps
up at Ethernet speeds, so I can't imagine how it could be responsible for
losing serial-port characters at a mere 38400 bps.

So back to the original question: Does anybody have a theory why Kermit would
lose incoming bytes on a non-flow-controlled serial-port connection at
38400 bps (or any other speed), when Minicom does not lose them on the same
port at the same speed?

- Frank

