Can anyone out there provide me with significant details about
the electrical interface between an 84 key keyboard or later and a
PC/AT or later?  I'm looking for timing data, details of recovery when
both the keyboard and computer decide to send at the same time, and
the sequence required at power-up and reset.

        The reason that I'm looking is that I'm considering a PIC based
project that plugs in in lieu of a keyboard.

        Delete now if this kind of thing isn't interesting.

        A couple of hours following AltaVista links yesterday didn't
turn up anything interesting.  I'll be interested, if someone points
me to a web page, in figuring out what query would have found it for
me.  I have a book, the "Hardware Bible", whose description is sketchy,
and perhaps not even self-consistent.  It isn't consistent with the PC/AT
reference manual (IBM).  I don't have all the pages of the IBM manual,
so it might have what I need (though it isn't the most unambiguous
document that I've ever read), should anyone have a complete copy.

        I'm going to include some details, summarizing what I know, in
case someone who wouldn't otherwise respond just happens to have some
particulars that I'm missing.

        Of course the 83 key (PC - PC/XT) keyboard interface is easy
to follow, given that you can find schematics for early system boards
and the keyboard interface was all SSI/MSI standard parts.  But that's
not going to help me send all of the key codes of a 104 key keyboard
to a modern machine.  But the protocol changed (remember those
keyboards with the XT - AT switch on the bottom?) with the
introduction of the AT and its 84 key keyboard (with control and
escape moved to the wrong place, and the same physical plug).  It now
supports two way communication, so that you can choose which scan code
set, which keys are "typematic", the typematic rate and delay, and,
most usefully, control the keyboard LEDs.  But it still uses, besides
ground and +5V, just one data and one clock wire for transmission, both
still open collector with resistive pullups, so that each line is the
wired AND of the drive from the two ends.  (The fifth wire was used as
a reset signal from the computer to the 83 key keyboard, but is not
used that way in the newer interface.)
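
        Just to pin down what "wired AND" means here, a trivial C model
(my own illustration, not from any manual): each end either releases
its line or drives it low, and the line is high only when both ends
have released it.

    #include <stdio.h>

    /* 1 = this end has released the line (pullup wins), 0 = drives it low */
    static int line_level(int keyboard_end, int computer_end)
    {
        return keyboard_end && computer_end;  /* low if either end drives low */
    }

    int main(void)
    {
        printf("both released     -> %d (high)\n", line_level(1, 1));
        printf("keyboard drives 0 -> %d (low)\n",  line_level(0, 1));
        printf("both drive 0      -> %d (low)\n",  line_level(0, 0));
        return 0;
    }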

        At the same time the AT started using an 8042 as the keyboard
interface, so my TTL data books don't help me understand the protocol
anymore.  IIUC, the 8042 is a 1 chip micro, so Intel's data sheet
won't help either, since IBM got to program it, and that program won't
be in the data sheet.

        The IBM manual implies that a character must complete being
sent in 2ms.  Since there are 11 bits (start, 8 data, parity, and
stop) to send that sets a lower bound on the clock of about 5.5kHz.
It also says that the keyboard will check whether the computer is
holding the clock low at least every 60us, and that when the computer
holds the clock low it must do so for at least 60us (so the keyboard
is guaranteed to see it).  You might argue that this implies an upper
bound on the clock rate of 16.7kHz (60us high and near 0 time low),
but so long as the computer drives the clock low before or quickly
after the keyboard would stop driving it low (the keyboard does the
clocking for both directions of communication, I think), then the
keyboard would be free to decide that the computer hadn't done it, and
could go on to the next clock.  Actually, my best guess is that both
clock high and clock low times should be at least 60us, for an upper
bound of 8.3kHz clock, but it would be nice to know for sure, since
if a faster clock is OK coming from the keyboard, I won't have to
worry about jitter due to serving other tasks in the PIC.
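
        For concreteness, here is the frame layout and the arithmetic
above in C (the odd parity rule is my understanding of the AT protocol;
the code is my sketch, not anything out of the IBM manual):

    #include <stdio.h>

    /* Fill bits[0..10]: start (0), 8 data bits LSB first, odd parity, stop (1). */
    static void frame_bits(unsigned char byte, int bits[11])
    {
        int i, ones = 0;
        bits[0] = 0;                        /* start bit */
        for (i = 0; i < 8; i++) {
            bits[1 + i] = (byte >> i) & 1;  /* data, least significant bit first */
            ones += bits[1 + i];
        }
        bits[9]  = !(ones & 1);             /* odd parity over the 8 data bits */
        bits[10] = 1;                       /* stop bit */
    }

    int main(void)
    {
        int bits[11], i;
        frame_bits(0xAA, bits);
        for (i = 0; i < 11; i++)
            printf("%d", bits[i]);
        printf("\n");
        printf("lower bound: 11 bits / 2 ms    = %.1f kHz\n", 11.0 / 2.0);
        printf("upper guess: 1 / (60us + 60us) = %.1f kHz\n", 1000.0 / 120.0);
        return 0;
    }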

        The data valid times aren't particularly clear either.  My
best guess is that data must be valid while the clock is low, and for
a little while before and after (ambiguous statement in IBM manual).
Functionally I'll bet that the data is safe if it becomes valid by the
falling edge of the clock and remains valid for some period (likely
60us), because there is a microprocessor at each end, and the receiver
is occasionally executing an instruction to sample the clock line, and
when it finds that it has gone from high to low, then (a minimum of 2
instruction times later) samples the data.  It should certainly be
safe to change the data right after changing the clock, since
instruction time on a cheap micro is going to be long compared to the
clock/data signal skew on even a very long keyboard cable.
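
        In other words, I expect the receive loop at either end looks
something like this C sketch (read_clock() and read_data() are
hypothetical stand-ins for whatever the real port reads would be):

    /* hypothetical pin primitives: return 1 for line high, 0 for line low */
    extern int read_clock(void);
    extern int read_data(void);

    /* Receive one bit: find a falling clock edge, then sample data. */
    static int receive_bit(void)
    {
        while (read_clock() == 0)
            ;                    /* wait for clock high (previous bit done) */
        while (read_clock() == 1)
            ;                    /* wait for the falling edge */
        return read_data();      /* sample a couple of instruction times later */
    }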

        The old 83 key interface had the computer hold the clock low
after the falling edge (driven by the keyboard) that clocks the last
data bit (presumably now the stop bit, the old interface sent only 9
bits, no parity and no stop) until it was ready to receive another
byte (until PC software "accepted" the byte just sent).  I presume
that the 8042 emulates this behavior (so that it can be sure of
getting in a "resend" command, if necessary, before the keyboard
starts on the next byte), but in software, so perhaps not (always) as
promptly as the old SSI/MSI implementation, which presents a lower
bound on the clock low time, at least for that bit.  But am I correct,
and if the required low time is longer than normal, then what?
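
        On the keyboard side that just means checking for the hold off
before starting each new byte, roughly (hypothetical pin helper again):

    extern int read_clock(void);

    /* Byte-level flow control: the computer may still be holding the
       clock low after the last bit of the previous byte.  Don't start
       the next byte until it lets go. */
    static void wait_for_clock_release(void)
    {
        while (read_clock() == 0)
            ;   /* a real PIC would service its other tasks in this loop */
    }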

        The IBM manual talks about asserting the clock line low to
"hold off" the keyboard.  Note that, unlike I2C and friends, the
ability of the computer to also drive the clock line is not used for
bit by bit flow control, just byte by byte flow control.  If the
computer drives the clock low during a clock high time within a byte
being sent by the keyboard, that tells the keyboard to abort the byte
so that the computer can send instead.
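
        So I imagine the keyboard's send loop checks for that at every
clock-high time, along these lines (the delay values are pure guesses,
and the pin helpers are hypothetical):

    extern int  read_clock(void);
    extern void drive_clock_low(void), release_clock(void);
    extern void drive_data(int level), release_data(void);
    extern void delay_us(int us);

    /* Clock out one 11 bit frame; return 0 if the computer aborted us
       by driving the clock low during a clock-high time. */
    static int send_frame(const int bits[11])
    {
        int i;
        for (i = 0; i < 11; i++) {
            if (read_clock() == 0) {  /* we released it, so the host drives it */
                release_data();
                return 0;             /* abort; the byte must be sent again */
            }
            drive_data(bits[i]);
            delay_us(20);             /* data setup before the falling edge */
            drive_clock_low();
            delay_us(40);             /* data held valid while clock is low */
            release_clock();
            delay_us(20);
        }
        release_data();
        return 1;
    }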

        The "Hardware Bible" implies that the value of the start bit
differs between mode 1 and the other 2 modes.  The (pages that I have
of the) IBM manual would seem to indicate otherwise.  Is this true?
(I've got to convince my company to buy a logic analyzer.  This isn't
a "work" project, but we need one anyway, don't we?)  Note that it is
reasonable to have a start bit of either value since, because of the
presence of the clock line, you don't need a data line transition to
define the sampling times of the bits (unlike asynchronous serial).
It does make it harder to identify which bit is which, though, should
the two ends get out of phase (because one is reset, or comes out of
reset, during the other end's transmission, or a bit is dropped in some
other way).  I presume, however, that this only applies to the
keyboard to computer direction, that the computer to keyboard
direction always uses a zero start bit.

        When the computer wants to send to the keyboard, and assuming
that the keyboard isn't sending, it asserts the data line low, which,
I think, also becomes the start bit eventually.  I think that the
keyboard still does the clocking, so it kind of acknowledges this
"request to send" by asserting clock low, and to the extent that the
keyboard actually needs to clock in the start bit, this clock low
period does it.  Since the keyboard does the clocking, the computer
(actually the 8042) must be ready to supply the bits at the keyboard's
rate.  I presume that it shouldn't change the data to the next value
until the clock rises, and that it had better already be the next
value by the time the clock falls again.  Probably the 2ms limit
applies here too, or the 8042 will declare a failure, but the same
clock rate should work.  Probably having the computer drive the clock
should be an abort here too, but it probably never happens.
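
        Put together, my mental model of the keyboard's receive side is
the following sketch (the sampling point follows my presumption above;
the start bit is taken to have been consumed by the acknowledging
clock-low already described):

    extern int  read_clock(void), read_data(void);
    extern void drive_clock_low(void), release_clock(void);
    extern void delay_us(int us);

    /* Called once the keyboard has seen data held low (the computer's
       request to send) and acknowledged it.  Clocks in 8 data bits,
       parity, and stop; returns nonzero if the frame checks out. */
    static int receive_host_byte(unsigned char *out)
    {
        unsigned char byte = 0;
        int i, b, ones = 0, parity, stop;
        for (i = 0; i < 8; i++) {
            drive_clock_low();
            delay_us(30);
            b = read_data();        /* data presumed valid while clock is low */
            byte |= (unsigned char)(b << i);   /* least significant bit first */
            ones += b;
            release_clock();
            delay_us(30);
        }
        drive_clock_low(); delay_us(30);
        parity = read_data();
        release_clock();   delay_us(30);
        drive_clock_low(); delay_us(30);
        stop = read_data();
        release_clock();
        *out = byte;
        /* odd parity: data bits plus parity bit carry an odd count of 1s */
        return ((ones + parity) & 1) == 1 && stop == 1;
    }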

        It's easy enough to see what happens if the computer decides
to "hold off" the keyboard just as it's starting to send.  If the
keyboard doesn't see the hold off in time and starts to clock out the
start bit, then it will see the computer's clock drive between the
start bit and the first data bit, abort the transmission, and save the
byte to send later.  The 8042 has to be clever enough to delay any
clock driving until the end of the current byte if the start bit of
a transmission from the keyboard has already been clocked in at the
time it is told to hold off the keyboard (and buffer the byte itself
until any external hold off condition ends).
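
        From the keyboard's point of view the recovery itself is simple;
reusing send_frame() and friends from the sketches above (prototypes
repeated so the fragment stands alone):

    void frame_bits(unsigned char byte, int bits[11]);  /* sketch above */
    int  send_frame(const int bits[11]);                /* sketch above */
    void wait_for_clock_release(void);                  /* sketch above */

    /* Keep trying until the frame goes out without the computer
       aborting it mid-byte. */
    static void send_byte(unsigned char b)
    {
        int bits[11];
        frame_bits(b, bits);
        do {
            wait_for_clock_release();   /* sit out any hold off first */
        } while (!send_frame(bits));
    }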

        More difficult, if the start bit for both directions is a
zero, is detecting that both ends tried to transmit at once.  Both see
the data line low and the keyboard drives the clock low, so each thinks
that the other is getting the start bit.  They could each watch to see
that the data line matches the value they are driving at each clock;
then the one sending a high would realize the conflict (like
I2C arbitration), stop driving data, and receive the rest of the bits
(the bits gone by are the same as it was sending, so it knows them),
but this is not described anywhere that I can find.  Also, if both
happened to be sending the same byte value, there would never be a
conflict, and each would think that it had successfully transmitted,
but neither would pay attention.  (There is at least one value that
can be sent in either direction.  They could be distinguished by using
opposite parity, or start bit value, or stop bit value, for the two
directions, but again, I haven't seen that described, except for the
Hardware Bible saying that the start bit is opposite in some modes,
but not all of them.)
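
        The I2C-style arbitration I have in mind would look like the
following; I stress that nothing I've read says the keyboard interface
actually does this:

    extern int  read_data(void);
    extern void drive_data(int level), release_data(void);

    /* Drive one bit, then read the wire back.  Because the lines are
       open collector, a 0 from the other end beats our released 1. */
    static int drive_bit_and_check(int bit)
    {
        drive_data(bit);
        if (bit == 1 && read_data() == 0) {
            release_data();   /* someone else is driving low: we lose */
            return 0;         /* caller switches to receiving the rest */
        }
        return 1;             /* still winning, or the bits agree so far */
    }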

        Another approach would be to require the computer end to
establish hold off (drive clock low) long enough to guarantee
recognition of hold off, possibly first and unknowingly aborting
keyboard transmission of the byte whose start bit the keyboard just
started sending, and then release the clock line when already holding
the data low.  (Or asserting data low within some time limit during
which the keyboard, after the end of hold off, promises not to start
driving data.)  The keyboard would then see, hold off having gone
away, that data is driven, and recognize the computer's request to
send.  (Or, having stopped driving data upon recognizing hold off,
recognizing that data is being driven low somewhat later during
hold off.)
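
        As a host-side sequence that approach would be roughly this
(the 100us figure is only my guess at "long enough"):

    extern void drive_clock_low(void), release_clock(void);
    extern void drive_data_low(void);
    extern void delay_us(int us);

    /* Establish hold off long enough that the keyboard must have seen
       it, then assert data (the future start bit) before releasing the
       clock. */
    static void request_to_send(void)
    {
        drive_clock_low();
        delay_us(100);       /* guaranteed-recognition period (a guess) */
        drive_data_low();
        release_clock();     /* keyboard sees data low and starts clocking */
    }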

        It's also possible that they just don't care about a few lost
characters.  The likelihood of collision is very low.  If a user's
character is dropped he'll just think that he mistyped.  The keyboard
never expects an answer to anything that it sends.  If a transmission
from the computer is dropped, then it won't get acknowledged by the
keyboard, and after a 25ms timeout it will try again.
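
        That is, the computer side could get away with a dumb retry
loop, something like this (try_receive_byte() and elapsed_ms() are
hypothetical helpers):

    extern void send_command(unsigned char cmd);
    extern int  try_receive_byte(unsigned char *out);  /* nonblocking poll */
    extern unsigned long elapsed_ms(void);

    /* Resend the command if no response at all arrives within 25 ms. */
    static void send_with_retry(unsigned char cmd)
    {
        unsigned char resp;
        unsigned long t0;
        for (;;) {
            send_command(cmd);
            t0 = elapsed_ms();
            while (elapsed_ms() - t0 < 25)
                if (try_receive_byte(&resp))
                    return;        /* the keyboard answered; done */
            /* timed out; loop around and send again */
        }
    }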

        Lots of possibilities.  The question is which.

        Finally, there is the question of what handshake at a higher
level is required after various reset conditions.  It's pretty clear
that the keyboard has to send an 0xAA (self test completed
successfully) eventually in response to a reset command byte sent by
the computer.  It is also supposed to send one after power-up; I wonder
if the 8042 can tell.  There is no more reset wire, so if you push the
reset button on a PC that has one, the keyboard will not be directly
reset by the button.  The keyboard power doesn't get cycled either, so
it won't know that there has been a reset.  The 8042 could send it a
reset command, but when you press the button the 8042 gets reset too,
so won't it be expecting the 0xAA without having to send anything?
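
        For reference, the command-driven half of that handshake as I
understand it (0xFF is the usual reset command byte and 0xFA the
keyboard's acknowledge; receive_byte() is a hypothetical blocking read):

    #define KBD_CMD_RESET 0xFF   /* reset command byte */
    #define KBD_ACK       0xFA   /* keyboard acknowledges a command */
    #define KBD_BAT_OK    0xAA   /* self test completed successfully */

    extern void send_command(unsigned char cmd);
    extern unsigned char receive_byte(void);

    static int reset_keyboard(void)
    {
        send_command(KBD_CMD_RESET);
        if (receive_byte() != KBD_ACK)
            return 0;                         /* no ack: something is wrong */
        return receive_byte() == KBD_BAT_OK;  /* then 0xAA after self test */
    }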

        Also, the "Hardware Bible" suggests that upon completing self
test, the keyboard begins periodically sending 0xAA with bad parity.
When the computer finally gets ready to see that, it sees the parity
problem, sends "resend" and the keyboard responds with 0xAA with good
parity.  It claims this is a stratagem for waiting until the computer
is awake enough to see what's sent, otherwise a single original 0xAA
could be lost while the computer is still stupid.  I wonder if this is
actually specified, and thus universally true, or just true of the
particular keyboard which that author analyzed.  It doesn't seem
necessary, since the 8042 is supposed to assert hold off pretty early,
and I think that there's a fairly long delay specified between the
keyboard getting power and beginning to run self test, let alone send
the result.
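
        If the story is true, the computer needs nothing beyond the
ordinary parity-error reaction (0xFE is the "resend" command byte):

    #define KBD_CMD_RESEND 0xFE

    extern void send_command(unsigned char cmd);

    /* Odd parity check: the 8 data bits plus the parity bit must
       contain an odd number of 1s. */
    static int parity_ok(unsigned char byte, int parity_bit)
    {
        int i, ones = parity_bit;
        for (i = 0; i < 8; i++)
            ones += (byte >> i) & 1;
        return (ones & 1) == 1;
    }

    /* ... if (!parity_ok(b, p)) send_command(KBD_CMD_RESEND); ... */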

        (And a bonus question: Do the commands to stop scanning the
keyswitch array have any possible use other than production
diagnostics?)

                                                        Bill
