Jim Choate writes:

 > Trying to avoid software compromises by using hardware is impossible since
 > you can't build the hardware without software.

The point is to put the sensitive area (key ring, crypto engine) into
a small, isolated system which can't be easily compromised by a remote
exploit. It can be an embedded system, or it can be an ASIC, as long
as it is open (a chip with a clear layout, a simple interface/protocol,
full gate layout documentation, deliberately using large structures for
ease of interpretation, packaged so samples can be easily opened for
inspection). As long as we don't have nanolitho printers on our
desktops, this is probably the best compromise there is.
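
To make the "simple interface/protocol" point concrete, here is a
rough host-side sketch, assuming a hypothetical serial-attached
signing token (the device name, framing, and opcode are made up for
illustration, not any real product's protocol). The host only ever
hands over a digest and gets back a signature; the key never crosses
the wire:

    import struct
    import serial  # pyserial; any byte pipe to the device would do

    CMD_SIGN = 0x01  # hypothetical opcode: "sign this digest"

    def sign_digest(digest: bytes, port: str = "/dev/ttyUSB0") -> bytes:
        """Send a message digest to the token, read back the signature.
        The private key stays inside the isolated device."""
        with serial.Serial(port, 115200, timeout=5) as dev:
            # trivial framing: 1-byte opcode, 2-byte length, payload
            dev.write(struct.pack(">BH", CMD_SIGN, len(digest)) + digest)
            sig_len = struct.unpack(">H", dev.read(2))[0]
            return dev.read(sig_len)

Even if the host box is rooted, all an attacker sees on this link is
digests going in and signatures coming out.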
 
 > You can't have your cake and eat it too.
 
I don't trust my Linux box. It is essentially insecure, and has
24/7/365 xDSL access. Anybody with script kiddie skills can walk in
within seconds, bug the place, and walk out, without me noticing a
thing. I could improve the security modestly by investing many hours
of time (which I don't have, so it's not really an option). I don't
have the money to hire a really good security consultant for a home
desktop box. The system is an agglomerate of mountains of code written
by thousands of people all over the world (obviously, nothing prevents
TLA agencies from contributing code to OpenSource systems, riddled
with nonobvious backdoors), doing things which have nothing to do with
cryptography. If I could isolate the sensitive/compromisable area
within a small piece of tamperproof, peer-reviewed hardware, I would
feel much better.
 
 > As to inserting a trapdoor in an FPGA, I don't see any reason at all that
 > a trapdoor can't be inserted with the appropriate understanding of the
 > state space and chosing a rare state to trigger your bypass.

If your VHDL->FPGA compiler comes with source, and your crypto engine
comes with VHDL source which you have to compile yourself, all peer
reviewed, you have to be really, really clever to sneak something in.
The good point about an FPGA is structural orthogonality: anything
fishy will be easily discernible. It is also of limited size, which is
an advantage when it comes to verifiability. In a pinch, you could
sniff the configuration bus, which reveals which values are written
where while the circuitry is being downloaded/flashed.
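
As a rough illustration of that last point, a minimal sketch, assuming
you have a logic-analyzer dump of the configuration bus saved as raw
bytes and the bitstream you built yourself from the peer-reviewed VHDL
(both filenames are hypothetical, and real bitstreams may carry
headers or padding you'd need to strip first):

    def compare_bitstreams(sniffed_path: str, built_path: str) -> None:
        """Compare a bus-sniffed configuration dump against the locally
        built bitstream; any mismatch is grounds for suspicion."""
        with open(sniffed_path, "rb") as f:
            sniffed = f.read()
        with open(built_path, "rb") as f:
            built = f.read()
        if len(sniffed) != len(built):
            print(f"length mismatch: {len(sniffed)} vs {len(built)} bytes")
        for offset, (a, b) in enumerate(zip(sniffed, built)):
            if a != b:
                print(f"first divergence at byte {offset:#x}: "
                      f"{a:#04x} != {b:#04x}")
                return
        print("dumps agree over the common length")

    compare_bitstreams("sniffed_config.bin", "built_from_vhdl.bit")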

I realize there are no quick and easy solutions. One would want to
integrate the display into the device (which would increase its
complexity, so it should have minimal functionality; ASCII and raw
bitmaps would do plenty) to prevent snarfing of cleartext from the
frame buffer, use biometrics or a transponder to activate it (so no
one can sniff your passphrase with a keyboard sniffer), make it
tamperproof/self-destructing, etc. But the problem space is not of a
boolean nature.
