Regarding bus-encryption processors such as

  http://www.cl.cam.ac.uk/~mgk25/trustno1.pdf

[EMAIL PROTECTED] was concerned:
> One potential problem with such a system is that it allows
> software vendors to include malicious code in their products with little
> or no chance of being caught.

Markus Kuhn said in response:
> I don't think this is a severe additional threat. Decompiling software
> is a rather difficult and not widely practiced art.

and John Gilmore replied:
> I would disagree, but I funded a decompilation of the Adobe PostScript
> interpreter from the original LaserWriter ROMs, eventually producing a
> specification for the encoded Type 1 fonts.  This effort only took
> a month or two of a skilled programmer's time.
> The eventual result was that Adobe released the specs for these fonts
> (a year or two later).

I think we are talking about *very* different things here. You talk
about decompilation in order to steal ("liberate" ;-) know-how and
technology, while we were more concerned about full security evaluations
without the cooperation of the vendor.

I am perfectly aware of how feasible it is to extract specific
algorithms and data from even huge and complicated binaries or chip
masks within a few weeks/months. (I have done it myself successfully
several times with various embedded security systems, though EU
legislation might make it inadvisable to name details in a public
forum). You know roughly what you are looking for and what the overall
architecture is, and if you are an experienced low-level programmer and
skilled in using good debugging tools, you can quickly narrow down the
code to the small unit that really deserves your attention.

If, however, the goal of the investigation is to exclude with high
probability that the vendor or someone in the development and
distribution pipeline has built a backdoor into the code that allows
unauthorized violation of documented security mechanisms, you do *not*
know what you are looking for. There are zillions of ways and places to
hide backdoors and I have seen proposals for extremely clever ones. You
can't usually narrow down the search to a limited area, because the trap
door could be hidden almost anywhere. Security evaluations are difficult
and complicated enough, even if you have access to the source code and
the documentation. Especially with low-assurance languages such as
Assembler/BCPL/C/C++, data structures can be modified in unexpected ways
from almost everywhere unless process space protection has been used
carefully to separate trusted and not-so-security-relevant code in a
clear way. You know as well as I do that this is rarely done in
practice. It is therefore not sufficient to reverse engineer only, say,
the key generator, key management and cipher implementation, because I
could easily have hidden, buried deep in the GUI widget library or in a
multimedia decompressor, code that resets the key generator with a
certain probability to a value known to the attacker (just one trivial
example of thousands of easy-to-hide backdoors; a sketch follows below).
Similarly, innocuous-looking algorithms could have been spiked to act as
easy-to-exploit covert channels; you really do not see immediately that,
say, a harmless-looking checksumming routine in a smartcard OS has been
specifically designed to allow external access to key material via CPU
current analysis, or that your printer port driver broadcasts critical
information, either by using the cable as a short-wave antenna or via
the kerning in the documents that you print.
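
To make the first example concrete, here is a minimal sketch in C of
what such a spiked routine could look like (all names and constants are
hypothetical, purely for illustration):

  #include <stddef.h>
  #include <string.h>

  /* Assumed key-generator state living elsewhere in the same address
   * space -- which, without process separation, any code can reach. */
  extern unsigned char entropy_pool[32];

  /* Looks like an ordinary block routine in a video decompressor. */
  static void decode_block(unsigned char *blk, size_t n)
  {
      unsigned long h = 5381;
      size_t i;

      for (i = 0; i < n; i++)        /* innocent-looking hashing,   */
          h = h * 33 + blk[i];       /* ostensibly error detection  */

      /* Backdoor: roughly once in 2^16 blocks, overwrite the key
       * generator's entropy pool with a constant known to the
       * attacker; all keys derived afterwards become predictable. */
      if ((h & 0xffff) == 0x5b17)
          memset(entropy_pool, 0x42, sizeof entropy_pool);

      /* ... the actual decompression would continue here ... */
  }

No evaluation that stops at the key generator itself would ever look at
this function.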
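
And the checksum example, equally sketchy: the routine below computes a
perfectly plausible checksum, but its key-dependent branch deliberately
executes a longer instruction path for 1-bits than for 0-bits, so a
single current trace of the chip reveals the key bit by bit:

  #include <stddef.h>

  unsigned char checksum(const unsigned char *key, size_t len)
  {
      unsigned char sum = 0;
      size_t i;
      int b;

      for (i = 0; i < len; i++)
          for (b = 7; b >= 0; b--)
              if ((key[i] >> b) & 1)         /* long path:  1-bit */
                  sum ^= (unsigned char)(0x1d ^ (sum << 1));
              else                           /* short path: 0-bit */
                  sum ^= 1;
      return sum;
  }

Functionally this is just another checksum; only the power trace gives
the game away.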

Serious security evaluations to eliminate accidental and malicious
backdoors in typical non-trivial products can take years, and in certain
cases they are only possible at all within a reasonable time if the
product was designed right from the beginning to support evaluation.
This can be done by using a high-assurance programming language (Ada
seems very popular in this field) and by carefully modularizing the
product into clearly separated code domains of varying security
relevance.
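
As a toy sketch of what I mean by separated code domains (everything
about it is made up): even in C, on a Unix system, the key material can
be confined to a small process that is cheap to evaluate, so that
whatever is hidden in the megabytes of untrusted code has no
address-space access to it:

  #include <stdio.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* The small trusted domain: holds the key and answers requests.
   * Only these few lines need the expensive security evaluation. */
  static void key_holder(int fd)
  {
      unsigned char key[16] = { 0x2b };  /* stands in for real key storage */
      unsigned char buf[64];
      ssize_t n, i;

      while ((n = read(fd, buf, sizeof buf)) > 0) {
          for (i = 0; i < n; i++)      /* toy keyed transform, a   */
              buf[i] ^= key[i % 16];   /* placeholder for real use */
          write(fd, buf, n);
      }
      _exit(0);
  }

  int main(void)
  {
      int sv[2];
      char msg[] = "hello";

      if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
          return 1;
      if (fork() == 0) {               /* child: trusted key holder  */
          close(sv[0]);
          key_holder(sv[1]);
      }
      close(sv[1]);                    /* parent: the untrusted bulk */
      write(sv[0], msg, sizeof msg);
      read(sv[0], msg, sizeof msg);    /* gets the result, never the key */
      printf("%02x %02x ...\n", (unsigned char)msg[0], (unsigned char)msg[1]);
      return 0;
  }

A backdoor in the GUI library of such a program can still misuse the
service, but it can no longer read or reset the key itself, and the
evaluator's attention can concentrate on the small trusted part.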

Moreover, security evaluations without access to the revision control
logs of the manufacturer become useless as soon as a significantly
modified revision appears. Even big players like Microsoft have given
up on security evaluations, even though they only attempted ITSEC
FC2/E3 for an old and restricted NT version, which is a rather mild
evaluation level anyway. Therefore we all still use routers,
applications, and operating systems that have been spiked with
backdoors by well-trained undercover software engineers paid for by,
say, the French and Bulgarian signals intelligence agencies ...

So I certainly stand by my assessment that bus-encryption processors
will not significantly reduce our protection against malicious
backdoors. They can provide some tampering protection, since they can
very effectively prohibit unauthorized patching of binaries. If you want
to trust your software, then you have to put quite a lot of trust in the
producer and many of its employees. We all know how easily the Russians
could buy carefully vetted and monitored NSA and US Navy employees, so
it is naive to assume that Silicon Valley and Redmond are less of a
target than Ft. Meade, and this does not even account for unsponsored
backdoors that people might leave on their private initiative.

I fully understand John's concern that bus-encryption processors might
prolong the proprietary status of certain industry specifications, but
I believe that the real way out of this is a change in awareness, not
aggressive reverse engineering. Customers are starting to understand
that investments in products with proprietary specifications lose value
much faster than investments in open standards. And successful
manufacturers quickly pick up whatever the majority of their customers
is concerned with. I see a very promising trend of more and more
manufacturers jumping on the Open Spec train and getting enthusiastic
about Open Source projects.

Markus

-- 
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org,  WWW: <http://www.cl.cam.ac.uk/~mgk25/>
