GCHQ Challenge
For those who haven't yet managed to solve the new GCHQ crypto challenge on http://www.gchq.gov.uk/challenge.html I have quickly written up the solution we found yesterday on http://www.cl.cam.ac.uk/~mgk25/gchq-challenge.html

As GCHQ is surely able to log and trace accurately who accesses the above URL on our server, it would be best not to look at it if you want to apply and impress their recruiters by finding your own solution.

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: DSA security issues?
Rodney Thayer wrote on 1999-11-27 19:27 UTC:

> Gilmore etc. have made comments, including the quoted passage below
> from the Linux IPsec list, indicating that DSA is "not as trustworthy
> as RSA". Can anyone here offer some more details?

One of the papers that reverse engineered many of the design ideas behind the NIST DSS is

R. Anderson, S. Vaudenay: Minding your p's and q's, Asiacrypt 96, http://www.cl.cam.ac.uk/ftp/users/rja14/psandqs.ps.gz

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: DPA mapped to spectral analysis
"Marcus Leech" wrote on 1999-11-19 19:45 UTC:

> Has anyone considered experimenting with DPA (Differential Power
> Analysis), but using spectral data, instead of power consumption?
> Different operations will produce different EM spectra, and so the
> attack should work, given suitable selection of frequency range. This
> could potentially allow the bad guy to attack a card without having
> access to the card, using a suitably directional antenna, etc.

We are working on experiments along such lines. Even for a microcontroller with a 3.5 MHz clock, the information-carrying components of the power spectrum extend well into the VHF range, where meter-long cables become good antennas. (Note that normal spectrum analysers are useless for such studies, because they provide you only with the spectrum of the entire power line and do not show you the much weaker information-carrying components in it that are of interest here.)

We are pretty certain that the currents and path lengths on the chip itself are orders of magnitude too small to be picked up by any practical form of antenna (unless perhaps you are in a very well-shielded environment and use some esoteric helium-cooled lowest-noise antennas), even if long-time averaging is performed. However, this is not the case for currents on all the lines that leave the chip surface.

Our experimental target at the moment is the PIC16F84 microcontroller. It is in many respects fully comparable to a smartcard controller (it is in fact used in some smartcards), but assembler-level development kits for it are much more easily and openly available than for other smartcard processors, and we do not want to have to ask our students to sign manufacturer NDAs before they can join the project. The PIC also has more I/O ports than a normal smartcard CPU, which simplifies triggering the oscilloscope during measurements, and it has a reasonably simple architecture.
We have been working with an 8-bit 200 MHz storage scope so far, which is more than sufficient for performing a number of attacks, but in order to fully characterize the spectral properties of the leaking information, we will now use a new 8-bit 2 GHz scope as well.

Our interest in the EM aspects is not specific to smartcards. For smartcards, you can usually get easy galvanic access to the connectors, and for most attacks, direct microprobing of the chip surface is the easiest approach anyway. However, EM attacks on microcontrollers are a first step towards better understanding the CPU EM emissions of other, more complex embedded security applications, eventually even workstation-class systems. That's where compromising emanations will really become interesting.

Some related earlier publications are on http://www.cl.cam.ac.uk/Research/Security/tamper/ especially

http://www.cl.cam.ac.uk/~mgk25/ih98-tempest.pdf
http://www.cl.cam.ac.uk/~mgk25/sc99-tamper.pdf

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
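[To make the claim about the spectral spread concrete, here is a small simulation sketch. It is purely illustrative: the spike shape, the 10 ns decay constant and the 200 MHz sampling rate are assumptions for the sake of the example, not measurements from the PIC16F84. It models each clock edge as a short current spike whose amplitude depends on the data being processed, and shows that a substantial fraction of the spectral energy lies far above the 3.5 MHz clock frequency.]

```python
# Illustrative sketch: power spectrum of a simulated supply-current
# trace. All parameters (3.5 MHz clock, 200 MHz sampling, 10 ns spike
# decay) are assumptions chosen for the example, not measured values.
import numpy as np

def simulated_trace(weights, f_clk=3.5e6, f_s=200e6):
    """Model each clock cycle as a short current spike whose amplitude
    is the Hamming weight of the byte processed in that cycle."""
    samples_per_clk = int(f_s / f_clk)
    t = np.arange(samples_per_clk) / f_s
    spike = np.exp(-t / 10e-9)          # ~10 ns decay: broadband energy
    return np.concatenate([w * spike for w in weights])

def spectrum(trace, f_s=200e6):
    """One-sided amplitude spectrum and the matching frequency axis."""
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / f_s)
    return freqs, spec

rng = np.random.default_rng(0)
weights = rng.integers(0, 9, size=64)   # Hamming weights 0..8
freqs, spec = spectrum(simulated_trace(weights))

# Because the spikes are short, a large share of the spectral energy
# sits above 30 MHz, i.e. in the VHF range:
vhf = spec[freqs > 30e6].sum() / spec.sum()
print(f"fraction of spectral amplitude above 30 MHz: {vhf:.2f}")
```

The point of the sketch is only that short current transients, repeated at the clock rate, spread their data-dependent energy over a bandwidth set by the transient duration, not by the clock frequency.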
Re: New Scientist Article on Do-it-yourself Evesdropping
Martin Minow wrote on 1999-11-08 17:43 UTC:

> http://www.newscientist.com/ns/19991106/newsstory6.html
>
> "SOFTWARE that allows a computer to receive radio signals could make
> spying on other computers all too simple, according to two scientists
> at the University of Cambridge. Such are the dangers that they are
> patenting countermeasures that computer manufacturers can take to
> foil any electronic eavesdroppers."

This New Scientist article refers to some work that we did here over a year ago and which has already been published as

Markus G. Kuhn, Ross J. Anderson: Soft Tempest: Hidden Data Transmission Using Electromagnetic Emanations, in David Aucsmith (Ed.): Information Hiding, Lecture Notes in Computer Science 1525, Springer-Verlag, ISBN 3-540-65386-4, pp. 124-142. http://www.cl.cam.ac.uk/~mgk25/ih98-tempest.pdf

The New Scientist just stumbled last week across a related patent application that was finally published after the usual 18 months. Read the above paper if you are interested in the full story.

If you are interested in the sort of equipment on which I was quoted and what I consider to be an appropriate platform for production-grade compromising-emanations attacks (automatic character recognition from VDU signals, utilization of data-dependent emissions of firewall systems for cryptanalysis, etc.), then have a look for instance at http://www.tm.agilent.com/tmo/datasheets/English/HPE3238S.html and its components: an 8 MHz wideband tuner covering 2-2600 MHz, a 20 MHz, 21-bit A/D converter, followed by an array of powerful DSPs that can perform various processing steps and turn the digitized IF signal directly into your output. That plus suitable software and a set of good antennas and coupling probes is roughly what I would expect to find in the better versions of the unmarked spook van in the neighborhood.
Turning equipment like this into a GSM phone, GPS receiver, TV set, or specialized compromising-emanations receiver is just a matter of what software you load into it. At the moment, lab setups of such flexible "software radios" still cost in the £2 range. However, the technology is moving quickly and has the potential to enter the mass market in the next few years, probably at first via its use in multi-mode reprogrammable cellular base stations.

With prices for suitable components for software radios (especially the ADC and DSP section) dropping with Moore's law, we can look forward to amateur-priced software radios that will allow us to build sophisticated Tempest DSP experiments which are today only within the reach of military research labs. Though it will not become "child's play" - as the New Scientist reporter wrote in the above article - sophisticated EM snooping technology might very well come within the reach of the advanced information-security hobbyist or the determined criminal in the next 5-10 years. The field will certainly remain interesting, and if you study information security, it might not be unwise to add a high-frequency electronics and DSP course to your curriculum today.

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: Power analysis of AES candidates
Andreas Bogk wrote on 1999-09-15 00:04 UTC:

> The usual setup for DPA involves a 10 Ohm resistor which sits in the
> power supply and measuring the voltage across that resistor. The
> countermeasure we're talking about is an on-chip capacitor that
> smoothes the power consumption, or a power supply inside a
> tamper-resistant package such as the Dallas iButton, which
> essentially serves the same purpose.

The battery in the Dallas iButton is *NOT* there to power CPU operations. It only provides the roughly 1 nA data-retention current needed by the SRAM to keep its data reliably when external power is removed. As soon as external power is supplied, the internal Li battery is disconnected by the CPU power-supply management system.

The iButton does however have a power-supply buffer capacitor on board. Its primary function is to maintain power in communications mode. The iButton can operate in two modes: communication and calculation. In communication mode, only a large shift register is operated, which is connected to the serial port. Power is drawn from the interface pull-up resistor during the transmitted 1 bits. While a 0 bit is transmitted, the shift register draws its energy from the internal capacitor. In calculation mode, the interface shorts the pull-up resistor, such that the iButton CPU is now directly connected to the full power supply, but it cannot communicate any more.

By the way, one rather simple yet effective power-analysis countermeasure is described in

http://www.cl.cam.ac.uk/~mgk25/sc99-tamper.pdf
http://www.cl.cam.ac.uk/~mgk25/sc99-tamper-slides.pdf

Adding a random bit-stream generator into the internal clock line that switches between genuine CPU cycles and realistic dummy loads at a clock-cycle level can help to add sufficient amounts of timing variation to make DPA infeasible.
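[For readers unfamiliar with the attack discussed in this thread, here is a minimal difference-of-means DPA sketch against a purely simulated target. The 4-bit S-box, the single-sample Hamming-weight leakage model and all parameters are invented for illustration; no real smartcard behaves this simply.]

```python
# Minimal difference-of-means DPA against a simulated target: each
# trace leaks the Hamming weight of S[p ^ k] at one sample, plus
# Gaussian noise. Toy model only, not any specific device.
import numpy as np

SBOX = np.array([0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
                 0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB])  # toy 4-bit S-box

def hamming_weight(x):
    return bin(int(x)).count("1")

def capture(plaintexts, key, n_samples=50, leak_at=20, noise=0.5, seed=1):
    """Simulate power traces: sample `leak_at` carries the HW leakage."""
    rng = np.random.default_rng(seed)
    traces = rng.normal(0, noise, size=(len(plaintexts), n_samples))
    for i, p in enumerate(plaintexts):
        traces[i, leak_at] += hamming_weight(SBOX[p ^ key])
    return traces

def dpa(plaintexts, traces):
    """Return the key guess whose selection bit best splits the traces."""
    best_key, best_peak = None, -1.0
    for k in range(16):
        sel = np.array([SBOX[p ^ k] & 1 for p in plaintexts], dtype=bool)
        diff = traces[sel].mean(axis=0) - traces[~sel].mean(axis=0)
        peak = np.abs(diff).max()
        if peak > best_peak:
            best_key, best_peak = k, peak
    return best_key

rng = np.random.default_rng(0)
plaintexts = rng.integers(0, 16, size=2000)
traces = capture(plaintexts, key=0xB)
print("recovered key nibble:", hex(dpa(plaintexts, traces)))
```

For the correct key guess, the selection bit really does split the traces into groups with different mean power at the leaking sample, so the difference of means shows a peak there; wrong guesses only produce smaller ghost peaks.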
While software-based random-delay loops can usually be spotted rather easily with single-shot cross-correlation techniques, and can therefore be compensated for by the power analyser before applying the usual algorithms, the time intervals between two clock cycles usually do not provide enough information to reliably resynchronize externally with the program flow.

Another approach is to use asynchronous processors, which do not depend on an external clock at all, and whose power-consumption spectrum tends to smooth itself out very nicely. Designing attacks and defenses against asynchronous smartcard processors promises to become a highly interesting area of work. (By the way, if you are seriously interested in working in this field: we have just received a substantial grant to develop invasive and non-invasive attacks on upcoming asynchronous high-security smartcard CPU technologies, and we will very soon be offering 2-3 research PhD student and post-doc positions for people with a strong interest in microelectronics, tamper resistance, digital signal processing and hardware security. Contact me for details if you are interested. http://www.cl.cam.ac.uk/Research/Security/tamper/)

At typical smartcard frequencies, the information leaking in the power signal is spread across the entire HF and VHF bands. It does not seem to be too practical to place sufficiently good passive RC or LC filters onto a chip given the current CMOS processes commonly used for 8-bit microcontrollers. Another approach is to add a broadband op-amp that implements a current regulator: make the CPU draw a constant current and temporarily dissipate any power not needed by the CPU in an on-chip resistor. This works nicely for low frequencies, but is also rather difficult to do with normal CMOS processes in the VHF bands.
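[A toy numerical illustration of why randomly inserted dummy cycles hamper DPA-style averaging. The trace model is entirely made up: one unit-height data-dependent spike per trace, displaced by a random number of dummy cycles. Once the spike no longer occurs at a fixed sample position, averaging smears it down by roughly the number of possible offsets.]

```python
# Toy model: averaging misaligned traces. Each trace contains one
# data-dependent current spike; random dummy clock cycles shift its
# position, so the averaged peak collapses. Illustration only.
import numpy as np

def traces(n, jitter, length=100, pos=40, seed=0):
    rng = np.random.default_rng(seed)
    out = np.zeros((n, length))
    for i in range(n):
        shift = rng.integers(0, jitter + 1)   # dummy cycles executed so far
        out[i, pos + shift] += 1.0            # data-dependent current spike
    return out

aligned = traces(1000, jitter=0).mean(axis=0)
jittered = traces(1000, jitter=20).mean(axis=0)
print(f"averaged peak, no jitter:        {aligned.max():.2f}")
print(f"averaged peak, 20 dummy cycles:  {jittered.max():.2f}")
```

With no jitter the averaged peak stays at full height; with up to 20 dummy cycles it drops to roughly 1/21 of it, pushing the differential signal towards the noise floor unless the attacker can resynchronize each trace first.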
It would be possible to add such an op-amp as a separate second chip, but many customers are not likely to pay two dollars more for the entire smartcard just for power-analysis protection. The challenge is to get a really cheap countermeasure.

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: Sue MSNSA for Key?
John Young wrote on 1999-09-07 09:05 UTC:

> Assuming the key is a backdoor to intercepted encrypted information,
> Microsoft would be walking on very thin ice indeed, but may have
> severe legal problems in any event. The federal wiretapping statute
> is very clear in its prohibitions against advertising or distributing
> in commerce "devices" for intercepting electronic communications.

This calls for a small Critical Thinking[TM] exercise: If company X produces and distributes a telecommunications product Y that does not provide a sufficient degree of message secrecy against signals intelligence agency Z, and if in addition X has never claimed or implied that Y provides message confidentiality against Z, do you really believe that you could sue X for doing so? Wouldn't this in effect also allow you to sue every US telephone manufacturer for shipping products with a built-in NSA backdoor by implementing a 0-bit cipher? Or the developer of my email software?

And all of this is aside from the rather obvious observation that the CSP verification keys are in no way relevant or effective for user data protection; they only protect the NSA from US exports of an OS that can work with strong CSPs.

Again: The NSA does not need their own key to sign *weakened* cryptographic Trojan modules, because Microsoft has, with full public knowledge, already been shipping *signed* *weak* 40-bit cryptographic modules for years, because these are the only ones they were allowed to sell in Europe.

For more information about what is going wrong in this discussion, please check out http://www.criticalthinking.org/K12/k12library/library.nclk

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: NSA key in MSFT Crypto API
The actual funny story behind the presence of the NSA key has been seriously misunderstood here. CSP verification keys have only one *real* purpose: they are intended to enforce the US export-restriction requirement that Microsoft is not allowed to ship software abroad that can easily be extended with strong cryptography. They are certainly not intended as any useful form of integrity protection for your system.

The NSA got their own CSP verification key because they want to be able to change their own secret US government CSPs, required for the handling of classified documents, without having to go to Microsoft each time to get a signature for an NSA CSP update. Fair enough. So Microsoft built in a second verification key such that the NSA can produce and install its own CSPs on DoD PCs without requiring any Microsoft involvement.

The really funny part is that Microsoft did not protect the NSA key particularly well, such that everyone can easily replace the NSA key with his own key. This was reported by Nicko van Someren at the Crypto'98 rump session. This means that everyone can now easily install his own CSPs with arbitrarily strong cryptography. In effect, the NSA's demand to get a second key added quickly led to the easy international availability of strong-encryption CSPs. My guess is that this is Microsoft's sweet revenge against the NSA for creating all these export hassles (e.g., the requirement that CSPs be signed) in the first place. It backfired nicely against the NSA. :)

All this has nothing to do with an NSA backdoor, because the CSP keys are an export-enforcement tool and not an integrity-protection tool. They do not protect all the parts of the system that could be compromised by someone who wants to install some eavesdropping malware. The CSP verification keys only authenticate that no cryptography that violates export laws has been installed.
If you are worried about the NSA installing malicious software on your PC, you should not rely on the CSP verification keys (which were never designed for that purpose anyway), but on virus scanners with tripwire functionality that report any modifications to your DLLs. No digital-signature functionality is required to implement these; simple secure hash algorithms will do perfectly.

Please apply a bit of simple critical thinking here: if the NSA wanted real backdoor functionality, they would much more likely simply steal Microsoft's own keys instead of embedding additional keys with an obvious symbol name. Remember: the NSA is the world's largest key thief. They have stolen crypto variables from well-protected military and government agencies all over the world using the usual repertoire of techniques (bribery, extortion, eavesdropping, hacking, infiltration, etc.). If they can do it with eastern military agencies, they can most certainly also do it easily with Microsoft, which is orders of magnitude less well protected than the usual NSA target. If there is a real NSA backdoor key in Windows, then it would certainly be identical to Microsoft's own key.

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
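[A hash-based tripwire of the kind described above can be sketched in a few lines. The file name `crypt32.dll` in the example is just a hypothetical stand-in for a file worth monitoring; a real tool would of course also have to protect the baseline itself, and the scanner binary, from tampering.]

```python
# Minimal tripwire sketch: record a baseline of SHA-256 hashes for the
# files you care about, then report any later modification. Uses only
# a hash function, no digital signatures.
import hashlib
import os
import tempfile

def file_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths):
    """Baseline: map each path to the hash of its current contents."""
    return {p: file_hash(p) for p in paths}

def verify(baseline):
    """Return the list of files whose contents changed or vanished."""
    changed = []
    for path, digest in baseline.items():
        if not os.path.exists(path) or file_hash(path) != digest:
            changed.append(path)
    return changed

# Example usage, with a temporary file standing in for a system DLL:
with tempfile.TemporaryDirectory() as d:
    dll = os.path.join(d, "crypt32.dll")        # hypothetical target file
    with open(dll, "wb") as f:
        f.write(b"original code")
    baseline = snapshot([dll])
    with open(dll, "wb") as f:
        f.write(b"patched code")                # simulated tampering
    print("modified files:", verify(baseline))
```

The security of such a scheme rests entirely on keeping the baseline on read-only or offline media, since an attacker who can rewrite the DLLs can usually also rewrite an unprotected hash list.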
Re: Questions regarding export restrictions in Europe
Bill Stewart wrote on 1999-07-20 18:06 UTC:

> The real question is whether there are any Danish laws against
> exporting crypto, and whether they apply only to physical exports or
> also to publishing information on your web site.

The relevant Danish law should be the same as in the rest of the European Union: the EU Dual-Use Directive (Council Regulation (EC) No 3381/94 of 19 December 1994). The EU has export controls in place on commercial and military cryptographic software and systems. Public-domain and shrink-wrapped mass-market software are explicitly excluded from these restrictions. So you can put any cryptographic software for free download onto your web site as long as you are not violating other regulations (copyright, patent, etc.). Note that in some European countries (e.g., Germany), unlike under US law, non-commercial products such as freeware are fortunately not affected by patents.

Literature:

Harald H. Roth: Exportkontrollen für Verschlüsselungsprodukte. Datenschutz und Datensicherheit (1+2/1998), pp. 8-13 and 81-85.
http://www.larissa.frankfurt-online.de/rkineu/encryption.html
http://jya.com/roth-crypto.htm

Hope this helped ...

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: entry level cryptography books
"MIKE SHAW" wrote on 1999-06-01 15:43 UTC:

> Can anyone recommend some entry level cryptography books? I'm looking
> for something that will just start to get into the nitty-gritty of
> the math involved.

Those who are more comfortable with reading German than mathematics and who are looking for a really entry-level book will enjoy

Alfred Beutelspacher: Kryptologie. Vieweg, 1996, ISBN 3-528-48990-1, 34.00 DEM, 179 p. "Eine Einführung in die Wissenschaft vom Verschlüsseln, Verbergen und Verheimlichen; ohne alle Geheimniskrämerei, aber nicht ohne hinterlistigen Schalk, dargestellt zum Nutzen und Ergötzen des allgemeinen Publikums." [Roughly: "An introduction to the science of encrypting, hiding and concealing; without any secret-mongering, but not without mischievous wit, presented for the benefit and delight of the general public."]

This is a very well-written book by a German professor of mathematics that specifically addresses the non-specialist reader who is scared of mathematics but has a desire to learn about cryptography and its applications. Lots of nice bed-time reading stories about Alice, Bob, and friends, covering a surprising range of cryptographic protocols with extremely little formal ballast in a rather entertaining way. Unfortunately, I don't know whether it has already been translated into English.

[Canonical answers: Schneier, "Applied Cryptography"; Menezes/van Oorschot/Vanstone, "Handbook of Applied Cryptography".]

Schneier is a book for the applied mind (programmer and application designer) without much interest in theoretical foundations; the Handbook of Applied Cryptography is for someone looking for a comprehensive treatment of the field. "Stinson: Cryptography - Theory and Practice, CRC Press" is also a good addition to that list, especially for a university course, since it provides a better fundamental treatment than especially Schneier, without trying to be as comprehensive as the Handbook.

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Smartcard Hardware Tampering Paper
Research Announcement

We recently published the following paper, which should be of great interest to anyone concerned about smartcard hardware security:

Oliver Kömmerling, Markus G. Kuhn: Design Principles for Tamper-Resistant Smartcard Processors. Proceedings of the USENIX Workshop on Smartcard Technology (Smartcard '99), Chicago, Illinois, USA, May 10-11, 1999, USENIX Association, pp. 9-20, ISBN 1-880446-34-0.

(This work received the "USENIX Association Best Student Paper Award".)

Various non-invasive cryptanalysis techniques against smartcards, which have been publicised as "Differential Fault Analysis", "Differential Power Analysis", etc., have received considerable attention recently. However, these are not the attack techniques that have been used by pirates to break practically all types of smartcard processors fielded in millions of conditional-access systems. We show in our paper how invasive microprobing techniques are a far more powerful and universally applicable threat to smartcard security, which processor architecture elements simplify attacks significantly, and what designers could quite easily do to make attacks more difficult.

Unlike fault and current-analysis techniques, microprobing attacks do not depend on any prior knowledge or guessing of the implemented cryptographic algorithms. Microprobing not only gives the attacker access to cryptographic keys, but also leads to full disassembler listings of the extracted security software. Availability of the full smartcard software then often allows the design of fast and simple non-invasive glitch and current-analysis attacks, which - unlike DPA-style attacks - do not require many hundreds of seconds of protocol interactions. Such very fast non-invasive attacks can then be performed inconspicuously in a Trojan card terminal together with a normal transaction, without giving the card holder a chance to notice them.
They form a serious additional threat beyond microprobing even for applications such as digital-signature and banking cards, which do not rely on global keys stored in the card. Microprobing attacks can be carried out by skilled technicians starting with an investment of little more than ten thousand euros, and they can then be repeated at rather low cost.

Our paper not only describes the range of attack techniques that have been used in the past to break numerous commercially fielded security systems. We also suggest a number of lowest-cost countermeasures that will help to make many of these attacks considerably more challenging to perform. Some of these we believe to be new, while others have already been implemented in products but are either not widely used, or the implementations we found had design flaws that allowed us to circumvent them more easily than would have been necessary.

Online version of the paper:
http://www.cl.cam.ac.uk/~mgk25/sc99-tamper.pdf

Presentation slides with more photos:
http://www.cl.cam.ac.uk/~mgk25/sc99-tamper-slides.pdf

[Important note to avoid misunderstandings: our paper does *not* provide any comparative evaluation of the security mechanisms of specific products and should not be quoted to that effect. We present a few interesting vulnerabilities in the security mechanisms of one commercial smartcard processor that we named. This processor is of particular interest primarily because it features comparatively advanced security features not found in most other products. The reader should understand that, in spite of the vulnerabilities we outline, unmentioned competing products are not necessarily more secure. Indeed, many of them do not have these advanced security mechanisms implemented and are easier to break. Much easier.]

Markus Kuhn

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/
Re: Cryptoprocessors and reverse engineering
Regarding bus-encryption processors such as http://www.cl.cam.ac.uk/~mgk25/trustno1.pdf [EMAIL PROTECTED] was concerned:

> One potential problem with such a system is that it allows software
> vendors to include malicious code in their products with little or no
> chance of being caught.

Markus Kuhn said in response:

> I don't think this is a severe additional threat. Decompiling
> software is a rather difficult and not widely practiced art.

and John Gilmore replied:

> I would disagree, but I funded a decompilation of the Adobe
> PostScript interpreter from the original LaserWriter ROMs, eventually
> producing a specification for the encoded Type 1 fonts. This effort
> only took a month or two of a skilled programmer's time. The eventual
> result was that Adobe released the specs for these fonts (a year or
> two later).

I think we are talking about *very* different things here. You talk about decompilation in order to steal ("liberate" ;-) know-how and technology, while we were more concerned about full security evaluations without the cooperation of the vendor.

I am perfectly aware of how feasible it is to extract specific algorithms and data from even huge and complicated binaries or chip masks within a few weeks or months. (I have done it myself successfully several times with various embedded security systems, though EU legislation might make it not too advisable to name details in a public forum.) You know roughly what you are looking for and what the overall architecture is, and if you are an experienced low-level programmer and skilled in using good debugging tools, you can quickly narrow down the code to the small unit that really deserves your attention.

If the goal of the investigation is, however, to exclude with high probability that the vendor or someone in the development and distribution pipeline has built a backdoor into the code that allows unauthorized violation of documented security mechanisms, you do *not* know what you are looking for.
There are zillions of ways and places to hide backdoors, and I have seen proposals for extremely clever ones. You can't usually narrow down the search to a limited area, because the trap door could be hidden almost anywhere. Security evaluations are difficult and complicated enough even if you have access to the source code and the documentation. Especially with low-assurance languages such as assembler, BCPL, C or C++, data structures can be modified in unexpected ways from almost everywhere unless process-space protection has been used carefully to separate trusted and not-so-security-relevant code in a clear way. You know as well as I do that this is rarely done in practice.

It is therefore not sufficient to reverse engineer only, say, the key generator, key management and cipher implementation, because I could easily have hidden, buried deep in the GUI widget library or in a multimedia decompressor, code that resets the key generator with a certain probability to a value known to the attacker (just one trivial example of thousands of easy-to-hide backdoors). Similarly, innocuous-looking algorithms could have been spiked to act as easy-to-exploit covert channels; you really do not see immediately that, say, a harmless-looking checksumming routine in a smartcard OS has been specifically designed to allow external access to key material via CPU current analysis, or that your printer-port driver broadcasts critical information using either the cable as a short-wave antenna or the kerning in the documents that you print.

Serious security evaluations to eliminate accidental and malicious backdoors in typical non-trivial products can take years, and in certain cases they are only possible at all within a reasonable time if the product was designed right from the beginning to support evaluation.
This can be done by using a high-assurance programming language (Ada seems very popular in this field) and by carefully modularizing into clearly separated code domains of varying security relevance. Moreover, security evaluations without access to the revision-control logs of the manufacturer become useless as soon as a significantly modified revision appears. Even big players like Microsoft have given up on security evaluations, even though they only attempted ITSEC FC2/E3 for an old and restricted NT version, which is a rather mild evaluation level anyway. Therefore we still all use routers, applications, and operating systems that have been spiked with back doors by well-trained undercover software engineers paid for by, say, the French and Bulgarian signals intelligence agencies ...

So I certainly stay with my assessment that bus-encryption processors will not significantly reduce our protection against malicious backdoors. They can provide some tampering protection, since they can very effectively prohibit unauthorized patching of binaries. If you want to trust your software, then you have to put quite a lot of trust in the producer and m
Pentium III serial number mechanism
Keith Lofstrom [EMAIL PROTECTED] has sent me a very good argument for why the serial number is probably not located on the die at all (forwarded below with permission). This also fits well with earlier rumours I have heard that the now officially announced Pentium III features are actually implemented in the chip set and not in the very critical CPU die itself. There are certainly manufacturers such as Dallas Semiconductor who can laser-program serial numbers for security processors directly onto the CPU die, but these are low-cost microcontrollers and not Pentium-grade devices, where everything is much more difficult. Makes sense to me.

Here is Keith's argument (forwarded with permission):

From: Keith Lofstrom [EMAIL PROTECTED]
Subject: Pentium III mechanism
Date: Thu, 28 Jan 1999 09:00:31 -0800 (PST)

Regards the Pentium III programming mechanism: I'm not on the cryptography mailing list, just reading mail-archive.com. Forward this if you deem it worthwhile. You folks run a very polite list given the volatility of the topic. Kudos.

---

The Pentium III may very well follow the techniques used on the Pentium II and the Xeon. The Xeon uses a separate, serially accessed EEPROM chip in the cartridge. There are some good reasons for this:

1) Designing "extra" technology into the Pentium III process will delay its introduction. A week's delay in the introduction of a new processor costs Intel about a billion dollars. It shortens the very narrow window of "cream skimming" until prices are eroded by competition. Moore's law can be restated as "1% performance per week." And at Intel, Moore's law is not just a good idea...

2) Yield and reliability and fab time of the extra steps.

3) Tester time for Pentium-speed devices is very expensive - perhaps 30 cents per second.
It is cheaper to put an extra flash chip in the cartridge, costing perhaps 10 cents, than to spend another half a second cooking fuses or NVRAM cells on the CPU tester, which take a long time (milliseconds!) to heat or move charge. Best to do this on an ancient (2 year old) chip tester separately from the CPU.

4) The same applies to lasers (moving mirrors - horrors!), with the additional difficulty that laser systems fail or go out of mechanical alignment, reducing throughput on the $1000/hr tester. And laser targets need a lot of die real estate.

5) Treating the CPU wafer to a separate pass through a "programming tester" might make sense - but this costs you in "scrub yield". Every time you drop a probe on a pad, you have to contact it strongly enough to scrub through any surface oxides that have formed. You tear it up a bit, reducing its reliability later after it is permanently bonded. Separate pads might make sense, but then you are back to the real-estate problem - a pad costs about half a penny on a Pentium-class wafer.

There are tradeoffs, of course, and the cost and yield of a separate chip in the cartridge may make the above costs relatively palatable. Certainly when the Pentium die gets complicated enough, lasers or fuses for redundancy may make economic sense. But on a mature Pentium line (3 months in production) the yields become very good, while the test time gets longer - minimizing expensive test time is a strong driving force. Economics and past history strongly point to a separate chip in the cartridge. I hope those clever folks at Intel have something new that proves me wrong.

Keith

--
Keith Lofstrom [EMAIL PROTECTED] Voice (503)-520-1993
KLIC --- Keith Lofstrom Integrated Circuits --- "Your Ideas in Silicon"
Design Contracting in Bipolar and CMOS - Analog, Digital, and Power ICs
Re: Pentium III...
"Marty Levy" wrote on 1999-01-26 15:48 UTC: Does anyone know the mechanism Intel plans to use to put the infamous serial numbers on Pentium III chips? I wasn't aware that Pentiums had any non-volatile memory (other than ROM) on board. The only practical systems I can think of are to use a fuse or laser repair type scheme.

There are basically the following ways of doing this:

a) laser interruption of a top-layer metal line (see http://www.new-wave.com/products/ezlaze.html for suitable lasers)

b) high-current evaporation of a weak link in a metal or polysilicon layer

c) high-voltage anti-fuse (create an isolation-barrier break and let the flowing current carry metal across the break to "weld" a permanent interconnect)

d) some NVRAM technology (EEPROM, FeRAM, etc.)

The advantage of a) is that it does not require additional chip circuitry. It is cheap and reliable, but has to be done before packaging. The advantage of the others is that they can be done after packaging. b) is fairly easy to spot under a microscope. I have attached a small JPEG file showing a blown polysilicon fuse as they are found on SGS-Thomson ST16Fxyz smartcard security processors (photo prepared by O. Kömmerling, ADSR, Germany). The advantage of c) and d) is that they take a bit more work to read out, but they also need a more complex production process. I am told that c) was used in NSA's Clipper chip to store the classified SKIPJACK parameters (the masks were apparently not secret).

If someone sends me a Pentium III chip, I'd be happy to depackage it and send you snapshots of any visible metal/poly fuses that I can spot.

Markus
--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: http://www.cl.cam.ac.uk/~mgk25/

polyfuse.jpg
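All four techniques a)-d) boil down to the same abstraction: a bank of one-time-programmable bits, where a fuse can go from intact to blown but never back. A toy software model of such a fuse bank (the class, names, and 64-bit width are my own illustration, not Intel's mechanism):

```python
class FuseArray:
    """Toy model of a one-time-programmable fuse bank: every bit starts
    at 0 (intact) and can only ever be set to 1 (blown)."""

    def __init__(self, nbits=64):
        self.bits = [0] * nbits

    def blow(self, i):
        self.bits[i] = 1          # irreversible: there is no "unblow"

    def program_serial(self, serial):
        """Blow the fuses corresponding to the 1-bits of the serial."""
        for i in range(len(self.bits)):
            if (serial >> i) & 1:
                self.blow(i)

    def read_serial(self):
        return sum(b << i for i, b in enumerate(self.bits))

fuses = FuseArray()
fuses.program_serial(0xDEADBEEF)
print(hex(fuses.read_serial()))   # -> 0xdeadbeef
```

The model also shows why readout attacks differ between the techniques: with a) and b) the state of each "bit" is directly visible as a physical gap under a microscope, whereas c) and d) store it in a form that needs electrical or more elaborate physical probing.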
Re: Trojan Processors
David Honig wrote on 1999-01-20 22:42 UTC:

At 08:56 PM 1/20/99, Ben Laurie wrote:

Steve Bellovin wrote: Intel has announced a number of interesting things at the RSA conference. The most important, to me, is the inclusion of a hardware random number generator (based on thermal noise) in the Pentium III instruction set. They also announced hardware support for IPSEC.

An interesting question (for me, at least) is: how will I know that the hardware RNG is really producing stuff based on thermal noise, and not, say, on the serial number, some secret known to Intel, and a PRNG?

You would have to reverse engineer random samples of the chip to gain *some* confidence. Intel could make this easier by providing their "source" and tool flow, from specs to an HDL to synthesis to layout. There are, I am told, commercial firms who will give you a netlist given *only* a sample chip and lots of money.

Oh, I can also tell you names: Semiconductor Insights Inc. and Chipworks Inc., both in Canada, are two companies that make their living by reverse engineering netlists from VLSI products, primarily for patent-infringement lawsuits. Chipworks has on their web site a nice micrograph collection of greetings that chip designers left in the layout for reverse engineers. These labs will, however, usually ask for a dozen sample chips (etching is an irreversible process after all, so you often need several samples to get the parameters right). The commercial prices range from around 10^4 USD for reading out a smartcard to 10^5 USD for reverse engineering a cryptographic ASIC (see the court evidence presented in the BSkyB v. Christopher-Carey case). Obviously, knowing the netlist of one sample tells you nothing about the functionality of the other processors that you buy from the same source.
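Short of reverse engineering the die, the only cheap check on a hardware RNG is statistical, and it is worth being clear about what that does and does not buy you: a keyed PRNG passes every statistical test, which is exactly why the quoted question cannot be settled in software. Still, such tests catch crude substitutions. A minimal sketch in the style of the FIPS 140-1 monobit test, using Python's `secrets` module merely as a stand-in for reading the hardware RNG:

```python
import secrets  # stand-in source; a real check would read the hardware RNG

def bytes_to_bits(data):
    """Unpack a byte string into a list of 0/1 bits."""
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def monobit_test(bits):
    """FIPS 140-1 style monobit test on a 20,000-bit sample:
    pass iff the number of ones falls strictly between 9654 and 10346."""
    assert len(bits) == 20000
    ones = sum(bits)
    return 9654 < ones < 10346

sample = bytes_to_bits(secrets.token_bytes(2500))   # 2500 bytes = 20,000 bits
print("monobit test passed:", monobit_test(sample))
```

Passing says only that the output is not obviously biased; it says nothing about whether the bits came from thermal noise, the serial number, or a secret known to the manufacturer.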
I would not be worried about chips purchased directly from Intel, but knowing that the NSA and its international competitors operate production fabs similar to Intel's, and that smaller firms can easily hire such fabs as well, it is to be expected that there are already loads of Trojan processors in circulation. Tampering with the RNG by replacing it with a PRNG is one potential manipulation. Other, much more interesting manipulations are hidden ways to get from user into supervisor mode, allowing attackers to trivially circumvent even a B3 multi-level security OS. Installing a root kit on dockmaster.ncsc.mil this way would be fun. Who needs covert channels if you have a backdoor instruction?

I have even better ideas for what a Trojan CPU manufacturer might want to put into his hardware: the Pentium and friends all have string-copy machine instructions. Add a tiny (LFSR-based) substring-detection circuit to these, and trigger an "interesting" effect once a specific rare substring is processed. This could be the slow death of the processor within the next 30 minutes, or an immediate switch to supervisor mode and execution of the bytes following the magic string. No OS could protect you from a CPU trap door like that. To shut down the Iraqi defense system, you would just send them email (or radar pulses?) with the right strings in it, and no matter how secure the OS is, the CPU would bypass it all at the hardware level. In a full-paranoia application, I might seriously consider removing from the compiler's code generator those string-handling instructions that make the implementation of substring-triggered Trojans particularly easy.

And there's of course Ross Anderson and Markus Kuhn and their chip-stripping labs...

There is indeed an interest from government agencies in reverse engineering capabilities, to make sure that hardware bought for the processing of critical information has no trap door.
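The LFSR trigger idea can be modelled in software. Real trap-door hardware would compare a compact feedback-register signature of the byte stream rather than store the magic string itself; the sketch below uses a CRC-16-style Galois register for the same effect. The magic string, polynomial, and "trigger" action are all invented for illustration:

```python
# Software model of a substring-triggered trap door hidden in a
# string-copy path (think REP MOVSB). The detector keeps only a 16-bit
# LFSR signature, not the magic string itself, mimicking the tiny
# hardware footprint described in the post. All constants are made up.

def lfsr_step(state, byte, poly=0x1021, width=16):
    """Feed one byte into a Galois-style feedback shift register."""
    state ^= byte << (width - 8)
    for _ in range(8):
        if state & (1 << (width - 1)):
            state = ((state << 1) ^ poly) & ((1 << width) - 1)
        else:
            state = (state << 1) & ((1 << width) - 1)
    return state

def signature(data):
    state = 0
    for b in data:
        state = lfsr_step(state, b)
    return state

MAGIC = b"launch code 1234"          # hypothetical trigger string
MAGIC_SIGNATURE = signature(MAGIC)   # only this constant lives "in hardware"

def string_copy(src):
    """Copy bytes while checking each sliding window's LFSR signature."""
    triggered = False
    dst = bytearray()
    for i, b in enumerate(src):
        dst.append(b)
        window = src[max(0, i + 1 - len(MAGIC)):i + 1]
        if len(window) == len(MAGIC) and signature(window) == MAGIC_SIGNATURE:
            triggered = True   # hardware would switch to supervisor mode here
    return bytes(dst), triggered

_, hit = string_copy(b"harmless mail ... launch code 1234 ... more text")
print("trap door triggered:", hit)
```

The point of the model is the asymmetry: the defender must audit every data path, while the attacker needs only one 16-bit comparison wired into an instruction that every OS and application uses constantly.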
At the moment, the only implemented protection - if any - is the controlled transport of components between trusted (vetted) manufacturers.

We ourselves can at present only look reasonably at chips with up to two or three metal layers, because we use hydrofluoric-acid wet etching to strip off the metal layers. Once we get access to a small reactive-ion etcher, we might also start looking at processors with more metal layers, such as the Pentium. I am told that workstation CPUs are actually easier to understand than many of the 8-bit microcontrollers that smartcard hackers normally work with. The latter have a layout that has been hand-optimized over more than a decade, while modern processors have much more regular and much less optimized routing that was dumped directly out of some VHDL netlist generator, without the loving hand of a 1980s microcontroller designer. Chip decompiling might not be as difficult as it seems at first. We believe that much of it can be done cheaply and semi-automatically, even using optical microscopy. I can now read 0.8 µm CMOS designs with a single metal layer from micrographs of the metal and the poly layer almost as easily as circuit diagrams. (Actually, if someone here wants to do a PhD project in VLSI