Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.
Dear Ray,

On 2013-10-11, at 19:38, Ray Dillinger wrote:
> This is despite meeting (for some inscrutable definition of "meeting")
> FIPS 140-2 Level 2 and Common Criteria standards. These standards
> require steps that were clearly not done here. Yet, validation
> certificates were issued.

This is a misunderstanding of the CC certification and FIPS validation processes: the certificates were issued *under the condition* that the software/system built on the card uses/implements the mandated RNG tests. The software didn't, invalidating the results of the certifications.

At best, the mandatory guidance is there because it was too difficult to prove that the smart card meets the criteria without it (the typical example in the OS world: the administrator is assumed to be trusted; the typical example in smart card hardware: do the RNG tests!). At worst, the mandatory guidance is there because without it the smart card would not have met the criteria, i.e. without following the guidance there is a vulnerability. This is an example of the latter case.

Most likely the software also hasn't implemented the other requirements, leaving it somewhat to very vulnerable to standard smart card attacks such as side-channel analysis and perturbation. If the total (the smart card + software) had been CC certified, this would have been checked as part of the composite certification.

(I've been in the smart card CC world for more than a decade. This kind of misunderstanding/misapplication is rare in the financial world thanks to EMVCo, i.e. the credit card companies. It is also rare for European government organisations, as they know to contact the Dutch/French/German/UK agencies involved in these things. European ePassports, for example, are generally certified as a whole, and a mistake of this order in those would be ... surprising, and cause for some intense discussion in the smart card certification community.
Newer parties to the smart card world tend to have to relearn the lessons again and again, it seems.)

With kind regards,
Wouter Slegers

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
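For concreteness, the mandated RNG tests referred to above include a continuous test of the kind FIPS 140-2 describes: each freshly generated output block is compared against the previous one, and the source is rejected on a repeat (a stuck generator is the classic failure). A minimal sketch, with class and function names of my own invention; a real smart card would run this in firmware, not Python:

```python
import os

BLOCK_BYTES = 16  # 128-bit comparison blocks, for illustration


class ContinuousRngTest:
    """Continuous RNG test in the style of FIPS 140-2.

    Each new output block must differ from the previous one; a repeat
    signals a stuck or broken source and halts output.
    """

    def __init__(self, source):
        self._source = source
        # The very first block is consumed for comparison only, never output.
        self._last = source(BLOCK_BYTES)

    def read_block(self) -> bytes:
        block = self._source(BLOCK_BYTES)
        if block == self._last:
            raise RuntimeError("RNG failure: repeated output block")
        self._last = block
        return block


rng = ContinuousRngTest(os.urandom)
block = rng.read_block()
```

The point of the post stands either way: the test only helps if the software on top of the certified hardware actually runs it.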
Re: Who cares about side-channel attacks?
nt resources to implement the countermeasures well. Doing the whole protection well, not just the blinding, is a real engineering effort. It also requires a specific type of expertise that is not easy to get or develop (although it is great fun for the developer as a person), i.e. it is expensive in development personnel costs.
- Testing and production might suffer from the security measures. This can be surprisingly expensive in terms of production speed.
- Reliability of the product in the field is potentially going to suffer, because of the risk of the countermeasures tripping in the field. Out there the power is bad (looking just like a voltage-glitch attack), the sun is on the device (looking just like a temperature attack), the device falls off the counter (causing a short disconnect in the tamper sensors' connectors, looking just like a tamper event). Because it is hard to get good information on these events in the field (attacks and accidents alike), reliability takes an unknown but potentially high hit. This is the big cost in the eyes of management (and in mine).

Benefits:
- No compromise of the resources. But in many cases, it is not the product's resources that are compromised...
- A warm fuzzy feeling.

If you look at it this way, it makes no sense to implement countermeasures. Unless the costs are reduced by doing exactly what Peter had already excluded: using a ready-made crypto library / smart card / ... that is already tested and shown to work. Or, which is my experience, because regulations in the product domain force the developer to have these countermeasures and show them to be effective to third parties (evaluation labs). This is the domain of financial organisations with their accreditations, and government(-like) organisations requiring Common Criteria evaluations. (Which is also excluded by Peter: the group that does this because they have no choice.)

> Does SCA protection enter the picture?

Marginally at best.
For real threats out there, I agree that it is not as high a priority as perturbation or API attacks. It is, however, relatively easy to implement only the blinding part of the SCA protection (just take a crypto library that does this). Implementing the real anti-perturbation and side-channel analysis protection, that is where it becomes a serious amount of work.

So, in short, I would see the group that Peter was looking for as an economic anomaly ;-) Although I would be fascinated to hear why it is interesting for them to do anyway.

With kind regards,
Wouter Slegers

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
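The "only the blinding" part that the post calls relatively easy can be illustrated with classic RSA base blinding: the secret exponentiation runs on a randomised input, decorrelating the power trace from attacker-chosen ciphertext. A toy sketch with deliberately tiny, insecure parameters (all numbers and names here are illustrative, not from the post; a real implementation lives in a hardened library):

```python
import math
import secrets

# Toy RSA parameters for illustration only -- real keys are 2048+ bits
# and use constant-time big-integer arithmetic, not Python ints.
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)


def rsa_decrypt_blinded(c: int) -> int:
    """RSA decryption with base blinding (Chaum-style).

    The secret exponentiation operates on r^e * c instead of c, so the
    side-channel leakage is decorrelated from the attacker-chosen input.
    Note this hides only the data dependency, not the exponent itself.
    """
    while True:
        r = secrets.randbelow(n - 2) + 2   # fresh blinding factor per call
        if math.gcd(r, n) == 1:            # must be invertible mod n
            break
    blinded = (pow(r, e, n) * c) % n
    m_blinded = pow(blinded, d, n)          # the side-channel-sensitive step
    return (m_blinded * pow(r, -1, n)) % n  # unblind: multiply by r^-1


m = 65
assert rsa_decrypt_blinded(pow(m, e, n)) == m
```

This is exactly the asymmetry the post describes: blinding is a few lines, while anti-perturbation hardening (redundant computation, sensors, fault-detecting comparisons) is the serious engineering effort.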
Re: combining entropy
L.S.,

> If I have N pools of entropy (all same size X) and I pool them
> together with XOR, is that as good as it gets?
>
> My assumptions are:
>
> * I trust no single source of Random Numbers.
> * I trust at least one source of all the sources.
> * no particular difficulty with lossy combination.

I take the last item to mean that you do not mind wasting entropy but want to be sure the resulting random number is unpredictable.

If you add one additional assumption:

* The sources are independent of each other

then the XOR of the random sources will be at least as unpredictable as the most unpredictable individual random source (to keep away from the entropy discussion). As far as I can see, this is the "if at least one source is unpredictable for a workload of x, the result is also at least that unpredictable" property that you seem to be looking for.

If the sources are not independent, in the most extreme case that the sources are the same, the result is not so good: XORing in the same RNG stream twice, however good the RNG, is not so useful ;-)

Without the threat model, I am not sure if this is a problem for you, but if the attacker has control or knowledge of some of the sources, he also knows the XOR of the remaining ones. In the case where he knows all but one source, and the remaining source is not so unpredictable (an LFSR, a poorly biased noise source), the result can be quite predictable (and in weak RNG designs, the remaining source might be compromised). Note that this could also be used to force the combined RNG to be more likely to generate a chosen output.

Using hash functions to combine the random sources makes it computationally harder for such chosen results to be generated: it quickly becomes effectively a search problem for hash collisions where you have only limited choice of the input. Temporary lulls in the quality of the random sources are also handled much better.
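The two combining strategies above can be sketched as follows. This is a minimal illustration of the argument, not a production RNG design, and the function names are mine:

```python
import hashlib
import secrets


def xor_combine(*pools: bytes) -> bytes:
    """XOR N equal-size entropy pools.

    With independent pools the result is at least as unpredictable as
    the most unpredictable pool; with correlated pools (worst case:
    identical streams) the entropy cancels out.
    """
    assert len({len(p) for p in pools}) == 1, "pools must be the same size"
    out = bytes(len(pools[0]))
    for p in pools:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out


def hash_combine(*pools: bytes) -> bytes:
    """Combine pools through a hash instead of XOR.

    Steering the combined output towards a chosen value now requires a
    hash-collision search for an attacker who controls only some inputs,
    and a temporarily weak source cannot cancel out a strong one.
    """
    h = hashlib.sha256()
    for p in pools:
        h.update(len(p).to_bytes(4, "big"))  # length-prefix each input
        h.update(p)
    return h.digest()


a, b = secrets.token_bytes(32), secrets.token_bytes(32)
assert xor_combine(a, b) == xor_combine(b, a)  # XOR is order-independent
assert xor_combine(a, a) == bytes(32)          # identical sources cancel!
```

The last assertion is the "same stream twice" failure mode in miniature: two perfectly good but identical inputs XOR to all zeros, while `hash_combine(a, a)` still yields an unpredictable-looking (though equally attacker-known) digest.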
Peter Gutmann's dissertation has a very good description of how he hardened the random number generation in his cryptlib against many such attacks/mistakes.

With kind regards,
Wouter Slegers