Re: [cryptography] Intel RNG - RdSeed

2012-07-21 Thread David Johnston
If you thought RdRand caused a lot of chatter on this list, we've just 
announced a new sister instruction.. RdSeed.

It's here.. http://software.intel.com/file/45207

RdSeed is SP800-90B & C and X9.82 parts 2 & 4 compliant in the XOR 
construction. But they're all draft specs so things could change.
RdSeed is to RdRand as /dev/random is to /dev/urandom. It returns 100% 
entropy (minus epsilon if you're picky).


Since it is dependent on the supply of entropy and has quite a 
conservative conditioning ratio, its maximum throughput is less than 
that of RdRand.


We haven't released any other documentation on this yet, so until 
we do, this is as good a place to ask questions as any.
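
In the meantime, a minimal detection sketch for anyone who wants to probe 
for the new instruction (a sketch only, assuming GCC or Clang on x86; the 
bit positions are from Intel's public CPUID documentation, and 
__get_cpuid_count needs a fairly recent compiler):

    /* Detect RdRand and RdSeed support via CPUID.
     * RdRand: leaf 1, ECX bit 30.  RdSeed: leaf 7 subleaf 0, EBX bit 18. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("RdRand: %s\n", (ecx & (1u << 30)) ? "yes" : "no");

        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            printf("RdSeed: %s\n", (ebx & (1u << 18)) ? "yes" : "no");

        return 0;
    }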


DJ




Re: [cryptography] Intel RNG

2012-06-23 Thread James A. Donald
On 2012-06-23 10:48 PM, ianG wrote:
 And, now it is possible to see a case where even if we didn't need the
 secrecy for administrative reasons, random number generation may want to
 keep the seed input to the DRBG secret.

If we had the raw unwhitened semi-random data, an attacker could 
partially predict it - but only partially.  If we get enough randomness, 
no problem.


Side channel attacks on a true random generator are irrelevant unless 
the attacker can predict the contents of the true random generator 100%.






Re: [cryptography] Intel RNG

2012-06-22 Thread Marsh Ray

On 06/21/2012 09:05 PM, ianG wrote:
 On 22/06/12 06:53 AM, Michael Nelson wrote:
  At the output of the DRBG, through RdRand, you have no visibility
  of these processes. We seek to limit the side channels through
  which an attacker could determine the internal state of the DRNG.

 Good answer!


This may be the right choice, but I can't figure out how it's different from 
security-through-obscurity.



  I suppose that if the rng was shared between multiple processes,
  and if a malicious process could read the internal state, then it
  could predict what another process was going to be given in the
  near future.

  That said, I think that it's a natural factoring to let the user
  see the bits directly from the hardware source, before any
  massaging. Perhaps this could be a mode.


Perhaps another way to phrase it is providing a different source of 
(raw, unconditioned) entropy for folks who already have a software pool 
which requires entropy estimates on the incoming data.



 It's a natural human question to ask. I want to see what's under the
 hood. But it seems there is also a very good response - if you can
 see under the hood, so can your side-channel-equipped attacker.


It seems to me that the bits one gets to see via RdRand aren't a side 
channel, by definition. But if the attacker gets to see a disjoint set of 
samples from the same oscillator then we only need to worry about 
dependencies lurking between the sample sets.


The oscillator is a fairly simple circuit, so it should be 
straightforward to show it has a memory capacity of only a bit or two. 
Allowing the oscillator to run for a few cycles between sample sets 
going to different consumers should eliminate the possibility of short 
term dependencies.


Longer term dependencies would be slowly changing things like 
temperature and supply voltage. But these are possibly available via 
other channels?


I don't recall how fast those capacitor voltages were changing.


 So what you get is what you get. Love it or leave it. [...]  So we
 have a situation where we can rely on the chip to do what is
 advertised, but we can't rely on the manufacturer to give us exactly
 the chip that they advertised.


The same is true of the instructions handling our private key material 
though, right? They could all be backdoored to leak info through some 
secret side channel. What's special about the RdRand instruction?


Is it that random number generation is considered 'in scope' for 
cryptography whereas ALU backdoors are not?


Is it that it seems more practical to engineer a weak key attack from a 
weak entropy source?


That weak key generation produces exposure that persists across space 
and time via the ciphertext?


- Marsh


Re: [cryptography] Intel RNG

2012-06-22 Thread Kevin W. Wall
Marsh,

Am I missing something?

On Fri, Jun 22, 2012 at 1:06 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 06/21/2012 09:05 PM, ianG wrote:


 On 22/06/12 06:53 AM, Michael Nelson wrote:
[snip]

 It's a natural human question to ask. I want to see what's under the
 hood. But it seems there is also a very good response - if you can
 see under the hood, so can your side-channel-equipped attacker.

 It seems to me that the bits one gets to see via RdRand aren't a side
 channel, by defintion. But if the attacker gets to see a disjoint set of
 samples from the same oscillator then we only need to worry about
 dependencies lurking between the sample sets.

 The oscillator is a fairly simple circuit, so it should be straightforward
 to show it has a memory capacity of only a bit or two. Allowing the oscillator
 to run for a few cycles between sample sets going to different consumers
 should eliminate the possibility of short term dependencies.

You wrote "going to DIFFERENT consumers". I am interpreting that as
different processes, but I don't see how a CPU instruction like RdRand
or anything else is going to be process or thread or (insert your favorite
security context here) aware.  If you had omitted the "different",
then it would have made sense.

So am I just reading too much into your statement and you didn't really
mean *different* consumers, or am I simply not understanding what
you meant? If the latter, could you kindly explain?

Thanks,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Intel RNG

2012-06-21 Thread James A. Donald

On 2012-06-20 5:22 AM, Matthew Green wrote:

 If you assume that every manufactured device will meet the standards of Intel's
 test units, then you can live with the CRI/Intel review.

 If you're /not/ confident in that assumption, the ability to access raw ES
 output would be useful...


I see no valid case for on chip whitening.  Whitening looks like a 
classic job for software.  Why waste chip real estate on something that 
will only be used 0.0001% of the time?  Whitening is never going to have 
an impact on performance, while it has an impact on our ability to know 
where our supposedly random numbers are coming from.




Re: [cryptography] Intel RNG

2012-06-21 Thread Michael Nelson
 James A. Donald wrote:
  I see no valid case for on chip whitening.  Whitening looks like a
  classic job for software.  Why waste chip real estate on something
  that will only be used...

On that Intel forum site someone pointed to, one of the Intel guys said with 
respect to the whitening and health testing processes:

"At the output of the DRBG, through RdRand, you have no visibility of these 
processes. We seek to limit the side channels through which an attacker could 
determine the internal state of the DRNG."

I suppose that if the rng was shared between multiple processes, and if a 
malicious process could read the internal state, then it could predict what 
another process was going to be given in the near future.

That said, I think that it's a natural factoring to let the user see the bits 
directly from the hardware source, before any massaging.  Perhaps this could be 
a mode.

Mike


Re: [cryptography] Intel RNG

2012-06-21 Thread James A. Donald

James A. Donald wrote:
  I see no valid case for on chip whitening. Whitening
  looks like a classic job for software. Why waste chip
  real estate on something that will only be used 0.0001% of
  the time.

On 2012-06-22 6:53 AM, Michael Nelson wrote:
 I suppose that if the rng was shared between multiple
 processes, and if a malicious process could read the
 internal state, then it could predict what another process
 was going to be given in the near future.

To the extent that the rng generates true randomness, it can only be 
partially predicted.  Assuming that each process collects sufficient true 
randomness for its purposes, this is not a problem.  That is the whole 
point and purpose of generating true randomness.




Re: [cryptography] Intel RNG

2012-06-21 Thread Thierry Moreau

James A. Donald wrote:

James A. Donald wrote:
   I see no valid case for on chip whitening. Whitening
   looks like a classic job for software. Why waste chip
   real estate on something that will only be used 0.0001% of
   the time.

On 2012-06-22 6:53 AM, Michael Nelson wrote:
  I suppose that if the rng was shared between multiple
  processes, and if a malicious process could read the
  internal state, then it could predict what another process
  was going to be given in the near future.

To the extent that the rng generates true randomness, it can only be 
partially predicted.  Assuming that each process collects sufficient true 
randomness for its purposes, this is not a problem.  That is the whole 
point and purpose of generating true randomness.




Just a few more random arguments in this discussion.

The NIST SP800-90 architecture, which is used in the Intel RNG, has

(A) a true random sampling process which provides less than full 
entropy, followed by


(B) an adaptation process, deterministic but not a NIST algorithm, 
called "conditioning", which provides well-quantified full-entropy bits 
(the designer has to demonstrate that this goal is reached, given the 
available understanding of the random sampling process), and finally


(C) the DRBG (deterministic random bit generator) which is periodically 
seeded by the output of the conditioning algorithm.


(A) is truly random, (B) and (C) are deterministic.
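
A structural sketch of this cascade in C, with toy stand-ins only (a 
biased rand() stream for (A), bit-folding for (B), and a non-cryptographic 
xorshift64 for (C); the real design uses an on-die oscillator, AES-CBC-MAC 
conditioning and an AES-CTR DRBG, so only the data flow here is faithful):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* (A) toy entropy source: a heavily biased, less-than-full-entropy bit. */
    static uint8_t sample_noise(void)
    {
        return (rand() & 0x7) == 0;
    }

    /* (B) toy conditioner: fold many weak samples into one 64-bit seed. */
    static uint64_t condition(void)
    {
        uint64_t pool = 0;
        for (int i = 0; i < 4096; i++)
            pool = (pool << 1 | pool >> 63) ^ sample_noise();
        return pool;
    }

    /* (C) toy DRBG: deterministic expansion of the seed (xorshift64). */
    static uint64_t drbg_state;
    static void drbg_seed(uint64_t s) { drbg_state = s ? s : 1; }
    static uint64_t drbg_next(void)
    {
        drbg_state ^= drbg_state << 13;
        drbg_state ^= drbg_state >> 7;
        drbg_state ^= drbg_state << 17;
        return drbg_state;
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        drbg_seed(condition());   /* the real DRBG is reseeded periodically */
        for (int i = 0; i < 4; i++)
            printf("%016llx\n", (unsigned long long)drbg_next());
        return 0;
    }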

If your enemy has access to the data used by either the conditioning 
algorithm or the DRBG, he can figure out their respective output.


Because the Intel RNG designers do not know which CPU request comes from 
a user versus an enemy, they provide a unique and independent output 
portion to each of them. Neither can guess what the other received. If 
the enemy can trace the user program with the CPU's debugging facilities, 
he might be in a position to eavesdrop on an output portion given to the 
user. Be careful.


But don't trust me about these explanations, I might be an enemy. At 
least Intel designers don't trust me to audit their deterministic 
algorithm implementations within production parts. So they protect your 
secure applications, just in case my Trojan horse software is loaded 
when your application runs.


As a concluding remark, ... well why should I share a conclusion with 
potential enemies? You may as well (truly random) draw your own conclusion.


Regards,



--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Intel RNG

2012-06-20 Thread Joachim Strömbergson
Aloha!

On 2012-06-20 05:32 , James A. Donald wrote:
 If intel told me how it worked, and provided low level access to raw
 unwhitened output, I could find pretty good evidence that the low level
 randomness generator was working as described, and perfect evidence that
 the whitener was working as described.  Certification does not tell me
 anything much.

Good point. And even more so. What I think we would like to have is:

(1) Read access to the raw output of the entropy source.
(2) Possibly read access after whitening.
(3) Write access to inputs of the PRNG

This would allow us to probe that the whole chain works as intended with
KATs for the PRNG part.
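
The shape such a KAT takes, sketched in C: fixed input in, published 
output compared. SHA-256's well-known empty-string digest stands in for 
the PRNG under test, since we have no DRBG vectors to use (assumes 
OpenSSL's libcrypto; link with -lcrypto):

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Published SHA-256 digest of the empty string. */
        static const unsigned char expected[SHA256_DIGEST_LENGTH] = {
            0xe3,0xb0,0xc4,0x42,0x98,0xfc,0x1c,0x14,0x9a,0xfb,0xf4,0xc8,
            0x99,0x6f,0xb9,0x24,0x27,0xae,0x41,0xe4,0x64,0x9b,0x93,0x4c,
            0xa4,0x95,0x99,0x1b,0x78,0x52,0xb8,0x55
        };
        unsigned char got[SHA256_DIGEST_LENGTH];

        SHA256((const unsigned char *)"", 0, got);   /* fixed, known input */
        puts(memcmp(got, expected, sizeof got) == 0 ? "KAT pass" : "KAT FAIL");
        return 0;
    }

Write access to the PRNG inputs, item (3) above, is exactly what would 
make such a test possible end to end.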

This would still not prove that Intel, when MUXing in data from (1)/(2)
into the PRNG, isn't actually doing something completely different.

-- 
With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.









Re: [cryptography] Intel RNG

2012-06-19 Thread Jon Callas

On Jun 18, 2012, at 9:03 PM, Matthew Green wrote:

 On Jun 18, 2012, at 4:21 PM, Jon Callas wrote:
 
 Reviewers don't want a review published that shows they gave a pass on a 
 crap system. Producing a crap product hurts business more than any thing in 
 the world. Reviews are products. If a professional organization gives a pass 
 on something that turned out to be bad, it can (and has) destroyed the 
 organization.
 
 
 I would really love to hear some examples from the security world. 
 
 I'm not being skeptical: I really would like to know if any professional 
 security evaluation firm has suffered meaningful, lasting harm as a result of 
 having approved a product that was later broken.
 
 I can think of several /counterexamples/, a few in particular from the 
 satellite TV world. But not the reverse.
 
 Anyone?

The canonical example I was thinking of was Arthur Andersen, which doesn't meet 
your definition, I'm sure.

But we'll never get to requiring security reviews if we don't start off seeing 
them as desirable.

Jon





Re: [cryptography] Intel RNG

2012-06-19 Thread Jon Callas

On Jun 18, 2012, at 4:12 PM, Marsh Ray wrote:

 
 150 clocks (Intel's figure) implies 18.75 clocks per byte.
 

That's not bad at all. It's in the neighborhood of what I remember my DRBG 
running at with AES-NI. Faster, but not by a lot. However, I will be getting the 
full 16 bytes out of the AES operation and RDRAND is doing 64 bits at a time, 
right?

 
 Note that Skein 512 in pure software costs only about 6.25 clocks per byte. 
 Three times faster! If RDRAND were entered in the SHA-3 contest, it would 
 rank in the bottom third of the remaining contestants.
 http://bench.cr.yp.to/results-sha3.html

As much as it warms my heart to hear you say that, it's not a fair comparison. 
A DRBG has to do a lot of other stuff, too. The DRBG is an interesting beast 
and a subject of a whole different conversation.

Jon




Re: [cryptography] Intel RNG

2012-06-19 Thread James A. Donald

And, to get back on topic after having gone dangerously off topic:

The market for cryptography is the market for silver bullets:  Those 
actually paying money cannot tell the difference between real experts 
and salesmen, thus the incentive to actually be any good at this is not 
high.




Re: [cryptography] Intel RNG

2012-06-19 Thread coderman
On Tue, Jun 19, 2012 at 12:48 AM, Marsh Ray ma...@extendedsubset.com wrote:
...
 Right, 500 MB/s of random numbers ought to be enough for anybody.

these rates often are not useful. even busy secure web or VPN servers
use orders of magnitude less.

initialization of full disk crypto across an SSD RAID could consume
it, but that's the only practical use case i've encountered so far
:)

that said, a typical host entropy gathering daemon is often
insufficient, and even off-die serial bus and other old skewl sources
were providing entropy in kbit/sec rates, not MByte/sec.


 My main point in running the perf numbers was to figure out the
 justification for this RNG not being vulnerable to entropy depletion attacks
 in shared hosting environments.

some (many?) HWRNG designs use bit accumulators that feed a register
that is read by a userspace instruction.

consider the following configuration:
- this register collects single bits from two high-speed oscillators,
sub-sampled at a quarter clock.
- only whitened / non-run bits are collected, as set in a configuration
write / initialization.

in short: providing slow, non-deterministic output rates to this
entropy instruction.

if the instruction is configured to not-block, you could starve the
available pool in one thread, in one process, by using it
aggressively.
if the instruction is configured to block, you could introduce
significant processing delays (unexpectedly so?) to other consumers
in other threads of execution, or other processes.

Intel did an end run around this problem with the DRBG, which is
similar to urandom, in the sense that it does not provide a 1:1
correlation of entropy in the system to entropy bits returned out, as
you may get in /dev/random linux behavior, which blocks once the
kernel entropy pool is exhausted. (that's another rant/tangent. let's
not go there :)



 Still, 150 clocks is a crazy long time for an instruction that doesn't
 involve a cache miss or a TLB flush or the like.
...
 So something is causing AES-NI to take 300 clocks/block to run this DRBG.
 Again, more than 3x slower than the benchmarks I see for the hardware
 primitive. My interpretation is that either RdRand is blocking due to
 entropy depletion, there's some internal data pipe bottleneck, or maybe
 some of both.

it is also seeding from the physical noise sources, running sanity
checks of some type, and then handing over to DRBG. so there is
clearly more involved than just a call to AES-NI. 3x as expensive
doesn't sound unreasonable if the seeding and validation overhead is
significant.



 If in reality there's no way RDRAND can ever fail to return 64 bits of
 random data, then Intel could document that fact and we could save the world
 from yet another untested exceptional code path that only had a moderate
 chance of working the first time it's really needed anyway.

that's a great idea, and my reading of it is that they have said as
much. perhaps they should state it more clearly for the usual case.
my understanding is that it will never fail unless the hardware
itself starts up with a broken physical source returning clearly
invalid bits?

E.g.: "In the rare event that the DRNG fails during runtime, it would
cease to issue random numbers rather than issue poor quality random
numbers."


as for stating that it should never run dry, or fail to not return
bits as long as the instruction is working, no matter how frequently
it is invoked, see:


"With respect to the RNG taxonomy discussed above, the DRNG follows the
cascade construction RNG model, using a processor resident entropy
source to repeatedly seed a hardware-implemented CSPRNG. Unlike
software approaches, it includes a high-quality entropy source
implementation that can be sampled quickly to repeatedly seed the
CSPRNG with high-quality entropy. Furthermore, it represents a
self-contained hardware module that is isolated from software attacks
on its internal state. The result is a solution that achieves RNG
objectives with considerable robustness: statistical quality
(independence, uniform distribution), highly unpredictable random
number sequences, high performance, and protection against attack."

Protecting online services against RNG attacks [read: high entropy
consumption servers]
...
Throughput ceiling is insensitive to the number of contending parallel threads


take with a grain of salt; this is all from their documentation:
  
http://software.intel.com/en-us/articles/intel-digital-random-number-generator-drng-software-implementation-guide/


Re: [cryptography] Intel RNG

2012-06-19 Thread Peter Gutmann
coderman coder...@gmail.com writes:
On Tue, Jun 19, 2012 at 12:48 AM, Marsh Ray ma...@extendedsubset.com wrote:
...
 Right, 500 MB/s of random numbers ought to be enough for anybody.

these rates often are not useful. even busy secure web or VPN servers use
orders of magnitude less.

initialization of full disk crypto across an SSD RAID could consume it, but
that's the only practical use case i've encountered so far :)

Not even that, you'd just use it to seed AES-CTR and use that for the
initialisation.  Generator bit-rates seem to be like Javascript engine speeds,
a mostly pointless [0] figure that's provided so you can show that you've
managed to crank your numbers higher than everyone else's, like Benzino
Napaloni and Adenoid Hynkel cranking up their barber chairs.

Peter.

[0] I'm hedging my bets here with mostly, in practice I think it's closer to
entirely pointless.


Re: [cryptography] Intel RNG

2012-06-19 Thread Joachim Strömbergson
Aloha!

On 2012-06-19 11:30 , coderman wrote:
 On Tue, Jun 19, 2012 at 12:48 AM, Marsh Ray ma...@extendedsubset.com wrote:
 So something is causing AES-NI to take 300 clocks/block to run this DRBG.
 Again, more than 3x slower than the benchmarks I see for the hardware
 primitive. My interpretation is that either RdRand is blocking due to
 entropy depletion, there's some internal data pipe bottleneck, or maybe
 some of both.
 
 it is also seeding from the physical noise sources, running sanity
 checks of some type, and then handing over to DRBG. so there is
 clearly more involved than just a call to AES-NI. 3x as expensive
 doesn't sound unreasonable if the seeding and validation overhead is
 significant.

I might be missing something. But is it clear that Bull Mountain is
actually using AES-NI? I assumed that one would like to use a separate
HW-engine. Reading from the CRI paper seems (to me) to suggest that this
is actually the case:

"Entropy conditioning is done via two independent AES-CBC-MAC chains,
one for the generator’s key and one for its counter. AES-CBC-MAC should
be suitable as an entropy extractor, and allows reuse of the module’s
AES hardware."

-- 
With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.









Re: [cryptography] Intel RNG

2012-06-19 Thread Thor Lancelot Simon
On Mon, Jun 18, 2012 at 09:58:59PM -0700, coderman wrote:
 
 this is very useful to have in some configurations (not just testing).
 for example: a user space entropy daemon consuming raw, biased,
 un-whitened, full throughput bits of lower entropy density which is
 run through sanity checks, entropy estimates, and other vetting before
 mixing/obscuring state, and feeding into host or application entropy
 pools.

Sanity checks, entropy estimates, and other vetting *which the output
of a DRBG keyed in a known way by your adversary will pass without
a hint of trouble*.

It seems to me the only reason you'd benefit from access to the raw
source would be if you believed Intel might have goofed the sanity
checks.  For my part, I am happy to rely on CRI's assurance that Intel's
sanity checks are good.

The only defense against a deliberately compromised hardware RNG is to
mix it with something else.
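
A minimal sketch of that mixing in C: XOR the hardware output with an 
independent draw from the kernel pool. If either input is uniform and 
independent of the other, the XOR stays uniform. (Assumes Linux with 
glibc's getrandom(2) and an x86 compiler invoked with -mrdrnd.)

    #include <immintrin.h>
    #include <stdio.h>
    #include <sys/random.h>
    #include <sys/types.h>

    int main(void)
    {
        unsigned long long hw, sw;

        if (!_rdrand64_step(&hw)) {     /* CF=0: no data available */
            fprintf(stderr, "rdrand failed\n");
            return 1;
        }
        if (getrandom(&sw, sizeof sw, 0) != (ssize_t)sizeof sw) {
            fprintf(stderr, "getrandom failed\n");
            return 1;
        }
        printf("%016llx\n", hw ^ sw);   /* neither source alone decides it */
        return 0;
    }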



Re: [cryptography] Intel RNG

2012-06-19 Thread dj
 Aloha!

 On 2012-06-19 11:30 , coderman wrote:
 On Tue, Jun 19, 2012 at 12:48 AM, Marsh Ray ma...@extendedsubset.com
 wrote:
 So something is causing AES-NI to take 300 clocks/block to run this
 DRBG.
 Again, more than 3x slower than the benchmarks I see for the hardware
 primitive. My interpretation is that either RdRand is blocking due to
 entropy depletion, there's some internal data pipe bottleneck, or
 maybe
 some of both.

 it is also seeding from the physical noise sources, running sanity
 checks of some type, and then handing over to DRBG. so there is
 clearly more involved than just a call to AES-NI. 3x as expensive
 doesn't sound unreasonable if the seeding and validation overhead is
 significant.

 I might be missing something. But is it clear that Bull Mountain is
 actually using AES-NI? I assumed that one would like to use a separate
 HW-engine. Reading from the CRI paper seems (to me) to suggest that this
 is actually the case:

It is not using AES-NI. It is a self contained unit on chip with a built
in HW AES encrypt block cipher.



Re: [cryptography] Intel RNG

2012-06-19 Thread coderman
On Tue, Jun 19, 2012 at 9:02 AM,  d...@deadhat.com wrote:
...
 It is not using AES-NI. It is a self contained unit on chip with a built
 in HW AES encrypt block cipher.

thanks for the clarification; is this documented somewhere? i am
curious if the die space consumed for two implementations of AES is
negligible on these very large cores, or if there is another reason to
intentionally keep them separate.


Re: [cryptography] Intel RNG

2012-06-19 Thread Marsh Ray

On 06/19/2012 01:59 PM, coderman wrote:

thanks for the clarification; is this documented somewhere? i am
curious if the die space consumed for two implementations of AES is
negligible on these very large cores, or if there is another reason to
intentionally keep them separate.


It sounds to me like the AES CTR DRBG is shared between multiple cores. 
So keeping it independent of any one core sounds like a good reason to 
separate it.


But then design decisions for these chips have mystified me in the past. 
(HT, SMM, etc. :-)


- Marsh


Re: [cryptography] Intel RNG

2012-06-19 Thread coderman
On Tue, Jun 19, 2012 at 6:17 AM, Thor Lancelot Simon t...@panix.com wrote:
 ...
 Sanity checks, entropy estimates, and other vetting *which the output
 of a DRBG keyed in a known way by your adversary will pass without
 a hint of trouble*.

absolutely; after it has gone through DRBG you have zero visibility
into state of generation. even von neumann whitening and string
filters obscure to some extent.
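
A toy C version of that von Neumann filter against a simulated biased 
source (the source is a stand-in; nothing here models Intel's hardware). 
The variable discard rate is one reason such filters obscure the raw rate:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int biased_bit(void) { return (rand() % 10) < 8; }  /* ~80% ones */

    int main(void)
    {
        int ones = 0, total = 0;
        srand((unsigned)time(NULL));
        while (total < 10000) {
            int a = biased_bit(), b = biased_bit();
            if (a != b) {        /* keep 01 -> 0 and 10 -> 1, drop 00/11 */
                ones += a;
                total++;
            }
        }
        printf("ones fraction after debiasing: %.3f\n", (double)ones / total);
        return 0;
    }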


 It seems to me the only reason you'd benefit from access to the raw
 source would be if you believed Intel might have goofed the sanity
 checks.  For my part, I am happy to rely on CRI's assurance that Intel's
 sanity checks are good.

the sanity checks, being on die, are limited. you can't run DIEHARD
against this in a useful manner because the DRBG obscures anything
useful.

i'll concede the point that you'd only want raw bits to validate CRI
and Intel assurances, and they've done due diligence.

this is something i like to verify myself; no fault with the Intel
design or CRI analysis implied.


Re: [cryptography] Intel RNG

2012-06-19 Thread Matthew Green
 i'll concede the point that you'd only want raw bits to validate CRI
 and Intel assurances, and they've done due diligence.
 
 this is something i like to verify myself; no fault with the Intel
 design or CRI analysis implied.

If you assume that every manufactured device will meet the standards of Intel's 
test units, then you can live with the CRI/Intel review. 

If you're /not/ confident in that assumption, the ability to access raw ES 
output would be useful...


Re: [cryptography] Intel RNG

2012-06-19 Thread dan

  I would really love to hear some examples from the security world.
 
  The canonical example I was thinking of was Arthur Andersen, which
 doesn't meet your definition, I'm sure.


I would not wait for such clarity; the bond rating agencies still
live while the sovereigns they rated are busily defaulting.

--dan


It is criminal to steal a purse, daring to steal a fortune, a mark of
greatness to steal a crown. The blame diminishes as the guilt increases.
 -- Friedrich Schiller



Re: [cryptography] Intel RNG

2012-06-19 Thread Marsh Ray

On 06/19/2012 02:11 PM, coderman wrote:


the sanity checks, being on die, are limited. you can't run DIEHARD
against this in a useful manner because the DRBG obscures anything
useful.


I don't think there's anything useful diehard (specifically) is going to 
tell you.


The raw entropy source output would not be expected to pass diehard. The 
CRI report shows visible artifacts in that FFT graph. The entropy 
estimation function one would apply to that source would likely be much 
simpler than the diehard suite. Just a sanity check that the output is 
actually changing once in a while would go a long way towards 
eliminating the most common failure modes.
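
The shape of such a check, sketched in C: compare each raw block with its 
predecessor, as in the FIPS 140-2 continuous test (rand() is a stand-in 
for the raw source):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        uint32_t prev = 0, cur;
        int primed = 0;

        for (long i = 0; i < 1000000L; i++) {
            cur = (uint32_t)rand();          /* stand-in raw sample */
            if (primed && cur == prev) {     /* stuck-at output? */
                fprintf(stderr, "stuck output detected\n");
                return 1;
            }
            prev = cur;
            primed = 1;
        }
        puts("no stuck blocks seen");
        return 0;
    }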


On the other hand, the AES CTR DRBG output will always pass diehard, 
whether it contains any entropy or not.


- Marsh


Re: [cryptography] Intel RNG

2012-06-19 Thread James A. Donald

On 2012-06-19 4:51 AM, Matthew Green wrote:
 1. Private evaluation report (budgeted to, say, 200 hours)
 probabilistically identifies N serious vulnerabilities. We
 all know that another 200 hours could turn up N more. In
 fact, the code may be riddled with errors. Original N
 vulnerabilities are patched. What should the public report
 say? Technically the vulnerabilities are all 'fixed'.

If the public report says what it should say, lots of people will be 
unhappy.


So, what happens if the public report sounds like it is saying that the 
product is fine, but in fact the product is crap, and disaster ensues?


Answer:  Absolutely nothing.  Example: Wifi security, which somehow 
always uses fine methods in unusual ways.  The same people who brought 
you yesterday's failed Wifi security bring you today's.


To summarize:  Our mechanisms for social verification of truth are 
broken, and are getting more broken.  Social verification tends to scale 
badly.  They have never worked well, and are now working worse than ever.


Nullius in verba:  Take nobody's word for it

This is the general problem with audits of all kinds, not just security 
audits.  It is often not only impossible to punish the irresponsible, 
but even to identify them.


Thus security source code simply has to be available, and it has to be 
verifiable that security hardware is what it claims to be - which is why 
Intel should have made it possible to read the raw unwhitened output of 
its true randomness generator.


And now I am once again going somewhat off topic  on how our social 
verification mechanisms are completely broken - indeed it is very hard 
to make social verification work.


For example the Challenger inquiry found that some people had signed off 
both on reports that the space shuttle was going to explode, and also 
reports that it was good to go.  But the culture was blamed, not any 
specific identifiable people.


For example, try identifying who made, and who received, the dud loans 
that are at the root of the current financial crisis, and who commanded 
them to be made.  It is mysteriously difficult to do so.


For example the crisis at MF Global is everywhere described as a 
liquidity crisis.  It was in fact a solvency crisis.  Jon Corzine 
pissed away MF Global's assets on politically correct financial 
investments, and then kept the place operating for some time in a state 
of insolvency by borrowing from customer funds, but everyone continues 
to pretend that MF Global was solvent until it was not, because 
according to Sarbanes-Oxley accounting standards, it was solvent until 
it was not, presaging an outcome in which no one gets punished.


For example JPM realized it was receiving stolen funds from MF Global. 
There is a large audit trail of incriminating documents as the people at 
JPM wrestle with their consciences.  After generating a large pile of 
highly incriminating paper, they win and their consciences lose.  This 
will probably result in a civil lawsuit against JPM, for acting as a 
fence, but no criminal penalties, nor personal loss of jobs.  Even 
though the trail of documents reveal that an ever increasing number of 
people connected to MF Global knew that MF Global was acting in a 
criminal manner, making them accessories after the fact, it still looks 
as though few, possibly no one, is going to see jail time.


And of course, there are the Climategate files, but to go into any 
details on that can of worms would really take us right off topic. 
Since the widespread introduction of peer review in the 1940s, instead 
of the experimenter telling the scientific community what he observes, 
the scientific community tells the experimenter what he observes.  The 
data cookery revealed by Climategate files is, arguably, business as 
usual.  The defense was "everyone is doing it, that is the way Official 
Science is actually done", which defense is, alas, entirely true.  Peer 
Review was the abandonment of the principle of Nullius in Verba. 
Instead of taking no one's word for it, we take the word of a secretive 
and anonymous panel of referees, resulting in an ever escalating pile of 
bogus science.


To make social verification work, people have to be punished for being 
untruthful, dishonest, and failing in their duty, or at least abruptly 
and irrevocably thrown out of social verification network for the 
slightest infraction.  Which is not nice.






Re: [cryptography] Intel RNG

2012-06-19 Thread dj
 On 06/19/2012 02:11 PM, coderman wrote:

 the sanity checks, being on die, are limited. you can't run DIEHARD
 against this in a useful manner because the DRBG obscures anything
 useful.

 I don't think there's anything useful diehard (specifically) is going to
 tell you.

 The raw entropy source output would not be expected to pass diehard. The
 CR report shows visible artifacts in that FFT graph. The entropy
 estimation function one would apply to that source would likely be much
 simpler than the diehard suite. Just a sanity check that the output is
 actually changing once in a while would go a long way towards
 eliminating the most common failure modes.

 On the other hand, the AES CTR DRBG output will always pass diehard,
 whether it contains any entropy or not.


Yup. Actually having a perfect source is a problem. It's much easier to
test for a source with known defects that meets a well defined statistical
model. With that you can build a test that the circuit is built correctly.
You can also show it catches all SPOF and DPOF cases. You use other
techniques to prove that if built right, the circuit will have a well
defined min entropy in the output.
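
For illustration, a most-common-value min-entropy estimate (H_min = 
-log2 p_max, in the spirit of the draft SP800-90B estimators) is easy to 
sketch in C; a deliberately biased rand() stream stands in for raw source 
output, and this shows the statistic, not a validated estimator (link 
with -lm):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        enum { N = 1 << 20 };
        static unsigned long count[256];
        unsigned long max = 0;

        srand((unsigned)time(NULL));
        for (long i = 0; i < N; i++)
            count[(rand() & 0xff) | (rand() & 0x0f)]++;  /* biased on purpose */

        for (int b = 0; b < 256; b++)
            if (count[b] > max)
                max = count[b];

        double p_max = (double)max / N;
        printf("p_max = %.5f, min-entropy ~ %.3f bits/byte\n",
               p_max, -log2(p_max));
        return 0;
    }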




Re: [cryptography] Intel RNG

2012-06-19 Thread Thor Lancelot Simon
On Tue, Jun 19, 2012 at 07:35:03PM -0700, coderman wrote:
 
 is there any literature on the typical failure modes of TRNG/entropy
 sources in deployed systems?
 
 my understanding is that they tend to fail catastrophically, in a way
 easily detected by FIPS sanity checks. E.g. clearly broken.

I know of one case in which a design mistake may have caused related bits
to be output.  I think the FIPS statistical tests might have turned it
up, but the continuous-output test might well not have.

This was a design by Hifn where they reused an existing RNG block but
changed the output LFSR and thus had to rework the interface to the
register exposed to the PCI bus in which they reported results.  They left out a
latch, so you could accidentally get the same bits from the LFSR twice
or get an intermediate state where some bits were from the previous state
and some were fresh.  The COT would have caught the former, but given
the clocks involved the former case would have been very, very unlikely.
It would not have caught the latter.

I never got a clear answer from Hifn whether they actually left the latch
out of the silicon or just out of the documentation.  However, I tried
very hard to give them opportunities to tell me it was just the docs that
were wrong, and they didn't.  The workaround was to simply read the register
repeatedly, discarding results, until one knew all the bits had to be fresh
given the other clocks involved; inefficient, but it got the job done.

Thor


Re: [cryptography] Intel RNG

2012-06-19 Thread James A. Donald

On 2012-06-19 7:02 AM, Jack Lloyd wrote:

You're
not saying that CRI would hide things, you're just saying that
accepting payment sets the incentives all the wrong way and that all
companies would put out shoddy work so long as they got paid,
especially if giving a bad review would make the customer mad.


Again, observe what has been happening in our current financial crisis. 
 What reason do we have to believe that CRI is more virtuous than JP 
Morgan?



Re: [cryptography] Intel RNG

2012-06-19 Thread James A. Donald

On 2012-06-19 9:07 AM, d...@deadhat.com wrote:

It does tell you that if it is your chip and you don't let
someone else pull the lid off, scrape off the passivation and apply a pico
probe to it, it will certainly provide you with good random numbers
regardless of the FIPS mode.


I don't know that.  Intel might have screwed up deliberately or 
unintentionally, or my particular chip might fail in a way that produces 
numbers that are non random, but, due to whitening, are non random in a 
way that only some people know how to detect.


If intel told me how it worked, and provided low level access to raw 
unwhitened output, I could find pretty good evidence that the low level 
randomness generator was working as described, and perfect evidence that 
the whitener was working as described.  Certification does not tell me 
anything much.



Re: [cryptography] Intel RNG

2012-06-19 Thread ianG

On 20/06/12 13:25 PM, James A. Donald wrote:

On 2012-06-19 7:02 AM, Jack Lloyd wrote:

You're
not saying that CRI would hide things, you're just saying that
accepting payment sets the incentives all the wrong way and that all
companies would put out shoddy work so long as they got paid,
especially if giving a bad review would make the customer mad.


Again, observe what has been happening in our current financial crisis.
What reason do we have to believe that CRI is more virtuous than JP Morgan?



People.  Some of you might know the guys at CRI and have some view as to 
how they would handle it.


Do you have a view as to how audit people handle things?  I remember 
sitting down with a big-N auditor in 1998 or so, and him telling me that 
the systems audit for a major tech firm was a complete farce - the 
partners wrote all the bad bits out.


Time.  100 years ago, JP Morgan was a byword for trust.  Fast forward to 
now ... CRI is only 15 years old.  Add a decade and will it still see 
the same interests?


I'm sure it won't.  The big question is not whether but when...


iang


Re: [cryptography] Intel RNG

2012-06-19 Thread David Johnston

On 6/19/2012 7:35 PM, coderman wrote:

On Tue, Jun 19, 2012 at 1:54 PM, Marsh Ray ma...@extendedsubset.com wrote:

... Just a sanity check that the output is
actually changing once in a while would go a long way towards
eliminating the most common failure modes.

On Tue, Jun 19, 2012 at 6:58 PM,  d...@deadhat.com wrote:

... Actually having a perfect source is a problem. It's much easier to
test for a source with known defects that meet a well defined statistical
model.

is there any literature on the typical failure modes of TRNG/entropy
sources in deployed systems?

my understanding is that they tend to fail catastrophically, in a way
easily detected by FIPS sanity checks. E.g. clearly broken.

is it exceedingly rare for subtle / increasing bias to occur due to
hardware failure or misuse in most designs? are there designs which
fail hard rather than fail silent when error is encountered?


If an entropy source in a closed system is producing an apparently non 
repeating, unbiased sequence and its output is deterministic (or low 
entropy) then there must be internal memory in the entropy source that 
is enabling the non repeating behavior. The more memory, the longer you 
have to watch before you can identify repeating behavior.


So make your entropy source have a very small amount of memory and be 
sufficiently simple that you can model it mathematically. Then you can 
show all the SPOF and DPOF failure modes and show that your health check 
circuitry catches them. You can also show your health check circuitry 
catches all repeating patterns up and beyond some size that is 
determined by the internal memory of the ES.


So the answer is yes..

Minimal memory (E.G. fewer registers) = Fails hard.
Lots of memory (E.G. lots of registers, like an LFSR) = opportunity to 
fail soft.


I can't point to literature. I think it's obvious. Without memory, non 
repeating behavior has to come from non determinism. Perhaps I should 
write a paper :) Mistrust an ES with many flops.


I don't approve of FIPS sanity checks. These are algorithms you can't 
specify independent of the generation process. Or in other words, you 
can't test for randomness, you can only test for functionality. You need 
to use other arguments to show that what you have is random. FIPS sanity 
checks are a chore to implement after you've implemented a real health 
test algorithm matched to the failure modes of the source.





Re: [cryptography] Intel RNG

2012-06-19 Thread coderman
On Tue, Jun 19, 2012 at 2:30 AM, coderman coder...@gmail.com wrote:
 ...
 as for stating that it should never run dry, or fail to not return
 bits as long as the instruction is working...

i was incorrect; developers should expect this instruction to
infrequently encounter transitory failures requiring retry:


"The RDRAND instruction returns with the carry flag set (CF = 1) to
indicate valid data is returned. It is recommended that software using
the RDRAND instruction to get random numbers retry for a limited number
of iterations while RDRAND returns CF=0, and complete when valid data is
returned...

This will deal with transitory underflows. A retry limit should be
employed to prevent a hard failure in the RNG (expected to be extremely
rare) leading to a busy loop in software."


in Intel Advanced Vector Extensions Programming Reference
 at http://software.intel.com/file/36945


i would be very curious to know what the distribution of these single
or consecutive failures (CF=0) look like on a busy system or long run
benchmark, and particularly if/how environmental factors* affect
failure rates.

*CPU temperature, voltage regulation, what else?
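
A quick way to look at that distribution: wrap RDRAND in the documented 
bounded retry and tally CF=0 events over a long run. A sketch in C, 
assuming x86 with -mrdrnd; on a healthy part the tally should be zero or 
tiny:

    #include <immintrin.h>
    #include <stdio.h>

    #define RETRY_LIMIT 10   /* small bound, per the guidance quoted above */

    static int rdrand64_retry(unsigned long long *out, unsigned long *cf0)
    {
        for (int i = 0; i < RETRY_LIMIT; i++) {
            if (_rdrand64_step(out))
                return 1;        /* CF=1: valid data */
            (*cf0)++;            /* CF=0: transitory underflow */
        }
        return 0;                /* hard failure: expected to be very rare */
    }

    int main(void)
    {
        unsigned long long v;
        unsigned long cf0 = 0;

        for (long i = 0; i < 10000000L; i++)
            if (!rdrand64_retry(&v, &cf0)) {
                fprintf(stderr, "RDRAND hard failure\n");
                return 1;
            }
        printf("CF=0 events in 10M draws: %lu\n", cf0);
        return 0;
    }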


Re: [cryptography] Intel RNG

2012-06-18 Thread Paweł Krawczyk
Well, who otherwise should pay for that? Consumer Federation of America?
It's quite normal practice for a vendor to contract a 3rd party that
performs a security assessment or penetration test. If you are a smartcard
vendor it's also you who pays for Common Criteria certification of your
product.

-Original Message-
From: cryptography-boun...@randombit.net
[mailto:cryptography-boun...@randombit.net] On Behalf Of Francois Grieu
Sent: Monday, June 18, 2012 11:04 AM
To: cryptography@randombit.net
Subject: Re: [cryptography] Intel RNG

d...@deadhat.com wrote:

 CRI has published an independent review of the RNG behind the RdRand
 instruction:
 http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

where *independent* is to be taken as per this quote:
   This report was prepared by Cryptography Research, Inc. (CRI)
under contract to Intel Corporation

  Francois Grieu



Re: [cryptography] Intel RNG

2012-06-18 Thread Matthew Green
The fact that something occurs routinely doesn't actually make it a good idea. 
I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 

This is CRI, so I'm fairly confident nobody is cutting corners. But that 
doesn't mean the practice is a good one. 

On Jun 18, 2012, at 5:52 AM, Paweł Krawczyk pawel.krawc...@hush.com wrote:

 Well, who otherwise should pay for that? Consumer Federation of America?
 It's quite normal practice for a vendor to contract a 3rd party that
 performs a security assessment or penetration test. If you are a smartcard
 vendor it's also you who pays for Common Criteria certification of your
 product.
 
 -Original Message-
 From: cryptography-boun...@randombit.net
 [mailto:cryptography-boun...@randombit.net] On Behalf Of Francois Grieu
 Sent: Monday, June 18, 2012 11:04 AM
 To: cryptography@randombit.net
 Subject: Re: [cryptography] Intel RNG
 
 d...@deadhat.com wrote:
 
 CRI has published an independent review of the RNG behind the RdRand
 instruction:
 http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf
 
 where *independent* is to be taken as per this quote:
   This report was prepared by Cryptography Research, Inc. (CRI)
under contract to Intel Corporation
 
  Francois Grieu
 


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:

 The fact that something occurs routinely doesn't actually make it a good 
 idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
 
 This is CRI, so I'm fairly confident nobody is cutting corners. But that 
 doesn't mean the practice is a good one. 

I don't understand.

A company makes a cryptographic widget that is inherently hard to test or 
validate. They hire a respected outside firm to do a review. What's wrong with 
that? I recommend that everyone do that. Un-reviewed crypto is a bane.

Is it the fact that they released their results that bothers you? Or perhaps 
that there may have been problems that CRI found that got fixed?

These also all sound like good things to me.

Jon





Re: [cryptography] Intel RNG

2012-06-18 Thread Jack Lloyd
On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
 On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
 
  The fact that something occurs routinely doesn't actually make it a good 
  idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
  
  This is CRI, so I'm fairly confident nobody is cutting corners. But that 
  doesn't mean the practice is a good one. 
 
 I don't understand.
 
 A company makes a cryptographic widget that is inherently hard to
 test or validate. They hire a respected outside firm to do a
 review. What's wrong with that? I recommend that everyone do
 that.

When the vendor of the product is paying for the review, _especially_
when the main point of the review is that it be publicly released, the
incentives are all pointed away from looking too hard at the
product. The vendor wants a good review to tout, and the reviewer
wants to get paid (and wants repeat business).

I have seen cases where a FIPS 140 review found serious issues, and
when informed the vendor kicked and screamed and threatened to take
their business elsewhere if the problem did not 'go away'. In the
cases I am aware of, the vendor was told to suck it and fix their
product, but I would not be so certain that there haven't been at
least a few cases where the reviewer decided to let something slide. I
would also imagine in some of these cases the reviewer lost business
when the vendor moved to a more compliant (or simply less careful)
FIPS evaluator for future reviews.

I am not in any way suggesting that CRI would hide weaknesses or
perform a lame review. However the incentives of the relationship do
not favor a strong review, and thus the only reason I would place
credence with it is my impression of the professionalism of the CRI
staff. In contrast, consider a review by, say, a team of good grad
students, where the incentive is very strongly to produce a
publishable result and only mildly on making the vendor happy. Those
incentives again are not perfect (what is), especially given how
academic publishing works, but they are somewhat more aligned with the
end users' desire to have a product that is secure.

 Un-reviewed crypto is a bane.

Bad crypto with a rubber stamp review is perhaps worse because someone
might believe the stamp means something.

-Jack


Re: [cryptography] Intel RNG

2012-06-18 Thread Tim Dierks
On Mon, Jun 18, 2012 at 2:51 PM, Matthew Green matthewdgr...@gmail.comwrote:

 I think that Jack said most of what I would. The incentives all point in
 the wrong direction.


While this is all true, it's also why manufacturers who want persuasive
analysis of their products hire consulting vendors with a brand and track
record strong enough that the end consumer can plausibly believe that their
reputational risk outweighs the manufacturer's desire for a good report.
Cryptography Research is such a vendor. It's reasonable to take a
manufacturer-funded report with a grain of salt, but when consuming any
information, you also have to worry about issues like incompetence and
less-visible incentives that tilt the comprehension or presentation of
facts.

I think taking this report as presumed correct is good enough for most
users to rely upon, but if I was a high-value user with a comparable
budget, I'd consider further investigation.

 - Tim


Re: [cryptography] Intel RNG

2012-06-18 Thread dj
Indeed. We're confident that the DRNG design is sound, but asking the
world to "trust us, it's a sound design" is unreasonable without us
letting someone independently review it. So being a cryptographic design
that people need some reason to trust before they use it, we opened the
design to a reputable outside firm and paid for and asked for independent
review.

The reviewers get to publish their review. We don't control the text.
That's part of the deal. We run the risk that we look bad if the review
finds bad stuff.

How else could we look credible? Our goal is to remove the issue of bad
random numbers from PCs that lead to the failure of cryptosystems. Part of
achieving that is that we have to give people a way to understand why the
random numbers are of cryptographic quality and what that means in
specific terms, like brute force prediction resistance, SP800-90
compliance and effective conditioning of seeds.




 A company makes a cryptographic widget that is inherently hard to test or
 validate. They hire a respected outside firm to do a review. What's wrong
 with that? I recommend that everyone do that. Un-reviewed crypto is a
 bane.

 Is it the fact that they released their results that bothers you? Or
 perhaps that there may have been problems that CRI found that got fixed?

 These also all sound like good things to me.

   Jon





Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 11:15 AM, Jack Lloyd wrote:

 On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
 On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
 
 The fact that something occurs routinely doesn't actually make it a good 
 idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
 
 This is CRI, so I'm fairly confident nobody is cutting corners. But that 
 doesn't mean the practice is a good one. 
 
 I don't understand.
 
 A company makes a cryptographic widget that is inherently hard to
 test or validate. They hire a respected outside firm to do a
 review. What's wrong with that? I recommend that everyone do
 that.
 
 When the vendor of the product is paying for the review, _especially_
 when the main point of the review is that it be publicly released, the
 incentives are all pointed away from looking too hard at the
 product. The vendor wants a good review to tout, and the reviewer
 wants to get paid (and wants repeat business).

Not precisely.

Reviewers don't want a review published that shows they gave a pass on a crap 
system. Producing a crap product hurts business more than anything in the 
world. Reviews are products. If a professional organization gives a pass on 
something that turned out to be bad, it can (and has) destroyed the 
organization.

The reviewer is actually in a win-win situation. No matter what the result is, 
they win. But ironically, or perhaps perversely, a bad review is better for 
them than a good review. The reviewer gains far more from a bad review.

Any positive review is not only lacking in the titillation that comes from 
slagging something, but you can't prove something is secure. When you give a 
good review, you lay the groundwork for the next people to come along and find 
something you missed -- and I guarantee it, you missed something. There's no 
system in the world with zero bugs.

Of course there are perverse incentives in reviews. That's why when you read 
*any* review, you have to have your brain turned on and see past the marketing 
hype and get to the substance. Ignore the sizzle, look at the steak.

 
 I have seen cases where a FIPS 140 review found serious issues, and
 when informed the vendor kicked and screamed and threatened to take
 their business elsewhere if the problem did not 'go away'. In the
 cases I am aware of, the vendor was told to suck it and fix their
 product, but I would not be so certain that there haven't been at
 least a few cases where the reviewer decided to let something slide. I
 would also imagine in some of these cases the reviewer lost business
 when the vendor moved to a more compliant (or simply less careful)
 FIPS evaluator for future reviews.

I agree with you completely, but that's somewhere between irrelevant and a 
straw man.

FIPS 140 is exasperating because of the way it is bi-modal in many, many 
things. NIST themselves are careful to call it a validation as opposed to 
a certification, because they recognize such problems themselves.

However, this paper is not a FIPS 140 evaluation. Anything one can say, 
positive or negative, about FIPS 140 is at best tangential to this paper. I 
just searched the paper for the string "FIPS"; there are six occurrences of 
that word in the paper. One reference discusses how a bum RNG can blow up 
DSA/ECDSA (FIPS 186). The other five are in this paragraph:

In addition to the operational modes, the RNG supports a FIPS
mode, which can be enabled and disabled independently of the
operational modes. FIPS mode sets additional restrictions on how
the RNG operates and can be configured, and is intended to
facilitate FIPS-140 certification. In first generation parts, FIPS
mode and the XOR circuit will be disabled. Later parts will have
FIPS mode enabled. CRI does not believe that these differences in
configuration materially impact the security of the RNG. (See
Section 3.2.2 for details.)

So while we can have a bitch-fest about FIPS-140 (and I have, can, do, and will 
bitch about it), it's orthogonal to the discussion.

It appears that you're suggesting the syllogism:

FIPS 140 doesn't demonstrate security well.
This RNG has FIPS 140.
Therefore, this RNG is not secure.

Or perhaps the less provocative conclusion, "Therefore, this paper does not 
demonstrate the security of the RNG."

What they're actually saying is that they don't think that FIPSing the RNG will 
materially impact the security of the RNG -- which if you think about it, is 
pretty faint praise.


 
 I am not in any way suggesting that CRI would hide weaknesses or
 perform a lame review.

But that is *precisely* what you are saying.

Jon Stewart could parody that argument far better than I can. You're not saying 
that CRI would hide things, you're just saying that accepting payment sets the 
incentives all the wrong way and that all companies would put out shoddy work 
so long as they got paid, especially if giving a bad review would make the 
customer mad.

Re: [cryptography] Intel RNG

2012-06-18 Thread Charles Morris
There's no [non-trivial] system in the world with zero bugs [for some value of 
trivial]

:)


Re: [cryptography] Intel RNG

2012-06-18 Thread Jack Lloyd
On Mon, Jun 18, 2012 at 01:21:20PM -0700, Jon Callas wrote:

  I am not in any way suggesting that CRI would hide weaknesses or
  perform a lame review.

 But that is *precisely* what you are saying.

 Jon Stewart could parody that argument far better than I can. You're
 not saying that CRI would hide things, you're just saying that
 accepting payment sets the incentives all the wrong way and that all
 companies would put out shoddy work so long as they got paid,
 especially if giving a bad review would make the customer mad.

 Come on. If you believe that this report is not worth the bits it's
 written on because it was done for-pay, at least say so. If you
 think that the guys who put their names on the paper have
 prostituted their reputations, have the courage to say so.

Of course maintaining reputation (and self-respect, for the people
there) is a large incentive for a company like CRI, and that is why I
explicitly stated that the only reason I would place credence in it is
my impression of the professionalism of the CRI staff, echoing
Matt's comment that "This is CRI, so I'm fairly confident nobody is
cutting corners."

 In other words, only grad students are qualified to make an
 independent review

No, merely that the incentives are somewhat better aligned between
the reviewers and the end user.

 and universities are not tainted by money.

Not sure where I said that.

 I think sharp grad students are among the best reviewers possible. I
 think they do fantastic work. But there isn't a single paper from
 them that I've ever seen that didn't essentially stop abruptly
 because the funding ran out, or time ran out, or they decided to do
 something else like graduate.

I am not familiar with any security review process that has unlimited
time and money.

 *All* reviews are limited by scope that is controlled by
 resources. *All* reviews have a set of perverse incentives around
 them. The perverse incentives of professional review are indeed out
 of phase with the perverse incentives of academics.

Yes. I agree entirely.

  Un-reviewed crypto is a bane.
 
  Bad crypto with a rubber stamp review is perhaps worse because someone
  might believe the stamp means something.

 So we shouldn't bother to get reviews, because they're just rubber stamps?

Of course not. If the review is a good one, done by someone who knows
what they are doing, it has enormous value. Very likely the reviews
CRI does are of this form.

Thus my phrase 'rubber stamp review', meaning to imply a distinct
thing from a review of depth conducted with care by people skilled in
the relevant areas.

 To suggest that professionals are inherently corrupt is insulting to
 everyone in this business

The incentives in a pay-for-review model are misaligned, tending to
produce a non-optimal result for end users. Pointing out this fact
is not a personal attack on anybody.

 To suggest that academia is somehow free of bias shows a blind spot.
 To go further and suggest that only academia has pure motives shows
 how big academia's blind spots are.

Let me repeat: Those incentives again are not perfect (what is?),
especially given how academic publishing works.

-Jack
-Jack


Re: [cryptography] Intel RNG

2012-06-18 Thread Jack Lloyd
On Mon, Jun 18, 2012 at 11:58:56AM -0700, Kyle Hamilton wrote:

 So what can we do to solve it?  Create our own reputable review service?

 Who would pay for it?  Who could pay for it?  Who *should* pay for it?

At first it seems ironic that buyer-pays is likely the process
best aligned with the buyer's interests, and yet most people are willing
to pay very little if anything for additional security, and
organizations typically will only pay for as much as (and exactly in
the form that) their regulator requires. Likely this suggests
something about the true nature of the interests of the market. :)

Maybe there is room for a Consumer Reports/Cook's Illustrated-style
review model, which does have the advantage of spreading the review
costs over many buyers. I don't know of any attempts like this though.

 Given that the penalty for an individual who works at a US Federal
 contractor on a US Federal contract who is found to have skimped on
 quality control is pretty much a 20-year guaranteed felony
 imprisonment, I have more faith in the CMVP than you appear to.

I've never heard about someone trying to talk past, say, an AES
implementation that didn't actually work, or a bad RSA; that's a
pretty bright line. In that sense, FIPS is very good for setting a
minimum bar, ensuring that the system is not completely screwed up.
However, because most attacks can either be ignored, specified around
in the security policy, or dodged by shooting for a lower level of
validation, FIPS 140 is not very good at producing a product which can
protect against a determined attacker [1]. The tests are mostly
affirmative 'does it work' checks, not 'how can I cause this to break'.

 Also, CMVP-validated and NIST-certified modules which implement Suite
 B (ECC and AES) are permitted to protect US national secrets up to
 Secret level.  I don't think NSA is going to let that happen with
 modules it lets the military and its contractors use.

I agree this is the case, but I would also guess that NSA has its own
review process for this. For instance, FIPS 140 says nothing about
cache or timing attack countermeasures [2], which are pretty relevant
for AES and ECC and which I would hope (!) the NSA would bake into its
own procurement processes and in what it recommends or requires DoD
contractors to use. (I would be interested if anyone knows the details
of this, though).

-Jack

[1] Note that I am not saying that FIPS certified products
intrinsically cannot protect against determined attackers, just that
it is likely that any product which could do so likely did it without
much help from the FIPS review process.

[2] OK I just checked, the FIPS 140-3 draft does finally require
countermeasures against DPA and timing attacks for level 3+ hardware
evals. No such requirements for software though.


Re: [cryptography] Intel RNG

2012-06-18 Thread dj

 What they're actually saying is that they don't think that FIPSing the RNG
 will materially impact the security of the RNG -- which if you think
 about it, is pretty faint praise.

But true. The FIPS mode enforces some boundary controls (external config
and debug inputs are disabled) but the thing in control of the config and
debug inputs is also in control of the fips lever. So you could argue that
the boundary protections are no stronger than the protections for the rest
of the chip. It does tell you that if it is your chip and you don't let
someone else pull the lid off, scrape off the passivation and apply a pico
probe to it, it will certainly provide you with good random numbers
regardless of the FIPS mode. What is certain is that 'FIPS mode' was not
an optimal name.

The FIPS mode is all about getting certification which is obviously less
important to us than giving people a realistic understanding of what it
is, or we'd have done the certification first. It doesn't make the numbers
any more or less predictable.

[snip]


 * Intel has produced a hardware RNG in the CPU. It's shipping on new Ivy
 Bridge CPUs.
Yup

 * Intel would like to convince us that it's worth using. There are many
 reasons for this, not the least of which being that if we're not going to
 use the RNG, people would like that chip real estate for something else.
Yup

 * We actually *need* a good hardware RNG in CPUs. I seem to remember a
 kerfuffle about key generation cause by low entropy on system startup in
 the past. This could help mitigate such a problem.
We do. So do we. We think so.


 * Intel went to some experts and hired them to do a review. They went to
 arguably the best people in the world to do it, given their expertise in
 hardware in general and differential side-channel cryptanalysis.
You're not wrong.


 * Obviously, those experts found problems. Obviously, those problems got
 fixed. Obviously, it's a disappointment that we don't get to hear about
 them.
Actually what they found is in the paper. The design had been through
several iterations before it got to product and external review. Compared
to other functions, the RNG is special: it needs greater care in design
and more openness to allow review.


 * The resulting report is mutually acceptable to both parties. It behooves
 the reader to peer between the lines and intuit what's not being said. I
 know there are issues that didn't get fixed. I know that there are
 architectural changes that the reviewers suggested that will come into
 later revisions of the hardware. There always are.
There are revisions in the pipeline. Mostly to do with making it go faster
and to deal with ever more stringent power consumption constraints. But
they're not fully baked yet, so I thought we were being quite open with
the reviewers about future plans.

[snip]

 * If we want to have a discussion of the SP 800-90+ DRBGs, that's also a
 great discussion. Having implemented myself an AES-CTR DRBG (which is what
 this is), I have some pointed opinions. That discussion is more than
 tangential to this, but it's also a digression, unless we want to attack
 the entire category of AES-CTR DRBGs (which is an interesting discussion
 in itself).
The use of AES was driven much more by the conditioning algorithm, which as
yet is not standardized by NIST and whose math is interesting. Once you've
built a hardware AES for conditioning purposes, you might as well use it
for the DRBG part. The SP800-90 AES-CTR DRBG is an odd duck, and whoever
wrote the spec didn't have the hardware designer in mind, but
roll-your-own crypto is not an option for us.





Re: [cryptography] Intel RNG

2012-06-18 Thread Marsh Ray

On 06/18/2012 12:20 PM, Jon Callas wrote:


A company makes a cryptographic widget that is inherently hard to
test or validate. They hire a respected outside firm to do a review.
What's wrong with that? I recommend that everyone do that.
Un-reviewed crypto is a bane.


Let's accept that the review was competent, thorough, and independent.

Here's what I'm left wondering:

How do I know that this circuit that was reviewed is actually the thing
producing the random numbers on my chip? Why should I assume it doesn't
have any backdoors, bugdoors, or engineering revisions that make it
different from what was reviewed?

Is RDRAND driven by reprogrammable microcode? If not, how are they going
to address bugs in it? If so, what algorithms are used by my CPU to
authenticate the microcode updates that can be loaded? What kind of
processes are used to manage the signing keys for it?

Let's take a look at the actual report:
http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf


Page 12: At an 800 MHz clock rate, the RNG can deliver
post-processed random data at a sustained rate of 800 MBytes/sec. In
particular, it should not be possible for a malicious process to
starve another process.


Wait a minute... that second statement doesn't follow from the first.

We're talking about chips with a 25 GB/s *external* memory bus 
bandwidth; why can't a 4-core 2 GHz processor request on the order of 64 
bits per core*clock, i.e. 4*64*2e9 = 512e9 b/s ~= 60 GiB/s?


So 800 MiB/s @ 800 MHz
= 7.62 clocks per 64 bit RDRAND result (if they mean 1 core)
= 30.5 clocks per 64 bit RDRAND (if they mean 4 cores).
= 61.0 clocks per 64 bit RDRAND (if they mean 8 hyperthread cores).

More info:

http://software.intel.com/en-us/articles/intel-digital-random-number-generator-drng-software-implementation-guide/
Data taken from an early engineering sample board with a 3rd generation Intel 
Core family processor,
code-named Ivy Bridge, quad core, 4 GB memory, hyper-threading enabled. 
Software: LINUX* Fedora 14,
GCC version 4.6.0 (experimental) with RDRAND support, test uses p-threads 
kernel API.


Why does an Intel Software Implementation Guide have more information 
about the actual device under test than the formal report?



Measured Throughput:
Up to 70 million RDRAND invocations per second


Implies:
4 cores @ 2 GHz - 114 clocks per RDRAND
8 cores @ 2 GHz - 229 clocks per RDRAND


500+ million bytes of random data per second


Implies:
rate = 62.5e6 RDRAND/s
t(RDRAND) = 16 ns
  = 32 clocks, 1 core @ 2 GHz
  = 256 core*clocks, 8 cores @ 2 GHz


RDRAND Response Time and Reseeding Frequency
~150 clocks per invocation
Note: Varies with CPU clock frequency since constraint is shared data path 
from DRNG to cores.
Little contention until 8 threads  – or 4 threads on 2 core chip
Simple linear increase as additional threads are added


So when the statement "it should not be possible for a malicious 
process to starve another process" is given without justification, it 
leads us to ask: why should it not be possible to starve another process?


Maybe this is our answer: Because this hardware instruction is slow!

150 clocks (Intel's figure) implies 18.75 clocks per byte.

It would appear that the instruction is actually a blocking operation 
that does not return until the request has been satisfied. Or does it?


Can we then expect it will never result in an out-of-entropy condition?

What happens when 16 cores are put on a chip? Will some future chip 
begin occasionally returning zeroes from RDRAND when an attacker fires 
off 31 simultaneous threads requesting entropy? Or will RDRAND take 300 
clocks to execute?
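
One way to pin this down is simply to measure the latency on real hardware. 
A quick micro-benchmark sketch (not from the report or this thread; assumes 
a 64-bit build with GCC or Clang and -mrdrnd on an RDRAND-capable part, and 
treats the TSC as an approximate cycle counter):

#include <stdio.h>
#include <stdint.h>
#include <immintrin.h>   /* _rdrand64_step */
#include <x86intrin.h>   /* __rdtsc */

int main(void)
{
    enum { N = 1000000 };
    unsigned long long v, sink = 0;

    uint64_t t0 = __rdtsc();
    for (int i = 0; i < N; i++) {
        while (!_rdrand64_step(&v))
            ;                       /* CF=0: no data ready, retry */
        sink ^= v;                  /* keep the result live */
    }
    uint64_t t1 = __rdtsc();

    /* The TSC ticks at a reference rate, not the core clock, so this
       is only an estimate; run several threads to probe contention. */
    printf("~%.0f reference clocks per 64-bit RDRAND (sink %llx)\n",
           (double)(t1 - t0) / N, sink);
    return 0;
}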


Note that Skein 512 in pure software costs only about 6.25 clocks per 
byte. Three times faster! If RDRAND were entered in the SHA-3 contest, 
it would rank in the bottom third of the remaining contestants.

http://bench.cr.yp.to/results-sha3.html

So perhaps we should not throw away our software-stirred entropy pools 
just yet; if RDRAND is present, it should be used to contribute 128 
bits or so at a time as just one of several sources of entropy. It could 
certainly help to kickstart the software RNG in those critical first 
seconds after cold boot (you know, when the SSH keys are being generated).
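
A minimal sketch of that mix-it-in approach (again, not code from this 
thread; assumes GCC/Clang with -mrdrnd, and pool_mix() is a toy stand-in 
for whatever input function the host pool actually exposes):

#include <stdint.h>
#include <string.h>
#include <immintrin.h>

static uint8_t pool[64];
static size_t pos;

/* Toy stand-in for the host pool's input function; a real pool would
   hash or otherwise stir its input, not just XOR it in. */
static void pool_mix(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        pool[pos++ % sizeof pool] ^= buf[i];
}

/* Contribute ~128 bits from RDRAND as one source among several.
   Returns 0 on success, -1 if the DRNG kept reporting no data (CF=0),
   in which case the caller simply relies on its other sources.
   Intel's guide suggests retrying a bounded number of times. */
int contribute_rdrand_128(void)
{
    unsigned long long w[2];

    for (int i = 0; i < 2; i++) {
        int retries = 10;
        while (!_rdrand64_step(&w[i]))
            if (--retries == 0)
                return -1;
    }
    pool_mix((const uint8_t *)w, sizeof w);
    memset(w, 0, sizeof w);     /* don't leave seed material lying around */
    return 0;
}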


- Marsh


Re: [cryptography] Intel RNG

2012-06-18 Thread Kyle Creyts
On Mon, Jun 18, 2012 at 7:12 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 06/18/2012 12:20 PM, Jon Callas wrote:


 A company makes a cryptographic widget that is inherently hard to
 test or validate. They hire a respected outside firm to do a review.
 What's wrong with that? I recommend that everyone do that.
 Un-reviewed crypto is a bane.


 Let's accept that the review was competent, thorough, and independent.

 Here's what I'm left wondering:

 How do I know that this circuit that was reviewed is actually the thing
 producing the random numbers on my chip? Why should I assume it doesn't
 have any backdoors, bugdoors, or engineering revisions that make it
 different from what was reviewed?

 Is RDRAND driven by reprogrammable microcode? If not, how are they going
 to address bugs in it? If so, what algorithms are used by my CPU to
 authenticate the microcode updates that can be loaded? What kind of
 processes are used to manage the signing keys for it?

 Let's take a look at the actual report:
 http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

 Page 12: At an 800 MHz clock rate, the RNG can deliver
 post-processed random data at a sustained rate of 800 MBytes/sec. In
 particular, it should not be possible for a malicious process to
 starve another process.


 Wait a minute... that second statement doesn't follow from the first.

 We're talking about chips with a 25 GB/s *external* memory bus bandwidth,
 why can't a 4-core 2 GHz processor request on the order of 64 bits per
 core*clock for 4*64*2e9 = 512e9 b/s = 60 GiB/s ?

 So 800 MiB/s @ 800 MHz
 = 7.62 clocks per 64 bit RDRAND result (if they mean 1 core)
 = 30.5 clocks per 64 bit RDRAND (if they mean 4 cores).
 = 61.0 clocks per 64 bit RDRAND (if they mean 8 hyperthread cores).

 More info:


 http://software.intel.com/en-us/articles/intel-digital-random-number-generator-drng-software-implementation-guide/
 Data taken from an early engineering sample board with a 3rd generation
 Intel Core family processor,
 code-named Ivy Bridge, quad core, 4 GB memory, hyper-threading enabled.
 Software: LINUX* Fedora 14,
 GCC version 4.6.0 (experimental) with RDRAND support, test uses p-threads
 kernel API.


 Why does an Intel Software Implementation Guide have more information
 about the actual device under test than the formal report?

 Measured Throughput:
    Up to 70 million RDRAND invocations per second


 Implies:
    4 cores @ 2 GHz - 114 clocks per RDRAND
    8 cores @ 2 GHz - 229 clocks per RDRAND

    500+ million bytes of random data per second


 Implies:
    rate = 62.5e6 RDRAND/s
    t(RDRAND) = 16 ns
              = 32 clocks, 1 core @ 2 GHz
              = 256 core*clocks, 8 cores @ 2 GHz

 RDRAND Response Time and Reseeding Frequency
    ~150 clocks per invocation
    Note: Varies with CPU clock frequency since constraint is shared data
 path from DRNG to cores.
    Little contention until 8 threads  – or 4 threads on 2 core chip
    Simple linear increase as additional threads are added


 So when the statement "it should not be possible for a malicious process
 to starve another process" is given without justification, it leads us to
 ask: why should it not be possible to starve another process?

 Maybe this is our answer: Because this hardware instruction is slow!

 150 clocks (Intel's figure) implies 18.75 clocks per byte.

Then perhaps this is the proper graph to examine:
http://bench.cr.yp.to/graph-sha3/8-thumb.png
Maybe this is the case RDRAND is intended for: producing random 8-byte seeds?


 It would appear that the instruction is actually a blocking operation that
 does not return until the request has been satisfied. Or does it?

 Can we then expect it will never result in an out-of-entropy condition?

 What happens when 16 cores are put on a chip? Will some future chip begin
 occasionally returning zeroes from RDRAND when an attacker fires off 31
 simultaneous threads requesting entropy? Or will RDRAND take 300 clocks to
 execute?

 Note that Skein 512 in pure software costs only about 6.25 clocks per byte.
 Three times faster! If RDRAND were entered in the SHA-3 contest, it would
 rank in the bottom third of the remaining contestants.
 http://bench.cr.yp.to/results-sha3.html

 So perhaps we should not throw away our software-stirred entropy pools just
 yet and if RDRAND is present it should be used to contribute 128 bits or so
 at a time as just one of several sources of entropy. It could certainly help
 to kickstart the software RNG in those critical first seconds after cold
 boot (you know, when the SSH keys are being generated).

 - Marsh




-- 
Kyle Creyts

Information Assurance Professional
BSidesDetroit Organizer

Re: [cryptography] Intel RNG

2012-06-18 Thread ianG

On 19/06/12 08:49 AM, Jack Lloyd wrote:


I've never heard about someone trying to talk past, say, an AES
implementation that didn't actually work, or a bad RSA, that's a
pretty bright line.


I had a bit of an epiphany in two parts.

The first part is that AES and block algorithms can be defined by a tight 
specification, and we can distribute test parameters.  Anyone who's ever 
coded these things up knows that the test parameters do a near-perfect job 
in locking implementations down.


This results in the creation of a black-box or component approach. 
Because of this and perhaps only because of this, block algorithms and 
hashes have become the staples of crypto work.  Public key crypto and 
HMACs less so.  Anything crazier isn't worth discussing.




Then there are RNGs.  They start from a theoretical absurdity that we 
cannot predict their output, which leads to an apparent impossibility of 
black-boxing.


NIST recently switched gears and decided to push the case for 
deterministic PRNGs.  According to the original thinking, a perfect RNG was 
perfectly untestable, whereas a perfectly deterministic RNG was perfectly 
predictable.  This was a battle of two not-goods.


Hence the second epiphany:  NIST were apparently reasoning that the 
testability of the deterministic PRNG was the lesser of the two evils. 
They wanted to black-box the PRNG, because black-boxing was the critical 
determinant of success.


After a lot of thinking about the way the real world works, I think they 
have it right.  Use a deterministic PRNG, and leave the problem of 
securing good seed material to the user.  The latter is untestable 
anyway, so the right approach is to shrink the problem and punt it up-stack.
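
To make that concrete: the deterministic PRNG in question is typically SP 
800-90A's CTR_DRBG. Below is a stripped-down sketch of the AES-128 core of 
that construction -- not a production DRBG: it omits the derivation 
function, reseed counters and personalization, and leans on OpenSSL's 
legacy AES block API (deprecated in OpenSSL 3.0) -- just to show the 
seed-goes-in, determinism-comes-out split:

#include <stdint.h>
#include <string.h>
#include <openssl/aes.h>

typedef struct { uint8_t key[16], v[16]; } ctr_drbg;

static void inc128(uint8_t v[16])       /* V = (V + 1) mod 2^128 */
{
    for (int i = 15; i >= 0 && ++v[i] == 0; i--)
        ;
}

/* Key/V update, roughly per SP 800-90A 10.2.1.2 (seedlen = 256 bits) */
static void drbg_update(ctr_drbg *d, const uint8_t data[32])
{
    uint8_t temp[32];
    AES_KEY ak;

    AES_set_encrypt_key(d->key, 128, &ak);
    for (int i = 0; i < 2; i++) {
        inc128(d->v);
        AES_encrypt(d->v, temp + 16 * i, &ak);
    }
    if (data != NULL)
        for (int i = 0; i < 32; i++)
            temp[i] ^= data[i];
    memcpy(d->key, temp, 16);
    memcpy(d->v, temp + 16, 16);
}

/* The caller supplies the 256-bit seed -- the "punt it up-stack" part. */
void drbg_instantiate(ctr_drbg *d, const uint8_t seed[32])
{
    memset(d, 0, sizeof *d);
    drbg_update(d, seed);
}

void drbg_generate(ctr_drbg *d, uint8_t *out, size_t len)
{
    uint8_t block[16];
    AES_KEY ak;

    AES_set_encrypt_key(d->key, 128, &ak);
    while (len > 0) {
        size_t n = len < 16 ? len : 16;
        inc128(d->v);
        AES_encrypt(d->v, block, &ak);
        memcpy(out, block, n);
        out += n;
        len -= n;
    }
    drbg_update(d, NULL);   /* backtracking resistance */
}

Given a fixed seed the output is fully reproducible -- which is exactly 
what makes it testable against published vectors, and exactly why the seed 
has to come from somewhere else.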




Taking that back to Intel's efforts.  Unfortunately it's hard to do that 
deterministic/seed breakup in silicon.  What else do they have?


The components / black-boxing approach in cryptoplumbing has been ultra 
successful.  It has also had a rather dramatic effect on everything 
else, because it has raised expectations.  We want everything else to be 
as perfect as the block encryption algorithm.  Unfortunately, that's 
not possible.  We need to manage our expectations.




iang


Re: [cryptography] Intel RNG

2012-06-18 Thread Peter Gutmann
Tim Dierks t...@dierks.org writes:

While this is all true, it's also why manufacturers who want persuasive
analysis of their products hire consulting vendors with a brand and track
record strong enough that the end consumer can plausibly believe that their
reputational risk outweighs the manufacturer's desire for a good report.
Cryptography Research is such a vendor.

There's also the law of diminishing returns for Intel.  Most users of their 
products are going to say "it's from Intel, it should be good enough".  A 
small number of users are going to say "it should be OK but I'd like a second 
opinion just to be sure".  A vanishingly small number are going to peek out 
from under their tinfoil hats and claim that the Bavarian Illuminati fixed 
the report and they still don't trust it, ignoring the fact that the app 
they're using the RNG with has to run as admin under Windows, opens a bunch of 
globally-accessible network ports, and has eight different buffer overflows in 
it.

The point at which it makes sense to stop is between the second and third
groups.

Peter.


Re: [cryptography] Intel RNG

2012-06-18 Thread Steven Bellovin
On Jun 18, 2012, at 11:21:52 PM, ianG wrote:
 
 
 Then there are RNGs.  They start from a theoretical absurdity that we cannot 
 predict their output, which leads to an apparent impossibility of 
 black-boxing.
 
 NIST recently switched gears and decided to push the case for deterministic 
 PRNGs.  According to the original thinking, a perfect RNG was perfectly 
 untestable, whereas a perfectly deterministic RNG was perfectly 
 predictable.  This was a battle of two not-goods.
 
 Hence the second epiphany:  NIST were apparently reasoning that the 
 testability of the deterministic PRNG was the lesser of the two evils. They 
 wanted to black-box the PRNG, because black-boxing was the critical 
 determinant of success.
 
 After a lot of thinking about the way the real world works, I think they have 
 it right.  Use a deterministic PRNG, and leave the problem of securing good 
 seed material to the user.  The latter is untestable anyway, so the right 
 approach is to shrink the problem and punt it up-stack.
 

There's evidence, dating back to the Clipper chip days, that NSA feels the same 
way.  Given the difficulty of proving there are no weird environmental impacts 
on hardware RNGs, they're quite correct.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Intel RNG

2012-06-18 Thread Matthew Green
On Jun 18, 2012, at 4:21 PM, Jon Callas wrote:

 Reviewers don't want a review published that shows they gave a pass on a crap 
 system. Producing a crap product hurts business more than any thing in the 
 world. Reviews are products. If a professional organization gives a pass on 
 something that turned out to be bad, it can (and has) destroyed the 
 organization.


I would really love to hear some examples from the security world. 

I'm not being skeptical: I really would like to know if any professional 
security evaluation firm has suffered meaningful, lasting harm as a result of 
having approved a product that was later broken.

I can think of several /counterexamples/, a few in particular from the 
satellite TV world. But not the reverse.

Anyone?


Re: [cryptography] Intel RNG

2012-06-18 Thread Marsh Ray

On 06/18/2012 10:21 PM, ianG wrote:


The first part is that AES and block algorithms can be quite tightly
defined with a tight specification, and we can distribute test
parameters. Anyone who's ever coded these things up knows that the test
parameters do a near-perfect job in locking implementations down.


Yes, this is what separates the engineers from the hobbyists.

For example, this is my main complaint about bcrypt and scrypt. The 
absence of a comprehensive set of test vectors has caused insecure and 
incompatible implementations to creep into existence.



This results in the creation of a black-box or component approach.
Because of this and perhaps only because of this, block algorithms and
hashes have become the staples of crypto work. Public key crypto and
HMACs less so. Anything crazier isn't worth discussing.


I don't get it. Why can't we have effective test vectors for HMACs and 
public key algorithms?



Then there are RNGs. They start from a theoretical absurdity that we
cannot predict their output, which leads to an apparent impossibility of
black-boxing.

NIST recently switched gears and decided to push the case for
deterministic PRNGs. According to the original thinking, a perfect RNG was
perfectly untestable, whereas a perfectly deterministic RNG was
perfectly predictable. This was a battle of two not-goods.

Hence the second epiphany: NIST were apparently reasoning that the
testability of the deterministic PRNG was the lesser of the two evils.


But it's not even a binary choice. We can divide the system into 
components and test with the best available methods for each component 
separately.


Even the most deterministic crypto implementation will have some squishy 
properties. E.g., power and EM side channels, resilience to fault 
injection. The engineering goal is to make the proportion of the system 
that can be certified rigorously as large as possible and the proportion 
that can't be tested well at all as small as possible.



They wanted to black-box the PRNG, because black-boxing was the critical
determinant of success.

After a lot of thinking about the way the real world works, I think they
have it right.


It's almost as if they know a thing or two about testing stuff. :-)


Use a deterministic PRNG, and leave the problem of
securing good seed material to the user. The latter is untestable
anyway, so the right approach is to shrink the problem and punt it
up-stack.

Taking that back to Intel's efforts. Unfortunately it's hard to do that
deterministic/seed breakup in silicon. What else do they have?


One thing they could do is provide a mechanism to access raw samples 
from the Entropy Source component. I.e., the data that Intel provided 
[to Cryptography Research] from pre-production chips. These chips allow 
access to the raw ES output, a capability which is disabled in 
production chips.


Obviously these samples can't go back into the DRBG, but some developers 
would probably like to estimate the entropy in the raw data. They would 
likely interpret it as a higher quality source if they could reach that 
conclusion with their own code.


ISTR OpenBSD performs this type of analysis when feeding timing samples 
into the pool.



The components / black-boxing approach in cryptoplumbing has been ultra
successful. It has also had a rather dramatic effect on everything else,
because it has raised expectations. We want everything else to be as
perfect as the block encryption algorithm.


And I want a pony!


Unfortunately, that's not possible. We need to manage our expectations.


I think it's entirely reasonable for us to insist on an efficient high 
quality source of cryptographically secure random numbers. After all 
this is the basis of so many other important security properties that we 
should be grateful for all the powerful simplifying assumptions it 
enables. Remember the old days when we had to bang on the keyboard and 
move the mouse in order to generate keys?


What we need to get better at, IMHO, is incorporating experience into 
our threat model and judging relative risks. For example, a lot of stuff 
going on in kernel RNGs looks to me like additional complexity that's 
more likely to breed bugs than to defend against an imagined 
three-letter agency with space alien supercomputers.


- Marsh


Re: [cryptography] Intel RNG

2012-06-18 Thread coderman
On Mon, Jun 18, 2012 at 9:46 PM, Marsh Ray ma...@extendedsubset.com wrote:
 ...
 One thing they could do is provide a mechainsm to access raw samples from
 the Entropy Source component. I.e., the data that Intel provided [to
 Cryptography Research] from pre-production chips. These chips allow access
 to the raw ES output, a capability which is disabled in production chips.

this is very useful to have in some configurations (not just testing).
for example: a user space entropy daemon consuming raw, biased,
un-whitened, full throughput bits of lower entropy density which is
run through sanity checks, entropy estimates, and other vetting before
mixing/obscuring state, and feeding into host or application entropy
pools.
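
for illustration (not from the original post): the sanity-check stage might
look like the two continuous health tests in draft SP 800-90B, a
repetition-count test plus a windowed adaptive-proportion test. the cutoffs
below are placeholders; real ones derive from the min-entropy claimed for
the source.

#include <stddef.h>
#include <stdint.h>

#define RCT_CUTOFF  32    /* placeholder: max identical samples in a row */
#define APT_WINDOW  512
#define APT_CUTOFF  400   /* placeholder: max repeats of one value/window */

/* returns 0 if the raw sample block looks sane, -1 on failure
   (stuck source or gross bias). assumes n >= 1. */
int health_check(const uint8_t *raw, size_t n)
{
    size_t run = 1, count = 0, win = 0;
    uint8_t last = raw[0], ref = raw[0];

    for (size_t i = 1; i < n; i++) {
        /* repetition-count test: catches a stuck source */
        run = (raw[i] == last) ? run + 1 : 1;
        last = raw[i];
        if (run >= RCT_CUTOFF)
            return -1;

        /* adaptive-proportion test: catches heavy bias */
        if (raw[i] == ref)
            count++;
        if (++win == APT_WINDOW) {
            if (count >= APT_CUTOFF)
                return -1;
            ref = raw[i];   /* re-reference for the next window */
            count = 0;
            win = 0;
        }
    }
    return 0;   /* passed: ok to mix in and credit (conservatively) */
}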

the Intel RNG conveniently abstracts all of this away from you,
potentially giving you a ready-to-use source without a pesky user
space entropy daemon requirement.

on the other hand, the Intel RNG abstracts all of this away from you,
so your own assurances against the raw output are no longer possible.



 Obviously these samples can't go back into the DRBG, but some developers
 would probably like to estimate the entropy in the raw data. They would
 likely interpret it as a higher quality source if they could reach that
 conclusion with their own code.

yes, particularly when fed through a DIEHARD battery of tests using
hundreds of MB of entropy. hard to do that on die! ;)


gripes aside, this is a design and implementation i like quite a bit,
and i am very happy to see more CPU cores with entropy instructions!
now if only AMD and ARM would follow suit...


Re: [cryptography] Intel RNG

2012-06-13 Thread dj
CRI has published an independent review of the RNG behind the RdRand
instruction:
http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

 Intel has published more details of the new rdrand instruction and the
 random number generator behind it.

 http://software.intel.com/en-us/articles/download-the-latest-bull-mountain-software-implementation-guide/
 http://software.intel.com/file/36945




Re: [cryptography] Intel RNG

2011-06-28 Thread Peter Gutmann
In case this is useful to anyone, here's the Windows code to use rdrand, to
complement the gcc version for Unix systems.  It'll also be present in the
next release of the cryptlib RNG code, available under a GPL, LGPL, or BSD
license, depending on which you prefer.

#if defined( _MSC_VER )
  #define rdrand_eax  __asm _emit 0x0F __asm _emit 0xC7 __asm _emit 0xF0
#endif /* VC++ */
#if defined( __BORLANDC__ )
  #define rdrand_eax  } __emit__( 0x0F, 0xC7, 0xF0 ); __asm {
#endif /* BC++ */

{
    unsigned long buffer[ 8 + 8 ];
    int byteCount = 0;

    __asm {
        xor eax, eax    /* Tell VC++ that EAX will be trashed */
        xor ecx, ecx
    trngLoop:
        rdrand_eax
        jnc trngExit    /* TRNG result bad, exit with byteCount = 0 */
        mov [buffer+ecx], eax
        add ecx, 4
        cmp ecx, 32     /* Fill 32 bytes worth */
        jl trngLoop
        mov [byteCount], ecx
    trngExit:
    }
    if( byteCount > 0 )
        {
        /* buffer[ 0 ... byteCount ] contains random bytes */
        }
}

This has been verified under XP, Vista, and Win7 using the Intel Software
Development Emulator.
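
For the Unix side, the same fill-32-bytes-or-give-up loop with the GCC
builtin might look like the sketch below (not the cryptlib code; needs
GCC 4.6+ and -mrdrnd):

#include <string.h>

/* Returns 32 on success, 0 if the TRNG reported no data (carry clear),
   mirroring the byteCount = 0 path in the assembly version above. */
int rdrand_fill32(unsigned char *out)
{
    unsigned int r;

    for (int i = 0; i < 32; i += sizeof( r )) {
        if (!__builtin_ia32_rdrand32_step(&r))
            return 0;       /* CF clear: TRNG result bad, give up */
        memcpy(out + i, &r, sizeof( r ));
    }
    return 32;
}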

Peter.


Re: [cryptography] Intel RNG

2011-06-18 Thread James Cloos
 PG == Peter Gutmann pgut...@cs.auckland.ac.nz writes:

PG Does this mean it's unavailable in 32-bit mode?

Unlikely; see below.

PG What does the notation 0F C7 /6 indicate in terms of encoding?  It
PG looks like RdRand r16 and r32 have the same encoding, or do you
PG encode (for example) r16 vs. r32 in whatever the /6 signifies?
PG How would you encode, for example, 'RdRand eax'?

AIUI, when the 32-bit and 16-bit versions have the same opcode, that
generally means that when the cpu is in 16-bit mode (such as when running
venerable DOS or the BIOS) that opcode works on 16-bit registers and
when the processor is in 32-bit or 64-bit mode that same opcode works
on 32-bit registers.

From p 8-15 of 319433-011.pdf, I presume that the assembly would look like:

 RDRAND eax  ; randomize 32-bit register eax
 RDRAND rax  ; randomize 64-bit register rax

and, in 16-bit code:

 RDRAND ax  ; randomize 16-bit register ax

-JimC
-- 
James Cloos cl...@jhcloos.com OpenPGP: 1024D/ED7DAEA6


Re: [cryptography] Intel RNG

2011-06-18 Thread James Cloos
 JL == Jack Lloyd ll...@randombit.net writes:

JL It's also supported in (very very recent) GNU binutils.

The sample code Intel provided on that page compiled/assembled
correctly here, using binutils-2.21.

Noting again that the registers are ordered ax, cx, dx, bx, sp,
bp, si, di, then the opcodes are:

 In 16-bit mode:

0F C7 F0  through  0F C7 F7
   randomize 16-bit registers ax through di

 In 32-bit mode:

66 0F C7 F0  through  66 0F C7 F7
   randomize 16-bit registers ax through di

0F C7 F0  through  0F C7 F7
   randomize 32-bit registers eax through edi

 In 64-bit mode:

66 0F C7 F0  through  66 0F C7 F7
   randomize 16-bit registers ax through di

0F C7 F0  through  0F C7 F7
   randomize 32-bit registers eax through edi

41 0F C7 F0  through  41 0F C7 F7
   randomize 32-bit registers r8d through r15d

48 0F C7 F0  through  48 0F C7 F7
   randomize 64-bit registers rax through rdi

49 0F C7 F0  through  49 0F C7 F7
   randomize 64-bit registers r8 through r15

I confirmed those via objdump(1)'s disassembly.

Note the use of prefix octet 0x66 to work on a 16-bit register when
in 32-bit or 64-bit modes and the use of 0x48|0x41 (aka 0x49) to access
the gp registers r8-r15 as 64-bit registers.
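
Those raw encodings also let you use the instruction before your assembler
learns the mnemonic. A sketch (not from this thread) for GCC-style inline
asm in 64-bit code, hand-emitting the 48 0F C7 F0 (RDRAND rax) sequence
from the table above:

#include <stdint.h>

/* Emit RDRAND rax by raw opcode. The "=a" constraint pins the output
   to rax to match the hand-coded register; setc captures the carry
   flag, which signals whether valid data was returned. */
static inline int rdrand64_raw(uint64_t *out)
{
    uint64_t v;
    unsigned char ok;

    __asm__ volatile(".byte 0x48, 0x0f, 0xc7, 0xf0\n\t"
                     "setc %1"
                     : "=a" (v), "=qm" (ok)
                     :
                     : "cc");
    *out = v;
    return ok;      /* 1: *out is valid; 0: no data was available */
}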

It'll be interesting to see what AMD does on this front.

-JimC
-- 
James Cloos cl...@jhcloos.com OpenPGP: 1024D/ED7DAEA6