Re: [cryptography] True RNG: elementary particle noise sensed with surprisingly simple electronics

2016-09-15 Thread dj
> Hi!
>
> A true random number generation strategy is no better than its
> trustworthiness. Here is a suggestion for a simple scheme which rests on
> a common digital electronic design.
>
[...]
> Unavoidable current noise source:
>   - thermal noise
>   - excess current noise caused by the above resistor material
> construction
> Noise sources to be reduced (as a matter of sampling approach coherency)
>   - electrostatic ...
>   - electromagnetic ...
>
> Any thoughts?
>

Yes.

A) Can you build 100,000,000 and expect them all to work?
B) Can you expect those 100,000,000 resistors to behave in a
consistent manner, or will the supplier switch compounds on you while you
aren't looking? If you try to buy a paper-oil cap today, you'll get a
poly pretending to be paper-oil. I assume it's the same for obsolete
resistor compounds.
C) What are the EM injection opportunities into the measured noise? Can you
saturate the inputs?
D) How are you planning to characterize the min entropy of the source? We
know the min entropy of well defined Gaussian noise, but what about shot,
1/f and all the other weird distributions? (A minimal estimator is
sketched after this list.)
  D_a) Can you distinguish that noise from system noise that might be
systematic rather than entropic?
E) Do you have an extractor algorithm in mind that is proven to work at
the lower bound for the min entropy you expect from the source?
F) Are you wanting computational prediction bounds at the output of the
extractor, or do you want H_inf(X) = 1?
  F_1) If you want the entropy answer, then you need to consider multiple
input extractors.
  F_2) Oh, and quantum-safe extractors are a thing now.
G) Are any certifications required? In my experience P(Y) -> 1 as t ->
infinity. Projects that swore up and down that they weren't doing FIPS
would come back 2 years later, with a finished chip, and ask "Can this be
FIPS certified?", after a customer made their requirements clear.

That's my usual list of questions. They may or may not apply to your
situation.
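
To make question D concrete, here is a minimal sketch of the simplest
estimator, the most-common-value bound in the spirit of SP800-90B. It is
illustrative only - a real assessment needs the full estimator suite plus
restart and IID testing, and the 8-bit sample width is an assumption.

    /* Sketch: most-common-value min-entropy estimate, in the spirit of
     * SP800-90B. Lower-bounds H_min as -log2(p_max) over 8-bit raw
     * samples, with a 99% upper confidence bound on p_max so the
     * estimate is conservative. One estimator among many. */
    #include <math.h>
    #include <stddef.h>

    double mcv_min_entropy(const unsigned char *samples, size_t n)
    {
        size_t count[256] = {0};
        size_t max = 0;

        for (size_t i = 0; i < n; i++)
            count[samples[i]]++;
        for (int v = 0; v < 256; v++)
            if (count[v] > max)
                max = count[v];

        double p_hat = (double)max / (double)n;
        /* 99% upper bound on p_max, as SP800-90B does. */
        double p_u = p_hat +
            2.576 * sqrt(p_hat * (1.0 - p_hat) / (double)(n - 1));
        if (p_u > 1.0)
            p_u = 1.0;
        return -log2(p_u);   /* bits of min-entropy per 8-bit sample */
    }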




___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Stealthy analog trojans

2016-05-25 Thread dj
> I guess it was all just a matter of time.
>
>

A matter of time until the authors of this and certain other related
papers realize that recreating new mask sets for deep sub-wavelength
silicon imaging isn't like running things through a plotter.

It's certainly worth considering these attack vectors (we certainly do),
but the naive optimism in the paper that you can just go in and edit masks
on a modern process is not well placed.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Kernel space vs userspace RNG

2016-05-13 Thread dj
>On 13/05/2016 10:43, Krisztián Pintér wrote:
>
>> okay, let me rephrase, because i was not very clear. what i tried to
>> say is: if we assume a precision with which we can possibly measure
>> the initial conditions, then there is a time interval after which the
>> system is in total uncertainty. this time interval is dependent on the
>> initial precision, and the system itself.
>>
>> once i heard a claim, not confirmed, just a fun factoid, that weather
>> has 20 days of "memory". that is, if the initial conditions change on
>> molecular level, which is clearly unmeasurable, 20 days later the
>> weather is totally different. that is quite literally the butterfly
>> effect. if we were to use the weather as an entropy source, we should
>> sample it every 20 days. the data will be true random. except, of
>> course, the weather is public, but that aside.
>
>Exactly, that's the point, and i think that this feature is an approach
>to the statistical independence of the samples obtained by this method.

The difference is important. Weather is no more 'true random' than the
short term variation of a ring oscillator is. In fact the consequences for
sampling both are the same. The larger the undersampling ratio, the more
independent the samples.

However the inability to measure dependence doesn't imply the absence of
dependence, and when your extractor demands absolute independence for the
mathematical proof to hold, undersampling isn't necessarily enough.

An example, which I believe I've shared before, but which I consider important:

The output from a certain real world entropy source based on sampling a
fast oscillator with the output of a VCO with a noisy control input looks
very serially correlated. You can measure it. If you slow down the
sampling, the serial correlation goes down. You will reach the point where
the measured signal is lost in the noise. This is then fed into a Yuval
Peres whitener (iterated von Neumann), which requires independent samples
on the input as a prerequisite of the proof.
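
For reference, this is the von Neumann step that Peres' construction
iterates; a minimal C sketch, assuming the raw samples arrive as one 0/1
bit per byte:

    /* Sketch of the von Neumann step that the Peres extractor iterates.
     * Pairs (0,1) -> 0 and (1,0) -> 1; (0,0) and (1,1) are dropped. The
     * proof that the output is unbiased REQUIRES independent, identically
     * biased input bits - exactly the assumption undersampling may fail
     * to deliver. */
    #include <stddef.h>

    size_t von_neumann(const unsigned char *in, size_t nbits,
                       unsigned char *out /* >= nbits/2 entries */)
    {
        size_t n_out = 0;
        for (size_t i = 0; i + 1 < nbits; i += 2) {
            if (in[i] != in[i + 1])    /* keep only discordant pairs */
                out[n_out++] = in[i];  /* (0,1)->0, (1,0)->1 */
        }
        return n_out;   /* output bit count, roughly p(1-p)*nbits */
    }
    /* Peres' improvement recursively applies the same step to the pair
     * XORs and to the discarded concordant pairs, approaching the full
     * entropy - but still only under the independence assumption. */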

However if you take the raw undersampled data and run it through a test
that tries to distinguish it from random in terms of the SCC, rather than
just producing a metric of the SCC, the test can distinguish it every
time. It's not full entropy. It's partially entropic data.
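
Concretely, the distinguishing test can be as simple as a z-score on the
lag-1 serial correlation; a sketch, assuming 8-bit samples and the usual
normal approximation for the null distribution:

    /* Sketch: distinguish-from-random on the lag-1 serial correlation,
     * rather than just reporting the SCC. Under the null (iid samples)
     * r1 is approximately Normal(-1/n, 1/n), so a value many standard
     * deviations out rejects randomness even when r1 "looks small". */
    #include <math.h>
    #include <stddef.h>

    double scc_z_score(const unsigned char *x, size_t n)
    {
        double mean = 0.0, num = 0.0, den = 0.0;

        if (n < 2)
            return 0.0;                 /* degenerate input */
        for (size_t i = 0; i < n; i++)
            mean += x[i];
        mean /= (double)n;

        for (size_t i = 0; i < n; i++) {
            double d = x[i] - mean;
            den += d * d;
            if (i + 1 < n)
                num += d * (x[i + 1] - mean);
        }
        if (den == 0.0)
            return 0.0;                 /* constant input */

        double r1 = num / den;          /* lag-1 serial correlation */
        return (r1 + 1.0 / (double)n) * sqrt((double)n); /* ~N(0,1) */
    }

Rejecting when the absolute z-score exceeds a few standard deviations is
the usual choice of threshold.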

The same is true of the weather.

If you use it as a noise source, you must establish a conservative lower
bound for the min-entropy and then feed it through an extractor whose
input min-entropy requirement is met by that bound.

One sample every 20 days is really, really slow.




___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Kernel space vs userspace RNG

2016-05-07 Thread dj
>
> Russell Leidich (at Friday, May 6, 2016, 10:16:12 PM):
>> Most of the entropy in a system is manifest in terms of the clock
>> skew between direct memory access (DMA) transfers from external
>> devices and the CPU core clocks, which unfortunately does not
>> traverse the kernel in any directly observable manner.
>
> someone please confirm this, because i'm not a linux expert, but i
> don't believe user space code can do dma without the kernel knowing
> about it.
>
> also, i assert that such clock drifts provide much less entropy than
> you make it look like.
>
The premise is generally wrong these days. Well designed entropy sources
on silicon generate between 200Mbits/s and 5Gbits/s per source. I think
most vendors are getting in on the act. We have been pretty open with our
ES designs and I've seen very smart ES papers. I particularly like
Samsung's ring of rings for a process agnostic circuit.


>
>> interrupt timing, unless we extend the definition of "interrupt" to
>> include quasiperiodic memory accesses from external clients.
>
> again, i'm no exert in low level kernel stuff, but to my knowledge,
> everything happens through interrupts, even dma uses it to report the
> end of an operation.
>
>

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Kernel space vs userspace RNG

2016-05-07 Thread dj
>
> This was pretty much my thinking (though idk Intel thought similar). If
> this is debatable, that's fine as long as my view isn't totally
> batt-shit-crazy :)
>

I'm the RNG Tzar and I approve this message.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Kernel space vs userspace RNG

2016-05-05 Thread dj
> On 05/05/16 09:40 AM, shawn wilson wrote:
>> Just reflecting on the Linux RNG thread a bit ago, is there any
>> technical reason to have RNG in kernel space?
>
> The procurement of an RNG source for crypto is always a *system* design
> issue.
>
> The expectation that a kernel offering (intended for a wide range of CPU
> architectures, each of which being deployed in its own range of systems)
> can solve this system issue is IMHO naive.
>
> Thus, kernel space vs user space makes little difference.
>
> This being said, the kernel developers appear to make good faith efforts
> to adapt to the ever evolving digital electronics paradigms prevailing
> in a few mainstream system architectures. Is this effective versus some
> criteria for RNG quality? Is this good enough for you?
>
> It's your duty to figure out, I guess.
>
> Regards,
>
> - Thierry Moreau
>

I think this sums it up well. Today you are thrown into having to know
what to do, specifically because it's a system-level problem (matching
entropy sources to extractors to PRNGs to consuming functions).

The OS kernel does well the one thing that is its job: taking single
physical instances of entropy sources, post-processing them and making
them available to all userland and kernel consumers.

However kernel writers cannot address the full system issue because they
don't know what hardware they are running on. They don't know if they are
in a VM. They don't know whether or not they have access to entropic data
or whether something else has access to the same data.

So one of the "things you should know" is that if you run a modern Linux,
Solaris or Windows on specific CPUs in specific environments (like not in
a VM), then it can and will serve your userland programs with
cryptographically useful random numbers, at the cost of a fairly large
attack surface (drivers, APIs, kernel code, timing, memory etc.).

Intel came down firmly on the side of enabling the userland. One
instruction puts entropic state into a register of your running userland
program. Smaller attack surface, simpler, quicker, serves multiple users
whether they are running on bare metal or in a VM. You have to trust the
VM (as you do for anything else you do in a VM). Stuff is done in hardware
to make sure it serves multiple consumers, just as an OS does stuff to
serve multiple consumers.
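
For illustration, the userland path really is this small; a minimal C
sketch using the RdRand intrinsic, with the retry loop Intel's
implementation guide recommends:

    /* Sketch: the "one instruction" userland path via the compiler
     * intrinsic. Needs an Ivy Bridge or later CPU; build with -mrdrnd.
     * RDRAND can transiently fail (carry flag clear), hence the loop. */
    #include <immintrin.h>
    #include <stdint.h>

    int get_random_u64(uint64_t *out)
    {
        unsigned long long v;

        for (int i = 0; i < 10; i++) {   /* Intel suggests ~10 retries */
            if (_rdrand64_step(&v)) {    /* CF=1: v holds DRBG output */
                *out = v;
                return 1;
            }
        }
        return 0;                        /* persistent failure */
    }

RdSeed, intended for seeding another DRBG rather than bulk use, follows
the same pattern with the _rdseed64_step intrinsic (and -mrdseed).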

A SW userland RNG is an effective method to connect entropy sources you
know about on your system to algorithms that meet your needs. NIST's
recent switch to requiring key strengths of 192 bits or greater has
precipitated a few 256-bit SW SP800-90 implementations. I know, I wrote a
couple of them and I've reviewed a few others that have been written in
response to the NIST change.

SW RNG code is also easy to take through certification. The difference is
that you take the system through certification, not just the code (except
for CAVS). An OS kernel writer doesn't have that advantage.

So my general view is that if you are tasked with enabling random numbers
in your application, userland is usually a better place to do it. Maybe in
a decent library used directly by your application. Maybe with some
trivial inline assembler. But only if you can control the entropy source
and the sharing of it. If you can use HW features (RdRand, RdSeed, other
entropy sources, AES-NI, Hash instructions etc.) then your SW task is
simplified, but it assumes you know what hardware you are writing for.
Ditto for other platforms I'm less familiar with.

The mistake I have seen, particularly in certain 'lightweight' SSL
libraries, is to say "It's our policy not to do the RNG thing - we trust
the OS to provide entropy" and read from /dev/urandom as a result (because
/dev/random blocks on many platforms). They are trusting a thing that is
not in a place where it can guarantee entropy sources are available. It
will work on some platforms and certainly fail on others, particularly
lightweight platforms running Linux on CPUs with no deliberately designed
source of entropy - which is exactly where lightweight SSL libraries are
used most.

DJ





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] LastPass have been hacked, so it seems.

2015-06-16 Thread dj
> Are there any password managers that let the user specify where to
> store a remote copy of the passwords (FTP server, scp, Dropbox,
> whatever) while keeping the crypto and the master password on the end
> devices?
>
> Seems to me that would limit the cloudy trust problem while still
> addressing the very real problem of a zillion accounts used from
> multiple devices.


I get by fine with KeePass. It's just a program that keeps your passwords
in an encrypted file using your password. You can install it on multiple
platforms (I have PC, Mac and Android clients) and I put the file on
Google Drive. The UI is fit for purpose.

It might be one better if I could mix in multiple hardware tokens (one per
device), so I wasn't just relying on a password. This may be possible. I
haven't checked.


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-12 Thread dj

> On the lightweight side, I get the impression that block ciphers are
> also a big topic, but that there isn't a ton of work being done
> there... besides the NSA ciphers, SIMON and SPECK. John Kelsey
> mentioned these at RWC. The NSA came to NIST and said "Check out these
> ciphers!" and NIST said "Those look cool, but please publish them for
> academic review so we're not favoring you in any way." So they did.
> But now the onus is on the community to analyze them and either poke
> holes in them or present something better.
>
> -tom


Simon and Speck have had quite a few cryptanalyses published and time has
passed. Simon is a lovely thing to implement in hardware. It goes up to a
256-bit key with a 128-bit block and, in that configuration, is more
efficient than AES in hardware by about a factor of 3 for the same
performance.
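
To see why, here is the Simon round function from the published spec; in
silicon the rotations are free wiring, so a round costs one AND and three
XORs. This C sketch covers the 64-bit-word variants only (Simon128/256
runs 72 such rounds) and omits the key schedule:

    /* Sketch of one Simon round, per the published Simon/Speck paper.
     * 'k' is one precomputed round key; the key schedule is omitted. */
    #include <stdint.h>

    static inline uint64_t rotl64(uint64_t x, unsigned r)
    {
        return (x << r) | (x >> (64 - r));
    }

    void simon128_round(uint64_t *x, uint64_t *y, uint64_t k)
    {
        /* f(x) = (S^1(x) & S^8(x)) ^ S^2(x): no S-box, no adders */
        uint64_t f = (rotl64(*x, 1) & rotl64(*x, 8)) ^ rotl64(*x, 2);
        uint64_t tmp = *x;

        *x = *y ^ f ^ k;   /* Feistel-like mix with the round key */
        *y = tmp;
    }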

If you don't read ISO specs for amusement (I can't blame you, they charge
money) PRESENT and CLEFIA are approved lightweight ciphers in ISO. But
they aren't as lightweight as Simon.

So all other things being equal, it seems to have something over PRESENT,
CLEFIA and AES. But all other things are not equal. The parentage is
unfortunate, because as an implementor, I really want Simon to make it
into the standards space, enabling us to deploy it in products where
standards compliance is mandatory.

My request to Doug Shors (who was at SC27 last week promoting Simon and
Speck for WG2) was: "Add the missing 256-bit block size." It's the same
Achilles heel that AES has: the maximum block size is too small. The idea
that there is a need for lightweight crypto has poisoned the design of
lightweight ciphers. They are efficient ciphers, whether with small or big
key sizes or small or big block sizes. The more tasteful ones are smoothly
scalable in terms of width, unrolling and pipelining. But when they stop
at 64-bit block sizes or 128-bit key sizes, they limit deployability and
performance.

David




___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-12 Thread dj

> There is a very simple way around this. Block XXTEA introduced a new
> method
[snip]
> Although for the internet and smart cards, data packets are small enough
> for 64 bit blocks not to matter as long as you rekey between packets.


To paraphrase Bowman: "Oh my God. It's full of integer adders!"
Integer adders don't pass the sniff test for lightweight hardware.

Alas, the world isn't just the internet and smart cards. We are throwing
crypto on silicon as fast as we can to address the many threats to
computer hardware. No one block size is correct.


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-11 Thread dj
> On Tue, May 12, 2015 at 1:56 AM, Thierry Moreau
> <thierry.mor...@connotech.com> wrote:
>
>> With ECC, I have less confidence in NIST ability to leverage the
>> cryptographic community contributions.
>
> One hopes they will recommend the same elliptic curve standards that the
> IRTF's CFRG is standardizing for use in e.g. TLS.
>
> Given that, so far, the CFRG has standardized curves developed by djb and
> Mike Hamburg, at least to me they feel free of NSA influence.
>
> We'll see what NIST actually ends up doing. Standardizing the CFRG curves
> seems like a great way they could help promote interoperability and
> rebuild their reputation.

The DJB curves are finding traction elsewhere and will be adopted by other
standards bodies that I am involved in (because in part, I'm pushing for
them). The efficiency, simplicity of implementation and acceptance by the
crypto community of these algorithms make for strong arguments in
standards contexts.

NIST's primary problem with ECC is the NIST curves. If they can bring
themselves to move on to curves with better provenance, then progress can
be made with NIST. Otherwise the NIST curves will become obsolete and be
superseded by other standards bodies.

There is also the Lightweight Crypto Workshop at NIST. This heavily
overlaps with the ECC thing, because the right options for ECC curves are
also the right options for lightweight crypto.

I'm attending the Lightweight Crypto Workshop, but not the ECC Workshop. I
don't have bandwidth for both.

I spoke with Lily Chen of NIST last week (at SC27) about the
Lightweight/ECC overlap and the need for them to move to better curves.
They know what I think.



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] random number generator

2014-11-21 Thread dj

> Rather than me listing names, why not just let it rip and run your own
> randomness tests on it?

Because that won't tell me if you are performing entropy extraction.

Jytter assumes an x86 machine with multiple asynchronous clocks and
nondeterministic physical devices. This is not a safe assumption. Linux
assumes entropy in interrupt timing, and this was the result:
https://factorable.net/weakkeys12.extended.pdf

This falls under the third model of source in my earlier email. Your
extractor might look simple, but your system is anything but simple and
entropy extracted from rdtsc and interrupts amounts to squish.

Looking at the timing on your system and saying "it looks random to me"
does not cut it. Portable code has to have a way to know system timing is
random on every platform it runs on. The above paper shows that it isn't.

Jytter does something neat but the broad claims you are making and the
broader claims the Jytter web site makes do not pass the sniff test.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] random number generator

2014-11-21 Thread dj
> OK, if you think my Jytter TRNG is weak,

I did not say it was weak. I said Jytter (and any other algorithm) is
deterministic when run on an entropy free platform. This is a simple fact.

By all means design new and interesting ways to extract platform entropy,
but condition your claims on that entropy being there.


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] random number generator

2014-11-20 Thread dj

> Plz excuse if inappropriate.  Does anyone know of a decent (as in really
> random) open source random generator?  Preferably in PHP or C/C++?
>
> Thanks.

Getting back to the initial question, the answer I think is 'no'.

You haven't expressed clearly what you want from this RNG, but you're
asking in a crypto forum and you said 'really random', which I take to
mean you want something that is suitable for crypto applications, like
generating keys, feeding key search algorithms, random IVs, nonces and all
the other fun stuff we do. I take it to mean you are not just looking for
a CS-PRNG.

For this you need an algorithm that
A) Measures the physical world in a way that translates quantum
uncertainty into digital bits with a well defined min-entropy.

and
B) Cryptographically processes these numbers such that they are
unpredictable (in specific ways) and indistinguishable from random.

and maybe
C) Uses that to seed a CS-PRNG to give you lots of numbers with low
overhead and guaranteed computational bounds on the adversary.

An algorithm in C, C++ or PHP in isolation cannot offer the necessary
properties because those languages can only be used to express
deterministic behaviors.

The hardware you run on must provide the source of non determinism. This
could be by sampling local physical events that happen to be entropic or
from a local entropy source circuit, or by reaching out over the internet
to other sources (this has issues) or a combination of all three.

In a pinch you can look at the whole system, assume entropy is leaking
in through its pores, and then sample the system state in complicated
ways. But this approach is tightly bound to the chosen system. It is not
portable.

So knowing this, you can know what to go looking for.

1) A physical source of entropy - Check your hardware specs
2) An entropy extractor - http://en.wikipedia.org/wiki/Randomness_extractor
3) A CS-PRNG -
http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator

Code for 2 and 3 is spread all over the internet.

For 1, buy one, buy a computer that has one or get out your soldering
iron. Bill Cox has been discussing his interesting design for such a thing
right here.
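
For item 2 in particular, here is a minimal sketch of a conditioning step:
SHA-256 used as the extractor, in the spirit of cryptographic conditioning
functions. The 256-byte input and implied 8:1 compression ratio are
illustrative assumptions - the real ratio must come from a conservative
min-entropy assessment of your actual source.

    /* Sketch: condition raw, partially entropic samples into a seed
     * (item 2), then hand the seed to your CS-PRNG (item 3). Uses
     * OpenSSL's SHA256 for brevity; not production code. The input must
     * carry comfortably more than 256 bits of min-entropy in aggregate
     * for the seed to be full entropy. */
    #include <openssl/sha.h>

    void condition_seed(const unsigned char raw[256],
                        unsigned char seed[32])
    {
        /* The hash acts as the extractor: it compresses 2048 partially
         * random bits down to 256 fully random-looking ones. */
        SHA256(raw, 256, seed);
    }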

DJ


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Speaking of key management [was Re: Techniques for protecting CA Root certificate Secret]

2014-01-09 Thread dj
> Hi,
>
> Those who are interested in key management may wish to note:
>
>    Cryptographic Key Management Workshop 2014
>    http://www.nist.gov/itl/csd/ct/ckm_workshop2014.cfm
>    March 4-5, 2014, NIST, Gaithersburg MD
>
> See also:
>
>    SP 800-152
>    DRAFT A Profile for U.S. Federal Cryptographic Key Management Systems
>    (CKMS)
>    http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-152
>    Released 7 Jan 2014, comments due by March 5, 2014


I will probably be there causing trouble.
DJ


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Speaking of key management [was Re: Techniques for protecting CA Root certificate Secret]

2014-01-09 Thread dj

>> SP 800-152
> Don't forget to look at SP 800-130 in parallel.
>
> Overall, an endless list of requirements that may be useful as a barrier
> to entry in the US Federal Government IT security market.


That's why I'm going. To try and trim the obstructive requirements. If
we're building on-chip key management to secure on-chip things from other
on-chip things, requiring a human 'officer' to bless keys, IDs and
entities is an impassable barrier.

That spec needs a lobotomy. Either it gets fixed, or we ignore it. So I'll
try one pass at fixing it and if that doesn't work, we'll move on.





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] on using RDRAND [was: Entropy improvement: haveged + rngd together?]

2013-12-02 Thread dj

> the work that you have done to make hardware entropy sources readily
> available in Intel chips should be commended, and i certainly
> appreciate it.  i will however continue to complain until it is even
> better, with configurable access to the raw entropy samples for those
> who wish to evaluate or run the TRNG in this mode.


I'm currently arguing with NIST about their specifications which make it
hard to provide raw entropy while being FIPS 140-2 and NIST SP800-90
compliant. If I had a free hand, it would not be a configuration.
Configurations suck in numerous ways. It would just be there.

Chip design is a slow process. Standards writing is a slow process,
especially when NIST is involved. When one depends on the other it is even
slower. So don't hold your breath waiting for anything to happen.

Feel free to lean on NIST. I notice that they haven't even published the
public comments yet. The comment period for SP800-90 ended over three
weeks ago.

The AES and SHA-3 competitions were not like this. RNGs are less glitzy,
but they are a more fundamental security feature, yet they're getting less
attention from NIST.



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] on using RDRAND [was: Entropy improvement: haveged + rngd together?]

2013-12-01 Thread dj
> On Friday, 29 November 2013 at 19:05:00, coderman wrote:
>
> Hi coderman,
>
>> On Fri, Nov 29, 2013 at 4:54 PM, coderman <coder...@gmail.com> wrote:
>>> ...
>>> 0. extract_buf() - "If we have a architectural hardware random number
>>> generator [ED.: but only RDRAND], mix that in, too."
>>>
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/char/random.c#n1038
>>
>> hopefully my last mea culpa,
>>
>> but the issue above is fully resolved in latest linux git; Theodore
>> Ts'o's work to harden the entropy system in Linux should be commended.
>>
>> the less better version directly xor's RDRAND with the pool output
>> before handing back to consumer.
>> see v3.12 or earlier:
>>   http://lxr.free-electrons.com/source/drivers/char/random.c?v=3.12#L945
>>
>> which looks like this:
>
> This code has been in since 3.4 or so. IIRC, your mentioned code never
> appeared in a final kernel tree.


The effect of RNGd using RdRand is entirely different, in both user
experience and security, from the kernel's own use of it. The mix-in
algorithm to the kernel pool is different (XOR vs. that twisted-LFSR
thing), and the RNGd source gets credited entropy by the kernel, where the
kernel's direct RdRand input does not.

The result is that with RNGd pulling from RdRand, /dev/random does not
block. With the right kernel parameters, /dev/random flows freely from
initial boot. This is a great improvement over the status quo.
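
For the curious, the crediting mechanism is the RNDADDENTROPY ioctl. A
stripped-down sketch of what a daemon like RNGd does, assuming a
full-entropy input (RNGd itself credits according to its configuration):

    /* Sketch: how a userspace daemon credits entropy to the kernel
     * pool. Mixing alone (writing to /dev/random) does not credit; the
     * ioctl does. Requires root/CAP_SYS_ADMIN. Crediting 8 bits per
     * byte assumes full-entropy input - the caller's burden to justify. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/random.h>

    int credit_entropy(const unsigned char *buf, int nbytes)
    {
        struct rand_pool_info *info;
        int fd, rc;

        fd = open("/dev/random", O_RDWR);
        if (fd < 0)
            return -1;

        info = malloc(sizeof(*info) + nbytes);
        info->entropy_count = nbytes * 8;   /* bits credited */
        info->buf_size = nbytes;
        memcpy(info->buf, buf, nbytes);

        rc = ioctl(fd, RNDADDENTROPY, info); /* mix in AND credit */
        free(info);
        close(fd);
        return rc;
    }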

The kernel's use of RdRand fails in this respect. /dev/random still
blocks. It is not a complete solution.

The downside to RNGd is a slightly higher computational overhead and a
larger attack surface, but for a box you control, that may be moot. Its
statistical tests seem misplaced.

While the kernel keeps getting it wrong (in a way that keeps /dev/random
blocking where it need not), RNGd is the path of least resistance that
does get deployed and does cause /dev/random to actually work.

I would not characterize the Linux RNG issue as "fully resolved" in any
way. Until every CPU maker includes a source of entropy by design (instead
of by accident), and the kernel gets off its high horse and chooses to use
them, and the kernel gets pre-configured in distros with sane parameters,
crypto software will continue to fail in low entropy situations.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] /dev/random is not robust

2013-10-14 Thread dj
http://eprint.iacr.org/2013/338.pdf


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-19 Thread dj
> Aloha!
>
> On 2012-06-19 11:30, coderman wrote:
>> On Tue, Jun 19, 2012 at 12:48 AM, Marsh Ray <ma...@extendedsubset.com>
>> wrote:
>>> So something is causing AES-NI to take 300 clocks/block to run this
>>> DRBG. Again, more than 3x slower than the benchmarks I see for the
>>> hardware primitive. My interpretation is that either RdRand is
>>> blocking due to entropy depletion, there's some internal data pipe
>>> bottleneck, or maybe some of both.
>>
>> it is also seeding from the physical noise sources, running sanity
>> checks of some type, and then handing over to DRBG. so there is
>> clearly more involved than just a call to AES-NI. 3x as expensive
>> doesn't sound unreasonable if the seeding and validation overhead is
>> significant.
>
> I might be missing something. But is it clear that Bull Mountain is
> actually using AES-NI? I assumed that one would like to use a separate
> HW-engine. Reading from the CRI paper seems (to me) to suggest that this
> is actually the case:

It is not using AES-NI. It is a self-contained unit on-chip with a
built-in hardware AES encrypt block cipher.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-19 Thread dj
> On 06/19/2012 02:11 PM, coderman wrote:
>
>> the sanity checks, being on die, are limited. you can't run DIEHARD
>> against this in a useful manner because the DRBG obscures anything
>> useful.
>
> I don't think there's anything useful diehard (specifically) is going to
> tell you.
>
> The raw entropy source output would not be expected to pass diehard. The
> CRI report shows visible artifacts in that FFT graph. The entropy
> estimation function one would apply to that source would likely be much
> simpler than the diehard suite. Just a sanity check that the output is
> actually changing once in a while would go a long way towards
> eliminating the most common failure modes.
>
> On the other hand, the AES CTR DRBG output will always pass diehard,
> whether it contains any entropy or not.


Yup. Actually having a perfect source is a problem. It's much easier to
test for a source with known defects that meets a well-defined statistical
model. With that you can build a test that the circuit is built correctly.
You can also show it catches all SPOF and DPOF cases. You use other
techniques to prove that if built right, the circuit will have a well
defined min entropy in the output.
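
As an example of what such an on-line check looks like, here is a sketch
of the repetition count health test in the style of NIST's SP800-90B
work; the cutoff formula and alpha = 2^-20 follow that spec, and the
8-bit sample width and assessed per-sample min-entropy H are assumptions:

    /* Sketch: repetition count test. Given an assessed min-entropy H
     * per sample, a run of C identical samples has probability at most
     * 2^-20 under normal operation, so hitting the cutoff flags a stuck
     * or broken source - the most common total-failure mode. */
    #include <math.h>
    #include <stddef.h>

    int repetition_count_test(const unsigned char *s, size_t n, double H)
    {
        const size_t cutoff = 1 + (size_t)ceil(20.0 / H); /* alpha=2^-20 */
        size_t run = 1;

        for (size_t i = 1; i < n; i++) {
            run = (s[i] == s[i - 1]) ? run + 1 : 1;
            if (run >= cutoff)
                return 0;   /* fail: plausible source breakdown */
        }
        return 1;           /* pass */
    }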


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-18 Thread dj
Indeed. We're confident that the DRNG design is sound, but asking the
world to "trust us, it's a sound design" is unreasonable without us
letting someone independently review it. Since it is a cryptographic
design that people need some reason to trust before they use it, we opened
the design to a reputable outside firm and paid for an independent review.

The reviewers get to publish their review. We don't control the text.
That's part of the deal. We run the risk that we look bad if the review
finds bad stuff.

How else could we look credible? Our goal is to remove the issue of bad
random numbers from PCs that lead to the failure of cryptosystems. Part of
achieving that is that we have to give people a way to understand why the
random numbers are of cryptographic quality and what that means in
specific terms, like brute force prediction resistance, SP800-90
compliance and effective conditioning of seeds.




> A company makes a cryptographic widget that is inherently hard to test or
> validate. They hire a respected outside firm to do a review. What's wrong
> with that? I recommend that everyone do that. Un-reviewed crypto is a
> bane.
>
> Is it the fact that they released their results that bothers you? Or
> perhaps that there may have been problems that CRI found that got fixed?
>
> These also all sound like good things to me.
>
>   Jon





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-18 Thread dj

> What they're actually saying is that they don't think that FIPSing the
> RNG will materially impact the security of the RNG -- which if you think
> about it, is pretty faint praise.

But true. The FIPS mode enforces some boundary controls (external config
and debug inputs are disabled) but the thing in control of the config and
debug inputs is also in control of the FIPS lever. So you could argue that
the boundary protections are no stronger than the protections for the rest
of the chip. It does tell you that if it is your chip, and you don't let
someone else pull the lid off, scrape off the passivation and apply a pico
probe to it, it will certainly provide you with good random numbers
regardless of the FIPS mode. What is certain is that 'FIPS mode' was not
an optimal name.

The FIPS mode is all about getting certification which is obviously less
important to us than giving people a realistic understanding of what it
is, or we'd have done the certification first. It doesn't make the numbers
any more or less predictable.

[snip]


> * Intel has produced a hardware RNG in the CPU. It's shipping on new Ivy
> Bridge CPUs.
Yup

> * Intel would like to convince us that it's worth using. There are many
> reasons for this, not the least of which being that if we're not going to
> use the RNG, people would like that chip real estate for something else.
Yup

> * We actually *need* a good hardware RNG in CPUs. I seem to remember a
> kerfuffle about key generation caused by low entropy on system startup in
> the past. This could help mitigate such a problem.
We do. So do we. We think so.


> * Intel went to some experts and hired them to do a review. They went to
> arguably the best people in the world to do it, given their expertise in
> hardware in general and differential side-channel cryptanalysis.
You're not wrong.


> * Obviously, those experts found problems. Obviously, those problems got
> fixed. Obviously, it's a disappointment that we don't get to hear about
> them.
Actually what they found is in the paper. The design has been through
several iterations before it got to product and external review. Compared
to other functions, the RNG is special and needful of greater care in
design and openness to allow review.


> * The resulting report is mutually acceptable to both parties. It behooves
> the reader to peer between the lines and intuit what's not being said. I
> know there are issues that didn't get fixed. I know that there are
> architectural changes that the reviewers suggested that will come into
> later revisions of the hardware. There always are.
There are revisions in the pipeline. Mostly to do with making it go faster
and to deal with ever more stringent power consumption constraints. But
they're not fully baked yet, so I thought we were being quite open with
the reviewers about future plans.

[snip]

> * If we want to have a discussion of the SP 800-90+ DRBGs, that's also a
> great discussion. Having implemented myself an AES-CTR DRBG (which is what
> this is), I have some pointed opinions. That discussion is more than
> tangential to this, but it's also a digression, unless we want to attack
> the entire category of AES-CTR DRBGs (which is an interesting discussion
> in itself).
The use of AES was driven much more by the conditioning algorithm, which
is not yet standardized by NIST and whose math is interesting. Once you've
built a hardware AES for conditioning purposes, you might as well use it
for the DRBG part. The SP800-90 AES-CTR DRBG is an odd duck and whoever
wrote the spec didn't have the hardware designer in mind, but
roll-your-own crypto is not an option for us.
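
To show what the odd duck looks like, here is a stripped-down sketch of
the SP800-90A CTR_DRBG generate-and-update path (AES-128, no derivation
function, no reseed counter, no additional input), using OpenSSL's
low-level AES purely for brevity; not production code:

    #include <openssl/aes.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct { unsigned char key[16], v[16]; } ctr_drbg;

    static void inc128(unsigned char v[16])   /* V = (V+1) mod 2^128 */
    {
        for (int i = 15; i >= 0 && ++v[i] == 0; i--)
            ;
    }

    void drbg_generate(ctr_drbg *st, unsigned char *out, size_t n)
    {
        AES_KEY aes;
        unsigned char block[16];

        AES_set_encrypt_key(st->key, 128, &aes);
        while (n > 0) {                        /* counter-mode keystream */
            inc128(st->v);
            AES_encrypt(st->v, block, &aes);
            size_t take = n < 16 ? n : 16;
            memcpy(out, block, take);
            out += take;
            n -= take;
        }
        /* Mandatory update: derive a fresh Key and V with the old key
         * so captured state can't be rolled back to recover earlier
         * output (backtracking resistance). */
        inc128(st->v);
        AES_encrypt(st->v, st->key, &aes);     /* new Key */
        inc128(st->v);
        AES_encrypt(st->v, block, &aes);       /* new V */
        memcpy(st->v, block, 16);
    }

Note the structure: both the keystream and the state update want the same
block encryptor, which is why a hardware AES built for conditioning gets
reused for the DRBG.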



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-13 Thread dj
CRI has published an independent review of the RNG behind the RdRand
instruction:
http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

> Intel has published more details of the new rdrand instruction and the
> random number generator behind it.
>
> http://software.intel.com/en-us/articles/download-the-latest-bull-mountain-software-implementation-guide/
> http://software.intel.com/file/36945




___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread dj

> So, the underlying issue is not a poor design choice in OpenSSL, but
> poor seeding in some applications.

That's why we're putting it on-chip and in the instruction set from Ivy
Bridge onwards. http://en.wikipedia.org/wiki/RdRand


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography