Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas

On Jun 18, 2012, at 4:12 PM, Marsh Ray wrote:

> 
> 150 clocks (Intel's figure) implies 18.75 clocks per byte.
> 

That's not bad at all. It's in the neighborhood of what I remember my DRBG 
running at with AES-NI. Faster, but not by a lot. However, I was getting the 
full 16 bytes out of the AES operation, and RDRAND is doing 64 bits at a time, 
right?

> 
> Note that Skein 512 in pure software costs only about 6.25 clocks per byte. 
> Three times faster! If RDRAND were entered in the SHA-3 contest, it would 
> rank in the bottom third of the remaining contestants.
> http://bench.cr.yp.to/results-sha3.html

As much as it warms my heart to hear you say that, it's not a fair comparison. 
A DRBG has to do a lot of other stuff, too. The DRBG is an interesting beast 
and a subject of a whole different conversation.

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas

On Jun 18, 2012, at 9:46 PM, Marsh Ray wrote:

>> This results in the creation of a black-box or component approach.
>> Because of this and perhaps only because of this, block algorithms and
>> hashes have become the staples of crypto work. Public key crypto and
>> HMACs less so. Anything crazier isn't worth discussing.
> 
> I don't get it. Why can't we have effective test vectors for HMACs and public 
> key algorithms?
> 

We do. The FIPS 140 CAVS tests are a damned good set of vectors. The complaint I 
have about them is that there are too many, and that some are of questionable 
benefit (the so-called "Monte Carlo" tests, for one), rather than that there are 
too few.

There are even test vectors for the DRBGs. They give you the entropy inputs and 
everything, and check your output.
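For concreteness, here is roughly what exercising such a vector looks like: a minimal HMAC_DRBG (the SP 800-90A construction with SHA-256, no reseed or personalization string), driven by a fixed entropy input. The all-zero inputs below are placeholders for illustration, not real CAVS vectors:

```python
import hashlib
import hmac

class HmacDrbg:
    """Minimal SP 800-90A HMAC_DRBG (SHA-256); no reseed, no additional input."""

    def __init__(self, entropy: bytes, nonce: bytes = b""):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(entropy + nonce)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, seed: bytes = b"") -> None:
        # HMAC_DRBG Update: fold seed material into the (K, V) state.
        self.K = self._hmac(self.K, self.V + b"\x00" + seed)
        self.V = self._hmac(self.K, self.V)
        if seed:
            self.K = self._hmac(self.K, self.V + b"\x01" + seed)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update()
        return out[:n]

# A CAVS-style known-answer check fixes the entropy input and compares the
# output bytes; with placeholder vectors we can at least check determinism.
a = HmacDrbg(entropy=bytes(32), nonce=bytes(16))
b = HmacDrbg(entropy=bytes(32), nonce=bytes(16))
assert a.generate(64) == b.generate(64)
```

The real CAVS files supply the entropy input, nonce, and expected output bytes; the harness above would compare `generate()` against the published answer instead of against a second instance.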

Jon




Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas

On Jun 18, 2012, at 9:03 PM, Matthew Green wrote:

> On Jun 18, 2012, at 4:21 PM, Jon Callas wrote:
> 
>> Reviewers don't want a review published that shows they gave a pass on a 
>> crap system. Producing a crap product hurts business more than any thing in 
>> the world. Reviews are products. If a professional organization gives a pass 
>> on something that turned out to be bad, it can (and has) destroyed the 
>> organization.
> 
> 
> I would really love to hear some examples from the security world. 
> 
> I'm not being skeptical: I really would like to know if any professional 
> security evaluation firm has suffered meaningful, lasting harm as a result of 
> having approved a product that was later broken.
> 
> I can think of several /counterexamples/, a few in particular from the 
> satellite TV world. But not the reverse.
> 
> Anyone?

The canonical example I was thinking of was Arthur Andersen, which doesn't meet 
your definition, I'm sure.

But we'll never get to requiring security reviews if we don't start off seeing 
them as desirable.

Jon





Re: [cryptography] Intel RNG

2012-06-18 Thread coderman
On Mon, Jun 18, 2012 at 9:46 PM, Marsh Ray  wrote:
> ...
> One thing they could do is provide a mechanism to access raw samples from
> the Entropy Source component. I.e., the data that "Intel provided [to
> Cryptography Research] from pre-production chips. These chips allow access
> to the raw ES output, a capability which is disabled in production chips."

this is very useful to have in some configurations (not just testing).
for example: a user space entropy daemon consuming raw, biased,
un-whitened, full-throughput bits of lower entropy density, which are
run through sanity checks, entropy estimates, and other vetting before
being mixed into (and obscuring) state and fed into host or application
entropy pools.
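as a sketch of the kind of vetting meant here, a single naive min-entropy statistic over raw byte samples (nowhere near a full SP 800-90B style battery; the byte-oriented sampling and thresholds are illustrative assumptions):

```python
import math
import os
from collections import Counter

def min_entropy_per_byte(samples: bytes) -> float:
    """Naive min-entropy estimate: -log2 of the most frequent byte's
    empirical probability. Real vetting uses far more than this one test."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

raw = os.urandom(1 << 20)              # stand-in for raw ES samples
assert 7.5 < min_entropy_per_byte(raw) <= 8.0   # near-uniform data

biased = bytes(b & 0x0F for b in raw)  # crude bias: top nibble forced to 0
assert min_entropy_per_byte(biased) <= 4.0      # estimate catches the bias
```

a daemon would gate the samples on checks like this (plus repetition and adaptive-proportion tests) before crediting any entropy to the pool.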

the Intel RNG conveniently abstracts all of this away from you,
potentially giving you a ready-to-use source without a pesky user
space entropy daemon requirement.

on the other hand, the Intel RNG abstracts all of this away from you,
so your own assurances against the raw output are no longer possible.



> Obviously these samples can't go back into the DRBG, but some developers
> would probably like to estimate the entropy in the raw data. They would
> likely interpret it as a higher quality source if they could reach that
> conclusion with their own code.

yes, particularly when fed through a DIEHARD battery of tests using
hundreds of MB of entropy. hard to do that on die! ;)


gripes aside, this is a design and implementation i like quite a bit,
and i am very happy to see more CPU cores with entropy instructions!
now if only AMD and ARM would follow suit...


Re: [cryptography] Intel RNG

2012-06-18 Thread Marsh Ray

On 06/18/2012 10:21 PM, ianG wrote:


The first part is that AES and block algorithms can be quite tightly
defined with a tight specification, and we can distribute test
parameters. Anyone who's ever coded these things up knows that the test
parameters do a near-perfect job in locking implementations down.


Yes, this is what separates the engineers from the hobbyists.

For example, this is my main complaint about bcrypt and scrypt. The 
absence of a comprehensive set of test vectors has caused insecure and 
incompatible implementations to creep into existence.



This results in the creation of a black-box or component approach.
Because of this and perhaps only because of this, block algorithms and
hashes have become the staples of crypto work. Public key crypto and
HMACs less so. Anything crazier isn't worth discussing.


I don't get it. Why can't we have effective test vectors for HMACs and 
public key algorithms?



Then there are RNGs. They start from a theoretical absurdity that we
cannot predict their output, which leads to an apparent impossibility of
black-boxing.

NIST recently switched gears and decided to push the case for
deterministic PRNGs. According to original thinking, a perfect RNG was
perfectly untestable. Whereas a perfectly deterministic RNG was also
perfectly predictable. This was a battle of two not-goods.

Hence the second epiphany: NIST were apparently reasoning that the
testability of the deterministic PRNG was the lesser of the two evils.


But it's not even a binary choice. We can divide the system into 
components and test with the best available methods for each components 
separately.


Even the most deterministic crypto implementation will have some squishy 
properties. E.g., power and EM side channels, resilience to fault 
injection. The engineering goal is to make the proportion of the system 
that can be certified rigorously as large as possible and the proportion 
that can't be tested well at all as small as possible.



They wanted to black-box the PRNG, because black-boxing was the critical
determinant of success.

After a lot of thinking about the way the real world works, I think they
have it right.


It's almost as if they know a thing or two about testing stuff. :-)


Use a deterministic PRNG, and leave the problem of
securing good seed material to the user. The latter is untestable
anyway, so the right approach is to shrink the problem and punt it
up-stack.

Taking that back to Intel's efforts. Unfortunately it's hard to do that
deterministic/seed breakup in silicon. What else do they have?


One thing they could do is provide a mechanism to access raw samples 
from the Entropy Source component. I.e., the data that "Intel provided 
[to Cryptography Research] from pre-production chips. These chips allow 
access to the raw ES output, a capability which is disabled in 
production chips."


Obviously these samples can't go back into the DRBG, but some developers 
would probably like to estimate the entropy in the raw data. They would 
likely interpret it as a higher quality source if they could reach that 
conclusion with their own code.


ISTR OpenBSD performs this type of analysis when feeding timing samples 
into the pool.



The components / black-boxing approach in cryptoplumbing has been ultra
successful. It has also had a rather dramatic effect on everything else,
because it has raised expectations. We want everything else to be as
"perfect" as the block encryption algorithm.


And I want a pony!


Unfortunately, that's not possible. We need to manage our expectations.


I think it's entirely reasonable for us to insist on an efficient high 
quality source of cryptographically secure random numbers. After all 
this is the basis of so many other important security properties that we 
should be grateful for all the powerful simplifying assumptions it 
enables. Remember the old days when we had to bang on the keyboard and 
move the mouse in order to generate keys?


What we need to get better at, IMHO, is incorporating experience into 
our threat model and judging relative risks. For example, a lot of stuff 
going on in kernel RNGs looks to me like additional complexity that's 
more likely to breed bugs than to defend against an imagined 
three-letter agency with space alien supercomputers.


- Marsh


Re: [cryptography] Intel RNG

2012-06-18 Thread Matthew Green
On Jun 18, 2012, at 4:21 PM, Jon Callas wrote:

> Reviewers don't want a review published that shows they gave a pass on a crap 
> system. Producing a crap product hurts business more than any thing in the 
> world. Reviews are products. If a professional organization gives a pass on 
> something that turned out to be bad, it can (and has) destroyed the 
> organization.


I would really love to hear some examples from the security world. 

I'm not being skeptical: I really would like to know if any professional 
security evaluation firm has suffered meaningful, lasting harm as a result of 
having approved a product that was later broken.

I can think of several /counterexamples/, a few in particular from the 
satellite TV world. But not the reverse.

Anyone?


Re: [cryptography] Intel RNG

2012-06-18 Thread Steven Bellovin
On Jun 18, 2012, at 11:21 PM, ianG wrote:
> 
> 
> Then there are RNGs.  They start from a theoretical absurdity that we cannot 
> predict their output, which leads to an apparent impossibility of 
> black-boxing.
> 
> NIST recently switched gears and decided to push the case for deterministic 
> PRNGs.  According to original thinking, a perfect RNG was perfectly 
> untestable.  Whereas a perfectly deterministic RNG was also perfectly 
> predictable.  This was a battle of two not-goods.
> 
> Hence the second epiphany:  NIST were apparently reasoning that the 
> testability of the deterministic PRNG was the lesser of the two evils. They 
> wanted to black-box the PRNG, because black-boxing was the critical 
> determinant of success.
> 
> After a lot of thinking about the way the real world works, I think they have 
> it right.  Use a deterministic PRNG, and leave the problem of securing good 
> seed material to the user.  The latter is untestable anyway, so the right 
> approach is to shrink the problem and punt it up-stack.
> 

There's evidence, dating back to the Clipper chip days, that NSA feels the same 
way.  Given the difficulty of proving there are no weird environmental impacts 
on hardware RNGs, they're quite correct.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Intel RNG

2012-06-18 Thread Peter Gutmann
Tim Dierks  writes:

>While this is all true, it's also why manufacturers who want persuasive
>analysis of their products hire consulting vendors with a brand and track
>record strong enough that the end consumer can plausibly believe that their
>reputational risk outweighs the manufacturer's desire for a good report.
>Cryptography Research is such a vendor.

There's also the law of diminishing returns for Intel.  Most users of their 
products are going to say "it's from Intel, it should be good enough".  A 
small number of users are going to say "it should be OK but I'd like a second 
opinion just to be sure".  A vanishingly small number are going to peek out 
from under their tinfoil hats and claim that the Bavarian Illuminati "fixed" 
the report and they still don't trust it, ignoring the fact that the app 
they're using the RNG with has to run as admin under Windows, opens a bunch of 
globally-accessible network ports, and has eight different buffer overflows in 
it.

The point at which it makes sense to stop is between the second and third
groups.

Peter.


Re: [cryptography] Intel RNG

2012-06-18 Thread ianG

On 19/06/12 08:49 AM, Jack Lloyd wrote:


I've never heard about someone trying to talk past, say, an AES
implementation that didn't actually work, or a bad RSA, that's a
pretty bright line.


I had a bit of an epiphany in two parts.

The first part is that AES and block algorithms can be quite tightly 
defined with a tight specification, and we can distribute test 
parameters.  Anyone who's ever coded these things up knows that the test 
parameters do a near-perfect job in locking implementations down.


This results in the creation of a black-box or component approach. 
Because of this and perhaps only because of this, block algorithms and 
hashes have become the staples of crypto work.  Public key crypto and 
HMACs less so.  Anything crazier isn't worth discussing.




Then there are RNGs.  They start from a theoretical absurdity that we 
cannot predict their output, which leads to an apparent impossibility of 
black-boxing.


NIST recently switched gears and decided to push the case for 
deterministic PRNGs.  According to original thinking, a perfect RNG was 
perfectly untestable.  Whereas a perfectly deterministic RNG was also 
perfectly predictable.  This was a battle of two not-goods.


Hence the second epiphany:  NIST were apparently reasoning that the 
testability of the deterministic PRNG was the lesser of the two evils. 
They wanted to black-box the PRNG, because black-boxing was the critical 
determinant of success.


After a lot of thinking about the way the real world works, I think they 
have it right.  Use a deterministic PRNG, and leave the problem of 
securing good seed material to the user.  The latter is untestable 
anyway, so the right approach is to shrink the problem and punt it up-stack.




Taking that back to Intel's efforts.  Unfortunately it's hard to do that 
deterministic/seed breakup in silicon.  What else do they have?


The components / black-boxing approach in cryptoplumbing has been ultra 
successful.  It has also had a rather dramatic effect on everything 
else, because it has raised expectations.  We want everything else to be 
as "perfect" as the block encryption algorithm.  Unfortunately, that's 
not possible.  We need to manage our expectations.




iang


Re: [cryptography] Intel RNG

2012-06-18 Thread ianG

On 19/06/12 04:15 AM, Jack Lloyd wrote:

On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:



Un-reviewed crypto is a bane.


Bad crypto with a rubber stamp review is perhaps worse because someone
might believe the stamp means something.




Are you assuming there is such a thing as a perfect review?  I don't 
think so, myself.  There are just reviews, some better for some purposes 
than others.  We don't have anything like a complete methodology to do 
this sort of thing.  And as Jon mentioned, all reviews are bound by 
resources, so compromise is a given.


Another way to think about it is to consider the review as a fact that 
adds information.  Each new review adds some information, which combines 
into some sort of timeline in history.  It doesn't necessarily tell us 
more, but it certainly sets up a reputational trap (as someone 
mentioned).  Which might be worth something.


iang


Re: [cryptography] Intel RNG

2012-06-18 Thread Kyle Creyts
On Mon, Jun 18, 2012 at 7:12 PM, Marsh Ray  wrote:
> On 06/18/2012 12:20 PM, Jon Callas wrote:
>>
>>
>> A company makes a cryptographic widget that is inherently hard to
>> test or validate. They hire a respected outside firm to do a review.
>> What's wrong with that? I recommend that everyone do that.
>> Un-reviewed crypto is a bane.
>
>
> Let's accept that the review was competent, thorough, and independent.
>
> Here's what I'm left wondering:
>
> How do I know that this circuit that was reviewed is actually the thing
> producing the random numbers on my chip? Why should I assume it doesn't
> have any backdoors, bugdoors, or "engineering revisions" that make it
> different from what was reviewed?
>
> Is RDRAND driven by reprogrammable microcode? If not, how are they going
> to address bugs in it? If so, what algorithms are used by my CPU to
> authenticate the microcode updates that can be loaded? What kind of
> processes are used to manage the signing keys for it?
>
> Let's take a look at the actual report:
> http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf
>
>> Page 12: At an 800 MHz clock rate, the RNG can deliver
>> post-processed random data at a sustained rate of 800 MBytes/sec. In
>> particular, it should not be possible for a malicious process to
>> starve another process.
>
>
> Wait a minute... that second statement doesn't follow from the first.
>
> We're talking about chips with a 25 GB/s *external* memory bus bandwidth,
> why can't a 4-core 2 GHz processor request on the order of 64 bits per
> core*clock for 4*64*2e9 = 512e9 b/s = 60 GiB/s ?
>
> So 800 MiB/s @ 800 MHz
> = 7.62 clocks per 64 bit RDRAND result (if they mean 1 core)
> = 30.5 clocks per 64 bit RDRAND (if they mean 4 cores).
> = 61.0 clocks per 64 bit RDRAND (if they mean 8 hyperthread cores).
>
> More info:
>>
>>
>> http://software.intel.com/en-us/articles/intel-digital-random-number-generator-drng-software-implementation-guide/
>> Data taken from an early engineering sample board with a 3rd generation
>> Intel Core family processor,
>> code-named Ivy Bridge, quad core, 4 GB memory, hyper-threading enabled.
>> Software: LINUX* Fedora 14,
>> GCC version 4.6.0 (experimental) with RDRAND support, test uses p-threads
>> kernel API.
>
>
> Why does an Intel "Software Implementation Guide" have more information
> about the actual device under test than the formal report?
>
>> Measured Throughput:
>>    Up to 70 million RDRAND invocations per second
>
>
> Implies:
>    4 cores @ 2 GHz -> 114 clocks per RDRAND
>    8 cores @ 2 GHz -> 229 clocks per RDRAND
>
>>    500+ million bytes of random data per second
>
>
> Implies:
>    rate >= 62.5e6 RDRAND/s
>    t(RDRAND) <= 16 ns
>              <= 32 clocks, 1 core @ 2 GHz
>              <= 256 core*clocks, 8 cores @ 2 GHz
>
>> RDRAND Response Time and Reseeding Frequency
>>    ~150 clocks per invocation
>>    Note: Varies with CPU clock frequency since constraint is shared data
>> path from DRNG to cores.
>>    Little contention until 8 threads  – or 4 threads on 2 core chip
>>    Simple linear increase as additional threads are added
>
>
> So when the statement is given "it should not be possible for a malicious
> process to starve another process" without justification, it leads us to ask
> "why should it not be possible to starve another process?"
>
> Maybe this is our answer: Because this hardware instruction is slow!
>
> 150 clocks (Intel's figure) implies 18.75 clocks per byte.

then perhaps this is the proper graph to examine.
http://bench.cr.yp.to/graph-sha3/8-thumb.png
maybe this is the case RDRAND is intended for? producing random 8-byte seeds?

>
> It would appear that the instruction is actually a blocking operation that
> does not return until the request has been satisfied. Or does it?
>
> Can we then expect it will never result in an out-of-entropy condition?
>
> What happens when 16 cores are put on a chip? Will some future chip begin
> occasionally returning zeroes from RDRAND when an attacker fires off 31
> simultaneous threads requesting entropy? Or will RDRAND take 300 clocks to
> execute?
>
> Note that Skein 512 in pure software costs only about 6.25 clocks per byte.
> Three times faster! If RDRAND were entered in the SHA-3 contest, it would
> rank in the bottom third of the remaining contestants.
> http://bench.cr.yp.to/results-sha3.html
>
> So perhaps we should not throw away our software-stirred entropy pools just
> yet and if RDRAND is present it should be used to contribute 128 bits or so
> at a time as just one of several sources of entropy. It could certainly help
> to kickstart the software RNG in those critical first seconds after cold
> boot (you know, when the SSH keys are being generated).
>
> - Marsh
>



-- 
Kyle Creyts

Information Assurance Professional
BSidesDetroit Org

Re: [cryptography] Intel RNG

2012-06-18 Thread Marsh Ray

On 06/18/2012 12:20 PM, Jon Callas wrote:


A company makes a cryptographic widget that is inherently hard to
test or validate. They hire a respected outside firm to do a review.
What's wrong with that? I recommend that everyone do that.
Un-reviewed crypto is a bane.


Let's accept that the review was competent, thorough, and independent.

Here's what I'm left wondering:

How do I know that this circuit that was reviewed is actually the thing
producing the random numbers on my chip? Why should I assume it doesn't
have any backdoors, bugdoors, or "engineering revisions" that make it
different from what was reviewed?

Is RDRAND driven by reprogrammable microcode? If not, how are they going
to address bugs in it? If so, what algorithms are used by my CPU to
authenticate the microcode updates that can be loaded? What kind of
processes are used to manage the signing keys for it?

Let's take a look at the actual report:
http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf


Page 12: At an 800 MHz clock rate, the RNG can deliver
post-processed random data at a sustained rate of 800 MBytes/sec. In
particular, it should not be possible for a malicious process to
starve another process.


Wait a minute... that second statement doesn't follow from the first.

We're talking about chips with a 25 GB/s *external* memory bus 
bandwidth, why can't a 4-core 2 GHz processor request on the order of 64 
bits per core*clock for 4*64*2e9 = 512e9 b/s = 60 GiB/s ?


So 800 MiB/s @ 800 MHz
= 7.62 clocks per 64 bit RDRAND result (if they mean 1 core)
= 30.5 clocks per 64 bit RDRAND (if they mean 4 cores).
= 61.0 clocks per 64 bit RDRAND (if they mean 8 hyperthread cores).

More info:

http://software.intel.com/en-us/articles/intel-digital-random-number-generator-drng-software-implementation-guide/
Data taken from an early engineering sample board with a 3rd generation Intel 
Core family processor,
code-named Ivy Bridge, quad core, 4 GB memory, hyper-threading enabled. 
Software: LINUX* Fedora 14,
GCC version 4.6.0 (experimental) with RDRAND support, test uses p-threads 
kernel API.


Why does an Intel "Software Implementation Guide" have more information 
about the actual device under test than the formal report?



Measured Throughput:
Up to 70 million RDRAND invocations per second


Implies:
4 cores @ 2 GHz -> 114 clocks per RDRAND
8 cores @ 2 GHz -> 229 clocks per RDRAND


500+ million bytes of random data per second


Implies:
rate >= 62.5e6 RDRAND/s
t(RDRAND) <= 16 ns
  <= 32 clocks, 1 core @ 2 GHz
  <= 256 core*clocks, 8 cores @ 2 GHz


RDRAND Response Time and Reseeding Frequency
~150 clocks per invocation
Note: Varies with CPU clock frequency since constraint is shared data path 
from DRNG to cores.
Little contention until 8 threads  – or 4 threads on 2 core chip
Simple linear increase as additional threads are added


So when the statement is given "it should not be possible for a 
malicious process to starve another process" without justification, it 
leads us to ask "why should it not be possible to starve another process?"


Maybe this is our answer: Because this hardware instruction is slow!

150 clocks (Intel's figure) implies 18.75 clocks per byte.
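(These implied figures can be checked mechanically; a quick sketch, reading Intel's "MByte" as MiB to match the 7.62-clock number above:)

```python
# Sanity-check the implied RDRAND costs quoted above.
MIB = 1 << 20

# "800 MBytes/sec at an 800 MHz clock rate" (taking MByte as MiB):
bytes_per_clock = (800 * MIB) / 800e6
one_core = 8 / bytes_per_clock          # clocks per 64-bit result, 1 core
four_cores = 4 * 8 / bytes_per_clock    # ... if shared across 4 cores
assert 7.62 < one_core < 7.63
assert round(four_cores, 1) == 30.5

# "Up to 70 million RDRAND invocations per second", 4 cores @ 2 GHz:
assert round(4 * 2e9 / 70e6) == 114     # clocks per RDRAND

# "~150 clocks per invocation" for an 8-byte (64-bit) result:
assert 150 / 8 == 18.75                 # clocks per byte
```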

It would appear that the instruction is actually a blocking operation 
that does not return until the request has been satisfied. Or does it?


Can we then expect it will never result in an out-of-entropy condition?

What happens when 16 cores are put on a chip? Will some future chip 
begin occasionally returning zeroes from RDRAND when an attacker fires 
off 31 simultaneous threads requesting entropy? Or will RDRAND take 300 
clocks to execute?


Note that Skein 512 in pure software costs only about 6.25 clocks per 
byte. Three times faster! If RDRAND were entered in the SHA-3 contest, 
it would rank in the bottom third of the remaining contestants.

http://bench.cr.yp.to/results-sha3.html

So perhaps we should not throw away our software-stirred entropy pools 
just yet and if RDRAND is present it should be used to contribute 128 
bits or so at a time as just one of several sources of entropy. It could 
certainly help to kickstart the software RNG in those critical first 
seconds after cold boot (you know, when the SSH keys are being generated).
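That "one of several sources" policy can be sketched in a few lines. SHA-512 as the mixing function and the source names here are illustrative assumptions, not how any particular kernel pool actually works:

```python
import hashlib
import os

def mix_into_pool(pool: bytes, *sources: bytes) -> bytes:
    """Hash several entropy sources into the pool. With a preimage-resistant
    hash, a weak (or even adversarial) source cannot cancel out the entropy
    already contributed by the other sources."""
    h = hashlib.sha512(pool)
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))  # length-prefix each source
        h.update(s)
    return h.digest()

pool = bytes(64)                   # fresh 512-bit pool at cold boot
rdrand_sample = os.urandom(16)     # stand-in for a 128-bit RDRAND read
timing_sample = os.urandom(8)      # stand-in for interrupt-timing samples
pool = mix_into_pool(pool, rdrand_sample, timing_sample)
assert len(pool) == 64
```

The point of the construction is exactly Marsh's: RDRAND contributes, but never becomes the sole input the output depends on.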


- Marsh


Re: [cryptography] Intel RNG

2012-06-18 Thread dj
>
> What they're actually saying is that they don't think that FIPSing the RNG
> will "materially impact the security of the RNG" -- which if you think
> about it, is pretty faint praise.

But true. The FIPS mode enforces some boundary controls (external config
and debug inputs are disabled), but the thing in control of the config and
debug inputs is also in control of the FIPS lever. So you could argue that
the boundary protections are no stronger than the protections for the rest
of the chip. It does tell you that if it is your chip and you don't let
someone else pull the lid off, scrape off the passivation, and apply a pico
probe to it, it will certainly provide you with good random numbers
regardless of the FIPS mode. What is certain is that 'FIPS mode' was not
an optimal name.

The FIPS mode is all about getting certification which is obviously less
important to us than giving people a realistic understanding of what it
is, or we'd have done the certification first. It doesn't make the numbers
any more or less predictable.

[snip]

>
> * Intel has produced a hardware RNG in the CPU. It's shipping on new Ivy
> Bridge CPUs.
Yup
>
> * Intel would like to convince us that it's worth using. There are many
> reasons for this, not the least of which being that if we're not going to
> use the RNG, people would like that chip real estate for something else.
Yup
>
> * We actually *need* a good hardware RNG in CPUs. I seem to remember a
> kerfuffle about key generation cause by low entropy on system startup in
> the past. This could help mitigate such a problem.
We do. So do we. We think so.

>
> * Intel went to some experts and hired them to do a review. They went to
> arguably the best people in the world to do it, given their expertise in
> hardware in general and differential side-channel cryptanalysis.
You're not wrong.

>
> * Obviously, those experts found problems. Obviously, those problems got
> fixed. Obviously, it's a disappointment that we don't get to hear about
> them.
Actually what they found is in the paper. The design has been through
several iterations before it got to product and external review. Compared
to other functions, the RNG is special and needful of greater care in
design and openness to allow review.

>
> * The resulting report is mutually acceptable to both parties. It behooves
> the reader to peer between the lines and intuit what's not being said. I
> know there are issues that didn't get fixed. I know that there are
> architectural changes that the reviewers suggested that will come into
> later revisions of the hardware. There always are.
There are revisions in the pipeline. Mostly to do with making it go faster
and to deal with ever more stringent power consumption constraints. But
they're not fully baked yet, so I thought we were being quite open with
the reviewers about future plans.

[snip]
>
> * If we want to have a discussion of the SP 800-90+ DRBGs, that's also a
> great discussion. Having implemented myself an AES-CTR DRBG (which is what
> this is), I have some pointed opinions. That discussion is more than
> tangential to this, but it's also a digression, unless we want to attack
> the entire category of AES-CTR DRBGs (which is an interesting discussion
> in itself).
The use of AES was driven much more by the conditioning algorithm, which is
not yet standardized by NIST, and the math is interesting. Once you've
built a hardware AES for conditioning purposes, you might as well use it
for the DRBG part. The SP800-90 AES-CTR DRBG is an odd duck, and whoever
wrote the spec didn't have the hardware designer in mind, but
roll-your-own crypto is not an option for us.





Re: [cryptography] Intel RNG

2012-06-18 Thread Jack Lloyd
On Mon, Jun 18, 2012 at 11:58:56AM -0700, Kyle Hamilton wrote:

> So what can we do to solve it?  Create our own "reputable review" service?
>
> Who would pay for it?  Who could pay for it?  Who *should* pay for it?

At first it seems ironic that buyer-pays is likely the process
best aligned with the buyer's interests, and yet most people are willing
to pay very little, if anything, for additional security, and
organizations typically will pay only for as much as (and exactly in
the form that) their regulator requires. Likely this suggests
something about the true nature of the interests of the market. :)

Maybe there is room for a Consumer Reports/Cooks Illustrated-style
review model, which does have the advantage of spreading the review
costs over many buyers. I don't know of any attempts like this though.

> Given that the penalty for an individual who works at a US Federal
> contractor on a US Federal contract who is found to have skimped on
> quality control is pretty much a 20-year guaranteed felony
> imprisonment, I have more faith in the CMVP than you appear to.

I've never heard about someone trying to talk past, say, an AES
implementation that didn't actually work, or a bad RSA; that's a
pretty bright line. In that sense, FIPS is very good for setting a
minimum bar, ensuring that the system is not completely screwed up.
However, because most attacks can either be ignored or specified
around in the security policy, or dodged by shooting for a lower level of
validation, FIPS 140 is not very good at producing a product which can
protect against a determined attacker [1]. The tests are mostly
affirmative 'does it work', not 'how can I cause this to break'.

> Also, CMVP-validated and NIST-certified modules which implement Suite
> B (ECC and AES) are permitted to protect US national secrets up to
> "Secret" level.  I don't think NSA is going to let that happen with
> modules it lets the military and its contractors use.

I agree this is the case, but I would also guess that NSA has its own
review process for this. For instance, FIPS 140 says nothing about
cache or timing attack countermeasures [2], which are pretty relevant
for AES and ECC and which I would hope (!) the NSA would bake into its
own procurement processes and in what it recommends or requires DoD
contractors to use. (I would be interested if anyone knows the details
of this, though).

-Jack

[1] Note that I am not saying that FIPS-certified products
intrinsically cannot protect against determined attackers, just that
any product which does so likely did it without much help from the
FIPS review process.

[2] OK I just checked, the FIPS 140-3 draft does finally require
countermeasures against DPA and timing attacks for level 3+ hardware
evals. No such requirements for software though.


Re: [cryptography] Intel RNG

2012-06-18 Thread Jack Lloyd
On Mon, Jun 18, 2012 at 01:21:20PM -0700, Jon Callas wrote:

> > I am not in any way suggesting that CRI would hide weaknesses or
> > perform a lame review.
>
> But that is *precisely* what you are saying.
>
> Jon Stewart could parody that argument far better than I can. You're
> not saying that CRI would hide things, you're just saying that
> accepting payment sets the incentives all the wrong way and that all
> companies would put out shoddy work so long as they got paid,
> especially if giving a bad review would make the customer mad.
>
> Come on. If you believe that this report is not worth the bits its
> written on because it was done for-pay, at least say so. If you
> think that they guys who put their names on the paper have
> prostituted their reputations, have the courage to say so.

Of course maintaining reputation (and self-respect, for the people
there) are large incentives for a company like CRI, and that is why I
explicitly stated that "the only reason I would place credence with
it is my impression of the professionalism of the CRI staff", echoing
Matt's comment that "This is CRI, so I'm fairly confident nobody is
cutting corners."

> In other words, only grad students are qualified to make an
> independent review

No, merely that the incentives are somewhat better aligned between
the reviewers and the end user.

> and universities are not tainted by money.

Not sure where I said that.

> I think sharp grad students are among the best reviewers possible. I
> think they do fantastic work. But there isn't a single paper from
> them that I've ever seen that didn't essentially stop abruptly
> because the funding ran out, or time ran out, or they decided to do
> something else like graduate.

I am not familiar with any security review process that has unlimited
time and money.

> *All* reviews are limited by scope that is controlled by
> resources. *All* reviews have a set of perverse incentives around
> them. The perverse incentives of professional review are indeed out
> of phase with the perverse incentives of academics.

Yes. I agree entirely.

> >> Un-reviewed crypto is a bane.
> >
> > Bad crypto with a rubber stamp review is perhaps worse because someone
> > might believe the stamp means something.
>
> So we shouldn't bother to get reviews, because they're just rubber stamps?

Of course not. If the review is a good one, done by someone who knows
what they are doing, it has enormous value. Very likely the reviews
CRI does are of this form.

Thus my phrase 'rubber stamp review', meant to denote something
distinct from a review of depth, conducted with care by people skilled
in the relevant areas.

> To suggest that professionals are inherently corrupt is insulting to
> everyone in this business

The incentives in a pay-for-review model are misaligned with
producing an optimal result for end users. Pointing out this fact
is not a personal attack on anybody.

> To suggest that academia is somehow free of bias shows a blind spot.
> To go further and suggest that only academia has pure motives shows
> how big academia's blind spots are.

Let me repeat: "Those incentives again are not perfect (what is),
especially given how academic publishing works"

-Jack


Re: [cryptography] Intel RNG

2012-06-18 Thread Charles Morris
>There's no [non-trivial] system in the world with zero bugs [for some value of 
>trivial]

:)


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 11:15 AM, Jack Lloyd wrote:

> On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
>> On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
>> 
>>> The fact that something occurs routinely doesn't actually make it a good 
>>> idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
>>> 
>>> This is CRI, so I'm fairly confident nobody is cutting corners. But that 
>>> doesn't mean the practice is a good one. 
>> 
>> I don't understand.
>> 
>> A company makes a cryptographic widget that is inherently hard to
>> test or validate. They hire a respected outside firm to do a
>> review. What's wrong with that? I recommend that everyone do
>> that.
> 
> When the vendor of the product is paying for the review, _especially_
> when the main point of the review is that it be publicly released, the
> incentives are all pointed away from looking too hard at the
> product. The vendor wants a good review to tout, and the reviewer
> wants to get paid (and wants repeat business).

Not precisely.

Reviewers don't want a review published that shows they gave a pass on a crap 
system. Producing a crap product hurts business more than anything in the 
world. Reviews are products. If a professional organization gives a pass on 
something that turned out to be bad, it can (and has) destroyed the 
organization.

The reviewer is actually in a win-win situation: no matter what the result is, 
they win. But ironically, or perhaps perversely, a bad review is better for 
them than a good one; the reviewer gains far more from it.

Any positive review is not only lacking in the titillation that comes from 
slagging something, but you can't prove something is secure. When you give a 
good review, you lay the groundwork for the next people to come along and find 
something you missed -- and I guarantee it, you missed something. There's no 
system in the world with zero bugs.

Of course there are perverse incentives in reviews. That's why when you read 
*any* review, you have to have your brain turned on and see past the marketing 
hype and get to the substance. Ignore the sizzle, look at the steak.

> 
> I have seen cases where a FIPS 140 review found serious issues, and
> when informed the vendor kicked and screamed and threatened to take
> their business elsewhere if the problem did not 'go away'. In the
> cases I am aware of, the vendor was told to suck it and fix their
> product, but I would not be so certain that there haven't been at
> least a few cases where the reviewer decided to let something slide. I
> would also imagine in some of these cases the reviewer lost business
> when the vendor moved to a more compliant (or simply less careful)
> FIPS evaluator for future reviews.

I agree with you completely, but that's somewhere between irrelevant and a 
straw man.

FIPS 140 is exasperating because of the way it is bi-modal in many, many 
things. NIST themselves are cranky about calling it a "validation" as opposed 
to a "certification" because they recognize such problems themselves.

However, this paper is not a FIPS 140 evaluation. Anything one can say, 
positive or negative, about FIPS 140 is at best tangential to this paper. I 
just searched the paper for the string "FIPS"; there are six occurrences. 
One reference discusses how a bum RNG can blow up DSA/ECDSA (FIPS 
186). The other five are in this paragraph:

In additional to the operational modes, the RNG supports a FIPS
mode, which can be enabled and disabled independently of the
operational modes. FIPS mode sets additional restrictions on how
the RNG operates and can be configured, and is intended to
facilitate FIPS-140 certification. In first generation parts, FIPS
mode and the XOR circuit will be disabled. Later parts will have
FIPS mode enabled. CRI does not believe that these differences in
configuration materially impact the security of the RNG. (See
Section 3.2.2 for details.)

So while we can have a bitch-fest about FIPS-140 (and I have, can, do, and will 
bitch about it), it's orthogonal to the discussion.

It appears that you're suggesting the syllogism:

FIPS 140 does not demonstrate security well.
This RNG has a FIPS 140 evaluation.
Therefore, this RNG is not secure.

Or perhaps the less provocative conclusion, "Therefore, this paper does not 
demonstrate the security of the RNG."

What they're actually saying is that they don't think that FIPSing the RNG will 
"materially impact the security of the RNG" -- which if you think about it, is 
pretty faint praise.


> 
> I am not in any way suggesting that CRI would hide weaknesses or
> perform a lame review.

But that is *precisely* what you are saying.

Jon Stewart could parody that argument far better than I can. You're not saying 
that CRI would hide things, you're just saying that accepting payment sets the 
incentives all the wrong way and that all companies would put out shoddy work 
so long as they got paid, especially if giving a bad review would make the 
customer mad.

Re: [cryptography] Intel RNG

2012-06-18 Thread dj
Indeed. We're confident that the DRNG design is sound, but asking the
world to "trust us, it's a sound design" is unreasonable without
letting someone independently review it. So, since this is a
cryptographic design that people need some reason to trust before they
use it, we opened the design to a reputable outside firm and paid for
an independent review.

The reviewers get to publish their review. We don't control the text.
That's part of the deal. We run the risk that we look bad if the review
finds bad stuff.

How else could we look credible? Our goal is to remove the issue of bad
random numbers from PCs that leads to the failure of cryptosystems. Part of
achieving that is giving people a way to understand why the random
numbers are of cryptographic quality and what that means in specific
terms, like brute-force prediction resistance, SP800-90 compliance, and
effective conditioning of seeds.
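As a rough illustration of what "effective conditioning of seeds" means: raw entropy-source samples are biased and correlated, so they are compressed through a cryptographic function into a short, full-entropy seed. Intel's conditioner is AES-CBC-MAC based; the sketch below substitutes SHA-256 purely to stay in the Python standard library, so treat it as the shape of the operation, not the actual circuit.

```python
import hashlib

def condition(raw_samples: bytes, out_len: int = 32) -> bytes:
    """Compress many low-quality entropy bytes into a short seed.

    If raw_samples carries at least out_len * 8 bits of min-entropy,
    the output is (computationally) indistinguishable from a uniform
    out_len-byte string, even though individual input bytes are biased.
    """
    if out_len > hashlib.sha256().digest_size:
        raise ValueError("single-hash sketch caps output at 32 bytes")
    return hashlib.sha256(raw_samples).digest()[:out_len]

# e.g. 4096 heavily biased raw samples in, one 256-bit DRBG seed out:
seed = condition(b"\x00\x01" * 2048)
```

The design choice is that the conditioner's security rests on the entropy estimate of the source, not on any per-byte quality of the raw samples.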



>
> A company makes a cryptographic widget that is inherently hard to test or
> validate. They hire a respected outside firm to do a review. What's wrong
> with that? I recommend that everyone do that. Un-reviewed crypto is a
> bane.
>
> Is it the fact that they released their results that bothers you? Or
> perhaps that there may have been problems that CRI found that got fixed?
>
> These also all sound like good things to me.
>
>   Jon



Re: [cryptography] Intel RNG

2012-06-18 Thread Tim Dierks
On Mon, Jun 18, 2012 at 2:51 PM, Matthew Green wrote:

> I think that Jack said most of what I would. The incentives all point in
> the wrong direction.
>

While this is all true, it's also why manufacturers who want persuasive
analysis of their products hire consulting vendors with a brand and track
record strong enough that the end consumer can plausibly believe that their
reputational risk outweighs the manufacturer's desire for a good report.
Cryptography Research is such a vendor. It's reasonable to take a
manufacturer-funded report with a grain of salt, but when consuming any
information, you also have to worry about issues like incompetence and
less-visible incentives that tilt the comprehension or presentation of
facts.

I think taking this report as presumed correct is good enough for most
users to rely upon, but if I were a high-value user with a comparable
budget, I'd consider further investigation.

 - Tim


Re: [cryptography] Intel RNG

2012-06-18 Thread Matthew Green
I think that Jack said most of what I would. The incentives all point in the 
wrong direction. 

I suspect that Jon is one of a few people who have been (a) in the hiring 
position, and (b) truly cared about the security of the product, rather than 
marketing it as 'secure'. I also think Jon is relatively rare in that respect. 
I've personally turned down jobs where it's obvious that the client is not like 
Jon.

I don't see anyone (competent) pulling their punches and passing through a 
seriously flawed product. But there are other kinds of compromise:

1. Private evaluation report (budgeted to, say, 200 hours) probabilistically 
identifies N serious vulnerabilities. We all know that another 200 hours could 
turn up N more. In fact, the code may be riddled with errors. Original N 
vulnerabilities are patched. What should the public report say? Technically the 
vulnerabilities are all 'fixed'.

2. Client wants to set unusual parameters for the evaluation: e.g., you won't 
get the device, we'll do your data collection for you. Of course you'll note 
this in your report. But what you say is /not/ what people will hear. Are you 
comfortable with this?

3. Client has an odd threat model. It's not something you agree with, but you 
think: could I just explain that this is the model the client has proposed, 
point out the flaws in it and then move forward with an analysis? 

There are probably other examples. I like to think that I've come down on the 
right side of these, but I recognize that these are all pressure points, and 
money /does/ influence where you stand. (I've also seen it from the other side. 
Ugh.)

Matt

On Jun 18, 2012, at 1:20 PM, Jon Callas wrote:

> On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
> 
> > The fact that something occurs routinely doesn't actually make it a good 
> > idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
> > 
> > This is CRI, so I'm fairly confident nobody is cutting corners. But that 
> > doesn't mean the practice is a good one. 
> 
> I don't understand.
> 
> A company makes a cryptographic widget that is inherently hard to test or 
> validate. They hire a respected outside firm to do a review. What's wrong 
> with that? I recommend that everyone do that. Un-reviewed crypto is a bane.
> 
> Is it the fact that they released their results that bothers you? Or perhaps 
> that there may have been problems that CRI found that got fixed?
> 
> These also all sound like good things to me.
> 
>   Jon



Re: [cryptography] Intel RNG

2012-06-18 Thread Jack Lloyd
On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
> On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
> 
> > The fact that something occurs routinely doesn't actually make it a good 
> > idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
> > 
> > This is CRI, so I'm fairly confident nobody is cutting corners. But that 
> > doesn't mean the practice is a good one. 
> 
> I don't understand.
> 
> A company makes a cryptographic widget that is inherently hard to
> test or validate. They hire a respected outside firm to do a
> review. What's wrong with that? I recommend that everyone do
> that.

When the vendor of the product is paying for the review, _especially_
when the main point of the review is that it be publicly released, the
incentives are all pointed away from looking too hard at the
product. The vendor wants a good review to tout, and the reviewer
wants to get paid (and wants repeat business).

I have seen cases where a FIPS 140 review found serious issues, and
when informed the vendor kicked and screamed and threatened to take
their business elsewhere if the problem did not 'go away'. In the
cases I am aware of, the vendor was told to suck it and fix their
product, but I would not be so certain that there haven't been at
least a few cases where the reviewer decided to let something slide. I
would also imagine in some of these cases the reviewer lost business
when the vendor moved to a more compliant (or simply less careful)
FIPS evaluator for future reviews.

I am not in any way suggesting that CRI would hide weaknesses or
perform a lame review. However the incentives of the relationship do
not favor a strong review, and thus the only reason I would place
credence with it is my impression of the professionalism of the CRI
staff. In contrast, consider a review by, say, a team of good grad
students, where the incentive is very strongly to produce a
publishable result and only mildly on making the vendor happy. Those
incentives again are not perfect (what is), especially given how
academic publishing works, but they are somewhat more aligned with the
end user's desire to have a product that is secure.

> Un-reviewed crypto is a bane.

Bad crypto with a rubber stamp review is perhaps worse because someone
might believe the stamp means something.

-Jack


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas


On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:

> The fact that something occurs routinely doesn't actually make it a good 
> idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
> 
> This is CRI, so I'm fairly confident nobody is cutting corners. But that 
> doesn't mean the practice is a good one. 

I don't understand.

A company makes a cryptographic widget that is inherently hard to test or 
validate. They hire a respected outside firm to do a review. What's wrong with 
that? I recommend that everyone do that. Un-reviewed crypto is a bane.

Is it the fact that they released their results that bothers you? Or perhaps 
that there may have been problems that CRI found that got fixed?

These also all sound like good things to me.

Jon





Re: [cryptography] non-decryptable encryption

2012-06-18 Thread jd.cypherpunks
Natanael  wrote:
> One: On the second paper, you assume a prime number as long as the message is 
> secure, and give an example of a message of 500 characters. Assuming ASCII 
> coding and compression, that will be just a few hundred bits. RSA (using 
> primes too) of 1024 bits is now being considered insecure by more and more 
> people. I'm afraid that simple bruteforce could break your scheme quite fast. 
> Also, why not use simple XOR in that case?
> 

Yep - brute force will work here.
By the way, when it comes to 'non-decryptable encryption' I still like the OTP. :)
Read or re-read Steven Bellovin's wonderful piece about Frank Miller, the 
inventor of the one-time pad: 
https://mice.cs.columbia.edu/getTechreport.php?techreportID=1460

I'm not a rude guy, and I don't mean to diminish your achievements, but 
there's some truth in the following sentence: even for someone clever beyond 
description, the odds that a person without much experience in the field can 
revolutionize cryptography are small. I can't remember who said this - or 
something similar - but it's true anyhow. I think about it every time I try 
to 'invent' something within my own fields. :)

--Michael


> 
> On 18 Jun 2012 12:56, "Givonne Cirkin" wrote:
> Hi,
> 
> My name is Givon Zirkind.  I am a computer scientist.  I developed a method 
> of encryption that is not decryptable by method.  
> You can read my paper at: http://bit.ly/Kov1DE
> 
> My colleagues agree with me.  But, I have not been able to get pass peer 
> review and publish this paper.  In my opinion, the refutations are ridiculous 
> and just attacks -- clear misunderstandings of the methods.  They do not 
> explain my methods and say why they do not work.
> 
> I have a 2nd paper:  http://bit.ly/LjrM61  
> This paper also couldn't get published.  This too I was told doesn't follow 
> the norm and is not non-decryptable.  Which I find odd, because it is merely 
> the tweaking of an already known method of using prime numbers.
> 
> I am asking the hacking community for help.  Help me test my methods.  The 
> following message is encrypted using one of my new methods.  Logically, it 
> should not be decryptable by "method".  If you can decrypt it, please let me 
> know you did & how.  
> 
> CipherText:
> 
> 113-5-95-5-65-46-108-108-92-96-54-23-51-163-30-7-34-117-117-30-110-36-12-102-99-30-77-102
> 
> Thanks.
> 
> I have a website about this:  www.givonzirkind.weebly.com
> For information about the Transcendental Encryption Codec click on the "more" 
> tab.
> Also, on Facebook,  https://www.facebook.com/TranscendentalEncryptionCodecTec
> 
> Givon Zirkind
> 
> 
>  


Re: [cryptography] non-decryptable encryption

2012-06-18 Thread Charles Morris
Meanies list:
1) Warren Kumari >:(
2) David Adamson >:(

I actually think his ideas are quite interesting, although I have only
read the abstract of the first paper; the second paper is fairly clear
in its methods (Fermat's little theorem, etc.).

Having the peer pairs agree on a unique per-pair, per-transaction, or
per-message codebook is a very good idea; however, nobody has really
been able to make this work without prior exchange of data. In other
words, if the same method that is used for key agreement is used for
codebook agreement, breaking one breaks the other.

Also, what is the optimal codebook for a given message, or a given
structure of message, when also given weights for a security parameter
and a size parameter?

There's a paper for you. That could actually be used in quantum
systems for codebook transmission.

Also, the idea of using non-algebraic numbers is a good one.
How are your transcendental numbers generated? By proof-by-contradiction?

(Apologies if any question is answered in the rest of the paper I
haven't yet read)

If I were to comment directly to Givon, I would say you might be
having trouble getting published because you use terms like
"non-decryptable encryption" and "not decryptable by method".

I assume there is a language-barrier issue at work here, as
"non-decryptable encryption" is a synonym for "junk data". :)


Re: [cryptography] non-decryptable encryption

2012-06-18 Thread David Adamson
I have invented an even simpler encryption method than yours that is also
non-decryptable.
It uses a special function F(x)=0, where 0 is only one bit.

It works like this:
1. Take plaintext P.
2. Ciphertext=F(P)

No one can decrypt my ciphertext, and it has advantage over your
method since it does not need much space (nor transmission).

But I had also troubles to publish it on CRYPTO 2009, 2010, 2011 and 2012.
I will try this year on the Rump session.

David


On 6/18/12, Warren Kumari  wrote:
> I too have invented an encryption method that is not decrypt able by any
> method.
>
> It works like this:
> 1: Take plaintext.
> 2: XOR with a stream of random data, throw away the random data.
> 3:…
> 4: Profit!
>
> For some reason I cannot get folk to understand the brilliance of my method,
> probably because they are all stupid, closed-minded, smelly jerks who
> couldn't understand a good idea if it fell on them.
> They all kept saying that having encryption that is not decryptable is
> pointless, but rather than listening to what the so-called experts said, I
> decided to simply rage against their opinions and, just for good measure,
> insult them too…
>
> So far it's working great…
>
> W
>
>
>
> On Jun 18, 2012, at 6:56 AM, Givonne Cirkin wrote:
>
>> Hi,
>>
>> My name is Givon Zirkind.  I am a computer scientist.  I developed a
>> method of encryption that is not decryptable by method.
>> You can read my paper at: http://bit.ly/Kov1DE
>>
>> My colleagues agree with me.  But, I have not been able to get pass peer
>> review and publish this paper.  In my opinion, the refutations are
>> ridiculous and just attacks -- clear misunderstandings of the methods.
>> They do not explain my methods and say why they do not work.
>>
>> I have a 2nd paper:  http://bit.ly/LjrM61
>> This paper also couldn't get published.  This too I was told doesn't
>> follow the norm and is not non-decryptable.  Which I find odd, because it
>> is merely the tweaking of an already known method of using prime numbers.
>>
>> I am asking the hacking community for help.  Help me test my methods.  The
>> following message is encrypted using one of my new methods.  Logically, it
>> should not be decryptable by "method".  If you can decrypt it, please let
>> me know you did & how.
>>
>> CipherText:
>>
>> 113-5-95-5-65-46-108-108-92-96-54-23-51-163-30-7-34-117-117-30-110-36-12-102-99-30-77-102
>>
>> Thanks.
>>
>> I have a website about this:  www.givonzirkind.weebly.com
>> For information about the Transcendental Encryption Codec click on the
>> "more" tab.
>> Also, on Facebook,
>> https://www.facebook.com/TranscendentalEncryptionCodecTec
>>
>> Givon Zirkind
>>
>>
>>
>
> --
> "Go on, prove me wrong. Destroy the fabric of the universe. See if I care."
> -- Terry Prachett
>
>


Re: [cryptography] non-decryptable encryption

2012-06-18 Thread Natanael
I'm not a crypto expert, but I have read a bit about it, and have some
comments. I am pretty sure that my comments below are accurate and
relevant. Feel free to correct me if I am wrong.

One: On the second paper, you assume a prime number as long as the message
is secure, and give an example of a message of 500 characters. Assuming
ASCII coding and compression, that will be just a few hundred bits. RSA
(using primes too) of 1024 bits is now being considered insecure by more
and more people. I'm afraid that simple bruteforce could break your scheme
quite fast. Also, why not use simple XOR in that case?

Two: You don't mind keys with repeating numbers. Why?

On the first paper: One: The description reminds me of steganography.

Two: Also, why swap out the symbol set? You won't gain anything by creating
new symbols, converting bits to them and back. It's just the same as adding
a substitution cipher step. It does not prevent statistical analysis.

Three: Essentially, you don't really seem to take predictability of keys
into consideration. If you want key-based encryption, and decryption takes
<1 second on a CPU core, you need somewhere around 2^128 possible keys with
equal probability to prevent bruteforce. DES uses 64-bit keys with only 56
effective bits (2^56 possible keys) and can be bruteforced today. If you
don't allow enough possible keys and don't consider how key generation is
done, I won't trust it.
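To put rough numbers on the keyspace argument (the keys-per-second figure below is an assumed rate for a large cracking effort, not a measured benchmark):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_bruteforce(key_bits: int, keys_per_second: float) -> float:
    """Expected years to find a key by searching half the keyspace."""
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

# At an assumed 1e12 keys/second: a 56-bit (DES-effective) keyspace
# falls in roughly half a day, while 128 bits remains far out of reach.
for bits in (56, 64, 128):
    print(f"{bits:3d} bits: {years_to_bruteforce(bits, 1e12):.3g} years")
```

The exponential gap, not the exact rate, is the point: each added key bit doubles the attacker's work.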

Four: One time key encryption can NOT be broken by guessing if the key is
random enough. It's a fundamental property of XOR. ANYTHING can be the
message, because there are no invalid decryptions. Guess for another
message and you get another key. Guess another key and you get another
message. While AES encryption generally ONLY give you jibberish until you
guess the right key, with XOR you can have thousands and millions of keys
that all give valid messages from the same ciphertext.
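This property is easy to demonstrate concretely: for any XOR/one-time-pad ciphertext, every same-length candidate plaintext corresponds to some key, so the ciphertext alone rules nothing out. A minimal sketch:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"ATTACK AT DAWN"          # the real plaintext (14 bytes)
key = os.urandom(len(msg))       # one-time pad: truly random, used once
ct = xor_bytes(msg, key)

# An attacker guessing any other 14-byte message can find a key that
# "decrypts" the ciphertext to it -- so no guess can ever be confirmed:
decoy = b"HOLD POSITIONS"
decoy_key = xor_bytes(ct, decoy)
assert xor_bytes(ct, decoy_key) == decoy  # decrypts to the decoy
assert xor_bytes(ct, key) == msg          # and to the real message
```

This is exactly the information-theoretic security of the OTP: the ciphertext is consistent with every possible plaintext of the same length.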

Five: You claim encryption is usually not done on the binary
representation, but on the letters. But AES certainly does not care about
the meaning of the bits; it just encrypts the bits it's given.

Six: It looks like you are misinterpreting Benford's law. It is not
intended to apply to cryptographically random numbers, but to
"descriptive" numbers (for lack of a better word): numbers of cars,
amounts of money, heights of flowers, etc. Lower leading digits are
more common, with a certain probability.

Seven: How do you sort out garbage data on decryption?
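For reference, Benford's law says the leading digit d of such "descriptive" quantities appears with probability log10(1 + 1/d); uniform cryptographic output is exactly what it should not be applied to. A quick sketch:

```python
import math

def benford(d: int) -> float:
    """P(leading digit = d) under Benford's law, for d in 1..9."""
    if not 1 <= d <= 9:
        raise ValueError("leading digit must be 1..9")
    return math.log10(1 + 1 / d)

# Digit 1 leads about 30.1% of the time and digit 9 only about 4.6%,
# versus a flat ~11.1% per digit for uniformly random leading digits.
probs = [benford(d) for d in range(1, 10)]
assert abs(sum(probs) - 1.0) < 1e-12  # the nine probabilities sum to 1
```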

What I would do instead of using your method is to compress the data, add a
whole lot of random padding and use normal AES encryption.

Just so you know, the NSA and its like don't mind guessing and number
crunching until they find patterns if they want to decrypt your messages.

You seem to forget that cryptanalysis looks for ALL kinds of patterns, not
just pure math and such.

And about the example ciphertext you provided: that's not how cryptanalysts
usually work. They usually work with hard drives full of data, listen to
radio transmissions, etc. Your message is probably too short for analysis,
while a paragraph of such text could be breakable.

- Sent from my tablet
On 18 Jun 2012 12:56, "Givonne Cirkin" wrote:

> Hi,
>
> My name is Givon Zirkind.  I am a computer scientist.  I developed a
> method of encryption that is not decryptable by method.
> You can read my paper at: http://bit.ly/Kov1DE
>
> My colleagues agree with me.  But, I have not been able to get pass peer
> review and publish this paper.  In my opinion, the refutations are
> ridiculous and just attacks -- clear misunderstandings of the methods.
> They do not explain my methods and say why they do not work.
>
> I have a 2nd paper:  http://bit.ly/LjrM61
> This paper also couldn't get published.  This too I was told doesn't
> follow the norm and is not non-decryptable.  Which I find odd, because it
> is merely the tweaking of an already known method of using prime numbers.
>
> I am asking the hacking community for help.  Help me test my methods.  The
> following message is encrypted using one of my new methods.  Logically, it
> should not be decryptable by "method".  If you can decrypt it, please let
> me know you did & how.
>
> CipherText:
>
>
> 113-5-95-5-65-46-108-108-92-96-54-23-51-163-30-7-34-117-117-30-110-36-12-102-99-30-77-102
>
> Thanks.
>
> I have a website about this:  www.givonzirkind.weebly.com
> For information about the Transcendental Encryption Codec click on the
> "more" tab.
> Also, on Facebook,
> https://www.facebook.com/TranscendentalEncryptionCodecTec
>
>  Givon Zirkind
>
>
>
> --
> You @ 37.com - The world's easiest free Email address !
>
> ___
> cryptography mailing list
> cryptography@randombit.net
> http://lists.randombit.net/mailman/listinfo/cryptography
>
>

Re: [cryptography] Intel RNG

2012-06-18 Thread Matthew Green
The fact that something occurs routinely doesn't actually make it a good idea. 
I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 

This is CRI, so I'm fairly confident nobody is cutting corners. But that 
doesn't mean the practice is a good one. 

On Jun 18, 2012, at 5:52 AM, Paweł Krawczyk  wrote:

> Well, who otherwise should pay for that? Consumer Federation of America?
> It's quite normal practice for a vendor to contract a 3rd party that
> performs a security assessment or penetration test. If you are a smartcard
> vendor it's also you who pays for Common Criteria certification of your
> product.
> 
> -Original Message-
> From: cryptography-boun...@randombit.net
> [mailto:cryptography-boun...@randombit.net] On Behalf Of Francois Grieu
> Sent: Monday, June 18, 2012 11:04 AM
> To: cryptography@randombit.net
> Subject: Re: [cryptography] Intel RNG
> 
> d...@deadhat.com wrote:
> 
>> CRI has published an independent review of the RNG behind the RdRand
>> instruction:
>> http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf
> 
> where *independent* is to be taken as per this quote:
>   "This report was prepared by Cryptography Research, Inc. (CRI)
>under contract to Intel Corporation"
> 
>  Francois Grieu
> 


Re: [cryptography] Intel RNG

2012-06-18 Thread Paweł Krawczyk
Well, who otherwise should pay for that? Consumer Federation of America?
It's quite normal practice for a vendor to contract a 3rd party that
performs a security assessment or penetration test. If you are a smartcard
vendor it's also you who pays for Common Criteria certification of your
product.

-Original Message-
From: cryptography-boun...@randombit.net
[mailto:cryptography-boun...@randombit.net] On Behalf Of Francois Grieu
Sent: Monday, June 18, 2012 11:04 AM
To: cryptography@randombit.net
Subject: Re: [cryptography] Intel RNG

d...@deadhat.com wrote:

> CRI has published an independent review of the RNG behind the RdRand
> instruction:
> http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

where *independent* is to be taken as per this quote:
   "This report was prepared by Cryptography Research, Inc. (CRI)
under contract to Intel Corporation"

  Francois Grieu



Re: [cryptography] Intel RNG

2012-06-18 Thread Francois Grieu
d...@deadhat.com wrote:

> CRI has published an independent review of the RNG behind the RdRand
> instruction:
> http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

where *independent* is to be taken as per this quote:
   "This report was prepared by Cryptography Research, Inc. (CRI)
under contract to Intel Corporation"

  Francois Grieu
