Re: Intel Security processor + a question

2002-10-22 Thread Major Variola (ret)
At 05:13 PM 10/21/02 -0400, Tyler Durden wrote:
>
>So I guess the follow-on question is: Even if you can look at the code
>of a RNG...how easy is it to determine if its output is "usefully
>random", or are there certain "Diffie-approved" RNGs that should always
>be there, and if not something's up?

Start with something analog, where no one knows the initial state
perfectly, and the dynamics are dispersive (chaotic).  Digitize it.
You can use ping pong balls if you like.

1. Measure its entropy (e.g., see Shannon).  XOR values together
(XOR doesn't generate variation, but it preserves it).
Go to 1 until you find that your measurements have asymptoted.
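
In Python, a minimal sketch of that loop (read_adc() below is a
hypothetical stand-in for whatever digitizes your analog source):

import math
from collections import Counter

def entropy_bits_per_byte(samples: bytes) -> float:
    # Empirical Shannon entropy of the byte distribution, in bits/byte.
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def xor_fold(samples: bytes, k: int) -> bytes:
    # XOR k successive raw bytes into one distilled byte; XOR preserves
    # the variation that is there but never creates any.
    out = bytearray()
    for i in range(0, len(samples) - k + 1, k):
        b = 0
        for s in samples[i:i + k]:
            b ^= s
        out.append(b)
    return bytes(out)

# raw = read_adc(100_000)        # hypothetical digitized analog source
# k = 1
# while entropy_bits_per_byte(xor_fold(raw, k)) < 7.9:  # crude asymptote test
#     k += 1                     # fold harder and re-measure ("go to 1")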

You should then hash ('whiten') your distilled 1 bit/baud values,
to make it hard to go backwards through the deterministic, iterative
"distilling" in the above recipe.

In practice, you may feed a hashing digest function directly with your
raw measurements and rely on the digest's compression of bits in:out to
assure 1 bit/baud (even apart from the hash's whitening effect).
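
In code that shortcut is just a digest over a generous pile of raw
samples; a sketch, again with a hypothetical read_adc() and an assumed
(measured, not guessed) half bit of entropy per raw byte:

import hashlib

# At ~0.5 bits of entropy per raw byte, 4096 raw bytes carry ~2048 bits:
# comfortable margin for SHA-256's 256 output bits.
# raw = read_adc(4096)                    # hypothetical sampler
# block = hashlib.sha256(raw).digest()    # 32 'whitened' output bytes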

However, the output of such a hash function will be noise-like even
with very low-entropy input, e.g., successive integers.  Ergo, measuring
entropy after hashing is pointless.
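
A quick demonstration, in Python: hash a bare counter (essentially zero
entropy per block) and the output still sails through a naive
bit-frequency check.

import hashlib

ones = 0
for i in range(10_000):
    digest = hashlib.sha256(i.to_bytes(8, "big")).digest()  # low-entropy input
    ones += bin(int.from_bytes(digest, "big")).count("1")
print(ones / (10_000 * 256))   # ~0.5: noise-like, despite a predictable input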

Discuss the results with your troopleader, and you will receive your
crypto merit badge in 4-6 weeks.




Re: Intel Security processor + a question

2002-10-21 Thread James A. Donald
--
On 21 Oct 2002 at 10:21, Major Variola (ret) wrote:
>  But no such "does it look random" test can tell good
> PRNG from TRNG. You must peek under the hood.

More generally, one can never know something is random merely
by looking at it, but only by knowing why it is random.  One
must have both theory and experiment. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 govZnfsYPhr1BzrbpYoLQVdfLp/FkKmHG9KFTFkI
 4NCRqBJWFhDElvlvzDTaZGuTWNTAoXMMadfUryifo




Re: Intel Security processor + a question

2002-10-21 Thread Tyler Durden
Major Variola wrote...

"Bit-bias is trivial to correct (see Shannon).  Take a look at Prof.
Marsaglia's "Diehard" suite of statistical-structural tests for a real 
obstacle course.  But no such "does it look random" test can tell good PRNG 
from TRNG. You must peek under the hood."

Indeed, as far as I understand it, no digital algorithm gives you "true" 
randomness..."randomness" in the digital world seems to be a fairly 
relative term, meaning something like "is the output of your RNG coupled in 
any meaningful way to the solution space of your application?" If the answer 
is yes, your RNG is no good. I remember a great discussion in "Numerical 
Recipes in C" of very subtle coupling between an RNG and the application. 
One wonders, then, whether even fairly sophisticated cryptography folks will 
have the necessary expertise to spot such coupling (for instance, certain 
RNGs may slightly emphasize the probability of certain subsets of 
primes...a cracker might write code that preferentially attacks those 
primes and thereby greatly decrease cracking time).
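
One classic instance of that kind of coupling (the textbook example of 
the era, though not necessarily the one "Numerical Recipes" had in mind) 
is IBM's old RANDU generator: every output is an exact linear combination 
of the two before it, so any application that consumes triples of outputs 
inherits the hidden structure. A sketch:

# RANDU: x[n+1] = 65539 * x[n] mod 2^31.  Because 65539 = 2^16 + 3,
# x[n+2] - 6*x[n+1] + 9*x[n] == 0 (mod 2^31) for every n, i.e. all
# consecutive triples fall on a small family of planes.
M = 2 ** 31
x, seq = 1, []
for _ in range(1000):
    x = (65539 * x) % M
    seq.append(x)
for a, b, c in zip(seq, seq[1:], seq[2:]):
    assert (c - 6 * b + 9 * a) % M == 0   # the relation never fails
print("every RANDU triple satisfies the same linear relation")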

So I guess the follow-on question is: Even if you can look at the code of a 
RNG...how easy is it to determine if its output is "usefully random", or are 
there certain "Diffie-approved" RNGs that should always be there, and if not 
something's up?







Re: Intel Security processor + a question

2002-10-21 Thread Major Variola (ret)
At 07:40 PM 10/18/02 -0400, Tyler Durden wrote:
>Well, I disagree about pseudo random number generation, sort of.
>First, if I have a PSR sequence of the known variety (i.e., ANSI or ITU),
>and if it's mapped to some telecom standard (DS-1/3, OC-3/12/48/192),
>then my test set can and should be able to lock onto that sequence.
>This is true whether that telecom signal is raw PRBS, or if it has been
>mapped into the payload (you use different test sets).

1. Shift reg sequences are cryptographically weak.

2. Re-synch'ing with a PR stream is useful for some apps, true.

3. In crypto, we consider the adversary who claims to have a true RNG
but instead is faking us out with an opaque PRNG.  If we are not privy
to the PRNG algorithm (or key), then we can't tell if it's truly random
or not.

>With encrypted info who knows? I would think that testing if there's
>monkey business might boil down to algorithms--i.e., if certain bit
>patterns happen too often, then something's wrong...

Bit-bias is trivial to correct (see Shannon).  Take a look at Prof.
Marsaglia's "Diehard" suite of statistical-structural tests for a real
obstacle course.  But no such "does it look random" test can tell good
PRNG from TRNG.  You must peek under the hood.
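
For the bit-bias point, one standard corrector (von Neumann's trick;
not necessarily what the "(see Shannon)" aside refers to) looks at
non-overlapping pairs of bits, emits 0 for 01 and 1 for 10, and discards
00 and 11; given independent (even heavily biased) input bits the output
is exactly unbiased.  A sketch:

def von_neumann(bits):
    # bits: iterable of 0/1, independent but possibly biased.
    out = []
    it = iter(bits)
    for a, b in zip(it, it):       # non-overlapping pairs
        if a != b:
            out.append(a)          # 01 -> 0, 10 -> 1; 00 and 11 are dropped
    return out

# von_neumann([1, 1, 1, 0, 0, 1, 1, 1]) == [1, 0]

Note this corrects bias only; Diehard-style batteries probe deeper
structure, and even they cannot tell a good PRNG from a TRNG.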




Re: Intel Security processor + a question

2002-10-20 Thread Bill Stewart
[There's been some discussion of whether you can trust hardware crypto.]

At 11:54 AM 10/18/2002 -0400, Tyler Durden wrote:

>OK...a follow-up question (actually, really the same question in a
>different form).
>Let's say I had a crypto chip or other encryption engine, the code of
>which I could not see. Now what if someone had monkeyed with it so that
>(let's say) the pool of prime numbers it drew from was actually a subset
>of the real pool that should be available for encryption. Let's also say
>that "somebody" knows this, and can search byte streams for known strings
>of products of these primes. They can then break this cypherstream very
>easily.

Sure.  As long as you can't evaluate the process that's being used
to generate your crypto material, you can't trust it.
If it's broken up into separate phases where you can get at the interfaces,
sometimes you can tell, but even then sometimes you can't.

For instance, if there's a hardware module that does randomness,
and another that does (random input -> pair of primes),
you may be able to try your own sets of random inputs and
decide that the output is good, but if the module is built so that
when the random number decrypted by DES key 0xDeadBeef has low bits ,
it generates primes from a short list, you probably won't notice,
and you probably won't detect that the random number generator's
output is less random.
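
A toy sketch of that scenario (hypothetical names, a toy trigger in
place of the DES-keyed test, unrealistically small primes): the poisoned
path fires so rarely on honest random inputs that black-box sampling
never sees it.

import secrets

SHORT_LIST = [1000003, 1000033, 1000037, 1000039]  # primes the attacker knows

def trigger(seed: bytes) -> bool:
    # Stand-in for "decrypts under key 0xDeadBeef to special low bits":
    # a condition only the attacker can recognize or induce.
    return seed[:2] == b"\x00\x00"

def honest_prime_from(seed: bytes) -> int:
    ...  # the legitimate prime-generation path (elided)

def gen_prime(seed: bytes) -> int:
    if trigger(seed):                                 # backdoored path
        return SHORT_LIST[seed[2] % len(SHORT_LIST)]
    return honest_prime_from(seed)

# Black-box test with your own random seeds:
#   for _ in range(1000):
#       p = gen_prime(secrets.token_bytes(16))
# The trigger fires on roughly 1 in 65536 random seeds, so the test never
# exercises the short list and the observed outputs look perfectly normal.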


>Meanwhile, someone who doesn't know that the code's been tampered with
>can try to break the cypherstream using traditional brute force methods,
>and it will appear that this is a truly hard-encrypted message.

Yup.


>AND if this is possible, is there some way to examine the encrypted
>output and then, say, search for unusual frequency traces of certain
>sequences, and determine that the code has been tampered with?

Not if it's done half-credibly.
Otherwise, that would mean that looking at the
cyphertext would tell you about the key or plaintext,
which means the crypto algorithm is easily broken.

There are exceptions - seeing the same cyphertext really often
means that the bad guy was doing a bad job of making
fake random numbers.  To some extent, it's a tradeoff in how far the
bad guy tries to reduce his search space -
if he's willing to try a million primes rather than a dozen,
the output looks a lot better.
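
Back-of-the-envelope numbers for that tradeoff, as a sketch (RSA-style
moduli built from pairs of primes on the bad guy's list):

from math import comb, log2

for n in (12, 1_000_000):
    pairs = comb(n, 2)   # distinct backdoored moduli to precompute
    print(f"{n} primes -> {pairs} moduli (~{log2(pairs):.0f} bits of search)")
# 12 primes        ->  66 moduli   (~6 bits): repeats show up almost at once
# 1,000,000 primes -> ~5e11 moduli (~39 bits): still easy to attack offline,
#                     but the output no longer visibly repeats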




Intel Security processor + a question

2002-10-17 Thread Tyler Durden
Intel is moving Security onto its Network processor chips...a quote also 
follows.

http://www.lightreading.com/document.asp?site=lightreading&doc_id=22749



(Begin quote)
For now, Intel is tackling very high- and low-end systems. The IXP2850 is 
derived from the IXP2800, which targets 10-Gbit/s line speeds. And back in 
February, Intel released the IXP425, a network processor with encryption 
hardware included, targeting low-end boxes such as enterprise routers (see 
Intel: The Prince of Processors? ).

For both chips, Intel developed its own hardware to handle the DES, triple 
DES, AES, and SHA-1 encryption standards. In the case of the IXP2850, Intel 
had left room in the IXP 2800 to add these hardware blocks, because 
potential customers had shown enough interest in security. "We thought 
about adding crypto [to the IXP2800] as we were building it from the 
ground up," says Rajneesh Gaur, Intel senior product marketing manager.
(End quote)


Got a question for the cognoscenti amongst us...
If crypto is performed by hardware, how sure can users/designers be that it 
is truly secure (since one can't examine the code)? Is there any way to 
determine whether standard forms of encryption have been monkeyed with in 
some way (i.e., to make those with certain backdoor keys have access at will, 
and yet still conform to the standard as far as users can see)?
And, are hardware-based encryption implementations considered suspect from 
the start by the more "careful" parts of the crypto community?





Re: Intel Security processor + a question

2002-10-17 Thread Eugen Leitl
On Thu, 17 Oct 2002, Tyler Durden wrote:

> If crypto is performed by hardware, how sure can users/designers be that it 
> is truly secure (since one can't examine the code)? 

Deterministic algorithms with known internal state, fed with the same test
vectors, generate exactly the same output as their software counterparts.
This is easy enough to test.

However, short of etching away the packaging and tracing the circuitry,
it's impossible to prove the absence of easter eggs which, say, make you
spill the key if exposed to a magic frame.
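
A sketch of that test for the chip's SHA-1 block, in Python, with
hashlib as the software reference and a hypothetical hw_sha1() wrapper
around the hardware interface:

import hashlib
import secrets

def matches_reference(hw_sha1, n_vectors: int = 1000) -> bool:
    # Feed the same random test vectors to hardware and software; any
    # mismatch means the chip is not computing standard SHA-1.
    for _ in range(n_vectors):
        msg = secrets.token_bytes(secrets.randbelow(256))
        if hw_sha1(msg) != hashlib.sha1(msg).digest():
            return False
    return True

# Agreement proves correctness on these vectors only -- it cannot rule
# out an easter egg that wakes up on one magic input.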




Re: Intel Security processor + a question

2002-10-17 Thread Mike Rosing
On Thu, 17 Oct 2002, Tyler Durden wrote:

> If crypto is performed by hardware, how sure can users/designers be that it
> is truly secure (since one can't examine the code)? Is there any way to
> determine whether standard forms of encryption have been monkeyed with in
> some way (i.e., to make those with certain backdoor keys have access at will,
> and yet still conform to the standard as far as users can see)?
> And, are hardware-based encryption implementations considered suspect from
> the start by the more "careful" parts of the crypto community?
>

As long as it puts out the correct data for any set of input keys you can
verify it easily.  A logic analyzer can verify there are no additional
data blocks being shipped around the system.

The only thing you can't really check is a physical back door where
someone with a special connector can tap the chip and dump some internal
memory (like a key-holding block).  Since that's not economically viable
(there are easier ways to steal keys) it's not worth worrying about.

So "trust but verify" is still a good idea.

Patience, persistence, truth,
Dr. mike




Re: Intel Security processor + a question

2002-10-17 Thread Major Variola (ret)
> If crypto is performed by hardware, how sure can users/designers be
> that it is truly secure (since one can't examine the code)?

I'm currently microprogramming the 2800, and have worked on a crypto
ASIC in Verilog.
Some comments, food for thought:

You *can* examine the code if the manufacturer understands the needs of
security folks.  (This ranges from inspectable design stages
(specs -> RTL -> GDSII) to taking your raw RNG output to a pin for
diagnostics, which Intel forgot to do when they made an RNG.)
Inspectable design != free source code.

Buy your chips anonymously from Frys, and test some, sampling randomly.

Testing may mean taking them to ChipWorks and getting them stripped,
reverse engineered up to the functional level.  If you have access to the
RTL, you can run comparisons.  Look for those extra gates the layout guy
secretly added in return for citizenship for his family.

You might also use "blank" reconfigurable logic devices (Xilinx, Altera)
and program them.  However, the programming is more mutable in the field
than silicon gates.  But you don't pay a million bucks for a mask set...

Beware JTAG, BTW, in Crypto.

> Is there any way to
> determine whether standard forms of encryption have been monkeyed with
> in some way (i.e., to make those with certain backdoor keys have access
> at will, and yet still conform to the standard as far as users can see)?
> And, are hardware-based encryption implementations considered suspect
> from the start by the more "careful" parts of the crypto community?

Reverse engineering is the operational solution.  So is having vetted
design/fab folks, e.g., NSA.  And marines with dogs guarding various
things.

Hardware is fine with crypto folks.  There is a tradeoff of mutability
vs. observability compared to software.  Hardware is fine for embedded
devices, and a lot of pro mil security is in embedded devices.
TV decoder PODs too; smartcards I suppose for euros.

Hardware crypto is just another point in design space; it can save power
because it's more efficient than a general CPU, but it costs board space
& complexity.  It can achieve higher throughput because it's hardware,
if you need it.

One can also argue that if you have to worry about cryptochips getting
swapped in their sockets at night, you have many other (easier for the
adversary) concerns already --- Got Scarfo?  I.e., checked your cables
recently?

However, as some very skilled reverse engineers have shown, you have to
be careful about who physically has the hardware.  The *bank's* secrets
in *hacker* wallets are bad.  *Your* key in *your* wallet is good.
Do not lend Paul Kocher your hotel keycard :-)

In summary, there is no perfect solution (hey, Synopsys could recognize
an S-box and add something special, like Thompson's evil Turing Award
compiler...), but there is a niche for hardware.  Since there is more
space on the die, and since you compete on extra features, expect to see
crypto support pop up in low- and high-end chips with network
applications.  Crypto functionality (or Ethernet interfaces, for that
matter) is to commodity CPU makers what cup-holders and seat-warmers are
to car makers.

Why bother with hardware subversion when there is CALEA?  Anyone (telco,
VoIP) buying the systems (Cisco) made with crypto chips (Intel) will
have to buy systems programmed to tap.  If you can buy the heads of
cointel at FBI and CIA for under $2e6 each, do you really need to
black bag a fab?