Cryptography-Digest Digest #992, Volume #8       Thu, 28 Jan 99 20:13:03 EST

Contents:
  Re: hardRandNumbGen (R. Knauer)
  Re: Some more technical info on Pentium III serial number (Gareth Williams)
  Re: hardRandNumbGen (R. Knauer)
  Re: hardRandNumbGen (R. Knauer)
  Re: hardRandNumbGen ("Trevor Jackson, III")
  Strong cryptography: the many ways we can trust a key ([EMAIL PROTECTED])
  Re: Foiling 56-bit export limitations: example with 70-bit DES ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: hardRandNumbGen
Date: Thu, 28 Jan 1999 23:37:14 GMT
Reply-To: [EMAIL PROTECTED]

On 28 Jan 1999 11:23:20 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>In terms of finite length numbers, a finite number is not statistically
>random if I can predict it (or something about it).

>The key word there is "predict."  Post hoc analyses don't mean much.
>A priori analyses, however, are a good way of showing that, in practice,
>a prediction can be made -- by the simple technique of making and
>validating a prediction.

[snip]

Your comments are directed at showing reasons for suspecting that a
generator is not random. What do you do if your statistical tests
indicate that it IS random?

You throw out the suspect bad generators and argue that such a
practice is safe. But what is your argument for the generators that
you do not throw out?

I can demonstrate experimentally that a program halts. That's easy.
What I cannot demonstrate is that a program does not halt. No matter
how long I let it run, there is always the chance that it will halt if
I wait just a little longer. If I stop short of that point in time,
and conclude that the program does not halt, and yet it would have
halted if I had waited just a little longer, then I am in error.

The same is true for your statistical tests. If you had tested a
slightly longer number, you might have found that the generator was
bad. By using numbers of finite length, you are merely guessing that
they are random when they pass your statistical tests.

If you could determine that a finite number is truly random by using
algorithmic tests, then you could solve the Godel incompleteness
problem, the Turing halting problem and the Chaitin complexity problem
algorithmically too.

As one poster said earlier, many PRNGs pass statistical tests
wonderfully. And cryptanalysts have a real festival when someone uses
one of them.
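
As a rough illustration (a toy sketch of my own, not anyone's posted
code): Python's default Mersenne Twister generator breezes through a
simple frequency test, yet its whole internal state, and thus every
future output, can be recovered from 624 consecutive 32-bit outputs.

import random

def monobit_ok(bits, z_limit=3.0):
    # Crude frequency test: is the count of 1s within z_limit standard
    # deviations of what a fair coin would produce?
    n, ones = len(bits), sum(bits)
    return abs(ones - n / 2) / (n / 4) ** 0.5 < z_limit

rng = random.Random()                   # Mersenne Twister: not crypto-grade
bits = [rng.getrandbits(1) for _ in range(100000)]
print("passes monobit test:", monobit_ok(bits))   # True, with overwhelming odds
# Yet 624 consecutive 32-bit outputs determine the generator's entire
# internal state, after which every future bit is predictable -- exactly
# the sort of festival a cryptanalyst enjoys.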

Bob Knauer

"No Freeman shall ever be debarred the use of arms. The strongest
reason for the people to retain the right to keep and bear arms is,
as a last resort, to protect themselves against tyranny in government."
--Thomas Jefferson


------------------------------

From: Gareth Williams <[EMAIL PROTECTED]>
Subject: Re: Some more technical info on Pentium III serial number
Date: Thu, 28 Jan 1999 23:42:15 +0000



John Savard wrote:
> 
> [EMAIL PROTECTED] (Paul Rubin) wrote, in part:
> 
> >EE Times had an article giving a little bit more info on how
> >the PIII unique id's work.  Not a full description, but more
> >than I've seen here on the newsgroup.  I'm repeating the URL
> >from someone else's Usenet article (sorry, I don't remember whose):
> 
> >   http://www.eet.com/story/OEG19990127S0011
> 
> >I might post a comment or two later.
> 
> ...
> 
> 2) It is nice that Intel has devised a scheme whereby a web server,
> desiring to see people's Pentium III ID numbers, will get an ID signed
> by Intel which will then be used to request only a keyed hash of the
> ID number from your chip.
> 
> Will people be able to see the ID numbers of their own chips, to
> ensure they aren't buying stolen property?
> 
> What guarantee is there that browser makers will abide by this scheme,
> and not also give out unfiltered Pentium III IDs - or, for that
> matter, any other information, such as the ID numbers of plug-and-play
> PCI cards, someone having claimed they have serial numbers (yes, there
> are ID numbers that identify the make and model, but I fail to see the
> relevance of a _serial_ number to plug-and-play...)?
> 
> 3) If the instruction to return ID number is publicly known, then a
> browser maker can choose, or not choose, to use the Intel scheme for
> providing only a hash of the ID number.
> 
> At first, this makes the relevance of "tamper-resistant software" seem
> limited. If there is a hash scheme involved, then the original ID
> number is not recoverable, from any correct implementation of the hash
> scheme. Normally, web sites can't rewrite the browser on my computer.
> 
> But then I see what is going on.
> 
> The software is tamper resistant to prevent ID number spoofing.
> 
> The ID number on the chip is just a serial number, with no signature
> by Intel, just as Bruce said.
> 
> But a web site _can_ get a guarantee that the ID number is not spoofed
> - _if_ they accept getting only a personal hash of the ID number, to
> protect user privacy.
> 
> Because the software is protected against tampering _by the user_. So
> it will only take your real Pentium III ID number, hash it with the
> Intel-signed web site identifier of the web site you're visiting, and
> give them a personal identifier that can't be tracked but can't be
> forged.
> 

I am still not quite clear about this. How does the web site know it has
not been sent a random number? It has to recover its own ID that it sent
as verification.

So it must be a public/private key system. My guess: the 'tamperproof
agent' hashes the Pentium III ID with the server ID, appends this to
the server ID, then signs the whole message with a private key. The
public key is known to all the servers, so the server decodes the
message and gets back its own ID (verification) and the hash of the
Pentium ID with its own ID. It cannot break the hash, so it never
learns the underlying Pentium ID. But it gets the same ID each time.

Or is there a better way?
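
A rough sketch of that guess in Python (toy textbook-RSA numbers and
made-up IDs purely for illustration; this is my reading of the scheme
above, not Intel's actual protocol):

import hashlib

# Toy RSA key pair (hypothetical; far too small for real use): n = 61*53.
N, E, D = 3233, 17, 2753

def toy_sign(x):
    return pow(x % N, D, N)

def toy_verify(sig, x):
    return pow(sig, E, N) == x % N

def agent_response(cpu_id, server_id):
    # What the 'tamperproof agent' might return: keyed hash of the CPU ID
    # with the server ID, the server ID appended, the whole thing signed.
    h = hashlib.sha256(cpu_id + server_id).digest()
    msg = h + server_id
    return msg, toy_sign(int.from_bytes(hashlib.sha256(msg).digest(), "big"))

def server_check(msg, sig, server_id):
    # Server verifies the signature and recovers its own ID, plus a stable
    # per-site pseudonym for the CPU -- never the raw Pentium III ID.
    x = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    if not toy_verify(sig, x) or not msg.endswith(server_id):
        return None
    return msg[:32]

msg, sig = agent_response(b"PIII-serial-0001", b"www.example.com")
print(server_check(msg, sig, b"www.example.com") is not None)   # True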

comments:

1) The whole scheme only works with the cooperation of browser
   software, which users will be able to turn off (like cookies), so I
   do not see the huge privacy problem. Web sites currently ask us to
   (re)identify ourselves with passwords that we have to remember.
   This is easier.

2) For network identification purposes, the whole thing could be done
   in software. It is just a public-key ID protocol. (Theft deterrence
   is a different matter.)

3) Since web site IDs are sent publicly (if I have interpreted the
   scheme correctly), you can spoof a web site (though not a Pentium
   III).

4) The 'tamperproof agent' will be subject not only to hacking and
   decompilation, but also to known-plaintext attack on the one
   universal private key.

5) The actual Pentium ID is not very safe, and it must be possible to
   run the agent in an emulator (as someone else pointed out) and
   spoof it.

If this is right, the scheme might have some uses but does not seem very
secure to me. Ok, maybe they planned something cleverer. We need to know
more!

-- 
Gareth Williams <[EMAIL PROTECTED]>

** DGW Software Consultants LTD *******************
*  Montrose, Ledbury Rd, Ross-on-Wye, HR9 7BE, UK *
*  Tel/Fax 01989 563704                           *
***************************************************

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: hardRandNumbGen
Date: Thu, 28 Jan 1999 23:40:37 GMT
Reply-To: [EMAIL PROTECTED]

On 28 Jan 1999 11:25:33 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>as long
>as the plaintext is finite *AND BOUNDED*, if you can get key material
>to exceed that bound, you can get perfect secrecy.

>But few of us have bounded secrets.

You are being uncharacteristically obscure.

Please elaborate on the concepts of "bounded", "unbounded" and how
they apply to a "bounded secret". And just how is a plaintext
"bounded", given that it is finite in length?

Bob Knauer

"No Freeman shall ever be debarred the use of arms. The strongest
reason for the people to retain the right to keep and bear arms is,
as a last resort, to protect themselves against tyranny in government."
--Thomas Jefferson


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: hardRandNumbGen
Date: Thu, 28 Jan 1999 23:47:03 GMT
Reply-To: [EMAIL PROTECTED]

On 28 Jan 1999 11:28:41 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>Producing that much fiat money would produce such inflation
>as to drive the buying power to zero.  It doesn't matter how many
>dollars you printed, you'd still hit the limit of what a large
>pile of dollars, each valued at ten-billionths of an Italian lira, would
>be worth.

>Or to put it another way, if you create a quadrillion dollars to enable
>yourself to buy that equipment, the cost just went up to a sextillion
>dollars. 8-)

I once read an interesting book on monetary theory by a mathematician,
and he showed that very effect analytically. There were two relevant
things he was able to show graphically:

1) Price inflation took a while to develop once monetary inflation was
introduced. In his example, it took several years.

2) Once price inflation had saturated for a given initial dose of
monetary inflation, it would not go away. IOW, it was not a transient
effect.

So, the NSA could hyperinflate the money supply and buy all that they
wanted well before the effects of price inflation kicked in.

But afterwards, whatever they discovered would be worthless, unless
they used the information to buy gold and head for Switzerland. :-)

Bob Knauer

"No Freeman shall ever be debarred the use of arms. The strongest
reason for the people to retain the right to keep and bear arms is,
as a last resort, to protect themselves against tyranny in government."
--Thomas Jefferson


------------------------------

Date: Thu, 28 Jan 1999 18:51:59 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: hardRandNumbGen

Patrick Juola wrote:

> In article <[EMAIL PROTECTED]>,
> Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
> >R. Knauer wrote:
> >
> >> On Wed, 27 Jan 1999 12:10:17 -0500, "Trevor Jackson, III"
> >> <[EMAIL PROTECTED]> wrote:
> >>
> >> >To boil all this verbiage down, analysis of the generation process is more
> >> >efficient than analysis of the generator output.  One reason for the
> >> >improved efficiency is that understanding the generation process provides
> >> >hints of the potential defects worth testing.  Another reason is that the
> >> >description of the generation process is logarithmically smaller than the
> >> >description of the generated output.
> >>
> >> Algorithmic analysis of the generated output itself is completely
> >> worthless if that output is finite in length. If the number 111...1 is
> >> generated by a TRNG, then it is a perfectly valid crypto-grade random
> >> number, just like all the other 2^N-1 numbers of the same length.
> >
> >False.  Analysis of output is insufficient to prove the entropy density is
> >100%, but it can easily provide useful information.  For instance, a page of
> >business text is around 2^16 bits (of data, not information).  If we divide up
> >the (?)RNG output into samples of this size we can establish a confidence that
> >the samples are statistically random.  If the samples are not statistically
> >random, we can, with statistical confidence, eliminate that RNG as appropriate
> >for key generation.  If the samples are statistically random we have not proved
> >that they are appropriate for key generation, but we have shown, to the limit
> >of our testing capability, the absence of predictability.  If the limit of our
> >testing capability is equal or greater than the capabilities of our
> >adversaries, then the samples are adequate key material.
> >
> >Note that the last statement does NOT mean that future samples will be
> >adequate.  The pattern established by the samples inspected might begin
> >repeating on the very next bit generated.  It DOES say that, within the bounds
> >of analytic capability, one can inspect keys for effectiveness.
> >
> >The evaluation of the limits of statistical testing is probably tougher than I
> >can address because I am insufficiently versed in fundamental statistical
> >theory.  But it's an interesting operational question...
>
> One important operational point should be made here : the distinction
> between planned and post-hoc tests.  Specifically, if you're going to
> reject the RNG on the basis of statistical tests (which is reasonable),
> you *NEED* to define the tests you're going to run and the rejection
> threshold before you power up the generator.

Absolutely.  Any other arrangement produces a man-in-the-loop situation.  The man
isn't the problem, but the loop sure is.  The output of the generator should not be
able to influence or adjust the acceptance/rejection criteria of a production
system.

Of course, during testing, we'll use the test run outputs to monotonically tighten the
criteria until we're ready to go into production.
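
A minimal sketch of that discipline (my own illustration; the sample
size, the single monobit test and the thresholds are arbitrary choices,
frozen as constants before a single bit is drawn):

import random

SAMPLE_BITS = 2**16     # roughly a page of business text, as noted above
Z_REJECT    = 4.0       # two-sided ~0.006% level, frozen before any run
NUM_SAMPLES = 100       # how many samples the plan commits to examining

def monobit_z(bits):
    # Z-score of the count of 1s against the fair-coin expectation.
    n, ones = len(bits), sum(bits)
    return abs(ones - n / 2) / (n / 4) ** 0.5

def evaluate(next_bit):
    # Run the pre-registered plan; the criteria never move during the run.
    for _ in range(NUM_SAMPLES):
        sample = [next_bit() for _ in range(SAMPLE_BITS)]
        if monobit_z(sample) >= Z_REJECT:
            return False        # reject the generator, not just the sample
    return True

# A sound source should be falsely rejected in well under 1% of runs.
print(evaluate(lambda: random.getrandbits(1)))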


------------------------------

From: [EMAIL PROTECTED]
Subject: Strong cryptography: the many ways we can trust a key
Date: Fri, 29 Jan 1999 00:50:17 GMT

List:


This e-mail deals with cryptography and trust -- and the many ways we
can trust a key in cryptographic protocols. Actually, I will restrict
the exposition to just four possibilities, as this will already open
a (Pandora?) can of worms.

I will try to provide a bird's eye view, in an effort to unite
apparently diverse topics in the past discussion.


1. REVIEW

During the last weeks, in the mcg-talk list, in sci.crypt and in other
fora, we have been discussing some new methods to improve security
[Ger99], notwithstanding -- and actually motivated by -- the current
export restrictions on key-length for secret-keys imposed by the
Wassenaar Arrangement (WA).

In one example called "M-DES" [Ger99] and in strict accordance with
Wassenaar Arrangement rules for export-free cryptography, a method
was described which offered a practical scheme to boostrap a 56-bit
secret-key into a (56 + M)-bit secret-key. For M = 14 the method
yielded a perceived 70-bit secret-key security -- which is 64 times
more secure against brute-force attacks than even export-controlled
64-bit keys. And, which not even EFF's DES-cracker [EFF98] should
break in less than 75 years. However, the method is export-free under
WA.

But what was the background for the method? In what way does it differ
from or approximate other methods? Is there a general framework for it?

Answering the last question, of course, also answers the first two. Indeed, the
general framework exists and is given in [Ger98], in the 4-valued
DAOF identification system (called I-2). The method defines plaintext
identification in that system -- as Distinguished, Ambiguous, Obscure
or Formless. It also defines trust in that *same* system -- with
Trust, Cotrust, Atrust and Ignorance [1].

To summarize, these trust levels were shown to correspond to
positive, negative, neutral or empty trust:

 NAME            SIGN      CONCEPT    IDENTIFICATION I-2

 trust           positive   Trust       Distinguished
 distrust        negative   Cotrust     Ambiguous
 identification  neutral    Atrust      Obscure
 ignorance       empty      Ignorance   Formless



To recall (please see the archives [1] for examples):

- "trust" is a distinguished (i.e., unambiguous and clear)
affirmation of reliance;

- "cotrust" is ambiguous because you affirm not what you rely on but
what you refuse to rely on;

- "atrust" is obscure because the object is identified but trust
itself is not; and

- "ignorance" is formless in two senses, because either the object
simply does not exist (trust does not exist in form) or may exist but
was not identified (trust has no known form).


As the reader may verify (and, as I exemplify below for Bob and
Alice), conventional public-key and symmetric key cryptography
correspond to uses of Trust and Ignorance. The other yet unused
combinations lead to a whole new series of cryptographic
possibilities [1], exemplified by three new variants:
"Impossible-Keys" for Cotrust, "Open-Keys" for Atrust and
"Unknown-Keys" for Ignorance.

The last two variants were discussed in [Ger99] and shown to afford a
potential security increase while being fully transparent to
restrictions on the size of secret-keys. The method, in either
variant, can be applied to new or to already tried and known cipher
systems.

In particular, 56-bit DES (Data Encryption Standard, NIST-USA), which
is nowadays considered insecure with only 56 bits of secret-key
[EFF98], can afford 70 bits of effective secret-key length when
14-bit "Unknown-Keys" are used [Ger99, Section 5.2]. This is useful
because, except for its small key-length and the recent verification
that DES is not a random cipher over 64 bits but over 56 bits
[Ger99], DES has no other known weakness -- a judgement for which we
can rely on more than 22 years of use and cryptanalysis. Thus, it is
worthwhile to apply the method to DES -- besides allowing for a
practical example.

The example of Section 5.2 in [Ger99] will be explained below under
the general framework laid out above -- which I believe will provide
a more illuminating insight into the method. The example is based on
the usual 56-bit DES cipher plus M-bits of Unknown-Keys (i.e., using
the method's last variant) -- in what was called M-DES.  The TCAI
4-level trust system will be used, as quoted above, but please note
that "Ignore" or "Ignored" are oftentimes used in lieu of "Ignorance"
(for text continuity).


2.  EXAMPLE


Bob needs to send a message to Alice, using export-free 56-bit DES
and the M-DES protocol with M = 14 (which he decides is good enough).

Bob chooses one key at will from a publicly known list of 64-bit keys
that has a total of 2^14 keys. Note that, to any observer, including
Alice and an attacker, the key chosen by Bob is unknown and has
14-bits of entropy -- even if that key list is just a sequential
numbering from 1 to 2^14. In other words, the "Unknown-Key" is
Ignored by Alice and any attacker, within 14-bits of choice.

In whatever way is desired for a 56-bit DES protocol, both Bob and
Alice have Trust on a 56-bit secret-key, whose value is Ignored by any
attacker.

The difference begins now, with an additional protocol that allows
Alice to quickly develop Trust on what she Ignored, the 14-bit
"Unknown-Key", much sooner than any attacker that needs to develop
Trust on (56 + 14) bits. Note that the attacker faces 2^56 times more
variability than Alice -- a very large difference.

Regarding attack time, this difference is of the order of 75 years,
as calculated below, even for the fast EFF DES-cracker [EFF98]. And,
under usual "Personal Computer resources", Alice should not take more
than a few seconds to calculate the "Unknown-Key" -- which she only
needs to do once for several messages [Ger99].

But, Bob already Trusts the 14-bit key -- he does not need to "find"
it! Thus, Alice can use the same 14-bit key when replying to Bob. Bob
has no time loss, not even for the first message. Bob and Alice are now
communicating with a 70-bit secret-key, bootstrapped from a 56-bit
key.

Thus, at the end of the full M-DES protocol, a two-way communication
channel with a Trusted 70-bit secret-key is formed between Bob and
Alice -- a secret-key that is Ignored in all 70 bits by any attacker.
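
A minimal sketch of that flow (my own toy code: DES is not in the
Python standard library, so a SHA-256 keystream stands in for it, and
the key list, header and message are made up -- only the key-search
asymmetry is the point):

import hashlib, secrets

M = 14
PUBLIC_KEY_LIST = [i.to_bytes(2, "big") for i in range(2**M)]  # public, even sequential
HEADER = b"MDESv1::"        # assumed known redundancy Alice can recognize

def toy_cipher(data, k56, unknown):
    # Stand-in for DES keyed by the bootstrapped (56 + M)-bit key.
    stream = hashlib.sha256(k56 + unknown).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

shared_k56 = secrets.token_bytes(7)              # the 56-bit key both sides Trust
unknown    = secrets.choice(PUBLIC_KEY_LIST)     # Bob's Unknown-Key: 14 bits of choice

ciphertext = toy_cipher(HEADER + b"hello, Alice", shared_k56, unknown)

# Alice already Trusts shared_k56, so she searches only the 2^14 public list:
alice_key = next(u for u in PUBLIC_KEY_LIST
                 if toy_cipher(ciphertext, shared_k56, u).startswith(HEADER))
assert alice_key == unknown
# An attacker, Ignorant of all 70 bits, faces 2^56 * 2^14 = 2^70 trials.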


3. SECURITY

A different question is: what is the expected lifetime of this secure
channel, for total plaintext length L under a single 70-bit key, as a
function of the attacker's resources?

The attacker needs to perform an average of 2^69 DES decryptions over
one block of 8 bytes (using the revised DES unicity of one block
presented in [Ger99], not the three blocks usually quoted). To get an
idea of how long this is, suppose the EFF DES-cracker [EFF98] would
take an average of 40 hours (for a 50% search) to break 56-bit DES.
Then the same DES-cracker would take about 27,307 days to break the
discussed 70-bit scheme -- approximately 75 years.
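
A quick back-of-the-envelope check of those numbers (same assumptions
as above), in Python:

hours_56bit = 40                      # 50% search of 2^56 on the EFF cracker
extra_bits  = 70 - 56                 # the 14 Unknown-Key bits
hours_70bit = hours_56bit * 2**extra_bits     # 655,360 hours
days  = hours_70bit / 24                      # about 27,307 days
years = days / 365.25                         # about 75 years
print(round(days), round(years, 1))           # 27307 74.8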


Thus, Alice and Bob can enjoy 70-bit protection with export-free
56-bit DES, for a   v e r y   l o n g  time ;-)


Comments are welcome.


Cheers,

Ed Gerck

========================================================
REFERENCES:

[1]      recent list discussions, archived at
         http://www.mcg.org.br/emails.htm

[EFF98]  Cracking DES, reference in http://www.eff.org/descracker/

[Ger98]  Gerck, E., "Identification Theory", in two parts:
         http://www.mcg.org.br/coherence.txt and
         http://www.mcg.org.br/coherence2.txt

[Ger99]  Gerck, E., "Unicity, DES Unicity, Open-Keys and Open
         Ignorance", in http://www.mcg.org.br/unicity.htm


______________________________________________________________________
Dr.rer.nat. E. Gerck                                 [EMAIL PROTECTED]
http://novaware.com.br
 ---  Meta-Certificate Group member -- http://www.mcg.org.br  ---




============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Foiling 56-bit export limitations: example with 70-bit DES
Date: Thu, 28 Jan 1999 23:17:56 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (wtshaw) wrote:
> In article <78mv0l$oih$[EMAIL PROTECTED]>, [EMAIL PROTECTED]
wrote:
>

> > It is a mistake in the literature to consider DES unicity to be 20 bytes (3
> > blocks of DES) for compressed English. The formula is simply not valid in
> > that range as proved in the paper.
> >
> You need not be hamstrung with simple ASCII as the source for your bits,
> as you could generate them from something else than base 128, which is
> about the poorest one to encrypt text directly from that there is.
> -

But, for DES, unicity can never be larger than 8 bytes -- one DES block --
unless you use absolutely random bits, which would then be absolutely
useless for sending anything other than noise ...

Cheers,

Ed Gerck

> A much too common philosophy:
> It's no fun to have power....unless you can abuse it.
>

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
