Cryptography-Digest Digest #324, Volume #9        Fri, 2 Apr 99 01:13:03 EST

Contents:
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")
  Re: blind signatures (Bryan Olson)
  Re: Random Walk ("karl malbrain")
  Re: Newbie intro:  what modern crypto systems assume the enemy knows (Bryan Olson)
  Re: Random Walk ("Trevor Jackson, III")
  Re: Random Walk ("Trevor Jackson, III")
  Re: Scramdisk ("hapticz")
  Re: Wanted: free small DOS Encryption program (Grinch)
  How does one start cracking ciphers? (consalus)

----------------------------------------------------------------------------

Date: Thu, 01 Apr 1999 23:31:40 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

R. Knauer wrote:
> 
> On Thu, 1 Apr 1999 12:46:36 -0600, "Franzen" <[EMAIL PROTECTED]> wrote:
> 
> >First you state statistical tests cannot distinguish uniform randomness
> >from pseudo-randomness (first paragraph above). Now you parenthetically
> >state some PRNG's can generate "close to" uniformly random sequences.
> >How can you possibly know? Not only would you have to be able to
> >distinguish between the two randomness potentials, but you would also
> >have to have a way to measure them with some degree(s) of precision.
> 
> You evidently were not here when we discussed that issue. We all
> realize that it is impossible to build a classical TRNG that is 100%
> random. There will always be flaws which disturb the process and give
> it some small amount of non-randomness, such as slight 1-bit bias.
> That is why we came to the conclusion that the most we can hope for is
> something we call "reasonable certainty" that a TRNG will be
> crypto-grade.
> 
> That, however, is not the same as statistical measures of confidence.
> Just as a woman cannot be pregnant with a 95% confidence level - she
> is either fully pregnant or not pregnant at all - you can't use
> confidence levels to determine the "reasonable certainty" that a TRNG
> is sufficiently random to meet the requirements for a given
> cryptosystem.

The problem with this construction is simple.  While we believe that
women are either pregnant or not, there is no test, statistical or
otherwise, that will be 100% accurate.  Thus, even in the case of
biochemistry, the most we can have is confidence in the test process.
Whether that test is abdominal phrenology (a wretched metaphor for
measuring the curve of a statistical distribution) or a blood test, we
can never be absolutely sure the result is accurate.

Why would we have any confidence in a test that is not guaranteed to be
perfectly adequate?

        1. It is the best we have.

        2. For many problems it is "good enough"

        3. It is the best we have

The same criteria apply to the application of statistical tests to RNG
outputs, even though the situation is inverted: the uncertainty lies in
the reality, not in the test.

Note that the assertion of some "inspection process" as superior to
statistical testing is fundamentally flawed.  Mathematical history is
littered with people who performed the most rigorous analyses of their
algorithms, which, in practice, failed miserably.  The best example of
this is found in Knuth's early work on "super-randomness".

A sane crypto-engineer will use both procedures, belt and suspenders, to
detect problems.  If an RNG passes all available analytic tests (in the
sense of failing to find a disqualifying flaw), and passes all available
statistical tests (in the sense of failing to find a disqualifying
correlation/bias/etc), then the RNG is good enough.

Resolved: Given two RNGs both of which pass all tests as described
above, they are equivalently random because there is no metric by which
their lack of randomness can be characterized.

Further, a crypto-engineer who fails to employ all available tests of
both flavors is negligent.  An adversary should be expected to utilize
all available flavors of tests for flaws.  There is no amount of
analytic inspection that can substitute for a simple statistical
evaluation of the RNG output.  And, there is no amount of statistical
testing that can substitute for a simple inspection of the RNG
mechanism.
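(As an illustration of the statistical flavor, here is a minimal sketch
of one such test, a monobit/frequency test in the style of the NIST
suite; the names and thresholds are my own choices.  Note the asymmetry
described above: a pass only means no disqualifying bias was found, it
certifies nothing about the generator's mechanism.)

```python
import math
import random

# Minimal sketch of one statistical-flavor test: the monobit
# (frequency) test.  A pass means only that no disqualifying
# one-bit bias was detected at significance level alpha.
def monobit_test(bits, alpha=0.01):
    n = len(bits)
    s = abs(sum(2 * b - 1 for b in bits))      # |#ones - #zeros|
    p_value = math.erfc(s / math.sqrt(2 * n))
    return p_value >= alpha                    # True: no bias detected

rng_bits = [random.getrandbits(1) for _ in range(10000)]
print(monobit_test(rng_bits))        # a decent RNG passes almost always
print(monobit_test([1] * 10000))     # a stuck-at-one "RNG" fails: False
```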

------------------------------

From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: blind signatures
Date: Thu, 01 Apr 1999 20:36:20 -0800



Ian Goldberg wrote:
> 
> In article <01be7c5b$8e518d50$89097283@cirano>,
> Gianluca Dini <[EMAIL PROTECTED]> wrote:
> >t = R x B^e (mod n)
> >
> >Now, the problem is: by the knowledge of t and e (and R and B if
> >necessary), is it possible to find anouther couple (R1, B1), different of
> >(R, B), such  that t = R1 x B1^e?
> 
> Yes.
> 
> >If the answer is yes, how difficult is this problem?
> 
> Totally trivial.  Pick _any_ value for B1.  Calculate
> R1 = t * (B1^e)^(-1) (mod n).
> 
> >furthermore, is there any countermeasure?
> 
> Countermeasure?  What for?  This is the whole _point_ of a blinded signature:
> the person doing the signature sees only t, not R and B.

I gather Gianluca was wondering whether t^d (the signature of
R blinded by B) could give away the signature on other numbers 
besides R.  The answer is yes but it doesn't matter, since 
anyone with the public key can construct as many pairs (r, r^d) 
as he wants.  The catch is he can't get them for chosen r.

Anyone with the public key can also construct quadruples of
the form [r, b, r*b^e, (r*b^e)^d] which is exactly what he gets 
if he blinded r with b, then got the holder of the private key 
to sign r*b^e.  Again, he can't choose r.
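(For concreteness, both constructions can be sketched with textbook RSA
and toy numbers.  The parameters below are mine and far too small for
real use; Python 3.8+ is assumed for pow(x, -1, m) modular inverses.)

```python
# Toy RSA blind-signature arithmetic (illustrative parameters only;
# real moduli must be enormously larger).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

R, B = 1234, 99                 # message and blinding factor
t = (R * pow(B, e, n)) % n      # blinded message shown to the signer

sig_t = pow(t, d, n)            # signer returns t^d without seeing R

# Unblinding: t^d = R^d * B (mod n), so dividing out B recovers R^d.
sig_R = (sig_t * pow(B, -1, n)) % n
assert sig_R == pow(R, d, n)

# Ian's point: for ANY B1 there is an R1 with t = R1 * B1^e (mod n),
# so t reveals nothing about which (R, B) pair produced it.
B1 = 7
R1 = (t * pow(pow(B1, e, n), -1, n)) % n
assert (R1 * pow(B1, e, n)) % n == t
```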

--Bryan




  The idea is that
> it's impossible for him to figure out what R was, since _any_ value of R
> could have been used (note that I only showed above that any value of B
> could have been used; the other statement is left as an exercise for the
> reader).  Further, if B is chosen uniformly at random from Zn*, then
> the distribution of R given t is the _same_ as the distribution for R;
> i.e. knowledge of t gives exactly zero help (in the information-theoretic
> sense, not just computationally) in figuring out what R was.
> 
>    - Ian

------------------------------

From: "karl malbrain" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Thu, 1 Apr 1999 20:56:24 -0800


Douglas A. Gwyn <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
(...)
> One doesn't learn statistics by "focussing on specific issues"
> any more than one learns general mathematics by focussing on
> the properties of the number "ten".

Au contraire!  Humans did indeed focus on the ten digits on both hands to
grasp mathematics.  I think you mean that one can't stay focused solely on
the number 10.  The <<focus>> on 10 is always present to a degree at all
levels.

So, yes, one learns statistics by SOME initial issue of some interest.
Karl M



------------------------------

From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: Newbie intro:  what modern crypto systems assume the enemy knows
Date: Thu, 01 Apr 1999 20:51:28 -0800



Sundial Services wrote:
[...]
>         *  The attacker can generate and encrypt or decrypt his own "chosen
> plaintext"
>            messages and study the result.

There is now a fairly broad consensus that a symmetric cipher should
also resist "chosen ciphertext" attacks.  The attacker is allowed
to put all the ciphertext he wants through a black-box that gives
him the decrypted plaintext.  He should remain unable to exhibit 
the plaintext corresponding to a given ciphertext without putting 
that exact ciphertext through the box.
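(A toy sketch of that black-box game; the class and names here are
illustrative, not from any real library, and the XOR "cipher" is
deliberately insecure, just to exercise the interface.)

```python
import os

# Toy decryption oracle for the chosen-ciphertext game: the attacker
# may decrypt anything except the challenge ciphertext itself.
class DecryptionOracle:
    def __init__(self, decrypt_fn, challenge_ct):
        self._decrypt = decrypt_fn
        self._challenge = challenge_ct

    def query(self, ct):
        if ct == self._challenge:
            raise ValueError("the challenge ciphertext may not be queried")
        return self._decrypt(ct)

key = os.urandom(16)

def xor(data):
    # XOR with the key works as both encryption and decryption.
    return bytes(a ^ b for a, b in zip(data, key))

challenge = xor(b"attack at dawn!!")           # 16-byte plaintext
oracle = DecryptionOracle(xor, challenge)
print(oracle.query(xor(b"another message!")))  # prints b'another message!'
```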

--Bryan

------------------------------

Date: Fri, 02 Apr 1999 00:04:17 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Random Walk

R. Knauer wrote:
> 
> On Wed, 31 Mar 1999 08:52:09 -0500, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
> 
> >Given a POPULATION (P) distribution and a SAMPLE (S) distribution,
> >one compares S and P to determine the representativeness of S with
> >respect to P.
> 
> >Given a DESIRED (D) distribution and a SAMPLE (S) distribution, one
> >compares the S and D to determine the confidence in the proposition that
> >P is similar to D.
> 
> Yes, that has always been my understanding of what statistical
> analysis is all about.
> 
> Now, given what you just said, what does it have to do with true
> randomness, or more correctly, non-true-randomness of a TRNG?
> 
> If you respond by saying that true randomness for a TRNG must include
> the property of no bias in the ensemble average, I fully agree - since
> equidistribution is a necessary (but not sufficient) condition for a
> TRNG (which is defined that way for purposes of crypto).
> 
> But here is where I believe statistics fails: In order to determine
> bias in a UBP one must perform an ensemble average, and for any decent
> sized keystream that is impossible. Sampling a very small fraction of
> very long sequences is not going to work. For example, sampling as
> many as 1 million sequences for 1 thousand bit sequences is only
> sampling the infinitesimally small fraction of 10^6/2^1000, which is
> so small I cannot calculate it on my TI scientific calculator.

OK, this is a reasonable question.  However, the answer really does
require a deep appreciation, insight, into the behavior of samples with
respect to populations as they both grow.  As I'm not a professional
statistician, I'm sure I'll butcher rigor in commenting on it, but I may
succeed in communicating the essential insight.

There are two parts to consider.  First, the selection of the desired
degree of confidence.  Since we cannot expect perfection we have to
specify the degree of imperfection, risk, that we can afford.  So there's
an epsilon that has to be selected by consideration of the overall
situation.  It cannot be predefined.

Once epsilon is assigned a value, we can describe confidence as 1-e.  In
common usage, esp. soft sciences and thumb-rule evaluations, approximate
confidence levels are 90%, common confidence levels are 95% and very
high confidence levels are 98 or 99%.  In crypto we probably want more. 
Say e = 1/2^32.  Whatever.

Now, given a constant confidence level, how does the sample size
required to reach that level of confidence grow as the size of the
parent population grows?  This is related to the sharpening of the
central spike in a Gaussian distribution as the size grows.  At medium
large numbers (10^6) the spike is quite sharp.  At really large numbers,
like the number of air molecules in a room (10^25?), the spike is
incredibly thin.  Thus we never see any measurable pressure differential
across a room, much less a segregation of air molecules.

For truly huge numbers, such as 2^256, the spike is essentially
everything.  It is really very hard to get a sequence of that length
that is at all far from the center.  For numbers like those you
mentioned, 2^1000, it is impossible in practice.

Given this narrowing of expectations, it does not take a large sample to
give good confidence that the sample properly represents the parent
population.  I have purposely left out any prescriptive reference,
thinking that if you derive the actual formula for the sample size
necessary to reach a given level of confidence then you will have gained
the insight necessary to comprehend the answer.  Without that insight
the answer will be counter-intuitive gobbledygook.

Good luck.
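(Editorially, the narrowing can be seen numerically: the fraction of
ones in n fair coin flips has standard deviation 1/(2*sqrt(n)), so the
relative spread collapses as n grows.  The second half uses one standard
bound, Hoeffding's inequality, which is a sketch and not necessarily the
formula Trevor has in mind.)

```python
import math

# The fraction of ones in n fair coin flips has standard deviation
# 1/(2*sqrt(n)): the central spike narrows as n grows, so huge
# sequences essentially never stray far from 50/50.
for n in (10**6, 10**25, 2**256):
    print(n, 1.0 / (2.0 * math.sqrt(n)))

# Hoeffding's bound gives a sample size sufficient for confidence 1-e
# that the observed bias is within delta of the truth.  Note that it
# does not depend on the size of the parent population at all.
e, delta = 2.0**-32, 0.01
n_needed = math.log(2.0 / e) / (2.0 * delta ** 2)
print(int(n_needed))   # on the order of 10^5 samples
```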

> 
> The best you can do with such an infinitesimally small sample is infer
> some property that is present in infinite sequences, namely
> pseudorandomness. That makes statistical tests useful for diagnostic
> warnings, but useless for a reasonable determination of
> non-true-randomness.
> 
> (By "reasonable" I mean suitable for the crypto application at hand.
> Obviously, if you only intend to use a TRNG for one short message -
> like the Washington-Moscow Hotline - the conditions for "reasonable"
> are relaxed compared to crypto traffic that passes several million
> bits per hour).
> 
> But since you brought it up, kindly tell us the assumptions that you
> must make to be able to infer properties of D from calculations on S.
> That is, we model D in such a way that it has no bias - it is a
> uniform distribution like that for the UBP. I have a suspicion that
> somewhere there is an assumption about D which comes from an analysis
> of infinite sequences - which as we know (from expositions like those
> of Feller) has little to do with finite sequences, even very large
> finite sequences generated from discrete sample spaces.

This is another reasonable question, but I'm not well-enough grounded in
the axioms of statistics to answer it at all authoritatively.  I am
certain that the analyses I learned had nothing, zero, to do with
infinite samples.  I would guess your suspicion is related to something
like the central limit theorem in which the answers are approximations
until the limit is reached.  That is most certainly NOT true of
statistics.  All of the principles are founded in finite populations.

There are several (probably more than I am aware of) shortcuts that can
be taken because a sample may be shown to approximate a "perfect"
distribution (based on an infinite collection), which makes it easier to
manipulate.  But AFAIK these are just Q&D techniques unrelated to
fundamental statistical analysis.



> 
> IOW, the desired property of normality (lack of bias) present in
> almost all infinite sequences (Borel normality), is simply not present
> in almost all finite sequences generated by a uniform process such as
> the UBP, even for a very large number of steps. In fact, contrary to
> what one might expect based on naive intuition, the random walk
> distribution - a critical measure of bias - actually broadens as n
> grows larger. Instead of more sequences having less bias, it turns out
> that more sequences have more bias as the paths migrate away from the
> origin. IOW, the Gaussian distribution broadens with time.
> 
> Bob Knauer
> 
> "The laws in this city are clearly racist. All laws are racist.
> The law of gravity is racist."
> - Marion Barry, Mayor of Washington DC

------------------------------

Date: Fri, 02 Apr 1999 00:10:12 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Random Walk

Douglas A. Gwyn wrote:
> 
> "R. Knauer" wrote:
> > Just ask why it is that after nearly a century of investigation by the
> > best minds on Earth, no one has advanced a satisfactory mathematical
> > explanation for the true randomness found in quantum physics. Could
> > that be taken as prima facie evidence that true randomness or its lack
> > cannot be calculated mathematically?
> 
> No, and that hasn't even been a tack that theoreticians have taken.
> 
> The official party line, from around 1930 through 1970 at least,
> was that "there is no point is worrying about *why* QM works, if
> we know *how* it works".  That could go a long way toward explaining
> why the "why" is not well known.  When it comes to truly fundamental
> physical theory, what constitutes an "explanation" requires careful
> study in its own right.
> 
> There have been researchers looking into such questions, however.
> There are actually some pretty good papers about entropy in QM
> available on the net.
> 
> A point that is generally agreed is that whatever is going on at
> the quantum superposition level, it is inconsistent with standard
> probability theory (a la Feller, for example), although it is
> usually described in probabilistic terms.

Here's a test that will prove this point beyond a doubt.  Find a
professional statistician (def: someone who is paid to design tests of
significance).  Provide the definition of a probability amplitude.  See
if you can convince the professional that probability amplitudes have
ANYTHING to do with what they do.

> 
> But this isn't the newsgroup for discussing physics.

------------------------------

From: "hapticz" <[EMAIL PROTECTED]>
Subject: Re: Scramdisk
Date: Fri, 2 Apr 1999 00:22:50 -0500

so,  did u find out where it all went??

--
best regards
[EMAIL PROTECTED]





------------------------------

Date: 2 Apr 1999 05:30:32 -0000
From: Grinch <Use-Author-Address-Header@[127.1]>
Subject: Re: Wanted: free small DOS Encryption program
Crossposted-To: comp.security.misc

Milton Pomeroy <[EMAIL PROTECTED]> made the request:

>Wanted - a free DOS-based encryption program which is small, fast,
>         strong and friendly

<snip of 12 requirements>

As several other people have done, I'll start by trying to dissuade
you from one of your requirements:

>  (6) EXE or COM file must be small - less than say 80kbytes
>       (if it's large, like PGP for DOS at around 400kbytes, it takes
>       over 10sec to load from floppy-drive)

My copy of PGP 2.6.2 is 243,097 bytes.  It fits easily on a floppy.  I
use it from a floppy on occasion.  This is my top recommendation.  Use
the conventional encryption:

        pgp -c "filename"

if you are intimidated by the public key capabilities.

Dissuading aside, the best solution that I know of that strictly meets
all 12 of your requirements is SAFER.  You can download it from:

        ftp://ftp.isi.ee.ethz.ch/pub/simpl/safer.V1.2.tar.Z

using ftp, lynx, netscape, ...
After downloading you will need to unpack with uncompress and tar,
which you will find on your convenient unix system:

      uncompress safer.V1.2.tar.Z
      tar -xf safer.V1.2.tar

You didn't list in your 12 requirements what compressing and archiving
utilities you had access to.

Be careful, however, with regard to:

>  (9) strong (at least 80-bit-key strength)

This particular implementation has the unpleasant feature that if you
link with getpass() to allow input of password without echoing, then
safer will truncate the output to 8 characters, even if getpass()
returns more.  Most implementations of getpass() already truncate to 8
characters, but that is another problem, and not safer's.  Since you
don't want to compile, much less change the source and compile, you
will need to enter your pass phrase on the command line.  Enclose it
in " " marks.  Using pgp, by the way, will solve this problem.

Grinch

------------------------------

From: consalus <[EMAIL PROTECTED]>
Subject: How does one start cracking ciphers?
Date: Thu, 01 Apr 1999 23:42:09 -0600

This may be a silly question, but I cannot remember seeing it covered
in anything I've read thus far.

How does one crack ciphers?
Generally.
I've read about differential and linear cryptanalysis in Applied
Cryptography, but he didn't describe how to do them, just generally
how they work.

Are there other, simpler methods?
I suppose one could just look at the code and try to figure out
weaknesses, but I'd imagine there'd be something more systematic
than that.

Anyway, I was curious. Thanks.

Kyle Consalus


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
