Cryptography-Digest Digest #350, Volume #9        Wed, 7 Apr 99 06:13:02 EDT

Contents:
  Re: Encrypting Fields in Microsoft Access Database (Derek Lyons)
  Re: Test vector repository--specifically, help with a broken Blowfish  (Boris Kazak)
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ([EMAIL PROTECTED])
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: True Randomness & The Law Of Large Numbers (Mok-Kong Shen)
  Re: Security of LFSR on noisy channel. ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Derek Lyons)
Subject: Re: Encrypting Fields in Microsoft Access Database
Date: Wed, 07 Apr 1999 04:54:20 GMT

[EMAIL PROTECTED] wrote:

>If you want to do encryption, you will need to use C. VB lacks powerful
>bit-bashing operators (AFAIK it doesn't have bit shifting) and forces 
>you to use signed operators. 

You can bit-bash in BASIC....  You AND with 1, 2, 4, 8, ..., 128 to pull
each bit out into an 8-character string, manipulate the string, then
OR the bits from the string back together into the byte.

It's a b---h and a hack, and slow as dirt, but it works.
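
To make the idea concrete, here is the same unpack/repack trick as a
minimal C sketch (C only for illustration; in BASIC you would walk the
same mask table with AND and OR):

    #include <stdio.h>

    int main(void)
    {
        /* mask table: one power of two per bit, high bit first */
        static const unsigned char mask[8] =
            { 128, 64, 32, 16, 8, 4, 2, 1 };
        unsigned char byte = 0xA5;   /* sample value to bit-bash */
        char bits[9];
        int i;

        /* pull each bit out into an 8-character string */
        for (i = 0; i < 8; i++)
            bits[i] = (byte & mask[i]) ? '1' : '0';
        bits[8] = '\0';

        /* ... manipulate the string here ... */

        /* OR the bits back together into the byte */
        byte = 0;
        for (i = 0; i < 8; i++)
            if (bits[i] == '1')
                byte |= mask[i];

        printf("0x%02X <-> %s\n", byte, bits);
        return 0;
    }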

Derek

------------------------------

From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: Test vector repository--specifically, help with a broken Blowfish 
Date: Tue, 06 Apr 1999 22:59:49 -0400
Reply-To: [EMAIL PROTECTED]

Nathan Kennedy wrote:
**************************(snip)
> I've compiled at least three different shoddy blowfish implementations off
> the net and gotten three different results from the same ciphertext/key.
> I'm basically trying to implement blowfish myself, and I've gotten to where
> I can encrypt and decrypt, but I haven't yet seen a program that will pass
> the test vectors at counterpane.com.
******************* (snip)
> void main()
> {
>   unsigned int P[18];
>   unsigned int S[4][256];
>   unsigned int K[2]={0,0};
>   unsigned int L=0,R=0;
>   unsigned int x;
> 
>   swim(K,P,S);
>   blow(P,S,&L,&R);
>   printf("%x %x\n",L,R);
>   suck(P,S,&L,&R);
>   printf("%x %x\n",L,R);
> }
******************
Dear friend, on some C compilers a plain "int" is only 16 bits wide;
the C standard guarantees at least 32 bits only for "long".  Declare
your P, S, L, R and other goodies as "unsigned long" -- only then
does your program have a chance of working everywhere.
  Also in "printf" you'd better declare the format as %08lx, so that
it prints all 8 hex places.
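
For instance, here is your driver with those fixes applied (the
prototypes for your swim/blow/suck are guesses on my part, and the
expected outputs are the published Counterpane test vector for the
all-zero key and all-zero plaintext):

    #include <stdio.h>

    /* Assumed prototypes for the poster's own key-schedule,
       encrypt and decrypt routines. */
    void swim(unsigned long K[2], unsigned long P[18],
              unsigned long S[4][256]);
    void blow(unsigned long P[18], unsigned long S[4][256],
              unsigned long *L, unsigned long *R);
    void suck(unsigned long P[18], unsigned long S[4][256],
              unsigned long *L, unsigned long *R);

    int main(void)
    {
        unsigned long P[18];
        unsigned long S[4][256];
        unsigned long K[2] = { 0, 0 };
        unsigned long L = 0, R = 0;

        swim(K, P, S);
        blow(P, S, &L, &R);
        printf("%08lx %08lx\n", L, R);  /* expect 4ef99745 6198dd78 */
        suck(P, S, &L, &R);
        printf("%08lx %08lx\n", L, R);  /* expect 00000000 00000000 */
        return 0;
    }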

    Best wishes               BNK

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 08:28:17 GMT

"R. Knauer" wrote:
> You have completely dismissed the fact that Triola states in clear
> English that parametric statistical tests cannot be used to determine
> randomness. Williams & Clearwater, Li & Vitanyi, Feller - all these
> experts say the same thing, yet you ignore them because you imagine
> yourself to be smarter than they are. Well, I do not believe you are
> smarter than they are. In fact, I have very good reasons to believe
> that you are a phony.

To the contrary, I have consistently been AGREEING with the experts,
and with others who have been arguing with you in this newsgroup,
that statistical tests cannot establish that the system being tested
is definitely random.  On the other hand, such tests certainly can,
when occasion arises, give quantifiably reliable evidence that a
system is *not* random.  Until you understand that these are
*different* issues, you haven't understood statistical testing.
(Hint: You got the null and alternative hypotheses backward in
a recent posting.)

> >No, now that you have clarified that you mean just the integers
> Don't give me that bullcrap.

What -- YOU yourself in the posting to which that is a response
went through my list of suggested "TRNG" outputs and identified
that your intent was the integers and not the other numbers!

> You simply do not pay close attention to what I write, ...

But I have, and that is what your real objection is, because I
called your bluff.  You toss around undefined, mystical terms
like "true randomness", and whenever anyone is able to get YOU
yourself to specify intelligible properties or referents for
these terms, they turn out to be riddled with errors.  When
these are identified, you change the subject, make additional
unrelated (usually incorrect) assertions, or resort to insult
and "argument by authority" (although you don't do a very good
job of it, since the authorities don't actually believe what
you claim they do).  You rather remind me of the character
Otto in "A Fish Called Wanda":
        Wanda: ... But you think you're an intellectual,
                don't you, ape?
        Otto:  Apes don't read philosophy!
        Wanda: Yes they do, Otto, they just don't
                understand it.

> >> Equidistribution means that the sample space has a flat distribution.
> >That isn't very precise
> HUH?! It is as precise as it can ever get. How imprecise is it to say
> that the probability is the same for each item in the ensemble?

Since you had just identified the "ensemble" (you're abusing
the term) as the infinite set of integers, you're talking about
a "flat" distribution that consists of each individual's
probability being identically zero; more care is necessary when
dealing with infinite sets.
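
Spelled out, the non-existence argument is one line (LaTeX notation):

    If $P(n) = c$ for every integer $n$, then
    $$ 1 = \sum_{n=-\infty}^{\infty} P(n) = \sum_{n} c =
       \begin{cases} 0, & c = 0 \\ \infty, & c > 0 \end{cases} $$
    -- a contradiction either way, so no such "flat" distribution
    on the integers can exist.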

As I pointed out, YOUR own definition as interpreted by YOU
has the mathematical consequence that such a system is not
physically realizable.  In other words, *it cannot exist*.

It is because the "definition" of TRNG that YOU gave was so
flawed, that for purposes of computing the mean and s.d. I had
to substitute some simple, *meaningful* property (namely, that 0 and
1 bits are emitted with equal probability).
Does or does not your idea of TRNG, assuming you *have* a valid
idea (which is quite a stretch at this point), say that as a
bitstream generator it would have that property?  If so, the
mean is 0.5 and the s.d. is 0.5, which disproves your contention
that such parameters do not apply to your TRNG.
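
For the record, the arithmetic behind those two numbers:

    For a single bit $X$ with $P(X=0) = P(X=1) = 1/2$:
    $$ E[X] = 0 \cdot \tfrac12 + 1 \cdot \tfrac12 = \tfrac12, \qquad
       \mathrm{Var}(X) = E[X^2] - E[X]^2 = \tfrac12 - \tfrac14
         = \tfrac14, $$
    so the standard deviation is $\sqrt{1/4} = 1/2$.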

But of course, you know in your heart that YOU cannot be
wrong, so the fault must be mine.

> Others have not managed to generate a prevailing consensus among real
> experts.

You sure have done no such thing yourself.
Or, rather, I think there *is* a consensus among real experts,
but it is that your notions are *not* valid.

> Once again you are assuming that randomness refers to the population
> distribution, instead of the generation process that created that
> distribution. My claim is that you cannot infer the randomness of the
> generation process by sampling the distribution.

Sentence one -- wrong.  I assume no such thing.  The properties
of the 0,1 generation process I was forced to assume (in place of
your unworkable definition) make no assumption other than that
the output can be described in terms of a *very* simple Markov
chain, or UBP if you prefer.  It was YOU who were claiming (that
Feller claimed) that such "coin-flip" (binary) sequences were not
amenable to normal statistical analysis; everybody who actually
works in this field knows better.

Sentence two -- there's that "straw man" again.  Nobody with any
sense is maintaining that one *can* infer "randomness" via
statistical testing, whether or not via "sampling the distribution"
(which in this context is simply another name for "observing
actual behavior").  They do say that one can, on favorable
occasions, detect *non*randomness with a high degree of certainty.
You have repeatedly confused "the test failed to show conclusively
that the system is not random" with "the test conclusively showed
that the system is random".  Those assertions are not logically
equivalent.  The first is a possible outcome of a statistical test,
but the second is not.

Demonstrating beyond reasonable doubt that a putative "random"
source is not random, is an important thing to attempt, in
practice.  If the attempt fails, all we can conclude is that
our test wasn't sensitive enough to detect the kind and extent
of nonrandomness, if any, present in the system we tested.
As another poster, Jim Felling, explained, we have no way of telling
whether the system is actually perfectly random or instead
just has some kind and extent of randomness that our test is
unable to detect.  In practice, people try a whole battery of
more-or-less unrelated tests, in order to catch as many kinds
of nonrandomness as they can.  But it is impossible to catch
them all with a finite number of tests.  (However, that's more
of a theoretical point than a practical one; if there is a
definite pattern, it has to be extraordinarily complicated
as well as delicately balanced in order to escape detection
by all known tests.)
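
For instance, the simplest test in the usual batteries is the monobit
test from FIPS 140-1; a minimal C sketch (next_bit() is a placeholder
for the generator under test, and the pass bounds are the published
ones):

    #include <stdio.h>

    /* next_bit() is a placeholder for the generator under test;
       it must return 0 or 1. */
    extern int next_bit(void);

    /* FIPS 140-1 monobit test: count the ones in a 20,000-bit
       sample; the sample passes if 9654 < ones < 10346. */
    int monobit_test(void)
    {
        long i, ones = 0;

        for (i = 0; i < 20000; i++)
            ones += next_bit();

        printf("ones = %ld\n", ones);
        return ones > 9654 && ones < 10346;   /* 1 = pass */
    }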

> I have not argued that. I have merely restated the claim by Williams &
> Clearwater that a quantum algorithm is now available for calculating
> true random numbers. All that is presumably left to do is to program
> it on a quantum computer, when one becomes available. That will then
> constitute the standard for true random numbers.

You don't need a quantum computer, for Chrissake.  It is *easy*
to use quantum phenomena to generate random sequences.  We used
to do it by inverting the Poisson distribution of radiation
detector events.  As I have pointed out, thermal noise is just
as satisfactory, and a lot more practical.
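
For the curious, a sketch of one classic recipe (not necessarily
exactly what we did; get_event_time() is a placeholder for the
detector interface): successive interarrival times of a Poisson
process are independent exponentials, so comparing two of them
yields an unbiased bit.

    /* get_event_time() is a placeholder: it must return the
       (strictly increasing) timestamp of the next detector event. */
    extern double get_event_time(void);

    /* Under a Poisson process the intervals t1, t2 are independent
       exponentials, so P(t1 > t2) = P(t1 < t2) = 1/2.  Ties
       (possible with a finite-resolution clock) are discarded. */
    int next_random_bit(void)
    {
        for (;;) {
            double a = get_event_time();
            double b = get_event_time();
            double c = get_event_time();
            double t1 = b - a, t2 = c - b;

            if (t1 > t2) return 1;
            if (t1 < t2) return 0;
        }
    }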

> Why do you believe that classical chaotic processes are truly random?

I don't know what you mean by "truly random", if anything.
But I already explained why thermal noise is sufficiently random;
non-thermodynamic initial conditions are quickly swamped, making
the process utterly irreversible.  (As engineers might say, that
signal is "lost in the noise", and unlike with spread-spectrum
communication, there is no intelligent agent carefully encoding
it with added redundancy to make the signal recoverable by some
special correlation algorithm.)  It is possible, by *careful
control*, to set up a system where this loss of information
doesn't occur, but this is well understood, and no designer of a
thermal noise-based random source would carefully engineer a
system to produce such an obviously unwanted result.  The more
important issue is to reduce systematic bias, which can have
several causes, also well understood.

If you can actually *demonstrate* any reproducible flaws in the
bit stream generated by the noise source on a commercial crypto
chip, it would be a noteworthy accomplishment, easily publishable
in any of several major technical journals.  We're waiting...

If there are no problems with what we have already, why even
bother going through the trouble of programming a delicate
(as yet nonexistent) quantum computer to produce the same
results?  My guess at your response, to save an iteration, is
"the quantum device is truly random whereas the thermal one is
not truly random" (although you like to say "pseudo-random",
which is an abuse of established terminology).  Our challenge
for you is, name even *one* way that we could tell the two
apart just by looking at their outputs (i.e. as "black boxes").
If there is *no* method of telling the difference, i.e., their
products are *indistinguishable*, then whatever "randomness"
properties one has, the other has, at least if those properties
have any connection to reality.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 08:35:52 GMT

Darren New wrote:
> How about a classical computer programmed to simulate a quantum computer
> programmed to calculate true random numbers? Would that be true
> randomness? Can you justify your answer?

Remember, the fundamental "quantumness" is the projection of a
mixed state *randomly* into one of its eigenstates when an
observation is made.  An accurate simulation would therefore
have to include a random generator.

Here is a point to ponder:  Why are people trying to develop
actual quantum computers, if we can simulate them perfectly
with what we already have?

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 08:34:12 GMT

[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:

> Membership in set #2 is based on non-membership in set #1. There are no
> other sets. An RBG is either truly random or not. There is no set of
> "half random" processes.

Of the three claims above, right, wrong, right respectively.
Of course there are other sets than generators shown to be
non-random and generators not in that set.

> >Does FIPS-140 say otherwise?
>
> All I said was that FIPS-140 claims that set #1 exists, the set
> containing all non-random processes.

That is _not_ how you defined set #1. You defined #1 as the
set shown with reasonable certainty to be non-random.

> It follows that set #2 exists as
> a disjunction. Therefore the tests can be used to test set membership.
> If it is not in #1 then the process must be in #2.

Let's put this in proper terms:

The set of tested generators shown non-random (#1), and the
set of tested generators not shown to be non-random (#2) are
mutually exclusive and jointly exhaustive (taking the set of
tested generators as the universal set).  The set of non-truly
random generators (call it #a) and the set of truly random
generators (call it #b) are also mutually exclusive and jointly
exhaustive.

You seem to be confusing the distinction between #1 and #2
with the distinction between #a and #b.  The relations we
can assert with confidence are that #1 is a subset of #a
and #b is a subset of #2.

> If that weren't the case then the tests would be unreliable.

Are you _trying_ not to understand?  The tests are reliable in
that they do not place truly random generators (generators in
set #b) in set #1.  They do often place non-truly-random
generators (those in set #a) in set #2.  That amounts to
unreliability only if one is so foolish as to mistake
membership in set #2 for an assertion that the generator is
truly random.  Have I seen people make that mistake?  Yes,
but with the exception of one Bob Knauer, not very often, not
very recently, and certainly not in FIPS-140.


> >We can have it the way it is.  If the facts are inconsistent with
> >the theory, then the theory is wrong.  If the facts are consistent
> >with the theory, then the theory remains plausible.
>
> >I challenged you to quote FIPS-140 where it says what you claimed
> >it says, as re-included at the top.  A reasonable response would
> >include text from FIPS-140, or a retraction. I am not asking you
> >to quote yourself.
>
> There is a third possibility, namely that you misread my statements.

I _quoted_ your statement about what FIPS-140 says.  So far
I've quoted it twice and both times you snipped it.  I quote
it a third time below, still with the challenge to quote
FIPS-140 saying what you claimed it said.  If you think I
misunderstood what you wrote, quote FIPS-140 right below the
actual assertion of yours that I challenged.  Here, I'll make
it really easy for you:

  Bob Knauer:
  | : Yet people still insist that if their pet RNG
  | : passes such and such a statistical test, it is truly random.
  | : Hell, the famous FIPS-140 says that. Have you ever read FIPS-140?
  Bryan Olson:
  | Can you quote where FIPS-140 says that?

  <------------ Insert FIPS-140 Quote Here -------------->



> If tests claim reasonable certainty that an RNG is not random, then
> when an RNG does pass the tests, that constitutes reasonable certainty
> that the RNG is truly random.

That's why you are an exception to my note that I have not
seen the foolish mistake recently.


> Thus far in my exposure to statistical methods I have come across an
> incredible amount of circular reasoning. Tests are presented which can
> give you an arbitrarily high degree of certainty and yet
> are so unreliable that if an RNG passes them you still cannot say the
> process is random. That makes the tests practically worthless.

That shows _you_ didn't follow the reasoning and _you_ don't
see the worth of the tests.

--Bryan


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 08:49:34 GMT

"R. Knauer" wrote:
> On Tue, 06 Apr 1999 03:06:42 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> >Far from it -- in fact I explained how it was done to a fellow
> >who asked politely via e-mail.  I'm sure Gillogly, Reed, and
> >others who have actually practiced cryptanalysis understand
> >the methodology, also.  It illustrates why OTP is not used in
> >very many actual cryptosystems, despite its theoretical
> >properties having been well understood since before WWII.

> If the keystream is generated by a true random process and used only
> once, the OTP cipher it produces is proveably secure.

"If pigs had wings, they would fly."

> What happens in reality is that someone uses a non-true-random
> keystream and information leaks giving the cryptanalyst a reasonable
> certainty that a particular decryption is the intended plaintext.

No, I am talking about genuinely random pads.
(Your model of cryptanalysis is inaccurate, also.)

The extremely well-known theoretical properties of "OTP" systems
unfortunately lead many to believe that real cryptosystems should
be implemented as OTPs.  We eavesdroppers snicker when people
make that mistake.  There are *several* points of vulnerability in
*real* implementations of OTP systems.  In fact, conventional
small-finite-symmetric-key cryptosystems are usually more secure;
they have a different set of vulnerabilities, sometimes not so
easy to exploit.

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 08:45:30 GMT
Reply-To: [EMAIL PROTECTED]

On 6 Apr 1999 12:12:53 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>What the hell does a "non-random" arrangment mean?

I think he meant Kolmogorov-Chaitin randomness - i.e., algorithmically
irreducible random numbers.

But that kind of randomness is not the kind we are discussing.

Bob Knauer

Signature files traditionally present comments from a population with an IQ over 100.
I thought it would be appropriate to present comments from the other half of the
population, namely the one with an IQ under 100. After all, they do make up 50% of
the human condition. Never mind that they also tend to become politicians. Here is
a representative example:

"People blame me because these water mains break, but I ask you, if the water mains
didn't break, would it be my responsibility to fix them then? WOULD IT!?!"
- Marion Barry, Mayor of Washington DC


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 10:58:27 +0200

R. Knauer wrote:
> 
> I have never denied the validity of statistics where it is applicable.
> 
> I am claiming that it is not applicable to the determination of true
> randomness.
> 
> Randomness is an axiom of statistical theory and cannot be decided
> from that theory. To do so would be circular.

By putting forward YOUR concept of 'randomness' and denying the
applicability of mathematically founded tests operating on the
generated sequences on the one hand, while on the other hand giving
no concrete and EXACTLY determined, practically realizable,
algorithms and methods to determine that 'randomness' and to measure
any deviations occurring in practice, you are propagandizing for a
vague, hollow and scientifically useless thing. Previously you
suggested that the proper functioning of your TRNG could be ensured
through diagnostic tests. Since such tests are ultimately tied to
statistics, your decision that the sequence from an apparatus (found
o.k. by these tests) is random IS a 'decision' based on statistics.
And that is circular!!!

> 
> >If you are in a position
> >to dependably claim that a given sequence is random or not random (in
> >certain acceptable measure/sense), you should be able to do that
> >without knowing its origin.
> 
> By now you should know that true randomness is a property of the
> generation process, not the numbers it produces. Therefore you cannot
> possibly hope to know if the RNG is random unless you know what that
> process is.
> 
> I could give you two identical sets of numbers and one of them could
> have been generated by an algorithm and the other generated randomly.
> 
> What test would tell which one is which?

If NO currently available test can distinguish between the qualities
of the two sequences, then they are EXACTLY as good for use in
practice today! Isn't this trivial?? Imagine that chemists succeeded
in producing a synthetic wool such that no expert, chemist or not,
could distinguish it from natural wool. Would (and could) you care
whether the pullover you bought is made of the one stuff instead of
the other??

> 
> >(For quite similar reasons medical tests
> >of new drugs are performed in the so-called doubly blind tests.)
> 
> Those are not tests for randomness.
> 
> >Statistical tests, as far as I know, never claim in absolute
> >sense that a given sequence is random/non-random.
> 
> Sequences are not random - the process that generates them is random.
> 
> Randomness pertains to the process, not its output.

Statistical tests on the output are the only viable and scientifically
sound means of evaluating the (quality of the) process generating
the output.

M. K. Shen

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Security of LFSR on noisy channel.
Date: Wed, 07 Apr 1999 08:58:29 GMT

[EMAIL PROTECTED] wrote:
> If I have a long (64 bit+) LFSR, and you want to read it, but
> can only read the bits with lots of noise, so that you can only
> get a little better than random chance (say 501 guesses out of 1000).
> Is there a way of doing this?
> Does anyone know of any upper, or lower bounds on the complexity of this?

I don't know what complexity measure you have in mind -- is N the
length of the LFSR, with infinite data, or is N the amount of data,
with a fixed LFSR?  In the latter case, it's an O(1) problem.

If you know the "taps", it's just a straight EM problem.
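
For concreteness, a minimal sketch of the fixed-LFSR case in C.  The
tap set {64,63,61,60} is a commonly quoted maximal-length choice,
assumed here for illustration only.  The scoring function shows why
the problem is statistically easy: with per-bit success probability
0.501, the true state's agreement count exceeds N/2 by about 0.001*N,
while chance fluctuations are only about sqrt(N)/2, so a few million
observed bits single it out.

    #include <stdint.h>

    /* Hypothetical taps {64,63,61,60}; verify primitiveness
       before relying on this mask. */
    #define TAPS 0xD800000000000000ULL

    int parity64(uint64_t x)
    {
        x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
        x ^= x >> 4;  x ^= x >> 2;  x ^= x >> 1;
        return (int)(x & 1);
    }

    /* One step of a Fibonacci LFSR: emit the low bit, shift,
       feed the parity of the tapped bits in at the top. */
    int lfsr_step(uint64_t *state)
    {
        int out = (int)(*state & 1);
        uint64_t fb = (uint64_t)parity64(*state & TAPS);
        *state = (*state >> 1) | (fb << 63);
        return out;
    }

    /* Score a candidate initial state against n noisy observed
       bits (each correct with probability 0.501): the true state
       agrees on about 50.1% of them; wrong states hover at 50%. */
    long score(uint64_t cand, const unsigned char *obs, long n)
    {
        long agree = 0, i;
        for (i = 0; i < n; i++)
            agree += (lfsr_step(&cand) == obs[i]);
        return agree;
    }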

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 07 Apr 1999 09:03:57 GMT
Reply-To: [EMAIL PROTECTED]

On 6 Apr 1999 09:33:46 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:

>Where do you get the authority to DEFINE true randomness?

I used that term "define" because it was used in the question, and I
did not care to nitpick over terminology. 

A specification for a True Random Number Generator can be posited,
such as the specification for a quantum computer programmed to
calculate true random numbers.

>The properties of random processes have been studied, and something
>is a random process if it has those properties.

What are those properties? Is statistical randomness a statement about
the population distribution of random numbers, or is it a statement
about the selection/generation process?

>Daily rainfall
>amounts at the Purdue Agronomy Station are a random process; they
>are not good as a process to generate "honest" random bits.

What are "honest" random bits?

>It is not a topic for a full text, as it is discussed with many 
>others.  Try a good probability book using measure theory.

Such as?

>We use mathematical models to approximate nature.  It is the other
>way around; what reason do we have to expect that nature will EXACTLY
>satisfy the model?  Some of the philosophers of science, but nobody
>who understands how complicated nature is.

Physical models may rely on mathematics, but that does not make them
mathematical models. Quantum mechanics is not a mathematical model.

There is overwhelming evidence that QM correctly models physical
reality. Therefore, if a quantum computer algorithm for true
randomness is run on a quantum computer, it will produce true random
numbers.

>We have lots more.  One of the more commonly used ones, the Fisher
>exact test, makes no such assumptions.  Nor do the non-parametric
>tests.

What is the Fisher exact test, and in what explicit reference source
can I learn about it?

>I suggest you refer to the better mathematical statistics books,

Such as?

>not those intended for high school students.

But Triola was recommended by one of the so-called "experts" here, who
said that if I read that book, I would have as great an understanding
of statistics as he does.

>>>Non-parametric tests, in particular, make no use of the CLT.

>>Triola does, in some instances.

>Only for asymptotics.

Yes, but for surprisingly small samples.

>We also know quite natural distributions of random variables for
>which the CLT fails.  This is rarely done except in graduate books.

Such as?

One good book will do for a start.

Bob Knauer

Signature files traditionally present comments from a population with an IQ over 100.
I thought it would be appropriate to present comments from the other half of the
population, namely the one with an IQ under 100. After all, they do make up 50% of
the human condition. Never mind that they also tend to become politicians. Here is
a representative example:

"People blame me because these water mains break, but I ask you, if the water mains
didn't break, would it be my responsibility to fix them then? WOULD IT!?!"
- Marion Barry, Mayor of Washington DC


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
