Cryptography-Digest Digest #533, Volume #9       Wed, 12 May 99 07:13:04 EDT

Contents:
  Help: How to protect my files ([EMAIL PROTECTED])
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: AES ("Douglas A. Gwyn")
  Re: Time stamping (complete) (Jean-Jacques Quisquater)
  Re: AES (Terry Ritter)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? (Terry Ritter)
  Re: Crypto export limits ruled unconstitutional (cosmo)
  Re: Roulettes ([EMAIL PROTECTED])
  Re: Help: How to protect my files (Nathan Kennedy)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? (Terry Ritter)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? (Terry Ritter)
  On Contextual Strength (Terry Ritter)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Help: How to protect my files
Date: Wed, 12 May 1999 05:30:12 GMT

I would like to know if there is any way to protect a midi file from being
copied easily. I do a lot of midi programming, and a lot of people pirate my
midi files. I don't want anything too complicated (I know there will always
be a way for hackers), just a way to keep ordinary people from copying my
files easily. Is there a program I should use? If you have a good
idea, let me know! Thank you very much, Jean [EMAIL PROTECTED]


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 12 May 1999 06:07:42 GMT

"R. Knauer" wrote:
> Then why does Feller claim that it is fundamentally incorrect to infer
> the properties of random number generation from the time average of a
> single sequence?

Who cares why he says that, it's not relevant.

> >The required
> >key stream properties are such that a UBP is a very good model,
> ...
> Therefore claiming that a TRNG can be modeled by a UBP says nothing
> that we do not already know - it adds nothing substantial to the
> discussion.

You chopped my sentence in two and made a spurious objection to the
result.  The important point was the *second* part of the sentence:
> >and the Monobit Test checks the actual data against one property of
> >that model.

> And just what might that "one property" be? 1-bit bias perhaps? If
> so, then a sequence of 20,000 bits is but one sample used to
> determine 1 value of that 1-bit bias.

It's hard to be sure what you mean by terms like "1-bit bias".  The
Monobit Test checks the first moment of the distribution, which is
indeed a test for "bias".

> The Monobit Test seems to be saying that John Jones is not likely to
> be a salesman because he earns far less than the typical salesman.
> The typical salesman makes $100,000 per year and most salesmen (say
> 95%) make between $85,000 and $115,000 per year, so Jones cannot be
> a salesman since he makes only $25,000 per year.

If John Jones makes only $25,000/year, then there is evidence that he
isn't a very good salesman, and you should consider not using him to
peddle your product.  But a better analogy would be:  John Jones
normally makes around $100,000/year, but suddenly his sales plummet
to $25,000/year.  Should you suspect that he has developed a problem?
How long are you going to retain him as a sales representative if his
performance drops to that level and stays there?

> ... How can Jones's poor earnings be a
> reflection on the typical earnings of the vast majority of salesmen?

Of course, it isn't -- it reflects only on Jones.

> The  Monobit Test is an attempt to characterize a random process on
> the basis of some statistical expectation applied to only one sample
> sequence.

Due to practical considerations, it's as large a sample as we can get
before we have to make a decision.

> If you were to take 10,000 such samples at random times and ...

Sorry, we can't DO that.  We cannot wait until 200,000,000 stuck
key bits have been used to encrypt vital information.

> I do not believe it is possible to design a single test. Each test
> is a measure of the strength against a particular attack, ...

No, such tests don't even come close to measuring strength against
actual cryptanalytic attacks.  They're just checking how well the
generated key stream fits the theoretical UBP model.

> >So, what can you do with only 20,000 sequential bits?
> Beats me.
> I do not think you can do anything meaningful with only one such
> sequence. Perhaps you could break the sequence into 1,000 samples of
> 20 bits each and use them to plot the distribution and calculate the
> parameters of the UBP model. But 20 bits does not seem all that many
> to calculate a 1-bit bias and 1,000 samples does not seem all that
> many for getting a true distribution.

Unfortunately, partitioning the data like that wastes information,
and that loss is worst when you try to detect serial correlation.

> Perhaps someone can give us the calculations for how large any single
> sample must be and how many such samples we would need to arrive at
> "reasonable certainty" regarding the distribution of 1-bit biases and
> UBP model parameters.

As I recall, it was 1 false alarm per 1,000,000 tests for the Monobit
test on the 20,000-bit sample.  There are other tests, such as
Pearson's chi-square, that could be used for that particular
property, but they'll have about the same false-alarm rate.
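The test under discussion can be sketched in a few lines (a hedged illustration only; the acceptance interval below is the FIPS 140-1 Monobit bound on a 20,000-bit sample, which corresponds to roughly the one-in-a-million false-alarm rate mentioned above, and the generator shown is just a stand-in):

```python
# Sketch of the FIPS 140-1 Monobit Test: count the ones in a single
# 20,000-bit sample and accept iff the count falls inside the
# published interval 9654 < ones < 10346.
import secrets

def monobit_test(bits: str) -> bool:
    """Return True iff the 20,000-bit sample passes the Monobit Test."""
    assert len(bits) == 20000
    ones = bits.count("1")
    return 9654 < ones < 10346

# A healthy generator should pass essentially always...
sample = "".join(str(secrets.randbits(1)) for _ in range(20000))
print(monobit_test(sample))        # almost certainly True

# ...while a stuck key-bit generator fails immediately.
print(monobit_test("0" * 20000))   # False
```

Note that, as argued above, passing says only that this one sample is consistent with the UBP model's first moment; it says nothing about cryptanalytic strength.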

> But that is not what the Monobit Test is doing. It is calculating a
> single 1-bit bias value from a single sequence of 20,000 bits and
> claiming that a TRNG is broken if that single bias value is outside
> some statistical range of expected values. That is snake oil of the
> purest kind.

As with *any* statistic, the data is "boiled down" to one number that
summarizes something about the distribution.  What is your problem?

> >> Any one 10,000 bit key can be anomolous - that is what Feller and
> >> Li & Vitanyi have been trying to tell you.
> >Gee, we don't need them to tell us that, because it is exceedingly
> >obvious.
> And your statement is exceedingly smug - which is not at all
> surprising coming from you.
> Nothing about true randomness is "exceedingly obvious" - except to
> an idiot.

If the possibility of anomalies is not obvious to you, then *who* is
the idiot?  It really is obvious that *any* bit pattern can result in
the key stream we've been considering, even when the generator is not
broken.  This sort of thing is true in general of statistical
sampling, but it doesn't invalidate the methodology.  The *reason*
the methodology works is that the computed statistic is essentially
an indicator of membership in a *class* of samples, and those classes
have different relative sizes, thus differing likelihoods.

> >"I see no ensemble here."
> I take that to be a quote of Herman Rubin.

Actually, it was in the form of a response from the ADVENT game.

> Herman Rubin is entitled to his opinion on the matter of the existence
> of ensembles, but many people in the sciences and mathematics do refer
> to them, if only conceptually.

So what?  I didn't deny the utility of the notion.  I just said it
wasn't relevant to the issue at hand.  When the test raises an error
state, that applies *only* to the specific, probably-broken encryptor,
not to some ensemble of correctly-working encryptors.

> Well, which is it? Is it functioning properly or not?
> You cannot know, if all you take is one sample.

You cannot *know* anything with *certainty*, no matter *how* much you
sample nor what tests you perform.  So it is not a matter of certainty,
but rather of *likelihood*.

Consider an industrial plant, e.g. a nuclear power facility.  There
are continual tests and monitoring, with alarms raised when unlikely
(on the assumption of correct operation) conditions are detected.
Even though some of the alarms are false, would you argue that they
should be dispensed with altogether?

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: AES
Date: Wed, 12 May 1999 06:12:02 GMT

Bruce Schneier wrote:
> Multiple encryption is generally a good idea.  The only reason you
> don't see it widely used in practice is that using N ciphers cuts
> the performance by a factor of N (more or less).

Without necessarily gaining proportionally in security.
If you have a good algorithm, it would seem to be better
to put the extra cycles into using that algorithm with a
longer key (for example).

------------------------------

From: Jean-Jacques Quisquater <[EMAIL PROTECTED]>
Subject: Re: Time stamping (complete)
Date: Wed, 12 May 1999 08:10:24 +0200

Have also a look at

http://www.dice.ucl.ac.be/crypto/TIMESEC.html

(yes, not so far from you :-)
We just finished implementing a timestamping system with SPKI
capabilities.

Stuart Haber (from Surety and Intertrust) was here last Friday ...

Jean-Jacques Quisquater,

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: AES
Date: Wed, 12 May 1999 07:22:41 GMT


On Wed, 12 May 1999 06:12:02 GMT, in <[EMAIL PROTECTED]>, in
sci.crypt "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:

>Bruce Schneier wrote:
>> Multiple encryption is generally a good idea.  The only reason you
>> don't see it widely used in practice is that using N ciphers cuts
>> the performance by a factor of N (more or less).
>
>Without necessarily gaining proportionally in security.
>If you have a good algorithm, it would seem to be better
>to put the extra cycles into using that algorithm with a
>longer key (for example).

That would seem to be a way to get more keyspace, but it does not
address the problems I want to address.  I think we already have
plenty of keyspace.  

The problems I see are that we first cannot guarantee the strength of
a cipher, and second cannot know when our cipher has been broken.
These are rarely keyspace issues.  But if we have only one cipher, and
our cipher is broken in secret, we will continue to use that cipher
and continue to expose our information.  

I want to first reduce the probability of a break by multi-ciphering
as a common expected process.  

I want the ability to change a cipher quickly and easily if new
results warrant changing ciphers.  

I want to terminate the extent of any break which does occur by
changing ciphers frequently.  
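The multi-ciphering idea can be made concrete with a toy cascade (a hedged sketch only: SHA-256-in-counter-mode keystreams stand in for whatever real ciphers would actually be negotiated, and the key names are illustrative):

```python
# Toy multi-ciphering sketch: two independently keyed stream ciphers
# layered, so that a secret break of either layer alone does not
# expose the plaintext.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from key via SHA-256 in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data against the keystream; its own inverse."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def cascade_encrypt(k1: bytes, k2: bytes, plaintext: bytes) -> bytes:
    return xor_cipher(k2, xor_cipher(k1, plaintext))

def cascade_decrypt(k1: bytes, k2: bytes, ciphertext: bytes) -> bytes:
    return xor_cipher(k1, xor_cipher(k2, ciphertext))

msg = b"attack at dawn"
ct = cascade_encrypt(b"layer-one-key", b"layer-two-key", msg)
assert cascade_decrypt(b"layer-one-key", b"layer-two-key", ct) == msg
```

Swapping in a different cipher for either layer changes only one function here, which is the "change a cipher quickly and easily" property described above.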

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift and XOR?
Date: Wed, 12 May 1999 08:08:04 GMT


On Tue, 11 May 1999 10:04:51 -0700, in <[EMAIL PROTECTED]>, in
sci.crypt Jim Gillogly <[EMAIL PROTECTED]> wrote:

>John Savard wrote:
>> How the authenticated channel can be made to provide the additional
>> security that is the point of this scheme is indeed a problem that
>> could lead to a successful "why bother" argument, but one could get
>> around it by:
>> 
>> - only using the scheme for secret-key communications
>> 
>> - using *really* large primes, et cetera, if public-key methods are
>> used
>
>If you and Ritter are saying that the reason for going to this
>system is that one cannot know when or a whether a particular
>crypto algorithm has been broken, then using larger primes in
>your RSA scheme can't help.  

Speaking for Ritter, I think I have made myself abundantly clear in a
whole raft of postings.  There are multiple reasons for various parts
of this system, and I have described them numerous times.  

>You have no more knowledge about
>whether your enemy has a super-efficient factoring algorithm than
>you do about whether they can break 3DES in real time.  Both are
>unknown and neither has a proof of intractability, and therefore
>by (your and his) hypothesis they are equally suspect.

If you want to comment on my hypothesis, try reading my stuff first
and actually *quoting* the sections with which you disagree, instead
of making up what you think would be a nice weak position for you to
belittle and scorn.

The use of a huge public key has the effect of opposing substantial
factoring improvements if such occur.  This is an obvious,
widely-discussed tradeoff many people would like to make, but such
keys have substantial costs. 

Moving away from systems which use public key technology on every
message to systems which establish a large secret key once and then
re-use that secret key (to send large random message keys) can improve
the overall efficiency of the system.  

Increased efficiency can make the use of huge public keys more
practical than such use is now.  

And, until we can guarantee a lower bound for the strength of a
cipher, no cipher can be trusted.    

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: cosmo <[EMAIL PROTECTED]>
Crossposted-To: comp.dcom.vpn
Subject: Re: Crypto export limits ruled unconstitutional
Date: Tue, 11 May 1999 05:47:46 -0700

I haven't read any of the responses to my post here in the newsgroup,
with the sole exception of one that was emailed to me. It stated...


negative

try ruth bader ginsburg for one

-g


I have only one thing to say about this, and that is, OOPS.

You're right. I was wrong. I just checked. Apparently Clinton has appointed
two Supreme Court Justices during his term in office: Ruth Bader Ginsburg
and Justice Breyer. I think that is all. But sorry for making a false
claim.

            Cosmo


------------------------------

Date: Thu, 06 May 1999 09:25:15 -0400
From: [EMAIL PROTECTED]
Subject: Re: Roulettes

Patrick Juola wrote:
> 
> In article <[EMAIL PROTECTED]>,
> Mok-Kong Shen  <[EMAIL PROTECTED]> wrote:
> >Paul Rubin wrote:
> >>
> >> In article <[EMAIL PROTECTED]>,
> >> Mok-Kong Shen  <[EMAIL PROTECTED]> wrote:
> >> >Boris Kazak wrote:
> >> >>
> >> >
> >> >>  Just use octahedric dice (8faces) and get your random numbers in octal.
> >> >
> >> >Are such dice on sale?  BTW, maybe Rubik's cube (there is also a
> >> >4*4*4 variant) and similar toys could be of use.
> >>
> >> Yes, go to a game store and you can get N-sided dice for
> >> various N including N=4, 6, 8, 10, 12, 20, etc.
> >> Of course N=6 is the easiest to find.  N != 6 is mostly
> >> used for role playing games like Dungeons and Dragons.
> >
> >But according to mathematics there exist exactly 5 regular polyhedra:
> >tetrahedron, hexahedron, octahedron, dodecahedron and icosahedron.
> 
> But there are lots of other dice out there that are symmetrical,
> but not regular, polyhedra.  For example, to construct a fair
> 10-sided die, simply construct a regular pentagon and place two
> vertices a fixed distance "above" and "below" the center of the
> pentagon.  Connect all vertices and you have a solid with the
> necessary 10-fold symmetry, even though the faces are not themselves
> symmetric.
> 
> >I think also that it is difficult to read out from an octahedron,
> >i.e. which face to take when such a die lies on the table.
> 
> Not if you've seen one.  The tetrahedron can be difficult to read;
> with the octahedron you simply read the number marked on the top
> (triangular) face.
> 
>         -kitten

This works for any even N, but in the limit you actually have a coin to
flip and then a large roulette wheel to spin.

For decimal dice the best answer is an appropriately marked icosahedron
(each number appears twice).  Percentile dice are composed of two
differently colored decimal dice.
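The decimal and percentile dice just described can be simulated directly (a small illustrative sketch; the 00-99 reading convention is one common choice):

```python
# A decimal die as described above: an icosahedron with each digit
# 0-9 marked twice, so a fair d20 labelled this way yields a uniform
# decimal digit.  Two differently colored such dice form percentile dice.
import random

def decimal_die(rng: random.Random) -> int:
    faces = list(range(10)) * 2   # the doubly-marked icosahedron
    return rng.choice(faces)

def percentile_roll(rng: random.Random) -> int:
    tens = decimal_die(rng)       # the "tens" colored die
    units = decimal_die(rng)      # the "units" colored die
    return 10 * tens + units      # uniform on 0..99

rng = random.Random()
print(percentile_roll(rng))
```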

The largest such solid that can be constructed from unit edges,
ignoring stellations, is the rhombicosidodecahedron.  This figure has
62 faces composed of triangles, squares, and pentagons.  By truncating
a stellation of each face one can construct a version where the areas
of the faces are equal.

Does this form a "fair" die?

------------------------------

From: Nathan Kennedy <[EMAIL PROTECTED]>
Subject: Re: Help: How to protect my files
Date: Wed, 12 May 1999 18:04:14 +0800

[EMAIL PROTECTED] wrote:
> 
> I would like to know if there is any way to protect a midi file from being
> copied easily. I do a lot of midi programming, and a lot of people pirate my
> midi files. I don't want anything too complicated (I know there will always
> be a way for hackers), just a way to keep ordinary people from copying my
> files easily. Is there a program I should use? If you have a good
> idea, let me know! Thank you very much, Jean [EMAIL PROTECTED]

Let people play them but not read them or copy them?
Have your cake and eat it too?

Nope.

Have a look at http://www.gnu.org/philosophy/philosophy.html for another
alternative.

Nate

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift and XOR?
Date: Wed, 12 May 1999 10:20:59 GMT


On Tue, 11 May 1999 16:06:02 -0700, in
<[EMAIL PROTECTED]>, in sci.crypt Bryan Olson
<[EMAIL PROTECTED]> wrote:

>[...]
>Sorry, but you didn't follow the issue.  Whatever the security
>assumption of the ciphers, the 1000-cipher system is at least as
>bad as the single cipher system, usually worse.  

Sorry, you still do not understand that the single cipher system has
*no* guaranteed security.  Nothing I could do could possibly be worse
than that.  

Cryptanalysis provides *no* lower bound to strength.  Once we finally
realize what that means, we can start to innovate protocols to do what
we can.  

If a single-cipher system is broken, it stays broken.  A many-cipher
system changes ciphers, and starts over.  


>Assuming security
>is the _best_ the 1000 cipher system does compared to a single
>cipher system - in this case all we lose is efficiency.
>
>I didn't expect to convince you.  You've made your case, I've made
>mine.  Now we're just repeating the same thing over and over.  I've 
>been happy to see that some people have understood my side, and maybe 
>some people think you're right.  

My position is fact, not opinion.  Sides are irrelevant.  


>You had complained that your ideas
>were ignored; now they're not.

I recall no such complaint.  Let's see the quote; without one we are
justified in assuming that you made it up.  

Alas, your review is not the be-all and end-all of a fair hearing.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift and XOR?
Date: Wed, 12 May 1999 10:20:00 GMT


On Tue, 11 May 1999 15:45:55 -0700, in
<[EMAIL PROTECTED]>, in sci.crypt Bryan Olson
<[EMAIL PROTECTED]> wrote:

>[...]
>Try to follow what people are saying.  Without authentication,
>adversary can influence the choice and _make_ the easiest
>ciphers appear reasonably often.

Since it is my proposal, *you* try to follow what *I* am saying:
Authentication is an ordinary expected part of any cipher system.
Secret key delivery is key authentication.  Public key certification
is key authentication.  Plaintext hashing and message sequence
numbers, both hidden under cipher, could be message authentication; or
we could do something else.  But none of this has much to do with
cipher choice selection.  

In the proposed system, cipher choice negotiation occurs under the
current cipher.  It is therefore not available either to externally
observe or modify.  Since it is a sub-channel to normal communication,
any attempt to replay the negotiation also replays the associated
message and message sequence number, both of which should raise flags.
We know we have to handle this, because it is not all that unusual for
multiple copies of the same message to show up.  

So now where is the problem?
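The replay-rejection logic described above can be sketched as follows (a hedged illustration of the idea only; the record framing, field names, and cipher identifiers are invented for the example, and the records are assumed to arrive already decrypted from under the current cipher):

```python
# Sketch: cipher-change negotiation rides inside the encrypted channel
# together with a message sequence number, so a replayed negotiation
# carries a stale sequence number and is rejected.

class Channel:
    def __init__(self) -> None:
        self.expected_seq = 0
        self.cipher_id = "cipher-A"   # the current cipher (illustrative name)

    def receive(self, record: dict) -> bool:
        """Accept a decrypted record iff its sequence number advances."""
        if record["seq"] != self.expected_seq:
            return False              # replay or out-of-order: flag and drop
        self.expected_seq += 1
        if "next_cipher" in record:   # negotiation sub-channel
            self.cipher_id = record["next_cipher"]
        return True

ch = Channel()
assert ch.receive({"seq": 0, "msg": "hello"})
assert ch.receive({"seq": 1, "msg": "switch", "next_cipher": "cipher-B"})
# Replaying the negotiation replays its sequence number, so it fails:
assert not ch.receive({"seq": 1, "msg": "switch", "next_cipher": "cipher-B"})
assert ch.cipher_id == "cipher-B"
```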

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: On Contextual Strength
Date: Wed, 12 May 1999 10:25:34 GMT


On Tue, 11 May 1999 15:42:27 -0700, in
<[EMAIL PROTECTED]>, in sci.crypt Bryan Olson
<[EMAIL PROTECTED]> wrote:

>Terry Ritter wrote:
>>  Bryan Olson
>> >[EMAIL PROTECTED] (Terry Ritter) wrote:
>> >> On the other hand, there is something to the idea of a relative or
>> >> "contextual strength."  That is, any cipher has the ability to confuse
>> >> an opponent of x capabilities (x being some combination of background,
>> >> time and resources), but not an opponent whose capabilities are
>> >> greater.
>> >
>> >Too bad the adversary knows x and we don't.
>> 
>> Indeed:  Since we do not know x, we cannot assume we know that value.
>> The implication of this is that we cannot trust any cipher.
>
>So your "contextual strength" is bankrupt.  

Sorry.  

>We not only can't 
>prove it, but if it's false the evidence to show it's false
>need not exist.  

What?

>It lacks both mathematical proof and scientific
>testability.  

That seems to be a projection of the faults of your idea of strength,
which also has no mathematical proof, and also cannot be tested with
respect to secret opponents, and they are the only ones we care about.
Strength values from cryptanalysis which does not show a practical
break are irrelevant to real cipher use.  

>If, on the other hand, we consider a cipher strong
>if and only if it has no tractable break, at least the hypothesis
>is falsifiable and we're playing the same rules as our adversary
>in looking for the evidence that would do so.

You mean we can assume a cipher has some strength unless someone can
prove otherwise?  This arrogant misuse of logic is at the root of many
problems in cryptography and cryptanalysis, because it seems to be the
way science is normally done.  But in cryptography, it would have us
believe that only work in the published academic literature counts,
when in fact what really counts is what our opponents can do.  Just
because the academics have some level of skill does not mean our data
are secure to that level of strength.  Any strength value which does
not take our opponents into account is irrelevant to actual use.  

Demanding that our opponents publish their attacks before we will
consider our ciphers weak is simply insane.  If the opponents break
the cipher and keep quiet, we will use that cipher again and again and
again, and all that time all the academics will claim the cipher must
be strong because nobody has proven otherwise.  That is not science,
that is arrogant ignorance.  

Cryptanalysis delivers only an upper bound to strength.  We do not
know "the" strength, and I argue that there is no one strength, but
instead a range of strengths depending upon what is known by each
opposing group.  

The correct approach to the use of cryptanalytic strength values is to
consider strength to be somewhere in a range bounded by zero and the
upper bound which is brute force or some better attack.  Unless and
until there is a proof for a lower bound above zero, a reasonable
approach is to assume that the opponents have a modest-effort break
for any cipher, and live with that.  And if we cannot live with that,
if we must assume that our cipher is "strong," we are not doing
science, we are doing superstition.  


>Your proposal, unlike the conventional method, is unscientific.
>You can't prove contextual strength, and without the help of
>your adversary you can't get the evidence that would disprove it.

And you have neither proof nor evidence for the real strength of your
system either.  You just have a value which is the current delusion
for strength from cryptanalysis.   


>No one that I've seen has disagreed that we lack mathematical
>proof of computational security.  No one disagrees that we can't
>rigorously quantify the security of our systems.  Prove the
>security of your proposal and you'll have a point.

I think I'll wait until you prove your alternative first.  

Only contextual strength captures the truth that the very same cipher
can have different strengths to different groups which have different
technology.  Only contextual strength makes plain that the only
strength which counts is *not* the strength measured by academic
cryptanalysts, but instead the strength as it appears to an opponent.
That distinction alone places the concept far beyond the usual
academic approach.  It also provides a basis for general comparisons
of the knowledge and capabilities of different groups and the effect
these might have on cipher strength.  

That said, contextual strength does nothing to improve the situation
from cryptanalysis, where we have no lower bound to strength at all.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
