Cryptography-Digest Digest #572, Volume #10      Tue, 16 Nov 99 01:13:03 EST

Contents:
  Re: High Speed (1GBit/s) 3DES Processor (Paul Koning)
  Re: Short replacement for asymmetric algorithms for authentication (Alex Birj)
  Re: SCOTT16U SOLUTION ON THE WEB (SCOTT19U.ZIP_GUY)
  Re: Short replacement for asymmetric algorithms for authentication ("Gary")
  Re: Codebook examples on Web? (John Savard)
  Re: SCOTT16U SOLUTION ON THE WEB (Anonymous User)
  Re: Proposal: Inexpensive Method of "True Random Data" Generation ([EMAIL PROTECTED])
  Re: Proposal: Inexpensive Method of "True Random Data" Generation (Coen Visser)
  Re: Proposal: Inexpensive Method of "True Random Data" Generation (Coen Visser)
  Re: SCOTT16U SOLUTION ON THE WEB (SCOTT19U.ZIP_GUY)
  Re: intelligent brute force? (CoyoteRed)
  more about the random number generator (Tom St Denis)
  Re: Scientific Progress and the NSA (Jim Gillogly)

----------------------------------------------------------------------------

From: Paul Koning <[EMAIL PROTECTED]>
Crossposted-To: comp.dcom.vpn,comp.security.firewalls
Subject: Re: High Speed (1GBit/s) 3DES Processor
Date: Mon, 15 Nov 1999 17:40:08 -0500

Tim Wood wrote:
> 
> [EMAIL PROTECTED] wrote in message
> <[EMAIL PROTECTED]>...
> >We have developed a prototype Encryption system which runs 3DES at 1
> >GBits/sec (this is not just processing  but with real IO at 1 GBit/sec).
> >Are there any commercial applications for this type of technology?
> 
> Isn't it usual to find out if there are commercial applications before you
> develop something?
> Unless of course you just do it as a hobby ;-)

Or academic, or a researchy type person.  Or you had an application
but it evaporated and you're looking for others.  Or you know of one
niche but suspect there might be 5 more outside your field....

        paul

------------------------------

From: Alex Birj <[EMAIL PROTECTED]>
Subject: Re: Short replacement for asymmetric algorithms for authentication
Date: 15 Nov 1999 14:02:57 -0700

So is the following scheme secure enough to protect against someone
producing key generators, without changing the program code? Let's assume
that a program will have at most 5000 users.

1. text = "hambomat"

2. Create an array RandKeys[5000][8] by random generation.

3. Apply RC5 (16 rounds), using each row as an 8-byte key, to the text to
    get ciphertexts RandCTxt[5000][8].

4. Put RandCTxt in the program code.

5. Give user i the key:   (RandKeys[i], i)

6. The program should check that applying RC5 to the text with this key
    does produce the ciphertext RandCTxt[i] (or vice versa).
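A minimal sketch of the scheme above in Python. Note the standard library
has no RC5, so HMAC-SHA256 truncated to 8 bytes stands in for the 16-round
RC5 encryption here; the names (`encrypt`, `check`) are made up for
illustration:

```python
import hashlib
import hmac
import os

TEXT = b"hambomat"   # step 1
N_USERS = 5000

def encrypt(key, text):
    # Stand-in for 16-round RC5 under an 8-byte key: the stdlib has no
    # RC5, so we mimic an 8-byte block with truncated HMAC-SHA256.
    return hmac.new(key, text, hashlib.sha256).digest()[:8]

# Steps 2-3: random 8-byte keys and their precomputed ciphertexts.
rand_keys = [os.urandom(8) for _ in range(N_USERS)]
rand_ctxt = [encrypt(k, TEXT) for k in rand_keys]  # step 4: shipped in the program

def check(key, i):
    # Step 6: verify that the key (given to user i in step 5)
    # reproduces the embedded ciphertext for index i.
    return hmac.compare_digest(encrypt(key, TEXT), rand_ctxt[i])
```

A valid pair (RandKeys[i], i) passes `check`; a forged key for index i
would have to hit one specific 64-bit value.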


------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: SCOTT16U SOLUTION ON THE WEB
Date: Tue, 16 Nov 1999 00:53:43 GMT

In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
>"SCOTT19U.ZIP_GUY" wrote:
>> the work I did was based on CBC which does have the weaknesses I describled.
>
>How do you reconcile that with the proof that CBC is secure (within
>tight bounds) if the underlying block cipher is?  Reference:
>
>M. Bellare, A. Desai, E. Jokipii and P. Rogaway. A Concrete Security
>Treatment of  Symmetric Encryption: Analysis of the DES Modes of
>Operation.  Extended abstract in Proceedings of 38th Annual Symposium
>on Foundations of Computer Science, IEEE, 1997.  Full version available
>from Mihir Bellare's Web site.

  There is nothing to reconcile. I gave you a procedure that shows the
data is localized; if you don't like it, I can't help it.



David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: "Gary" <[EMAIL PROTECTED]>
Subject: Re: Short replacement for asymmetric algorithms for authentication
Date: Tue, 16 Nov 1999 00:04:14 -0000

8 bytes (64 bits) for a keyed hash output is a little too short; I think
collisions could be found.
12 bytes may be better, but 16 bytes (128 bits) is recommended.
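For context, the birthday bound behind that recommendation can be sketched
as follows (a back-of-the-envelope approximation, not tied to any
particular hash):

```python
import math

def collision_prob(n, bits):
    """Approximate probability that n random `bits`-bit outputs contain
    at least one collision: p ~ 1 - exp(-n(n-1) / 2^(bits+1))."""
    return 1.0 - math.exp(-n * (n - 1) / 2.0 ** (bits + 1))

# After about 2^32 outputs, a 64-bit hash is roughly 39% likely to have
# collided already; a 128-bit hash is still effectively collision-free.
for bits in (64, 96, 128):
    print(f"{bits:3d}-bit tag, 2^32 samples: p ~ {collision_prob(2**32, bits):.3g}")
```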

Alex Birj wrote in message <[EMAIL PROTECTED]>...
>So is the next scheme is secure enough for protecting from producing
>key generators without changing the program code? Let's assume that
>a program will have at most 5000 users.




------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Codebook examples on Web?
Date: Tue, 16 Nov 1999 00:08:11 GMT

[EMAIL PROTECTED] wrote, in part:
>In article <[EMAIL PROTECTED]>, 
>[EMAIL PROTECTED] (UBCHI2) wrote:

>> Are there any examples of famous codebooks on the net? 

>I've just put up some pages from British Cypher No. 5 - this was the one 
>that replaced the infamous number 3.

 http://www.cix.co.uk/~klockstone/

Very interesting. I hope you can get permission from HMSO to put a few
more pages up, although you already have given the gist of the whole
thing.

So the "BAMS code" was British Cypher No. 3? ...and these codes were
compiled in the 1920s, far in advance of the war.

John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

Date: Tue, 16 Nov 1999 01:22:17 +0100
From: Anonymous User <[EMAIL PROTECTED]>
Subject: Re: SCOTT16U SOLUTION ON THE WEB

Hey, cool it.  Don't assume that everyone has rushed to read your site...I
for one have not...

So you are saying you have tested all AES candidates and all known block
ciphers, 3DES, IDEA, CAST, Blowfish...etc   in CBC mode and you have
demonstrated this weakness in all these ciphers????

If that is what you are saying, then you are turning the crypto world
upside down....well done...you deserve a medal. But surely the answer to
this problem, according to you, is to use 1-1 compression, and then the
problem disappears....


------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Tue, 16 Nov 1999 00:32:44 GMT

Coen Visser wrote:
> [EMAIL PROTECTED] wrote:

[big snip]
> > The definition fails to define
> > whether a particular string is compressible,
> > since for any two lengths x and y, all of the
> > following are true:
> >
> >       x > y + O(1)
> >       x = y + O(1)
> >       x < y + O(1)
>
> The use of O(1) I had in mind was more like:
> C(x) <= length(x) + O(1). You use the additional
> term to compare constants. I compare functions.

I think I see the problem: Li and Vitanyi
occasionally refer to "infinite strings".  My
other references define strings as finite
sequences of symbols from some alphabet.  The
initial question, whether we can meaningfully
define whether a string by itself is random,
depends on which definition we use.  Kolmogorov
complexity can define whether an infinite string
is compressible, but not whether a finite string
is compressible.

--Bryan


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Coen Visser <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Tue, 16 Nov 1999 00:44:57 +0000

"Trevor Jackson, III" wrote:

> Coen Visser wrote:

> > Any compression program in
> > use uses the statistical properties of the string it compresses. So it
> > uses information inherent in the string.

> Not quite.  Every existing compression program has some kind of source
> model.  For instance, adaptive dictionary methods that replace repeated
> substrings with references to previously found strings all have a window
> or dictionary size that depends upon the expectation of the number and/or
> length of patterns worth entering into the dictionary.  For those with a
> limit on the number of patterns, a sequence composed of limit+1 repeated
> subsequences would not be compressed at all.  Such a sequence would fall
> outside the model of the expected source.

> For dictionaries with a maximum-length entry, a sequence consisting only
> of concatenations of a subsequence whose length is lenmax+1 would not be
> compressed at all, because the sequence would fall outside of the model
> of the expected source.

You are right. I simplified too much. Applied compression programs use
a model of the *kind* of strings they are supposed to compress best, be
it a model based on dictionaries or on some sine/cosine transform. An
optimal compressor would use a model that fits its target exactly. That
could be just one string, but maybe a class of strings; I don't know.

> OK, if your input can be any finite string and you get to choose your UTM
> to match the properties of each individual string, then the binary quality
> "compressibility" is not a figure of merit.

No, you may only choose one UTM and define compression/compressibility
when using that specific UTM. It does not matter which one you fix (for
all your strings) because the results between two different choices of
UTM will only differ by a constant.

> The UTM representation of some strings is smaller than the raw binary of
> the string.  The UTM representation of other strings is larger than that
> of the binary string.  Dividing strings into two categories, "compressible"
> and "incompressible", implicitly assumes that the shortest representation
> of an incompressible string is the string itself.  But it's not.  The
> shortest representation of an "incompressible" string is _longer_ than
> the string itself, due to the overhead of the UTM representation.

Yes, that is what the O(1) additional term is for. But it is only a
technical detail linked to the existence of a universal partial recursive
function. Not a very exciting matter to discuss (in my opinion). You could
compare it to the overhead of a self-extracting zip file that contains
information that cannot be compressed.

> One could set the division point between "compressible" and
> "incompressible" arbitrarily.  Trivially there will be strings whose UTM
> representation is the same length as their binary representation.  The
> definition of "compressible" might include those, or leave them out
> (c.f. RLE-style encoding of both flavors).

> If the strings whose lengths are the same in binary and UTM
> representations are defined as "incompressible", then there is some
> non-zero minimum degree of space gain that qualifies a string as
> compressible.  There is no reason to set this threshold at a single bit.
> One could set it at 50% with as much justification.

You are making a good point. For finite strings
there is no absolute criterion for what you can call random.
If some string is random and you flip the last bit, would it still
be random? There is more of a spectrum of randomness, with on one side
the strings you can compress a lot and on the other side incompressible
strings. Highly incompressible strings are called random: it really
doesn't matter much if you can compress a string of 10 million bits by
only one or five bits. Setting the threshold at 50% would be stretching
the idea a bit.

> Only when considering the utility of the transform to UTM representation
> is there value in setting the threshold to a single bit -- space gain > 0.
> But considerations of utility require a source model.  If you expect a
> uniform distribution over all finite strings, this definition of
> compressibility is not useful, because on average the size of the UTM
> versions of the strings is larger than the size of the binary versions.
> If you have a non-uniform distribution, the UTM overhead increases with
> the non-uniformity.

There is a "constructive" method to find the smallest representation of
a string S. Finding one would be interesting because it would be
incompressible itself. The procedure is as follows:
 1 Put the string S on a tape.
 2 Create TM1 (the first syntactically correct TM).
 3 Do one step of TM1. Did it halt with output S?
 4 Yes: TM1 is the smallest representation of S. Halt.
 5 No : create TM2 (the second syntactically correct TM).
 6 Do one step of TM1. Did it halt with output S?
 7 Yes: TM1 is the smallest representation of S. Halt.
 8 No : do one step of TM2. Did it halt with output S?
 9 Yes: TM2 is the smallest representation of S. Halt.
10 No : Create TM3.
...

This procedure is executable on a UTM and it will halt
with the smallest TM that decompresses to S.
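The dovetailing in the numbered procedure can be sketched in Python with
stand-in "machines" (no real TM enumeration is attempted here;
`make_machine(n)` is a toy whose n-th machine runs n steps and then halts
with output str(n)):

```python
from itertools import count

def make_machine(n):
    """Toy stand-in for 'the n-th syntactically correct TM':
    it runs for n steps and then halts with output str(n)."""
    def run():
        for _ in range(n):
            yield              # one step of computation
        return str(n)          # halting output
    return run()

def dovetail(target, max_rounds=10_000):
    """Round-robin execution as in the numbered procedure: each round
    creates the next machine and steps every live machine once.
    Returns the index of the first machine to halt with `target`."""
    machines = {}
    for round_no in count(1):
        if round_no > max_rounds:
            return None                  # give up (the real procedure never does)
        machines[round_no] = make_machine(round_no)
        for idx in sorted(machines):
            try:
                next(machines[idx])      # one step of this machine
            except StopIteration as halted:
                del machines[idx]
                if halted.value == target:
                    return idx
```

In this toy model `dovetail("3")` steps machines 1, 2, 3, ... in turn and
returns index 3, the first machine that halts with the target output.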

> Note also that the UTM representations treated as binary strings are
> extremely far from a uniform distribution.  They must halt because you
> stated the universe of interest included only finite strings.  Thus not
> all UTMs will be suitable.  It is undecidable which are suitable and
> which are not in the general case.  But there are specific cases that
> are clearly unsuitable: "state1: write zero; goto state1;".  Excluding
> these cases lowers the information density of the UTM representation of
> strings, and thus makes them unsuitable as random numbers.

You may choose any UTM you find suitable (but you must stick with it).
It is only used as a vehicle to execute the TMs that are the
self-decompressing strings.

Regards,

        Coen Visser

------------------------------

From: Coen Visser <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Tue, 16 Nov 1999 00:58:02 +0000

[EMAIL PROTECTED] wrote:

> I think I see the problem: Li and Paul Vitanyi
> occasionally refers to "infinite strings".  My
> other references define strings as finite
> sequences of symbols from some alphabet.

A good point. Throughout the discussion there
have been references to strings that were sometimes
defined as finite and sometimes as infinite. This
has created quite some confusion (I am at least
partly guilty).

> The initial question, whether we can meaningfully
> define whether a string by itself is random,
> depends on which definition we use.  Kolmogorov
> complexity can define whether an infinite string
> is compressible, but not whether a finite string
> is compressible.

It is good to separate the two cases, but I don't
agree with your statement above. I would think that
Kolmogorov complexity is properly defined on finite
strings: the upper bound of C(Sn), with Sn the "0"-string
of length n, is log(n).
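A toy illustration of that upper bound: a "program" for the all-zero
string of length n only needs to store n itself, so its length grows like
the number of digits of n, i.e. like log(n). The helper name below is made
up for illustration:

```python
def zero_string_program(n):
    # A 'program' that prints the all-zero string of length n.  Apart
    # from a constant-size skeleton it stores only n itself.
    return f'print("0" * {n})'

for n in (10, 10**6, 10**12):
    prog = zero_string_program(n)
    # program length = constant + number of digits of n ~ log10(n)
    print(n, len(prog), len(str(n)))
```

Going from n = 10 to n = 10^12 adds only the extra digits of n to the
program, while the described string grows by eleven orders of magnitude.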

Regards,

        Coen Visser

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: SCOTT16U SOLUTION ON THE WEB
Date: Tue, 16 Nov 1999 03:16:57 GMT

In article <[EMAIL PROTECTED]>, Anonymous User 
<[EMAIL PROTECTED]> wrote:
>Hey cool it.  Dont assume that everyone has rushed to read your site...I
>for one has not...
    It is obvious you have not read it.
>
>So you are saying you have tested all aes candidates and all known block
>ciphers, 3DES, IDEA , CAST, Blowfish...etc   in CBC mode and you have
>demonstrated this weekness in all these ciphers????
      Again, the ciphers you mention are all of the same type and could in
theory be modeled by a black box. The result that I call a weakness is
a well-known fact to any real cryptographer, but many just accept it as
error recovery, which in the old days may have been a good idea.
>
>If that is what you are saying, then you are turning the Crypto world
>upside down....well done...you deserve a medal. But surely the answer to
>this problem according to you is to use 1-1 compression, then the problem
>disappears....
     This is nothing new and I am sure no medals will be given. But it is the
kind of information new people don't understand. But 1-1 compression
does nothing to help in this. What you fail to understand is that 1-1
compression is not encryption. It is just something that should be a part
of any compression that is to be used, if one is going to use compression
before encryption in the first place. The use of 1-1 compression methods
would not add information that could weaken a crypto system. To make the
problem you mentioned disappear, you should use something like wrapped
PCBC.

David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (CoyoteRed)
Subject: Re: intelligent brute force?
Date: Tue, 16 Nov 1999 02:55:44 GMT
Reply-To: CoyoteRed (at) Bigfoot (dot) com

Quoting "Keith A Monahan" on 15 Nov 1999 20:48:28 GMT ...

>    The good news is that I have the approximate length of the passphrase,
>    a small set of symbols used. I actually know perhaps 90% of the password
>    minus one misspelled english word, and various symbols in different
>    positions.  Nonetheless, even with this reduced set, brute forcing is
>    out of the question due to the maximum attempts/sec I can manage.

This may be getting off topic now (more security than cryptography).

If you have this much, do you have access to his machine?  Can you get
a key logger installed on his machine?  If you are on a LAN, you can
monitor the log, and when you have the keystroke sequence you can
erase the logger and all traces that it was there.

Try one of the security groups, they may have more information about
this sort of thing.


-- 
CoyoteRed
CoyoteRed <at> bigfoot <dot> com
http://go.to/CoyoteRed
PGP key ID: 0xA60C12D1 at ldap://certserver.pgp.com


------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: more about the random number generator
Date: Tue, 16 Nov 1999 02:58:37 GMT

I got a program from http://www.fourmilab.ch/random/

to perform pseudo-randomness tests on files.  I made a file using my
rng [based on the winrng] until I got bored... here are the results
[and if dejanews messes up the following lines, I can re-email it in
private].

Entropy = 1.000000 bits per bit.

Optimum compression would reduce the size
of this 417792 bit file by 0 percent.

Chi square distribution for 417792 samples is 0.14, and randomly
would exceed this value 50.00 percent of the times.

Arithmetic mean value of data bits is 0.5003 (0.5 = random).
Monte Carlo value for Pi is 3.136948529 (error 0.15 percent).
Serial correlation coefficient is 0.001254 (totally uncorrelated = 0.0).



Is this good or bad?  From what I read at the site this is really good
for a first pass at it...
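For reference, a few of the statistics in that report can be recomputed on
a bit sequence like this (a sketch of the usual formulas, not the actual
source of the test program):

```python
import math
import random

def ent_style_stats(bits):
    """Entropy per bit, arithmetic mean, and lag-1 serial correlation
    (with wraparound), in the style of the report quoted above."""
    n = len(bits)
    ones = sum(bits)
    p1, p0 = ones / n, 1 - ones / n
    entropy = -sum(p * math.log2(p) for p in (p0, p1) if p > 0)
    mean = p1                            # 0.5 = random
    # serial correlation: for 0/1 bits, sum(x_i^2) == sum(x_i) == ones
    num = n * sum(a * b for a, b in zip(bits, bits[1:] + bits[:1])) - ones * ones
    den = n * ones - ones * ones
    scc = num / den                      # 0.0 = totally uncorrelated
    return entropy, mean, scc

random.seed(1)
sample = [random.getrandbits(1) for _ in range(100_000)]
e, m, s = ent_style_stats(sample)
print(f"entropy={e:.6f} mean={m:.4f} scc={s:+.6f}")
```

A good generator should score close to 1.0 bits of entropy per bit, a mean
near 0.5, and a serial correlation near 0.0; a strictly alternating
sequence like 0101... scores entropy 1.0 and mean 0.5 but scc -1.0, which
is why no single statistic is conclusive on its own.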

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: Scientific Progress and the NSA
Date: Tue, 16 Nov 1999 04:01:48 +0000

Tim Tyler wrote:
> Certainly we now know that public key cryptography was invented in
> secret independently of the public community some years before the
> rest of the world heard about it.

Do we?  Although Ellis & Cocks did also invent public key crypto, Ellis
told Diffie that "You did more with it than we did."  It's not at all
clear that they understood what they'd come up with, and in fact their
documents say things like (not an actual quote) it's a shame this can't
be used to pass symmetric keys.  The E&C work was only a few years before
the DH "New Directions" and the RSA work.  The Diffie-Hellman key
exchange was invented and published <before> the equivalent GCHQ
classified version was published internally.  I'd say it's pretty
much of a dead heat, with the public community getting higher marks
for recognizing its applications sooner.

Granted that the non-classified community didn't know about diff crypt
for some years after the classified community (including the IBM people
who evidently invented it at LUCIFER/DES time).  However, I find it quite
credible that recent academic cryppies have come up with other attacks
before their counterparts inside the Black Chambers.  I think it likely
that in many areas the gap has narrowed considerably.  In others we
probably still don't have a clue about technologies they're using.

> Did you see the post relating to NSA developing telephone transcription?

Yup.  Looks potentially nice.

> If quantum computers can be used to crack public key systems, money
> can be bet on intelligence agencies developing these internally before
> the rest of the world wakes up to the idea.

Maybe.  They have budgets too (albeit really big ones), and they have
to place their bets.  If they need to bet on basic research versus a
few more Deep Cracks or Wiener Boxes, I bet they'll have a tough choice.

> I find that it's helpful to think of intelligence agencies as being
> /extremely/ capable.  It seems to help concentrate the mind to think
> that there are worthy opponents out there ;-)

Sure.  But if you think of them as gods you'll find it paralyzing.
There just aren't that many Turings in the world.

-- 
        Jim Gillogly
        Highday, 26 Blotmath S.R. 1999, 03:48
        12.19.6.12.14, 2 Ix 2 Ceh, Second Lord of Night

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
