Cryptography-Digest Digest #441, Volume #10      Sun, 24 Oct 99 12:13:04 EDT

Contents:
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column ("A 
[Temporary] Dog")
  Re: Unbiased One to One Compression (SCOTT19U.ZIP_GUY)
  Re: Unbiased One to One Compression
  Re: Note on Feistel Ciphers
  Re: Weakness in the Rijndael key schedule? (Tom St Denis)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column 
(SCOTT19U.ZIP_GUY)
  This compression argument must end now (Tom St Denis)
  Re: Unbiased One to One Compression (Tom St Denis)
  Re: Unbiased One to One Compression (Tom St Denis)
  Re: Unbiased One to One Compression ("Trevor Jackson, III")

----------------------------------------------------------------------------

From: "A [Temporary] Dog" <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Sun, 24 Oct 1999 07:59:43 -0400

On Sat, 23 Oct 1999 18:45:09 GMT, Tim Tyler <[EMAIL PROTECTED]> wrote:

>Bruce Schneier <[EMAIL PROTECTED]> wrote:
>: <[EMAIL PROTECTED]> wrote:
>:>Roger Schlafly wrote:
>
>[Ritter-style cypher swapping]
>
>:>> Yes, hardware systems could do that. But I am sure that they
>:>> would rather devote those resources to something more useful,
>:>> in most cases.
>:>
>:>Something "more useful" than averting the possibility that the entire
>:>device becomes useless?
>
>: Yes.  There are a zillion things more useful than that.  In a
>: cellphone, for example, voice quality is far more important than
>: encryption. [...]
>
>That rather depends on who is owning the cellphone, and where they are
>using it.

Since most of us are stuck with what the mass market will support for
cellphones, what any one owner wants is irrelevant.  The cellphone
marketers have determined that voice quality is more important to a
commercially viable phone than good encryption.  Good encryption may
actually be a detriment if it gets you into a pissing match with
government authorities.

>
>: The system will NEVER be designed to choose from an array of
>: acceptable ciphers; we can't even get a system designed that
>: has a single marginally acceptable cipher. [...]
>
>"NEVER"?  That statement is /so/ strong it is almost certain to be wrong.

Notwithstanding Mr. Ritter's comments on reinterpreting the words of
others, always read "never" as "for the foreseeable future".


- A (Temporary) Dog            |"Intelligent, reasonable
The Domain is *erols dot com*  |people understand that -
The Name is tempdog            |unfortunately, we're dealing 
                               |with elected officials"
Put together as name@domain    | - name withheld

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unbiased One to One Compression
Date: Sun, 24 Oct 1999 13:23:38 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] () wrote:
>SCOTT19U.ZIP_GUY ([EMAIL PROTECTED]) wrote:
>:    I don't think you have a real grasp of the problem. The all zero symbol
>: is not eliminated in the file. At each use there is always a full set of 
>: symbols.
>
>Well, the way I understood your posts, in the compressed file there was
>one code not used so that the codes could be distinguished from the
>padding. Of course, while the set of compressed codes had one kind of
>symbol omitted, there were still codes representing any possible input
>text.
    Well, I don't think you have read all my posts or even looked
at http://members.xoom.com/ecil/compress.htm
>
>But if your method is different from that, then I misunderstood you.
>Anyways, I have produced another example of how to do what you said you
>were trying to do: to frustrate brute-force search, by making all possible
>bit patterns valid _outputs_ from the compressor. I also tried to make
>them as equiprobable as possible as well.
   You can correct me if you like. But I think your method involves
using some random data. This is really something that I would put directly
in the encryption portion, like the option in scott19u to add a random pad
to the text.
  But could you answer one question directly? Can your compress/decompress
method take a random binary file of bytes and uncompress it to a file which,
when compressed, will give back this same file? I don't think this question
is too hard to answer or to test for. But I think you either don't understand
this or don't want to understand this particular point.
 If your program can do the above then in my opinion it could be suitable
for use as compression to be done before encryption, since it would not
add information to help someone undo the encryption of the message.
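
A minimal sketch of the test being asked for above, assuming placeholder
compress() and decompress() routines standing in for whatever coder is under
test (this is an illustration, not scott19u's own code): feed the decompressor
arbitrary random bytes, recompress its output, and check the original bytes
come back.

    import os

    def is_one_to_one_on(random_bytes, compress, decompress):
        """True if compress(decompress(random_bytes)) == random_bytes."""
        plaintext = decompress(random_bytes)
        return compress(plaintext) == random_bytes

    def run_trials(compress, decompress, trials=1000, max_len=64):
        # Try many random "compressed-looking" byte strings of varying length.
        for _ in range(trials):
            r = os.urandom(1 + int.from_bytes(os.urandom(1), "big") % max_len)
            if not is_one_to_one_on(r, compress, decompress):
                return False
        return True

    if __name__ == "__main__":
        # The identity coder trivially passes; substitute a real
        # compressor/decompressor pair to test it.
        identity = lambda data: data
        print(run_trials(identity, identity))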

>
>John Savard


David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Unbiased One to One Compression
Date: 24 Oct 99 13:04:34 GMT

Tim Tyler ([EMAIL PROTECTED]) wrote:
: Pretty much the whole point of the scheme was to remove clues from
: potential attackers.  It would have been bizarre had these been introduced
: in the form of a missing symbol in the way you appear to be suggesting.

Well, that is how I understood his postings where he explained his scheme,
and I thought it was bizarre, since it fixed one problem at the expense of
creating a worse one. I may have misunderstood his posts, as he claims.

: You mean you have some working code?  Another "one-on-one" compressor?

No, I haven't _implemented_ my scheme yet.

However, at

http://www.ecn.ab.ca/~jsavard/mi060303.htm

there is a nice, clear explanation of what I'm talking about.

: It's *easy* to make a decompressor that fails to give errors, but doesn't
: have this "one-on-one" property: just take an ordinary decompressor and
: instead of giving errors, make it spit out an empty string.  Needless to
: say, this doesn't offer any real security benefits over the original
: routine.

Although I did not attempt to obtain the true "one-to-one" property, my
scheme does offer the "security benefit" David A. Scott sought; the
decompressor offers decompressed file X for those inputs, and only those
inputs, which _could_ have resulted from compressing file X. By allowing
random padding, and not forcing files to compress to something having a
whole length in bytes, I achieve a scheme which, I think, causes less
distortion of probabilities (assuming each individual file of a longer
length is less probable than each individual file of a shorter length) and
is less complicated than other possible methods.

John Savard

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Note on Feistel Ciphers
Date: 24 Oct 99 13:12:17 GMT

[EMAIL PROTECTED] wrote:
: My point is that your concern, attacks that exploit
: a separation between the halves in Feistel ciphers,
: is of no concern at all.  The separation is only in
: the process of computation; it is known not to limit
: the resulting dependencies.

Not in theory. However, when someone designs a Feistel cipher in practice,
the designs that produce effects like arbitrary bit permutations are
very complicated and hard to achieve, and so in the space of "simple"
Feistel designs that people are likely to actually come up with, there
might be weaknesses; it might be easier to remedy those by a technique
such as the one Mr. Shen suggests than to eliminate them while still having a
Feistel cipher.

Since an F-function involves only half the block, this might limit its
complexity compared to a function involving the whole block - although the
requirement of invertibility probably creates other kinds of limitation.
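
To make the structural point concrete, here is a minimal sketch of a balanced
Feistel network (the toy F-function and subkeys are placeholders, not any real
cipher): the round function only ever sees half the block, yet the construction
is invertible whether or not F itself is.

    # Toy balanced Feistel network on two 32-bit halves.
    def toy_f(half, subkey):
        # Any function of (half-block, subkey) works; it need not be invertible.
        return ((half * 0x9E3779B1) ^ subkey) & 0xFFFFFFFF

    def feistel_encrypt(left, right, subkeys):
        for k in subkeys:
            left, right = right, left ^ toy_f(right, k)
        return left, right

    def feistel_decrypt(left, right, subkeys):
        # Same structure run with the subkeys in reverse order undoes encryption.
        for k in reversed(subkeys):
            left, right = right ^ toy_f(left, k), left
        return left, right

    if __name__ == "__main__":
        keys = [0x01234567, 0x89ABCDEF, 0x0F1E2D3C, 0x4B5A6978]
        l, r = feistel_encrypt(0xDEADBEEF, 0xCAFEBABE, keys)
        assert feistel_decrypt(l, r, keys) == (0xDEADBEEF, 0xCAFEBABE)
        print(hex(l), hex(r))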

But I think my point is valid: even if the Feistel round structure has no
mathematically _inherent_ weaknesses, real-world designs, being limited in
their complexity, might still have weaknesses influenced by the basic
structure chosen, and it might be easier to remedy that by varying the
structure than by exploiting the full possibilities of the structure.

John Savard

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Weakness in the Rijndael key schedule?
Date: Sun, 24 Oct 1999 13:43:06 GMT

In article <7utrti$1o26$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
> In article <7usskb$v75$[EMAIL PROTECTED]>, [EMAIL PROTECTED]
> wrote:
> >A comment on the key schedule for Rijndael.
> >
> >Hello sci.crypt,
> >
> >I believe I have found a weakness in the key schedule for Rijndael.  I
> >turn to this august body for confirmation or harsh refutation of my
> >theory.  A quick note on my credentials: I am only an amateur
> >cryptographer but a long-time professional computer scientist.  This
> >post is a follow-up to the 'Curious Keys in Rijndael' post.
> >
> >Abstract
> >
> >In this paper the block cipher Rijndael is analyzed.  Rijndael is
> >submitted as a candidate for the Advanced Encryption Standard.  The
> >cipher has variable key and block length.  This paper focuses on the
> >key length of 128 bits with the block length of 128 bits.  The Rijndael
> >cipher is an iterated ten-round block cipher for this case.  Here it is
> >shown that the Cipher Key can be recovered if an attacker has the
> >'exclusive or' of several Round Keys.  Moreover, one byte of key
> >recovery can be mounted on Rijndael for certain key relationships and
> >strange chosen ciphertexts.  Extensions of this attack exist but appear
> >to be impractical.
> >
> >This note is just the beginning of the full paper I am working on.  If
> >it turns out that I goofed up then this posting will save me the work :-)
> >
> >I will use the notation adopted in 'The Rijndael Block Cipher' by Joan
> >Daemen and Vincent Rijmen.  To understand the attack, you will have to
> >read 'The Rijndael Block Cipher.'
> >
> >
>
>   I am not sure you understand that AES is not to be used for data that
> the United States government wants to be secure. It is for data that the
> US wants everyone else to use so the NSA can read the mail. You should
> not bother to show weaknesses in the cipher till the AES weak cipher is
> adopted for use. Then blow it all to hell.  If your attack is strong
> enough the NSA may have to use another of their trojan crypto methods in
> the contest. We amateurs should wait until the phony crypto gods bless
> the final candidate.

Yak yak yak yak.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Sun, 24 Oct 1999 13:52:48 GMT

In article <[EMAIL PROTECTED]>,
  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> Bruce Schneier wrote:
> > Nice one.
>
> Thanks, but of course now D.Scott will think I'm in cahoots with you,
> too.
> :-)
>

  No, I don't think you're in cahoots, but I do think you're
full of it. It is very, very poor practice
to re-encrypt the same block and resend it; this just
is not done except in your mind. By the way, asshole,
my BS was in Fields and Waves, so communications used
to be my bag. Bruce is talking above his head and
I wish people could see through his phoniness.
  The only way such a thing could take place would
be if some country were stupid enough to buy a crypto system
from the US especially modified to be weak in this
area.


--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: This compression argument must end now
Date: Sun, 24 Oct 1999 13:54:47 GMT

OK, the argument over whether 'p = compress(decompress(p))' makes
cryptanalysis harder has gone on too long now.

First off, let's review the argument.  The claim is that if the above is
true it will be harder to tell if a decryption is valid (via brute
force), because any decryption is 'valid'.  Second, since all
decryptions are valid, guessing the decryption output (what it should
be) is just as hard as guessing the actual number of bits of info
contained therein (obviously up to the block size or whatever).

My original counter-argument: there are too many possible ASCII text
blocks, and the ciphers are too hard to attack (to exploit the nature of
the input) in a practical sense.  I.e., you would have to break the
cipher to tell what the decryption could be.  There was no counter-argument
to this.

My second counter-argument: most compression algorithms such as LZ77
and Deflate (which belongs to the LZ77 family) have no 'invalid' code words
in the data stream.  What does this mean?  It means you cannot just
look at a code and say it's invalid.  No counter-argument was provided.
My counter to my own idea is that if you have an index greater than your
current position in the decompressed stream, it must be invalid (see the
sketch below).
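
As an illustration of that one check, here is a sketch of a toy LZ77-style
decoder (hypothetical token format, not actual Deflate): the only thing it
can reject is a back-reference reaching before the start of the decoded
output; every other input decodes to *something*.

    def decode_lz77(tokens):
        """tokens: list of ('lit', byte) or ('ref', distance, length)."""
        out = bytearray()
        for tok in tokens:
            if tok[0] == 'lit':
                out.append(tok[1])
            else:
                _, distance, length = tok
                if distance > len(out):          # index beyond current position
                    raise ValueError("invalid back-reference")
                for _ in range(length):          # copy, allowing overlap
                    out.append(out[-distance])
        return bytes(out)

    if __name__ == "__main__":
        good = [('lit', 0x61), ('lit', 0x62), ('ref', 2, 4)]   # "ab" + "abab"
        print(decode_lz77(good))                               # b'ababab'
        bad = [('lit', 0x61), ('ref', 5, 3)]                   # reaches before start
        try:
            decode_lz77(bad)
        except ValueError as e:
            print("rejected:", e)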

However to exploit this in a practical sense would require breaking the
cipher.

We all know who will never agree or let the argument go.  However, I
would like to live to see a day when this is dropped.  Please
consider the facts, make up your own mind (I am not saying which is
right or wrong) and stop posting about this.

<mytwocents>
For the most part I think compression should be left for making
messages smaller and deskewing input biases (in text and audio/video).
The actual security provided should not be the highlight.  Also,
algorithms with higher compression ratios are probably more secure than
plain entropy coders such as Huffman, arithmetic, or range coding.
</mytwocents>

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Unbiased One to One Compression
Date: Sun, 24 Oct 1999 13:56:47 GMT

In article <7uutpe$19ai$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] () wrote:
> >SCOTT19U.ZIP_GUY ([EMAIL PROTECTED]) wrote:
> >:    I don't think you have a real grasp of the problem. The all zero
> >: symbol is not eliminated in the file. At each use there is always a
> >: full set of symbols.
> >
> >Well, the way I understood your posts, in the compressed file there was
> >one code not used so that the codes could be distinguished from the
> >padding. Of course, while the set of compressed codes had one kind of
> >symbol omitted, there were still codes representing any possible input
> >text.
>     Well, I don't think you have read all my posts or even looked
> at http://members.xoom.com/ecil/compress.htm
> >
> >But if your method is different from that, then I misunderstood you.
> >Anyways, I have produced another example of how to do what you said you
> >were trying to do: to frustrate brute-force search, by making all
> >possible bit patterns valid _outputs_ from the compressor. I also tried
> >to make them as equiprobable as possible as well.
>    You can correct me if you like. But I think your method involves
> using some random data. This is really something that I would put
> directly in the encryption portion, like the option in scott19u to add
> a random pad to the text.
>   But could you answer one question directly? Can your compress/decompress
> method take a random binary file of bytes and uncompress it to a file
> which, when compressed, will give back this same file? I don't think
> this question is too hard to answer or to test for. But I think you
> either don't understand this or don't want to understand this
> particular point.
>  If your program can do the above then in my opinion it could be
> suitable for use as compression to be done before encryption, since it
> would not add information to help someone undo the encryption of the
> message.

Yak yak yak.

By your logic, RC5 is secure because p = encrypt(decrypt(p)) ...

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Unbiased One to One Compression
Date: Sun, 24 Oct 1999 14:00:22 GMT

In article <7utrfq$1o26$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
>
>   Thanks Tim I have been away for a few weeks so I have not kept up
> with what is going on. However I doubt if you can make John understand
> the concept. I think you worded it very well but John refuses to see or
> understand the obvious. Good luck in trying to teach him anything about
> compression.

Funny, I always figured 'sci.crypt' was focused on scientific
discussions on CRYPTOGRAPHY.

I may be mistaken, I dunno.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

Date: Sun, 24 Oct 1999 10:29:19 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Unbiased One to One Compression



Tim Tyler wrote:

> SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
> : In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>
> :>Note that David's scheme doesn't *just* make all strings decompress to
> :>valid outputs, it also has the property that distinct strings
> :>always decompress to *different* valid outputs - the ultimate in
> :>"equiprobability"?
> :>
> :>It's *easy* to make a decompressor that fails to give errors, but doesn't
> :>have this "one-on-one" property: just take an ordinary decompressor and
> :>instead of giving errors, make it spit out an empty string.  Needless to
> :>say, this doesn't offer any real security benefits [...]
>
> :   Thanks Tim I have been away for a few weeks so I have not kept up
> : with what is going on. However I doubt if you can make John understand
> : the concept. [...]
>
> I have a high opinion of John Savard, based on his usenet contributions.
>
> However, I'm not sure I've done as well as I could have done in explaining
> why having one - and only one - decompressed file for every compressed one
> is potentially important to security.
>
> If you *don't* have this, then the compressor, when compressing, can
> choose more than one compressed file to compress to.
>
> If the compressor chooses between these files at random, then there's no
> security problem - you just wind up with fatter compressed files than you
> need to.

This is not always true.  In classic Huffman compression there is a choice to
be made when assigning subtrees to bits.  One could always assign zero to the
lighter subtree and one to the heavier subtree, yielding a tiny amount of
information to an attacker, or one can choose randomly.  The random choice does
not increase the size of the file, but simply manipulates the mapping of the
source characters onto the code bits, and thus the original text(s) onto the
compressed text(s).
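
As a sketch of that choice (a toy illustration, not any particular product's
coder): build the Huffman tree as usual, but flip a coin to decide which child
gets the 0 bit.  The code lengths, and hence the compressed size, are identical
across runs; only the symbol-to-codeword mapping varies.

    import heapq, random

    def huffman_codes(freqs, rng=random.random):
        """freqs: dict symbol -> count.  Returns dict symbol -> bit string."""
        # Heap entries: (count, unique id, symbol or None, left child, right child).
        heap = [(count, i, sym, None, None)
                for i, (sym, count) in enumerate(freqs.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            a = heapq.heappop(heap)
            b = heapq.heappop(heap)
            if rng() < 0.5:                      # random 0/1 assignment of subtrees
                a, b = b, a
            heapq.heappush(heap, (a[0] + b[0], counter, None, a, b))
            counter += 1
        codes = {}
        def walk(node, prefix):
            _, _, sym, left, right = node
            if sym is not None:
                codes[sym] = prefix or "0"
                return
            walk(left, prefix + "0")
            walk(right, prefix + "1")
        walk(heap[0], "")
        return codes

    if __name__ == "__main__":
        f = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}
        c1, c2 = huffman_codes(f), huffman_codes(f)
        # Same code lengths every run, (usually) different bit assignments.
        print({s: len(c1[s]) for s in f})
        print(c1)
        print(c2)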

>
>
> The opponent may get more cyphertext, but the extra cyphertext is
> encrypting genuinely random plaintext - which can't possibly help him.
>
> However, if you *systematically* choose to compress to one type of file
> rather than another one, your compressor may well be adding information to
> the compressed file, which was not present in the original message.
>
> It /may/ be possible for an attacker to utilise information added in
> the process of compression to aid his attack on the subsequent cypher.
>
> Making the cypher "one-on-one" avoids the possibility of any information
> or regularity *added* by the compressor being used in the attack on the
> cypher - since no information has been added.
>
> Attacks can still be based on any regularities left by sub-optimal
> compression, though.
> --
> __________
>  |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]
>
> Programmers get overlaid.




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
