Cryptography-Digest Digest #457, Volume #10      Wed, 27 Oct 99 16:13:04 EDT

Contents:
  Re: This compression argument must end now (Tim Tyler)
  Re: Unbiased One to One Compression (Tom St Denis)
  Re: the ACM full of Dolts? (Tom St Denis)
  Re: OAP-L3:  How Do You Spell S-H-A-R-E-W-A-R-E (Tim Tyler)
  Re: This compression argument must end now (James Felling)
  Re: Unbiased One to One Compression (Tim Tyler)
  Re: Browsers Random Number Generator (Paul Koning)
  Re: This compression argument must end now (SCOTT19U.ZIP_GUY)
  Re: the ACM full of Dolts? ("Belldandy")
  Re: Unbiased One to One Compression (SCOTT19U.ZIP_GUY)
  Re: Unbiased One to One Compression (SCOTT19U.ZIP_GUY)
  Re: the ACM full of Dolts? (SCOTT19U.ZIP_GUY)
  Re: Unbiased One to One Compression (Mok-Kong Shen)

----------------------------------------------------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: This compression argument must end now
Reply-To: [EMAIL PROTECTED]
Date: Wed, 27 Oct 1999 15:32:24 GMT

SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:

: If any one on this Use group is a plant to try to bore people to death
: it is Tom.

Blimey! A possibility I'd (innocently) not even *considered* - until now.

Of course, it may well be in the interests of any security organisation
large enough, to spread disinformation about cryptography to all other
active third parties - with a view to subsequently decyphering any
systems they later go on to build.

I *had* noticed that I appear to be assailed from all sides in this forum
by people who appear to be under fairly severe misconceptions about basic
things - but it had not even occurred to me that this forum might contain
individuals whose sole purpose in life was to waste other people's time -
and actively try to kill off any intelligent discussion of relevant issues!

;-)
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

Never eat anything bigger than your head.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Unbiased One to One Compression
Date: Wed, 27 Oct 1999 15:58:12 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] () wrote:
> While I must confess to an emotional reaction to being told that I seem
> unable to understand the intellectual accomplishments of the great David
> A. Scott, still I must point out that there is some unfairness in this
> criticism.

Probably true.  I do however dislike the egotistical, narcissistic, anti-
social antics they try peddling.  I would rather have real questions
floating around.

> "Why Cryptography Is Harder Than It Looks" was not off-topic for this
> group. Posts can discuss how to write an encryption program so that it
> does not leave sensitive data in the swap file. Compressing data before
> encrypting it does directly relate to the security of the resulting
> messages.

[of course I don't see leaving info in the swap file as an outright
weakness, but that is just me].

>
> Why?
>
> A known-plaintext attack on a symmetric cipher is easier than a
> ciphertext-only attack. And a ciphertext-only attack is only possible
> because even in that case, we know something about the plaintext; we know
> things like frequency characteristics.

Maybe for the first few compressed blocks, but as the compression ratio
gets better it's harder to find correlations or biases.

>
> Thus, for very straightforward information-theoretic reasons, the better
> the quality of our compression method, the less information is known to an
> adversary about the contents of our plaintext messages. And the less the
> adversary has to work with in an attack on our ciphers.
>
> Thus, improvements in the quality of compression that are not normally
> worth the effort simply to save storage space or transmission time can
> still be relevant for cryptographic purposes, where the existence of any
> available information about one's plaintexts is something to be fought
> wherever possible.
>
> As compression approaches perfection, the ciphertext-only attack
> approaches becoming as impossible as cracking a one-time-pad. Of course,
> the difference here is that physical randomizing devices let us actually
> implement a one-time-pad; perfect compression, except in the trivial "Paul
> Revere" case, is essentially impossible.

However I think your type of mentality lends itself to higher
compression ratios, i.e. the plaintext blocks contain more information
and the message is digitally smaller as well.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: the ACM full of Dolts?
Date: Wed, 27 Oct 1999 16:07:13 GMT

In article <7v6ubj$jj2$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
> Quote Start
> -- There are several major technical pieces that are missing from this
> article. Most importantly, no motivation is ever presented for designing
> compression algorithms to be one-to-one. Further, I have an easier
> solution to the "file ending problem" -- use a filesystem that stores the
> bit-length of each file rather than the byte-length. (After all, the
> conventional view that a file's size is some multiple of 8 bits is an
> illusion provided by the filesystem, which actually allocates in larger
> chunks.)
> Quote End

While I must admit your courage and determination are intact, I must
agree with their findings. Some notes on the paper...

1) Normally you start with an abstract discussing the theme of the
paper, and also provide a snippet of what you are trying to
demonstrate/prove/disprove.

2) Normally papers of that nature (or topic) are a tad longer.  You
should include (possibly as an appendix) a description of how Huffman
coding works.  Also you should provide small snippets or examples of
what a one-to-one compressed file resembles, possibly showing the
steps of going from P to Compress(P) back to P.  Also you should include
more diagrams and tables, etc.  Mainly aesthetics.

3) As for content, I have not really read much of it yet [I am on my
lunch now].  But I will read it when I get home from school.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

Crossposted-To: talk.politics.crypto
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: OAP-L3:  How Do You Spell S-H-A-R-E-W-A-R-E
Reply-To: [EMAIL PROTECTED]
Date: Wed, 27 Oct 1999 16:08:29 GMT

In sci.crypt Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:

: Anyone who is a crypto consultant worth his / her salt should keep
: abreast of all developments in the field of crypto.

Alas, in this modern era, this is completely impossible ;-(

Given that there is not even enough time for an individual to keep up
with all the current literature - let alone review the history of the
subject - "crypto consultants" should do their utmost to avoid wasting
their precious time considering idiotic, half-baked proposals by people
who fail to demonstrate any understanding of the subject.

: OAP-L3 is a significant contribution to the field.

Somewhat in the manner of manure, though, it would seem ;-(
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

I pretend to work - they pretend to pay.

------------------------------

From: James Felling <[EMAIL PROTECTED]>
Subject: Re: This compression argument must end now
Date: Wed, 27 Oct 1999 11:45:03 -0500



Anton Stiglic wrote:

> James Felling wrote:
>
> > Anton Stiglic wrote:
> >
> > > Tim Tyler wrote:
> > >
> > > > Steven Alexander <[EMAIL PROTECTED]> wrote:
> > > >
> > > > : Used correctly, certain types of compression(Not .zip!) can increase the
> > > > : difficulty of a ciphertext only attack.
> > > >
> > > > Yes.
> > > >
> > > > : However, it will not make a known plaintext attack anymore difficult.
> > > >
> > > > No.  This is false.  This is the /third/ time someone has raised this
> > > > point - so perhaps it should get into the one-on-one FAQ.
> > >
> > > As I have observed, Mr. Tyler is talking about stream ciphers.  If your
> > > cipher is a stream cipher, then an attacker gets less ciphertext for sure
> > > (and the entropy of the source, which spits out bits, is greater if you
> > > use compression).  If, however, you are using a mathematically modelled
> > > encryption scheme like RSA, where messages are viewed as integers when
> > > encrypted, compressing doesn't give you less ciphertext.
> >
> > Not 100% true.  If you are using a block based algorithm (RSA, DES, IDEA, any
> > non-stream cypher) you have a block size B, a message M of size K1, and
> > comp(M) of size K2.  When M is sent you will have sent ceil(K1/B) blocks of
> > data, and when comp(M) is sent you will have sent ceil(K2/B) blocks.  This will
> > reveal information as to the internal state of the cypher as well.  One may
> > in fact view a block cypher as producing, using internal state and plaintext, a
> > stream of output values of size B.  More blocks gives more analysis data in re:
> > internal structure and state values.
> >
>
> yes, what you say is correct, but read carefully what I wrote, or more
> confusion will arise.  Block ciphers are *not* included in either of the two
> categories I mentioned.  To generalize to what you added, if encryption is
> done on multiple chunks of a message, compression reduces available
> ciphertext and increases entropy; if it's done on a message as a whole (as
> in RSA), it doesn't give you less ciphertext nor does it change the entropy
> (by the viewpoint of the sources that I mentioned).
>
> > > Also, your source is then viewed as spitting out integers (which
> > > correspond to the messages) in whole, so entropy doesn't change if you
> > > compress.
> >
> > All cyphers take plaintext and internal state and output cyphertext, thus all
> > cyphers are vulnerable to analysis that looks for/at internal cypher state.
> >
>
> That is not true.  RSA and ElGamal, for example, take the plaintext as an
> integer number and operate on that, unlike ciphers like DES or Blowfish,
> which operate on the bits of the plaintext, by means of transpositions and
> such.  There are two general models to consider.
>
> Anton

If I use RSA with an n-bit modulus, then my block size is <= n bits.

RSA is a block cypher with a VERY LARGE block -- RSA with a 4097-bit key is
encrypting in 4096-bit blocks, but they are still blocks.  Same with ElGamal,
and many others.

Just because the blocks are bigger doesn't mean that they are not block cyphers.
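The block-count point above is easy to make concrete (a sketch; the message sizes are hypothetical, and a 16-byte block stands in for any block cipher):

```python
from math import ceil

def blocks_sent(message_bytes: int, block_bytes: int) -> int:
    # A cipher with a B-byte block emits ceil(K / B) blocks for a
    # K-byte message (the last block is padded out).
    return ceil(message_bytes / block_bytes)

# Hypothetical sizes: a 1000-byte message that compresses to 400 bytes,
# sent under a 16-byte (128-bit) block cipher.
plain_blocks = blocks_sent(1000, 16)        # 63 blocks
compressed_blocks = blocks_sent(400, 16)    # 25 blocks

# Compression gives the analyst fewer (plaintext, output) samples to
# work with -- though the block count itself still bounds the length.
print(plain_blocks, compressed_blocks)
```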



------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Unbiased One to One Compression
Reply-To: [EMAIL PROTECTED]
Date: Wed, 27 Oct 1999 16:28:00 GMT

Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:

[ruthless snip of preamble]

: OK, above you mentioned "optimal compression programs".  In this context
: optimality is defined with respect to some probability model.  If your
: probability model is flat, i.e., assumes all bitstrings are expected
: equally, then the densest compression of any string is the string
: itself and variants thereof.  I.e., a flat probability model cannot
: be compressed.  No surprise, as a flat probability model implies a source
: of pure entropy; randomness.

...

: My point however, is distinct from the issue of optimality.  For any
: compression routine, even the trivial copy-input-to-output, one can apply
: transforms that do not increase the file size, but do show that there
: is always some flexibility in encoding the compressed representation of
: a bitstring.

: For example, mapping each bitstring onto its inverse is an option.  It
: does not increase the string length of any string [...]
: The trivial inversion routine is just as much a compression routine
: as the trivial copy routine.  Thus, given a flat probability model you
: can choose either as an appropriate and "optimal" compressor.

: For non-flat probability models equivalent encoding transforms exist.
: This is why I disputed your contention that there can only be one
: version of a perfectly compressed file.

I said (to paraphrase from memory) "adding randomness to a compressed file
simply bulks it up".

You appear to be discussing the possibility of choosing randomly between
a number of equally good compression programs.

Since the best (size-wise) compression programs are one-on-one,
in order for your partner to decrypt the message you send, they /must/
know which compressor you are using.

You can't just make a random decision about how to compress, and send the
message, expecting them to decrypt it.  If you send a different message,
they will decode a different plaintext - since with "optimal" compression,
decrypting two files to the same message is not a possibility.

If they know the algorithm in advance, there's no possibility of changing
the algorithm (by, say, inverting all the bits).

The /only/ way you can change the algorithm is if you include additional
information about how you have changed the algorithm with the message.
This fattens the message up.

: To ensure that there is no residual confusion, let me state that any
: given compressor with the property you seek requires that it have a
: unique bidirectional mapping from plain to compressed text and back.
: But there is no unique compressor with that property for any
: probability model.

I *completely* agree with this.

However, I doubt that you can make "random" decisions while compressing
with an optimal compressor - *and* still expect a partner to
decompress it - without fattening up your file.

*Perhaps* if your partner can cryptanalyse your results, or there is
a known pattern to your message texts, or you have another channel on
which you can tell him which "random" choices you have made when
compressing, you can get away with your scheme.

I was not assuming any of this when I made my statement.

:> Random choices "bulk up" compressed files - compared to how small they
:> could be with optimal compression.  Essentially such random choices
:> necessarily add information to the file about the random choice.
:> This information has to go somewhere.  It winds up fattening up the file.

: Not necessarily.  Let's be specific.  Got an example?

I suspect an example would be tedious to give - and not terribly helpful.

If this bit of the thread continues much more you /may/ persuade me to
try to knock one up.

To summarise:

*You're* assuming that you can just compress and decompress - while
remembering how you compressed.

*I'm* assuming if you decompress by an unusual method, you *need*
to convey this additional information about /how/ you compressed
the file to your partner (because the compression is one-on-one)
before they can read your message.

I think that if we can resolve our differing assumptions, our apparent
disagreement will evaporate.
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

No individual raindrop considers itself responsible for the flood.

------------------------------

From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: Browsers Random Number Generator
Date: Wed, 27 Oct 1999 12:09:35 -0400

Vernon Schryver wrote:
> ...
> Don't you think that if the new encryption rules are not more creative
> uses of the truth, the California Mozilla people will add SSL to the main
> Mozilla source?  When I asked him some time ago about 128-bit encryption
> in Mozilla, Brendan Eich said the problem was the Feds.

And it continues to be.

If you look at the details that have been announced so far,
one point clearly stated is that source code continues to
be rigidly controlled.  That includes GPL source code.
It'll still have to be restricted to US only.

        paul

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: This compression argument must end now
Date: Wed, 27 Oct 1999 18:01:43 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>
>: If any one on this Use group is a plant to try to bore people to death
>: it is Tom.
>
>Blimey! a possibility I'd (innocently) not even *considered* - until now.
>
>Of course, it may well be in the interests of any security organisation
>large enough, to spread disinformation about cryptography to all other
>active third parties - with a view to subsequently decyphering any
>systems they later go on to build.
>
>I *had* noticed that I appear to be assailed from all sides in this forum
>by people who appear to be under fairly severe misconceptions about basic
>things - but it had not even occurred to me that this forum might contain
>individuals whose sole purpose in life was to waste other people's time -
>and actively try to kill off any intelligent discussion of relevant issues!
>
>;-)

  I think you realize that one to one, or one on one, or whatever we call
it, is a very important security consideration when it comes
to using compression before encryption. You also realize that the
job of the NSA is to keep people misinformed so that no real useful
encryption will be done by the masses. I am not sure if I can point
to anyone directly doing the bidding of the NSA on this thread, since
Mr BS, who likes to brag about his NSA contacts, seems to ignore
this thread, and others attacking it either lack knowledge
and are unwittingly helping the NSA, or maybe one or two are plants, but
I am sure they would try to keep a low profile.
  I have had friends in college who were working toward their PhDs. (I did
not; I could not kiss ass well enough, and my English left something to be
desired, so I never went that far.) But it was common practice for them to
write papers so their professors could take all the glory. I do hope someone
like Mr R of RSA, who has looked at my methods, writes some paper
so that the community at large can at least start to consider this kind
of thing.
  I know most of you out there would never hire a programmer like me.
I would most likely have been fired years ago, but the Navy at one time
had use for people who could fix programming bugs, and getting something
to fly with software was at one time a high priority; but in today's world
Bull Shit is more important than actually producing code that works.

P.S. See http://members.xoom.com/ecil/compress.htm for better Huffman
compression as a first pass before encryption.

 


David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: "Belldandy" <[EMAIL PROTECTED]>
Subject: Re: the ACM full of Dolts?
Date: Wed, 27 Oct 1999 12:42:40 -0500

....huh?

OK, I'm new to this..... theory of yours, so please enlighten me. What is the
advantage of using one-to-one?





------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unbiased One to One Compression
Date: Wed, 27 Oct 1999 18:26:16 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>[EMAIL PROTECTED] wrote:
>: Tim Tyler ([EMAIL PROTECTED]) wrote:
>
>[snip stuff about "bias in the last byte", which David has replied to]
>
>: Of course, the objection is trivial, but what I gained by going
>: "one-to-one" was also trivial, since it, too, only concerned the *last
>: symbol in the message*
>
>I don't understand.  One-on-one compression - as a general notion -
>can eliminate information added by the compressor throughout the *entire*
>message.  The quantity of information can be *much* larger than one byte.
>
>Are you saying that in the currently available program, the original
>Huffman compression it is based on is already "nearly" one-on-one - so
>David's surgery removed a bias that was at most 8 bits in size?
>
>If correct, this would suggest that about 1 file in 256 has
>Comp(Decomp(X)) = X, in the original compression program.
     The original compression program I based it on had 2 major
weaknesses that stood in the way of it being one to one.
The first major weakness was due to the fact that it started with
an empty tree, and to the weak method it used to add in new symbols.
When decompressing, every time a new symbol was added,
it would use the 8 bits of the new symbol directly from the file.
 If one had used a wrong key guess (to show how information was
added by the compression) and one tried to decompress the file, the
first 8 bits would be the first symbol added to the tree. If a single zero bit
follows, then it would take the next 8 bits as a new symbol to add to
the tree. This fails on a random file 1/256 of the time when the second new
symbol is added, since during compression it adds a new leaf only when a
newly used symbol is present in the input file. As more and more
symbols are added to the tree, the chance of duplicate symbols rapidly
grows when decompressing a random file. Once 128 leaves are present,
every new symbol added from the file generated with the wrong key has a
better than 50/50 chance of already being in the tree, thus telling
one that the key used was wrong. I have tested my method on thousands
of long files. The question for someone who is bright, since my mathematics
is not trusted, is how long a file is needed to be 50% sure that the wrong
key was used. Note that if the file is long, the method in the original code
would evidently find 257 symbols, which is kind of tough since there
can be at most 256 symbols in 8 bits. You can assume a 128-bit input
key or a 256-bit key.
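The 50% question above has a clean answer under one simplifying assumption (mine, not a property of the actual program): treat each 8-bit "new symbol" read from a wrong-key file as an independent uniform byte, so a repeated symbol is the classic birthday collision among 256 values:

```python
def prob_all_distinct(k: int, n: int = 256) -> float:
    # Probability that k independent uniform draws from n symbols
    # are all distinct (the birthday computation).
    p = 1.0
    for i in range(k):
        p *= (n - i) / n
    return p

# Smallest number of "new" leaves at which a wrong key has a better
# than 50% chance of betraying itself with a duplicate symbol:
k = 1
while prob_all_distinct(k) > 0.5:
    k += 1
print(k)  # 20 under this model -- only a few dozen bytes of file
```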
 Now, assholes, tell me I am wrong. And what I said above was without
any file-ending considerations. Or are you people going to just listen to
assholes who think compression has little effect on security?






David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unbiased One to One Compression
Date: Wed, 27 Oct 1999 18:59:57 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
(John Savard) wrote:
>Tim Tyler <[EMAIL PROTECTED]> wrote, in part:
>
>>This - to my mind - has nothing to do with one-on-one compression per-se,
>>but rather presents a criticism of David's implementation.
>
>Yes, of David's specific implementation as I understand it, and of my
>own example implementation.
>
>One can remove the bias in the last byte in a contrived fashion - but
>that may well be good enough, since if the "one-on-one" property is
>retained, brute force search is not assisted, at least by much.
    John has refused to comment on how the bias is there. As
I have told him at least 3 other times: replace the last byte of
the compressed file with each of the 256 cases. You can then
uncompress each one of those files to a unique file. John has
yet to comment on this fact.
  I think his problem relates to his confusion as to how to make a compressed
stream that does not fit in nice 8-bit boundaries map to 8-bit boundaries in
a way that allows the one-to-one property of the compression to exist.
This mapping is a function of more than just converting a bit stream to
a multiple of 8 bits. It is a function of how the stream itself was created,
and if one is not careful one could easily get an adaptive Huffman compression
that is not one to one due to bias added by the EOF problem; even my
compression would not be one to one if I had handled it wrong.
I hope John can put our hatred for each other aside and take a real look.
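The 256-last-byte test generalizes to a simple harness (a sketch: `comp`/`decomp` are placeholders, and the exhaustive loop over short byte strings is purely illustrative; a trivial copy "compressor" and a deliberately biased one stand in for real routines):

```python
import itertools

def is_one_to_one(comp, decomp, max_len: int = 2) -> bool:
    # One-to-one on byte strings means every candidate compressed file x
    # decompresses to something that compresses back to exactly x.
    for n in range(max_len + 1):
        for tup in itertools.product(range(256), repeat=n):
            x = bytes(tup)
            if comp(decomp(x)) != x:
                return False
    return True

# The trivial copy routine is one-to-one:
identity = lambda b: b
print(is_one_to_one(identity, identity))   # True

# A scheme that appends a trailing length byte is not: no compressed
# file can be empty, so Comp(Decomp(x)) != x already for x = b"".
pad = lambda b: b + bytes([len(b) % 256])
unpad = lambda b: b[:-1]
print(is_one_to_one(pad, unpad))           # False
```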

>
>>David has said it's not an issue - and I presume he's correct.
>
>Chacun a son gout.
    Gee, could you at least curse in English or something close to it.



David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: the ACM full of Dolts?
Date: Wed, 27 Oct 1999 19:06:07 GMT

In article <7v7d76$ekl$[EMAIL PROTECTED]>, "Belldandy" <[EMAIL PROTECTED]> 
wrote:
>.....huh?
>
>OK, I'm new to this..... theory of yours, so please enlight me. What is the
>advantage of using one to one?
>
>

  The main advantage is that if you're going to use compression at all, the
one-to-one property will not add information, as other compression routines
do, to weaken your encryption.
  Just follow the threads on ".. compression .." in this newsgroup. Most
people find my style a little too dark and foreign, so it might help to read
the posts of Tim Tyler; of the frequent writers on the group he seems to
have the best grasp of this subject, and I assume he spells better than
me.







David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Unbiased One to One Compression
Date: Wed, 27 Oct 1999 20:17:17 +0200

[EMAIL PROTECTED] wrote:
> 
> Somewhat recently, David A. Scott proposed that for purposes of
> encryption, a method of compression should be used that is "one to one";
> that is, it has the property that any random string of bits having the
> same length as one's message (such as that obtained by deciphering the
> message with the wrong key) should decompress to something without error.

A tiny remark: The one-to-one argument assumes that the analyst
knows and hence uses the same compression scheme as the sender.
How about weakening that assumption a bit? If one uses a dynamic
Huffman scheme, one can start out with an arbitrary initial frequency
distribution. If this is hidden from the analyst, he doesn't know
how to decompress/compress properly from the very beginning, let 
alone examine the one-to-one property. Of course, this means 
one is effectively using more key bits. On the other hand, it does 
mean that one can now explicitly prefix the ciphertext with a 
length field (in plaintext) giving the exact number of bits of the 
ciphertext, which could be useful for ensuring correct transmission.
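One way to realize the hidden-initial-distribution idea is to expand the key into the starting frequency table (a sketch; the SHA-256 expansion and the function name are my illustrative assumptions, not part of Shen's proposal):

```python
import hashlib

def keyed_initial_frequencies(key: bytes) -> list[int]:
    # Expand the key into 256 small nonzero starting counts.  Sender and
    # receiver derive the same secret initial tree; an analyst without
    # the key cannot even begin to decompress correctly.
    stream = b""
    counter = 0
    while len(stream) < 256:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return [b + 1 for b in stream[:256]]  # +1 keeps every count positive

freqs = keyed_initial_frequencies(b"shared secret key")
assert len(freqs) == 256 and min(freqs) >= 1
```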

M. K. Shen

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
