Cryptography-Digest Digest #471, Volume #10      Sat, 30 Oct 99 10:13:10 EDT

Contents:
  Re: Symetric cipher ("Adam Durana")
  Re: VXD Memory Allocator for Win9x ("Rick Braddam")
  Re: Bruce Schneier's Crypto Comments on Slashdot (SCOTT19U.ZIP_GUY)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Re: Unbiased One to One Compression (SCOTT19U.ZIP_GUY)
  Re: Compression: A ? for David Scott (Tom)
  Re: Compression: A ? for David Scott (SCOTT19U.ZIP_GUY)

----------------------------------------------------------------------------

From: "Adam Durana" <[EMAIL PROTECTED]>
Subject: Re: Symetric cipher
Date: Sat, 30 Oct 1999 01:21:39 -0400

> Funny he responds to my messages [albeit not quickly, but he does run a
> business].  Maybe you should take the hint?
>
> At any rate, when did the NSA become anything more than a bunch of smart
> people?  And why can't smart people exist outside of the NSA?

They are smart people.  Smart people who spy on American citizens to
"protect" the nation.  And smart people do exist outside the NSA; no one
said they did not.  Also, you don't seem to like David Scott much, so why
don't you follow Mr. Schneier's example and not reply?

Back to the original purpose of this thread...  'Handbook of Applied
Cryptography' is another great book, and the section on symmetric ciphers
is online, free to download, at http://www.cacr.math.uwaterloo.ca/hac/.
In fact all the chapters seem to be online now.  If you are truly
interested in cryptography, it's worth the money.

-- Adam Durana



------------------------------

From: "Rick Braddam" <[EMAIL PROTECTED]>
Subject: Re: VXD Memory Allocator for Win9x
Date: Sat, 30 Oct 1999 02:33:34 -0500

Paul Koning <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> My reason for pointing you at that link is that it compares a number
> of different malloc() implementations -- which differ in their
> allocation strategies -- to see how they perform.  I'd suggest
> you pick one that looks good and track down its source code.
> It's been a while since I read that article, but I remember that
> it had a clear winner (the emacs one?).  Using a malloc that
> came with an OS is probably not the best approach.  Using one
> that came from MS is quite likely to be a bad idea...
>
> paul

I went back and checked out the web page again, because I didn't remember
what the ranking was. It appears that about four of the implementations
did well in some tests, including the Emacs malloc().

I may not know how to read the tables correctly, but it looks to me like
the Linux/DL malloc() did well on all tests, and substantially better on
several tests. That would be Doug Lea's malloc() (dlmalloc()). I checked
the CD I archived downloads on, and the one I have (besides the MS one
which came with VC5.0) is Doug Lea's. I think I need to print out his
sources and study them some more. I also downloaded ptmalloc(), which is
a multithread-specific version of Lea's malloc(). I need to do some
studying there, too.

One problem I have is unfamiliarity with mmap() and sbrk(). I don't
recall ever seeing them before, and I know I haven't used them. Are they
specific to *nix OSes? If so, what is their purpose, what arguments do
they take, and what do they return? I'll eventually figure that out by
studying the sources, but learning how the algorithm works will be much
quicker if I know ahead of time.
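
From a first skim of the dlmalloc sources, my tentative reading is that
they are the Unix calls an allocator gets raw memory from: sbrk() moves
the end of the process data segment, and mmap() maps fresh anonymous
pages. A minimal sketch of the pattern as I understand it (paraphrased
from the man pages; the threshold constant is made up, and this is not
dlmalloc's actual code):

#include <stdint.h>     /* intptr_t */
#include <stdio.h>
#include <unistd.h>     /* sbrk */
#include <sys/mman.h>   /* mmap */

#define MMAP_THRESHOLD (128 * 1024)   /* hypothetical cutoff */

/* Get n bytes of raw memory from the OS. */
static void *get_core(size_t n)
{
    if (n >= MMAP_THRESHOLD) {
        /* Large request: ask the kernel for its own private pages. */
        void *p = mmap(NULL, n, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }
    /* Small request: grow the heap; sbrk returns the old break. */
    void *p = sbrk((intptr_t)n);
    return p == (void *)-1 ? NULL : p;
}

int main(void)
{
    char *p = get_core(100);
    if (p != NULL) {
        p[0] = 'x';               /* the memory is usable */
        puts("got 100 bytes of core");
    }
    return 0;
}

If that reading is right, they are indeed *nix-specific, and a Win9x VXD
would presumably substitute the VMM's page-allocation services instead.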

Thanks for motivating me to take a second look at dlmalloc(). It doesn't
waste a lot of space and is very fast -- both important considerations.

Rick






------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Bruce Schneier's Crypto Comments on Slashdot
Date: Sat, 30 Oct 1999 12:20:08 GMT

In article <[EMAIL PROTECTED]>, Matt Curtin 
<[EMAIL PROTECTED]> wrote:
>>>>>> On Fri, 29 Oct 1999 20:17:30 GMT,
>    [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) said:
>
>MrWizard> Don't forget Skipjack is something the Clinton Administration
>MrWizard> wanted everyone to use, as it was to be hidden in the Clipper
>MrWizard> Chip.  Does he really think they meant it to be secure?
>
>Pray tell us: what would be the point of something with a known flaw
>that could be exploited by third parties, and would make widespread
>adoption of the product fail, if The Man already had the ability to get
>at the data using escrow keys?
>

   First of all, it was designed in violation of what most consider safe
practice: the actual method was to be kept secret, so part of the
security rested on hoping no one knew the algorithm. Second, the method
was never intended for highly classified government data, so one would
never expect it to be super secure. And third, there was the LEAF (the
Law Enforcement Access Field), which was supposed to give the government
access through key escrow, yet there were already methods in the open
literature for getting around it. The NSA designed it so that it could be
easily broken by them once people took the chip apart on their own and
learned the method, or tried the published techniques for getting around
the LEAF. So in reality it had to be a weak method. But as weak as it is,
I don't think people in the open literature could actually break a
message very easily in a timely manner even today. It would be a safe bet
that the NSA could.




David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Sat, 30 Oct 1999 12:23:51 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>SCOTT19U.ZIP_GUY wrote:
>> 
>> In article <[EMAIL PROTECTED]>, Mok-Kong Shen
> <[EMAIL PROTECTED]> wrote:
>> >Tim Tyler wrote:
>> >>
>> >> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>> >>
>> >> : If I understand correctly, you are now on compression not operating
>> >> : on groups of 8 bits but on group of bytes.
>> >>
>> >> This is a strange way of putting it.  There is no possibility of a
>> >> non-byte aligned end-of-file, if you work with byte-aligned symbol
>> >> strings.
>> >>
>> >> Of course my method is not *confined* to the use of byte-aligned
>> >> symbol strings - but your strings *must* differ from their replacements
>> >> by a multiple of 8 bits or you will introduce the type of file-ending
>> >> problems that only David Scott knows how to solve ;-)
>> >
>> >Let me explain what I meant. You have certain symbols; let's
>> >for convenience identify these with A, B, C, etc. (they could in fact
>> >be anything you define through grouping any number of bits). Now your
>> >dictionary does translations. An example could be that 'ABCD' is
>> >translated to 'HG'. This is the compression direction. On decompression
>> >it goes backward to translate 'HG' to 'ABCD'. Am I o.k. till here?
>> >Now I denote the side with 'ABCD' above side1 and the side with 'HG'
>> >side2. So on compression one matches the source string with entries
>> >of side1 and replaces the found entries with the corresponding
>> >entries of side2. On decompression one reverses that. Now side1,
>> >if correctly constructed, should be able to process any given source
>> >input and translate that to the target output. (I like to note
>> >however that it needs some care for ensuring this property, if your
>> >symbols are arbitrarily defined.) Let's say that in a concrete case
>> >one has XYZ....MPQABCD compressed to UV.....ERHG. Suppose now I
>> >change the compressed string to UV.....ERHN (HG is changed to HN).
>
>>    I will not commit too much, since I feel you are addressing Tim's
>> stuff. But I think you are wrong. The HG does not change, as you say, to
>> HN; rather, either is a perfectly valid string that would decompress
>> properly, if that is what you are asking.
>
>I was making an example of a 'wrong file' such as one that is obtained
>with decryption using a wrong key. For simplification, I assume
>it is wrong only in one symbol. (This justifies the 'change'.)
>If side2 of the dictionary does not happen to have the entry 'HN',
>then one has the same problem that one discussed in the context
>of your one-to-one problem concerning the ending bits of a wrong
>file.
>

   Mok, take your example where the entry is changed so that it does not
appear in the dictionary. Then, on decompression using your method, that
part of the string would simply remain unchanged, so there is no problem.
It is still 1-1.
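
   Here is a toy sketch of what I mean (my own illustration for this post,
not code from scott19u or anyone's real compressor): strings with no
dictionary entry pass through untouched, so every file, including a
"wrong" one like the HN case, still decompresses to something.

#include <stdio.h>
#include <string.h>

/* The one-entry dictionary from Mok-Kong Shen's example. */
static const char *side1[] = { "ABCD" };   /* uncompressed side */
static const char *side2[] = { "HG" };     /* compressed side   */
#define NENTRIES 1

/* Decompression direction: replace side2 entries by side1 entries.
   Compression is the same loop with the two sides swapped. */
static void decompress(const char *in, char *out)
{
    while (*in != '\0') {
        int matched = 0;
        for (int i = 0; i < NENTRIES; i++) {
            size_t len = strlen(side2[i]);
            if (strncmp(in, side2[i], len) == 0) {
                strcpy(out, side1[i]);
                out += strlen(side1[i]);
                in += len;
                matched = 1;
                break;
            }
        }
        if (!matched)
            *out++ = *in++;   /* no entry: symbol passes through unchanged */
    }
    *out = '\0';
}

int main(void)
{
    char buf[64];
    decompress("UVERHG", buf);
    printf("%s\n", buf);   /* prints UVERABCD */
    decompress("UVERHN", buf);
    printf("%s\n", buf);   /* HN not in the dictionary: prints UVERHN */
    return 0;
}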





David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unbiased One to One Compression
Date: Sat, 30 Oct 1999 12:34:38 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Jerry 
Coffin) wrote:
>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>
>[ ... ] 
>
>> > I'd guess one will just have to reserve a spot in the huffman tree as an
>> > EOF then.
>> 
>> Or put the file size in bits in the message.  One does not lose
>> thereby, as one usually assumes the opponent knows the size of the
>> message.
>
>Yes and no.  The opponent can see the size of the message that's 
>transmitted.  S/he cannot see how many of the bits at the end may 
>contain garbage.  If you encrypt the size (in bits, for example) along 
>with the rest of the compressed message, you're losing part of the 
>benefit of using compression at all: if the opponent decrypts the 
>length field and it's not a reasonable match for the amount of 
>encrypted text received, he knows his attempted decryption was wrong.  
>The point of using a bijective function for the compression in the 
>first place is to avoid giving away information like this.
>
>I think a better method would be to encode the number of unused bits 
>in the last byte into a field that's only large enough to hold 
>legitimate values (e.g. 3 bits for 8-bit bytes).  In this case, 
>there's no way for the field to contain a value that isn't reasonable, 
>so the opponent can't look at it and make a decision about whether the 
>attempted decryption is right or wrong based on the field's contents.
>

   This last approach is a common practice for ending adaptive Huffman
compression. But in reality it is almost as bad as leaving the bit count
in the file. For example, suppose on decompressing you start a long token
in the second-to-last byte, and even by the time you have used all the
valid bits in the last byte you are still not at a leaf of the tree.
Assuming you are not invoking the random-bits god, what is one to
conclude but that you used the wrong key?
   You may not like it, but I think my method is the ideal way to end an
adaptive Huffman compression, since you never run into an ending that
fails to decompress correctly.
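
   To make it concrete, here is a sketch of the 3-bit count field as I
read Jerry's description (a hypothetical layout of my own, not code from
any real compressor). Every value of the field is a legal one, which is
his point; but nothing stops the payload bits themselves from stranding
you mid-token in the tree, which is mine.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack nbits of payload after a 3-bit pad-count field; pad with zeros. */
static size_t pack(const uint8_t *bits, size_t nbits, uint8_t *out)
{
    size_t total = 3 + nbits;
    size_t nbytes = (total + 7) / 8;
    unsigned pad = (unsigned)(nbytes * 8 - total);   /* 0..7: fits in 3 bits */
    memset(out, 0, nbytes);
    for (unsigned i = 0; i < 3; i++)                 /* the count field */
        if (pad & (1u << (2 - i)))
            out[0] |= (uint8_t)(0x80u >> i);
    for (size_t i = 0; i < nbits; i++)               /* the payload bits */
        if (bits[i])
            out[(i + 3) / 8] |= (uint8_t)(0x80u >> ((i + 3) % 8));
    return nbytes;
}

/* Recover the payload length; any 3-bit value looks "reasonable". */
static size_t unpack_nbits(const uint8_t *in, size_t nbytes)
{
    unsigned pad = (unsigned)((in[0] >> 5) & 7u);
    return nbytes * 8 - 3 - pad;
}

int main(void)
{
    uint8_t bits[] = { 1, 0, 1, 1, 0, 1 };   /* six payload bits */
    uint8_t buf[4];
    size_t nbytes = pack(bits, 6, buf);
    printf("%zu bytes, %zu payload bits recovered\n",
           nbytes, unpack_nbits(buf, nbytes));
    return 0;
}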




David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (Tom)
Subject: Re: Compression: A ? for David Scott
Date: Sat, 30 Oct 1999 14:56:11 GMT
Reply-To: [EMAIL PROTECTED]

On Fri, 29 Oct 1999 16:59:24 GMT, [EMAIL PROTECTED]
(SCOTT19U.ZIP_GUY) wrote:

>In article <7vc81f$128$[EMAIL PROTECTED]>, "Tim Wood" <[EMAIL PROTECTED]> 
>wrote:
>
>>True. But it does depend on your definition of bad (low
>>compression/predictable structure).
>    My definition of bad is a compressor that adds information
>to a file as it compresses.

I used good/bad to quantify a compressor in the traditional sense, that
is, the better the compressor, the higher the compression ratio.  As this
reduces the amount of space a given amount of information occupies, it
reduces the occurrence of patterns: an overall reduction of redundancy
per byte.

Measuring by compression ratio is objective.  Your definition of adding
information sounds subjective.  If we try to compress a file already
compressed by pkzip (a "bad" compressor by your definition), we'll find
it doesn't compress much, if at all.  This could be taken as an objective
measurement of a lack of information added, overall.  A few bytes in the
header?  Sure.  Patterns throughout the file?  Doesn't seem likely.
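
It's easy enough to check that sort of claim directly.  For example, with
zlib (assuming zlib.h is available; error handling omitted):

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char src[4096], once[8192], twice[8192];
    memset(src, 'A', sizeof src);          /* highly compressible input */
    uLongf n1 = sizeof once, n2 = sizeof twice;

    compress(once, &n1, src, sizeof src);  /* first pass shrinks a lot */
    compress(twice, &n2, once, n1);        /* second pass gains little, if any */

    printf("original %u, compressed once %lu, twice %lu\n",
           (unsigned)sizeof src, (unsigned long)n1, (unsigned long)n2);
    return 0;
}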

If the concern is that known data at the beginning of the plaintext
could ease analysis, then why not deal with the header, rather than
trying to reinvent compression?  If the compression isn't as efficient
in a traditional sense, and it doesn't add random data, then it's
either going to add or leave patterns throughout the file that, IMHO,
could also be used to aid in analysis.

>>
>>>If the problem is only that a
>>>good compression program (i.e. pkzip) adds known information to the
>>>header (as exists in many file formats), why wouldn't moving the header
>>>to the end of the plaintext, and running in a feedback mode
>>>effectively eliminate or greatly reduce the problem?
>>
>>Running in a feedback mode would reduce the problem - however it would
>>be better to eliminate the problem (i.e. find a cure not treat the
>>symptoms) by removing all added structure from the post-compression
>>plaintext.
>    And one way to do that is to use a one to one compression scheme.

But are we only talking about the structure of the header?  From what
I can tell from looking at a couple of zips, there is a header of maybe
80 or so bytes that has a distinct pattern, and a trailer of about 60
bytes that also has a pattern (looks like the name and a checksum), but
the body of the data looks pretty patternless.  How could it not be?
Otherwise it would still be compressible.

By "one to one", I assume you're speaking of is a compression
algorithm where any data will decompress to something, not a
compression scheme with a one to one compression ratio, or a symmetric
compression algorithm.  I haven't studied it, but it would seem to me
that many, if not most, compression/decompression functions will
decompress any data to "something", other than the header and footer
of the file.  

>>
>>>Additionally,
>>>why wouldn't running a bad compression program be potentially as bad
>>>as no compression, as it could lead to predictable patterns within the
>>>body of the plaintext?
>>
>>Running a compressor that adds significant predictable
>>structure/information to your plaintext is probably as bad or worse
>>than no compression (depending on the plaintext you are compressing).
>>And certainly not more secure than no compression.
>    It is definitely worse.
>>
>>>What if the "compression" program is actually designed to allow/insert
>>>patterns specifically to provide someone with a known plaintext
>>>attack?
>>
>>This should be avoided.  Although it is certainly possible. All third
>>party programs suffer from similar possible problems; it comes down to
>>who you trust... micro$oft for instance? ;-)
>>
>>>If the compression program doesn't actually compress,
>>
>>The compression program should compress (only random data should be
>>uncompressible).
>   Because of the counting theorem, all compressors make only a subset
>of files smaller. In fact most files would increase in length if you
>looked at the class of all real binary files.
>>
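
(The counting part, at least, is easy to sanity-check on a small case; a
quick sketch of the arithmetic:)

#include <stdio.h>

int main(void)
{
    unsigned n = 8;                          /* consider all 8-bit files */
    unsigned long files = 1ul << n;          /* 2^8 = 256 inputs */
    unsigned long shorter = (1ul << n) - 1;  /* 2^0 + ... + 2^7 = 255 outputs */
    printf("%lu inputs, %lu shorter outputs: at least %lu cannot shrink\n",
           files, shorter, files - shorter);
    return 0;
}
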
Ok, even if we accept that, I'd still contend that the subset of files
with the most general application for encryption are those that have
large, well defined, known patterns and are highly compressible, namely
databases and documents.  Multimedia files (movie, sound, photo), which
are often in an already-compressed format, often have a very well
defined header.


>>>wouldn't it make
>>>a whole lot more sense to use an unrelated but standard cipher
>>>(different key, of course) first?
>>
>>That is a whole different issue.... encrypted data should be mostly
>>uncompressible.

"First" meaning before encryption, not before compression.  If you're
using compression to obfuscate the plaintext, why not use the proper
tool instead: encryption.

>>
>>>It just seems like trying to use a screwdriver as a hammer, and not
>>>doing either very well.
>>
>>This is possibly true, it would be wrong to rely on compression to
>>secure your cipher, but there is nothing wrong with adding extra
>>security. Compression complements encryption.
>    Compression can complement security if used correctly; however, in
>practice this seems to seldom be done, since people have not taken the
>time to really look at the security issues involved. Except for groups
>like the NSA, which I am sure, through its contacts with the public and
>through the chosen few like Mr. BS (who talks about his great contacts
>with the NSA), would not like the masses to learn about bad compression
>used with encryption; otherwise his book would have had a decent
>discussion of the topic, which it does not. Makes one wonder what else is
>missing or worded to misinform the person trying to really learn about
>crypto. I am greatly disappointed that Wagner has chosen to keep the
>deception and misinformation about compression and encryption alive.
>
This stuff isn't magic.  Compression is a way to make the plaintext
smaller, and greatly reduce patterns.  Reducing the effects of
housekeeping patterns generated by the compression is a great topic,
but attempting to use compression to eliminate all patterns completely
sounds like encryption, not compression, to me.



------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compression: A ? for David Scott
Date: Sat, 30 Oct 1999 13:11:49 GMT

In article <7vdn0t$qa2$[EMAIL PROTECTED]>, Clinton Begin <[EMAIL PROTECTED]> wrote:
>
>> *All* compressed files are valid using David's scheme.  I presume you
>> are referring to files you have generated without using the
>> compressor?
>
>Yes, and also information that has been encrypted and then decrypted
>with the wrong key (as in the cryptanalysis procedure I described).
>For example:
>
>Where M = compressed message
>k = proper key
>r = random key attempt
>E = encrypt
>D = decrypt
>C = cipher text
>
>C = E[k](M)
>M1= D[r](C)
>
>Here when M1 is decompressed, it will not change in size (as with
>David's compression examples) or the decryption will fail (as with many
>other compression schemes).
    Actually, when I do this I get a large change in the file size. What
are you using for your encryption and decryption?
>
>> Not at all obvious.  Say the compression is treating a JPEG, or a zip
  When I compress a zip file it generally gets longer.
>> file. It won't be able to compress this further very well - and
>> "compression" will result in a file that differs from the original
>> by "one or two percent".
>
>Thank you, I believe I identified that as a possibility.  I also
>described how this could easily be added as a step in the cryptanalysis
>process I provided in my second example.  In these cases, there would
>likely be only a few compression formats the hacker would have to check
>for (jpg, mpg, zip, mp3 etc.).  Most of these formats can be easily
>identified in the first few bytes of the file.
   That is why I recommend, if one truly wants better security and is
stuck using a short-keyed method like any of the AES candidates, that one
compress the file in both directions. This way the attacker is forced to
completely decrypt the file and completely undo one uncompress before
there are any bytes available to test anything.
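
   A toy illustration of the idea (my own sketch for this post, with a
trivial byte chain standing in for a real adaptive compressor): after a
forward pass and a pass over the reversed buffer, every output byte
depends on every input byte, so nothing is testable until both passes
are undone.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy "adaptive" pass: each output byte depends on all earlier bytes. */
static void chain(uint8_t *buf, size_t n)
{
    uint8_t state = 0;
    for (size_t i = 0; i < n; i++) {
        buf[i] = (uint8_t)(buf[i] + state);
        state = buf[i];
    }
}

static void unchain(uint8_t *buf, size_t n)   /* exact inverse of chain() */
{
    uint8_t state = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t cur = buf[i];
        buf[i] = (uint8_t)(buf[i] - state);
        state = cur;
    }
}

static void reverse(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n / 2; i++) {
        uint8_t t = buf[i]; buf[i] = buf[n - 1 - i]; buf[n - 1 - i] = t;
    }
}

int main(void)
{
    uint8_t msg[] = "attack at dawn";
    size_t n = strlen((char *)msg);

    chain(msg, n);                /* pass in the forward direction  */
    reverse(msg, n);
    chain(msg, n);                /* pass in the other direction    */

    unchain(msg, n);              /* attacker must undo everything  */
    reverse(msg, n);
    unchain(msg, n);
    printf("%s\n", (char *)msg);  /* prints: attack at dawn         */
    return 0;
}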

  I think you are wrong if you think I am saying this is perfect. What I
am saying is that my type of compression adds no information that would
help an attacker, and that if one compresses, one should use a compressor
that adds no information.
  You should be comparing against what happens if the file is not
compressed. Does my method make it easier to break than if compression
were not used at all? I think not. But what makes the others bad is that
information can be given to the attacker even if one knows nothing of the
characteristics of the file being compressed, and this is a truly
dangerous situation.



David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
