Cryptography-Digest Digest #550, Volume #10      Fri, 12 Nov 99 01:13:04 EST

Contents:
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Re: Compression: A ? for David Scott (SCOTT19U.ZIP_GUY)
  Re: Signals From Intelligent Space Aliens?  Forget About It. (SCOTT19U.ZIP_GUY)
  Re: Build your own one-on-one compressor (Don Taylor)
  McAfee Fortress ("Kris Hendricks")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Fri, 12 Nov 1999 04:19:14 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>: Tim Tyler wrote:
>:> In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>:> : As I said previously, for such numerical coding the compression is
>:> : already so good that one need not (at least in the first
>:> : experimental phase) consider the aspect of word frequencies.
>:> 
>:> I doubt this.  I expect non-dictionary words will typically bulk up the
>:> messages by a larger factor than they are compressed by, for (say) email
>:> messages.
>:> 
>:> It may be possible to develop a scheme that (roughly) breaks even on the
>:> compression stakes - but I doubt good compression ratios will ever be
>:> obtained - except on obscure or contrived types of text.
>
>: Just try to roughly compute the compression ratio of your paragraph 
>: above (noting that each word is translated to 16 bits) and
>: you'll see that you get something that is probably better than
>: what you expect from a normal compression of ASCII text.
>
>If you design your dictionary for my message I don't doubt you can perform
>excellent compression.
>
>Will the 65536 words in your dictionary contain all the words I used?
>
>I think the scheme you are proposing can compress English texts.  I doubt it
>will be as good as methods allowing variable-length symbols, and schemes like
>arithmetic coding which allow symbols which are not an integral number of bits
>in length.
>
>It's not a walk-over, though.  Unless you choose your dictionary carefully,
>more bulking-up than slimming down will occur - as all rogue non-dictionary
>symbols get expanded from one to two bytes.
>
>:> Also, any 16-bit granularity in the output file will immediately render
>:> the "8-bit" one-on-one property invalid: if you have a file which is an
>:> odd number of bytes long, you can rule it out immediately as a candidate
>:> compressed file ;-/
>:> 
>:> In fact, this will /probably/ have few implications for security, given
>:> various assumptions - e.g. that the length of the compressed file is
>:> already clear.
>
>: Since each word is translated to 2 bytes, every compressed file
>: has an even number of bytes. Now, suppose one gets a 'wrong' file
>: which comes into being because the analyst uses an incorrect key
>: to decrypt. This file certainly also has an even number of bytes
>: (assuming normal block algorithms). Since the dictionary has
>: 2^16 entries, any 2 bytes, whatever their content, have a corresponding
>: word (the dictionary is assumed to be full, i.e. exhausting the
>: coding space of 2^16). [...]
>
>I agree - *if* ordinary block-encryption is employed.
>
>However, there /are/ techniques which attempt to disguise the file length,
>through the use of random padding.
>
>**If** these are used, the analyst may well be able to usefully discard
>supposedly compressed files if they are an odd number of bytes long.
>
>: Thus the 1-1 property (definition of Scott) is trivially present, since
>: one can translate the words back again to the same numbers.
>
>*Which* definition of "Scott"?
>
>"Scott" commonly mentions that ``Comp(Decomp(X)) = X for all X''
>is what one-on-one compression involves.
>
>The scheme under discussion fails this - if X is an odd number of bytes long.
>
>I don't want to stress this too much - for most applications, it's probably
>a relatively minor issue.

  Actually, you can still make a one-to-one compressor if you redefine a byte
to be 16 bits.  That way, any file of your English text, when tokenized, would
be a multiple of 16 bits. The Huffman compression could be based on a larger
tree than I am presently using, but the resulting file would still be one to
one. Even if a wrong key is used for the decryption, the file would
decompress back to a file that is a multiple of 16 bits, which would then
map to the English (or whatever) words. Note that this has nothing to do with
whether the English file is an odd or even number of bytes.
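
To make the round trip concrete, here is a minimal sketch (Python, purely
illustrative, assuming a hypothetical full 2^16-entry dictionary in which
code i maps to the made-up word "w<i>" followed by a space) of the
Comp(Decomp(X)) = X property over 16-bit tokens described above:

    import struct

    # Hypothetical dictionary: code i <-> "w<i> ".  A real table would hold
    # 2^16 English words, but any full table gives the same round trip.
    def decomp(data):
        # Expand every 16-bit code to its dictionary word plus a space.
        assert len(data) % 2 == 0, "compressed files are whole 16-bit tokens"
        codes = struct.unpack(">%dH" % (len(data) // 2), data)
        return "".join("w%d " % c for c in codes)

    def comp(text):
        # Re-tokenize the words back to their 16-bit codes.
        codes = [int(w[1:]) for w in text.split()]
        return struct.pack(">%dH" % len(codes), *codes)

    # Round trip over an arbitrary even-length byte string, e.g. the output
    # of decrypting with a wrong key:
    x = bytes([0x12, 0x34, 0xAB, 0xCD])
    assert comp(decomp(x)) == x

Because the dictionary exhausts the 16-bit code space, every even-length
byte string decompresses to some word sequence and recompresses to exactly
the same bytes, which is the point being made above.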




David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Fri, 12 Nov 1999 04:27:41 GMT

In article <382b36d9$[EMAIL PROTECTED]>, Don Taylor <[EMAIL PROTECTED]> wrote:
>A proposal follows:
>
>In comp.compression Tim Tyler <[EMAIL PROTECTED]> wrote:
>> In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>> : Tim Tyler wrote:
>> :> In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>> :> : As I said previously, for such numerical coding the compression is
>> :> : already so good that one need not (at least in the first
>> :> : experimental phase) consider the aspect of word frequencies.
>> :> 
>> :> I doubt this.  I expect non-dictionary words will typically bulk up the
>> :> messages by a larger factor than they are compressed by, for (say) email
>> :> messages.
>> :> 
>> :> It may be possible to develop a scheme that (roughly) breaks even on the
>> :> compression stakes - but I doubt good compression ratios will ever be
>> :> obtained - except on obscure or contrived types of text.
>
>> : Just try to roughly compute the compression ratio of your paragraph 
>> : above (noting that each word is translated to 16 bits) and
>> : you'll see that you get something that is probably better than
>> : what you expect from a normal compression of ASCII text.
>
>> If you design your dictionary for my message I don't doubt you can perform
>> excellent compression.
>
>> Will the 65536 words in your dictionary contain all the words I used?
>....
>> It's not a walk-over, though.  Unless you choose your dictionary carefully,
>> more bulking-up than slimming down will occur - as all rogue non-dictionary
>> symbols get expanded from one to two bytes.
>....
>> The scheme under discussion fails this - if X is an odd number of bytes long.
>
>I will make use of a suggestion by Mr. Scott, using case of letters here.
>
>A specific precise tangible testable proposal:
>
>There is a fixed shared dictionary to be used for this compression.
>The dictionary consists of item pairs, lower case word and code number.
>
>lower case word found in dictionary <---> corresponding word code
>
>There are 2^15 plus approximately 2^13 pairs in this dictionary.  These
>are code numbered in two different ranges, in binary 1bbb bbbb,bbbb bbbb
>and approximately 011b bbbb,bbbb bbbb.
>
>To explain this, these codes are pairs of bytes.  The first range
>consists of byte pairs with the top bit, of the first byte, set.  This
>gives 2^15 possible codes.  The second range consists of byte pairs
>where the first byte has a value that would normally be a lower case
>ascii character and the second byte of this pair may be any value.
>This gives a little less than 2^13 possible codes.
>
>Every word in a message that is to be found in the dictionary shall be
>followed by a space.  Thus
>
>        'hello there '
>translates into
>        hello-code there-code
>which will be two adjacent two-byte codes in the compressed file.
>
>Another way to say this could be that ALL words in the dictionary have
>a trailing space, thus to find a match for your word you would have to
>find that trailing space in your uncompressed text.
>
>Note: This thus mandates A solution to the prefix/suffix/infix debate.
>
>There is an additional pairing outside of this dictionary process.
>
>NON-lower case char in 0...2^7-1 range <---> that char
>
>And, as a concession to those concerned with odd byte length files.
>
>first byte of a word code followed by EOF <---> that byte followed by EOF
>
>Now I claim, hoping that I have not made a mistake here, that this is
>one-to-one.  And having that annoying mathematical habit, I am making
>a distinction between a 1-1 function and an onto function and a total
>function, etc.  I claim the function is one-to-one *over its domain*.
>
>Thus I should say some things about that domain.
>
>For the uncompressed side:
>
>The individual who is dealing with the uncompressed side is constrained
>to look up words in the dictionary.  Words that are in the dictionary are
>lower case.  Any lower case letters in a potential message that are not
>to be found as a word in the dictionary must be translated into upper
>case or this system simply does not apply.  Every word that is to be
>found in the dictionary is to be followed by a space.  That trailing
>space is part of the matching process.  Additional/other punctuation,
>etc are left as is.
>
>Thus the constraint is that the case of letters is mandated for this
>process.  If the user doesn't wish to do this then find another method.
>If the input doesn't match these picky rules then no claims are made.
>So the input on the uncompressed side consists of "words" that are
>found in the dictionary, non-word characters, and a final special case.
>
>On the uncompressed side the user is ALLOWED to include a final single
>SPECIAL character in a message, IF they so choose.  That character will
>be in the range 2^7...2^8-1 but if they include such a character it will
>be the last character in the file.
>
>These make up the restrictions that apply to messages presented to be
>compressed.  
>
>For the compressed side:
>
>The individual who is dealing with the compressed side is free to submit
>any sequence of bytes of any values.
>
>It seems reasonable to place fewer constraints on him, for he is
>supposedly unaware of the compression process and is probing the
>behavior of the system by submitting a variety of test messages and
>observing whether they are returned as identical messages after
>uncompressing and recompressing.
>
>If he submits a two byte word code it will uncompress to the word
>followed by a space and when compressed will return the same two byte
>word code.
>
>If he submits a one byte letter code it will uncompress to the same
>letter and will then recompress to the same one byte letter code.
>
>If he submits the first half of a two byte word code as the last byte
>of a file then this will uncompress to that same byte and will then
>recompress to that same byte.  This is the only case where a two byte
>word code can be "broken" and a following byte will not be available
>to represent some entry in the dictionary.  And thus I added this
>patch to cover this case.
>
>Claims: if the above constraints are accepted.
>
>The system is one-to-one.
>
>No file expands when compressed.
>
>If the folklore for American English is correct and "the average
>word length is 5 characters", not counting the trailing space, then
>the trailing space makes the average "word length" 6 characters and
>these words will compress to two bytes, assuming that we can live
>with a dictionary of about 40,000 words.  And those words that are
>not found in the dictionary do not grow in length; they are
>"compressed" as the very characters that make up the word and stay
>the same size.
>
>Even and odd byte length files are supported.
>
>End of claims.
>
>I would be happy to have any errors pointed out in this.
>don
>
>The following paragraph is OUTSIDE the current discussion.
>If you want a similar 3x compression for shorter words too, and
>you are willing to accept a somewhat smaller vocabulary, and you
>want even greater compression for words that repeat several times
>within the document and you are willing to incorporate some
>adaptive behavior in the compressor... then I have this marvelous
>modification of this scheme which, the margin of this screen is
>too small in which to fit the description.  But I would have to
>again think about that very carefully for a while to make certain
>that I had not broken the one-to-one condition placed on this.
>

  I think you did break the one-to-one thing, but I wish we could
form a group to do this common dictionary thing, if it is possible
to organize over the net.
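
For anyone wanting to experiment, here is a rough sketch (Python, with a
tiny stand-in dictionary rather than a real shared word list; the roughly
2^13-entry lower-case-first-byte code range and the trailing-EOF special
case of the proposal are left out) of how the main "top bit set" code range
above behaves:

    # Word (including its trailing space) <-> two-byte code with the top
    # bit of the first byte set.  A real table would hold up to 2^15 words.
    WORDS = ["hello ", "there ", "the ", "and "]    # stand-in dictionary
    CODE = {w: bytes([0x80 | (i >> 8), i & 0xFF])
            for i, w in enumerate(WORDS)}
    WORD = {c: w for w, c in CODE.items()}

    def compress(text):
        out, i = bytearray(), 0
        while i < len(text):
            for w in WORDS:                  # try a dictionary match first
                if text.startswith(w, i):
                    out += CODE[w]
                    i += len(w)
                    break
            else:                            # otherwise pass the character
                out.append(ord(text[i]))     # through (non-lower-case, per
                i += 1                       # the proposal's constraints)
        return bytes(out)

    def decompress(data):
        out, i = [], 0
        while i < len(data):
            if data[i] & 0x80 and i + 1 < len(data):  # two-byte word code
                out.append(WORD.get(bytes(data[i:i+2]), "?"))
                i += 2
            else:                                     # single-byte character
                out.append(chr(data[i]))
                i += 1
        return "".join(out)

    print(decompress(compress("hello there NOW!")))  # -> hello there NOW!

With a full 2^15-entry table (so the "?" fallback for unknown codes never
fires), this part of the scheme round-trips in both directions, which is
what the one-to-one claim for the first code range amounts to.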




David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Fri, 12 Nov 1999 04:31:41 GMT

In article <P1GW3.13263$[EMAIL PROTECTED]>, gtf[@]cirp.org (Geoffrey T. Falk)
wrote:
>>In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>>: The main trouble with a scheme like that is to have the general
>>: public accept a standard (or defecto standard) of numerical coding.
>                                ^^^^^^^^^^^^^^^^
>

  How about a program that builds the tables automatically from the
test files used by the compression guy, Jeff something?
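
A rough sketch of what such a table-building program might look like (the
corpus path and file pattern here are placeholders, not the actual test
files mentioned):

    import collections, glob, re

    def build_dictionary(pattern="corpus/*.txt", max_entries=2**16):
        # Count lower-case words (with their trailing space) across the
        # sample files and keep the most common ones for the code space.
        counts = collections.Counter()
        for path in glob.glob(pattern):
            with open(path, encoding="latin-1") as f:
                counts.update(re.findall(r"[a-z]+ ", f.read()))
        # The most frequent words get the code numbers 0, 1, 2, ...
        return {word: code for code, (word, _) in
                enumerate(counts.most_common(max_entries))}

    table = build_dictionary()
    print(len(table), "dictionary entries")

Everyone running the same program over the same agreed corpus would end up
with the same shared table, which is what the idea requires.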



David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compression: A ? for David Scott
Date: Fri, 12 Nov 1999 04:39:54 GMT

In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
>Tim Tyler wrote:
>> Methinks you underestimate your opponents ;-/
>
>Methinks you didn't understand what I said.

 Methinks he knows what you think you thought you said.

Also, Tim, I think most would agree that if we do a dictionary you are more
apt to lead the group than me. I get pissed off more easily and would have
far fewer followers.



David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Signals From Intelligent Space Aliens?  Forget About It.
Date: Fri, 12 Nov 1999 04:48:59 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>"SCOTT19U.ZIP_GUY" wrote:
>
>> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>> >"Douglas A. Gwyn" wrote:
>> >
>> >> "SCOTT19U.ZIP_GUY" wrote:
>> >> >    Well, don't just tease us; what distance did you come up with?
>> >>
>> >> I don't have my notes with me right now and don't want to
>> >> spend time recomputing it.  I'll try to look it up at home
>> >> and post a follow-up with the info.
>> >
>> >Actually, I believe the professor said a slightly higher percentage of
>> >the speed of light but I cannot remember exactly what it was but I do
>> >remember it was in excess of 90%.
>> >
>> >But my comment about surviving the voyage was referring to the
>> >expansion of mass as one increasingly achieves speeds closer and
>> >closer to that of light.
>> >
>> >This mass expansion seemed to me to be dangerous and I believe I
>> >communicated this concern to the professor.  I believe he understood
>> >what I was concerned about and I believe his reply was meant to
>> >answer my question with regard to my concern.
>> >
>>
>>    The mass that you think would be dangerous is not.
>> If you were in a rocket without windows, feeling an acceleration of 1.1 G's,
>> you would never even be able to tell whether you're going 1% of the speed
>> of light or 99.99% of the speed of light. Your concerns about the problem
>> are unfounded.
>>
>> David A. Scott
>> --
>>
>> SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
>> http://www.jim.com/jamesd/Kong/scott19u.zip
>>
>> Scott famous encryption website NOT FOR WIMPS
>> http://members.xoom.com/ecil/index.htm
>>
>> Scott rejected paper for the ACM
>> http://members.xoom.com/ecil/dspaper.htm
>>
>> Scott famous Compression Page WIMPS allowed
>> http://members.xoom.com/ecil/compress.htm
>>
>> **NOTE EMAIL address is for SPAMERS***
>
>If you did not know it, as mass approaches the speed of light, 
>the mass increases.
>
>When they accelerate electrons at a high energy particle physics 
>lab, the mass of the electrons increase 10,000 times as they are 
>accelerated near to the speed of light.
>
>Similarly, this effect might take a few inches off your height,
>methinks.  You might end up a flat mass of mush.
>
>We are not talking about relativistic clocks and measurements here.
>We are talking about the mass of the human body and its radical
>increase.  I think my concern for the safety of a human under these
>conditions is reasonable.
>
>Of course if you know of a logical reason that supports your 
>assertion that this MASS increase will not be of concern, like your 
>analogy asserts, let us know.
>
>You have confused relativistic time perceptions with this very real 
>effect on mass as it approaches the speed of light.

 Sorry, sir, but I always got A's in physics. I may have scored only in
the 17th percentile for English crap, but math and science were always
99+ percentile.  The apparent mass increase you see is only to an
observer who is looking at it from a different frame of reference. To
someone in a rocket slowly accelerating, there would be no change in
apparent mass. Try again next time.
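
As a side note on the numbers (a standard textbook relation, not a claim by
either poster): the factor by which an outside observer sees the energy, or
"relativistic mass", grow is the Lorentz factor, and the 10,000x figure
quoted for accelerator electrons corresponds to a speed extraordinarily
close to c, while the traveller's own frame feels only the proper
acceleration:

    import math

    def gamma(beta):
        # Lorentz factor for a speed of beta = v/c.
        return 1.0 / math.sqrt(1.0 - beta * beta)

    print(gamma(0.90))         # ~2.3 at 90% of the speed of light
    print(gamma(0.99))         # ~7.1 at 99%
    print(gamma(0.999999995))  # ~10,000: the electron figure quoted above
                               # needs a speed this close to c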





David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: Don Taylor <[EMAIL PROTECTED]>
Subject: Re: Build your own one-on-one compressor
Crossposted-To: comp.compression
Date: 11 Nov 1999 22:16:15 -0600

> In article <382b36d9$[EMAIL PROTECTED]>, Don Taylor <[EMAIL PROTECTED]> 
>wrote:
>>A proposal follows:
...
>   I think you did break the one-to-one thing, but I wish we could
> form a group to do this common dictionary thing, if it is possible
> to organize over the net.

Ordinarily I don't post to the whole group what would be a simple question
between two people but I seem to not be able to reply by email to ask this.
My apologies to the rest of you.

Mr. Scott

Please show me an example where the constraints that I outlined are
not violated and yet the proposed system is not one-to-one.  If I
have made a mistake I will make every effort to correct this.

Email would be preferred
thank you



------------------------------

From: "Kris Hendricks" <[EMAIL PROTECTED]>
Subject: McAfee Fortress
Date: Thu, 11 Nov 1999 19:33:42 -0800

Does anyone have any comments about the McAfee Fortress program? It uses
Blowfish with up to 448 bits, as well as DES and Triple DES. You can enter a
password/passphrase of up to 32 characters. This seems like it would be a
good program, but I had never heard of it until recently. Has anyone used
this program, or does anyone know whether it's good or not?

Kris Hendricks




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
