Cryptography-Digest Digest #489, Volume #10 Mon, 1 Nov 99 19:13:03 EST
Contents:
Re: What is the deal with passwords? (SCOTT19U.ZIP_GUY)
Re: Compression: A ? for David Scott (SCOTT19U.ZIP_GUY)
Re: Build your own one-on-one compressor (Tim Tyler)
Re: Re: Proposal: Inexpensive Method of "True Random Data" Generation (CoyoteRed)
Re: Kerberos Question (John Savard)
Re: the ACM full of Dolts? (Tom St Denis)
Re: Compression: A ? for David Scott (Tim Tyler)
Re: COMPOSITES belong to co-NP? (Anton Stiglic)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: What is the deal with passwords?
Date: Mon, 01 Nov 1999 21:54:52 GMT
In article <7vksa7$i2i$[EMAIL PROTECTED]>, Tom St Denis <[EMAIL PROTECTED]> wrote:
>Does anybody have any thoughts along these lines?
Just one: do you ever read or think about the crap you post?
>
>btw I don't mean to plug peekboo, but it really is the only encryption
>software I can find that doesn't use passwords. Feel free to list
>other software in replies.
>
And people call my stuff snake oil. This last set you wrote
is priceless.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMMERS***
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compression: A ? for David Scott
Date: Mon, 01 Nov 1999 21:47:49 GMT
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>: "Tim Wood" <[EMAIL PROTECTED]> wrote:
>:>Tom wrote in message <[EMAIL PROTECTED]>...
>
>:>>Worse than that - this scheme makes the implementation of a chosen
>:>>plaintext attack trivial, where using standard compression makes some
>:>>forms of chosen plaintext attack completely impossible.
>
>:>Assuming that the attacker can _choose_ the plaintext,
>:>it makes no difference what compression is used. The plaintext chosen
>:>is the binary string *after* compression.
>:
>: Yes, as I have admitted many times, if you can get the enemy to
>: completely encrypt a file of your choice, then he can control exactly
>: the complete file that goes into the encryption phase of the program.
>: In this case it would be essentially the same as giving the enemy a
>: different file without using the compression at all, because this would
>: be like not using compression at all. [...]
>
>A case where I appear to disagree with David Scott ;-)
>
>*Even* if the enemy can control the plaintext of the message, compression
>can /still/ offer _some_ security benefits...
I am not sure we disagree. I feel that overall there still are some security
benefits. But in this one rare case we are allowing the enemy to give
us a file to use, one that he could have cooked up for the weak encryption
system.
>
>These benefits arise because the enemy still has less cyphertext to
>analyse.
But here is the catch. If he could have given File X to a system that
had no compression, he would get File Z out. To our system he instead
gives File Y, which might be five times as long as File X; when Y is
compressed it becomes the X that is compressed back to Y and encrypted,
so that the attacker has File Q, which when compressed will be the
Z of the plaintext attack without compression.
This is still not easy to do and may not be practical in most cases.
Note I am only talking about the case where the enemy can control the
exact, total input file. Still, this is not a case of the system being
weakened by the compression, which is what happens with other
compression systems.
And yes, even in this case: attacking the encryption system without
compression, he would have to analyze the complete pairs of files,
X for input and Z for output. With compression, he has to get Decomp(X)
to give to the user, and after the user secretly compresses his version
to get his secret X, he then encrypts it to Z, which he decompresses and
gives out as Q. The attacker would then have to compress Q to get the Z;
now, though with much work, he has the same set of files as for a TOTAL
FILE PLAINTEXT ATTACK on the system without compression. But yes, he is
using longer files and is required to run a decompression for each X and
a compression for each Z, so it is more work. And the decompressed X may
not be in a form that is easy to trick, or even force, the user to use.
But even in this case there are total-plaintext attacks that
it can prevent. For example, let's say the user is only encrypting
text, and some slick NSA guy comes up with a known-plaintext attack
that only works for encrypted ASCII files. Since the user is only using
a very limited subset of the file space available, it is conceivable
that a break could occur here. But now the user switches to a compressed
file (my compression). Now the user is no longer using such a small
subset of the file space, and the break the NSA guy has, which only
worked for encrypting text, may fail, since it is highly unlikely the
compressed text will still look like ASCII. And the attacker now even
knows less than before, since before he may have also been using
the number of ASCII characters to help with such a break.
>
>Consider the case where an EOR with PRNG output has been used for
>encryption:
>
>With a known plaintext, the attack boils down to finding the seed
>which was used to initialise the PRNG, given a section of its output.
>
>This latter problem is widely known to frequently become easier the more
>PRNG output is available.
>
>There is generally some threshold, below which finding the seed exactly
>and with certainty is not possible, while above the threshold the internal
>state of the PRNG (and the key) may be recovered.
>
>The conclusion is not confined to simple stream cyphers - having less
>cyphertext to analyse generally hinders the enemy and increases security.
I agree with what you said, but we are specifically talking about an attack
where the attacker has total control over which file you are encrypting. In
general, though, you are right. I hope I stated this correctly, since I don't
think we disagree.
Please let me know, Tim, what you think.
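The seed-recovery threshold Tyler describes can be sketched with a toy stream cipher. Everything here is a stand-in (Python's `random.Random` is of course not a cryptographic PRNG, and the seed space is kept tiny on purpose): given a known plaintext/ciphertext pair, the attacker simply searches seeds until one reproduces the keystream.

```python
import random

def keystream(seed, n):
    # Toy PRNG keystream; NOT cryptographically secure, by design.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def encrypt(seed, plaintext):
    # XOR (EOR) the plaintext with the PRNG output, as in the example above.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(seed, len(plaintext))))

def recover_seed(known_plain, ciphertext, seed_space):
    # Known-plaintext attack: find the seed whose keystream explains the pair.
    # The more output available (the longer known_plain is), the fewer
    # seeds survive the test, which is the "threshold" effect in the text.
    for seed in seed_space:
        if encrypt(seed, known_plain) == ciphertext:
            return seed
    return None

secret_seed = 1234
ct = encrypt(secret_seed, b"attack at dawn")
print(recover_seed(b"attack at dawn", ct, range(10_000)))  # → 1234
```

With only a byte or two of known keystream, many seeds would match; with fourteen bytes, the true seed is effectively the only survivor.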
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMMERS***
------------------------------
Crossposted-To: comp.compression
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Build your own one-on-one compressor
Reply-To: [EMAIL PROTECTED]
Date: Mon, 1 Nov 1999 21:33:06 GMT
In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
: Tim Tyler schrieb:
:> In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
:> : Tim Tyler wrote:
:> :> In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
:> :> :>TT> ``* No string in the tables should contain another such string as a
:> :> :> >> substring;
:> :> :> >>
:> :> :> >> * No leading symbols in any string should exactly match the trailing
:> :> :> >> symbols in a different string.''
:> :>
:> :> : But the impossibility, in practice, of constructing a dictionary of
:> :> : the kind Tim Tyler describes remains. Consider the following simple sentence:
:> :>
:> :> : In this afternoon there is going to be a discussion on his
:> :> : issue.
:> :>
:> :> : Could you show a minimal dictionary that satisfies his two criteria?
:> :>
:> :> In what follows, note that spaces have been replaced by "_" characters
:> :> for clarity.
:> :>
:> :> The dictionary:
:> :>
:> :> "there_" <--> "%"
:> :> "going_" <--> "|"
:> :> "to_"    <--> "*"
:> :> "be_"    <--> "#"
:> :> "his_"   <--> "\"
:> :> "noon_"  <--> "A]"
:> :> "ion_"   <--> "A["
:> :>
:> :> ...compresses:
:> :>
:> :> ``In this afternoon there is going to be a discussion on his issue.''
:> :>
:> :> ...to:
:> :>
:> :> ``In t\afterA]%is |*#a discussA[on \issue.''
:>
:> : Would you be satisfied with a dictionary which has only one single
:> : line specifying 'a' to be translated to '%'?
:>
:> Such a dictionary would not be very useful for compression.
:>
:> Who is proposing such a thing?
: Look above at what you have done. What percentage of the input
: is not translated? [...]
About 31 characters out of 66 - slightly less than half.
: I was simply using a more conspicuous example to illustrate my point.
I /think/ I now understand your point.
Your example seemed to me to have too many other possible interpretations
for me to be able to deduce what you were trying to say.
[snip]
: What is your idea of a dictionary and translation?
I'm using "dictionary" primarily in the sense of "a list of words".
I don't believe I've mentioned translation per se.
Certainly the word is not used in my monograph on the web page.
: Look at what you'll do if you are going to translate English to, say,
: Russian. [...]
Yes, well, that is rather a different operation from compressing some
text.
: You must be able to translate ALL that is on the source side
: to the target side. Otherwise you don't have a real translation.
No such requirement is present for a compression routine.
A compressor simply needs to be able to replace some portion of the text
with shorter versions.
Replacing every /possible/ word is not a necessary requirement for being
able to compress effectively.
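The substitution Tyler demonstrates can be sketched in code. This shows only the compression direction, using his example dictionary with real spaces in place of the "_" markers; a full one-on-one scheme also needs the decompressor and a dictionary obeying his two string criteria.

```python
# Tyler's example dictionary (spaces were written as "_" in the thread).
DICTIONARY = {
    "there ": "%",
    "going ": "|",
    "to ":    "*",
    "be ":    "#",
    "his ":   "\\",
    "noon ":  "A]",
    "ion ":   "A[",
}

def compress(text):
    # Greedy longest-match substitution: at each position, replace a
    # dictionary string if one starts here, else copy the character.
    keys = sorted(DICTIONARY, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(DICTIONARY[k])
                i += len(k)
                break
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

sentence = "In this afternoon there is going to be a discussion on his issue."
print(compress(sentence))  # → In t\afterA]%is |*#a discussA[on \issue.
```

The output reproduces the compressed sentence from the quoted exchange; untranslated characters simply pass through, which is Shen's objection and Tyler's point that full coverage is not required.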
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
I've been seduced by the chocolate side of the force.
------------------------------
From: [EMAIL PROTECTED] (CoyoteRed)
Subject: Re: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Mon, 01 Nov 1999 21:30:40 GMT
Reply-To: this news group unless otherwise instructed!
On Mon, 01 Nov 1999 20:48:11 GMT, [EMAIL PROTECTED]
(HJS) wrote:
>Why not just take the noise from the loudspeaker? Easier - and no doubt
>just as random as all this camera business.
Feed it directly into the computer's A-D converter and mix with a ham
radio's output.
Or take several samples from a computer mic and mix together.
Or use 50 or so TV receivers ( or AM / FM radios ) and mix the output
of each sound channel together to get a file. Sample only when you
encrypt.
Mix these sounds bitwise, not soundwise...
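Mixing the channels "bitwise, not soundwise" presumably means XOR-ing the sampled bytes together rather than acoustically summing them; a minimal sketch (the sample values are made up for illustration):

```python
from functools import reduce

def mix_bitwise(*sources):
    # XOR several byte strings together, truncated to the shortest.
    # If at least one independent source is unpredictable, the XOR is
    # at least as unpredictable as that source.
    n = min(len(s) for s in sources)
    return bytes(reduce(lambda a, b: a ^ b, column)
                 for column in zip(*(s[:n] for s in sources)))

# Hypothetical samples from three independent receivers:
a = bytes([0b10110010, 0b01010101])
b = bytes([0b00001111, 0b11110000])
c = bytes([0b10101010, 0b00110011])
print(mix_bitwise(a, b, c).hex())  # → 1796
```

In practice the raw samples would still need whitening (hashing) before use as key material, since audio noise is biased.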
--
CoyoteRed
CoyoteRed <at> bigfoot <dot> com
http://go.to/CoyoteRed
PGP key ID: 0xA60C12D1 at ldap://certserver.pgp.com
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Kerberos Question
Date: Mon, 01 Nov 1999 20:46:26 GMT
[EMAIL PROTECTED] (David P Jablon) wrote, in part:
>Preventing eavesdropper dictionary attack is just one of the benefits of
>EKE and SPEKE. To fairly compare these methods to a scheme which
>relies only on a persistent server public key, one should also consider
>their other benefits. SPEKE/EKE work for roaming users, where no specific
>server is preconfigured, and they use the password for mutual authentication,
>to create a stronger binding between the password and the encrypted session.
I've finally looked into SPEKE. After realizing that protecting
passwords against a dictionary attack is harder than I thought, I felt it
was important to understand the available alternatives.
EKE was interesting because it offered "privacy amplification" in an
obvious way, which could be useful for purposes quite unconnected with
protecting weak passwords. Unfortunately, as a random string of bits
can easily be distinguished from the product of two large primes, it
can't be used with _full_ effectiveness with RSA.
SPEKE, I see, is a variant of Diffie-Hellman.
In Diffie-Hellman, there is a large prime modulus, M, and a publicly
known base, A. The two users each think of a secret number: x and y.
Each one discloses A raised to that secret number, and thus both users
can calculate A^(xy), which equals both (A^x)^y and (A^y)^x.
In SPEKE, A is the secret key derived from the relatively short
password. For some technical reason, x and y are both equal to twice
the original random numbers chosen by the two users, which is
equivalent to restricting them both to be even.
It differs from EKE since instead of using the password to encrypt
part of the public key, the password is part of the public key.
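Savard's description can be sketched as follows. This is a toy: the modulus is far too small for real use, the hash-to-base step is a simplification of how SPEKE actually derives its generator, and no authentication or validation steps are shown.

```python
import hashlib
import secrets

M = 2**127 - 1  # toy prime modulus; real deployments use large, carefully chosen primes

def speke_base(password):
    # SPEKE's key idea: the Diffie-Hellman base A is derived from the
    # shared password instead of being public. (Simplified derivation.)
    return int.from_bytes(hashlib.sha256(password).digest(), "big") % M

A = speke_base(b"correct horse")

# Each side picks a secret and, per the description above, effectively
# doubles it, restricting both exponents to be even.
x = 2 * secrets.randbelow(M)
y = 2 * secrets.randbelow(M)

gx = pow(A, x, M)  # disclosed by one side
gy = pow(A, y, M)  # disclosed by the other

# Both sides compute A^(xy): (A^y)^x on one side, (A^x)^y on the other.
shared_1 = pow(gy, x, M)
shared_2 = pow(gx, y, M)
print(shared_1 == shared_2)  # → True
```

An eavesdropper who guesses the password wrong derives a different base A, so candidate passwords cannot be tested offline against the exchanged values alone, which is the dictionary-attack resistance Jablon refers to.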
John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: the ACM full of Dolts?
Date: Mon, 01 Nov 1999 21:44:01 GMT
In article <[EMAIL PROTECTED]>,
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> SCOTT19U.ZIP_GUY wrote:
> >
> > In article <[EMAIL PROTECTED]>, Mok-Kong Shen <mok-
[EMAIL PROTECTED]> wrote:
> >
> > >Simply put: The one-to-one property isn't that essential, there
> > >are ways to accomplish good encryption purposes without needing
> > >that.
> >
> > Simply put: if one is GOING TO USE COMPRESSION BEFORE
> > ENCRYPTION, then it is best to use compression that does not
> > add data to the file that would aid an attacker in breaking the
> > system. ONE-ONE COMPRESSION does not add information
> > when compressing the file.
>
> Using an adaptive Huffman with an initial frequency distribution
> does NOT add data to the file. The distribution is completely
> hidden from the analyst.
What info could it possibly add? I still don't get this point. An LZ77
stream is only <index, len, lit> code words. What info can be added?
The literals?
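The <index, len, lit> code words referred to here can be illustrated with a toy LZ77 coder (window handling and token layout are simplified, and offset 0 with length 0 is used to mean "no match, literal only"):

```python
def lz77_encode(data, window=255):
    # Emit (offset, length, literal) triples: copy `length` bytes from
    # `offset` back in the output, then append `literal`.
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(tokens):
    out = bytearray()
    for off, length, lit in tokens:
        for _ in range(length):
            out.append(out[-off])  # overlapping copies are allowed
        out.append(lit)
    return bytes(out)

sample = b"abracadabra abracadabra"
tokens = lz77_encode(sample)
assert lz77_decode(tokens) == sample
```

Every byte of the input is accounted for by exactly one code word, which is the substance of Tom's question: the stream describes the literals and back-references, nothing more.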
Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Compression: A ? for David Scott
Reply-To: [EMAIL PROTECTED]
Date: Mon, 1 Nov 1999 22:03:09 GMT
CoyoteRed <[EMAIL PROTECTED]> wrote:
: One argument presented was: if we used one-to-one and the aggressor
: attempted a key and decompressed, what's to stop him from looking at
: the compression ratio to determine whether it expanded out? [...]
This would only work if random files typically didn't decompress.
David has said that this is not true for his technique.
AFAICS, it isn't true for the method I describe either - random files are
likely to expand significantly when decompressed.
: (It was mentioned that /random data in/ (an unsuccessful decryption)
: would give /random data out/, which, in turn, doesn't
: compress well.)
It was /mentioned/, but AFAIK, it isn't true, for either routine.
Of course - unless the compression is near-perfect - decompressing and
looking to see if the result is a valid text file will allow the attacker
to systematically rule out keys, if he knows a text file is what he is
looking for.
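The key-elimination idea in the paragraph above can be sketched. Everything here is a stand-in: repeating-key XOR for the cipher, zlib for the (decidedly not one-on-one) compressor, and a printable-ASCII test for "valid text file".

```python
import zlib

def try_key(key, ciphertext):
    # Toy decryption: repeating-key XOR, a stand-in for the real cipher.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def plausible(candidate):
    # Attacker's filter: does decompression succeed and yield printable text?
    try:
        text = zlib.decompress(candidate)
    except zlib.error:
        return False  # most wrong keys are ruled out here already
    return all(32 <= b < 127 or b in (9, 10, 13) for b in text)

secret_key = bytes([42])
ciphertext = try_key(secret_key, zlib.compress(b"meet me at the usual place"))

# Brute-force the (absurdly small) one-byte keyspace, keeping only
# keys whose decryption decompresses to valid-looking text.
survivors = [k for k in range(256) if plausible(try_key(bytes([k]), ciphertext))]
print(42 in survivors)  # → True
```

A one-on-one compressor removes the "decompression fails" branch of the filter, but the valid-text check survives, which is exactly Tyler's point.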
: So, in this belief scenario, we might be able to assume that
: one-to-one is not free of analysis. Granted, the Comp(DeComp(x))=x
: attack is eliminated, but not all attacks.
If the compressor is less than perfect, a brute-force keyspace
search /might/ bear fruit.
It /may/ even be possible to rule out keys based on decompressing
only *part* of the file. This depends on the compression routine, and how
"local" it is.
One-on-one makes life harder for the attacker. If it can be combined
with good compression ratios, this would make life doubly hard.
However - while the size of the message significantly exceeds the size of
the key - ruling out *all* attacks is not a reasonable expectation.
: What if we just compressed for size using the best algorithm available
: and then generated a random file to XOR with the compressed data?
: Encrypt the random file, package the encrypted random file with the
: XOR'ed data, and send it on its way.
There's still the question of where you get the "random" file from.
: We vastly suppress the ability to analyze from message to message,
: because each message will be XOR'ed with a different file. [...]
: While we will vastly increase the size of our message, we would
: eliminate any compression based attack or pattern attack angle. We'd
: just have to sacrifice size for security.
: Because we've (I believe) established that XOR'ing your data
: with a sufficiently random file makes it /secure/. Our next attack
: would come from trying to break the encryption on the random file, and
: because we have a random file as our "data," wouldn't the attacker be
: forced to use brute force, with the strength relying on the size of our
: key?
This is the same problem as I mentioned before. If you have an
encrypted file (with random contents) and a message encrypted by
XORing with those same random contents, then an attack considering both
the message and the encrypted random data has some chance of success.
*Something* is known about the contents of the "random" file - that is
if you EOR them with the sought-after message the result is a given file.
It may be possible for the analyst to use this information to concoct
an attack on the encryption of the message with the random data.
I would agree that you're /probably/ making the analyst's life harder
by using the type of scheme you're suggesting - but there are /still/
techniques he can use to attack.
For example, a known-plaintext attack is unaffected by the use of your
approach - except in so far as the file size has been reduced by the
compression.
*If* such a scheme proves more resistant to attack than plain
compress/encrypt then it /may/ be an interesting way of trying
to increase security at the expense of bandwidth.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
I've had fun before - this isn't it.
------------------------------
From: Anton Stiglic <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: COMPOSITES belong to co-NP?
Date: Mon, 01 Nov 1999 17:35:27 -0500
William Rowden wrote:
> Here is a question from someone completely new to complexity theory
> and number theory:
>
> What "certificate" would allow one to verify primality in polynomial
> time? It is obvious to me that the composite decision problem ("Is n
> composite?") belongs to NP, because one could produce as a certificate
> an integer b such that b|n (b divides n). The reverse is not clear to
> me: what information would permit verifying that the answer to "Is n
> composite?" is NO?
>
Look at section 4.3 of the Big Green book for some algorithms and theorems.
To understand how this pertains to complexity, let me try to explain.
To prove that p is prime, we can use the following theorem:
p is prime if and only if there exists an integer a satisfying
(i) a^(p-1) = 1 (mod p), and
(ii) a^((p-1)/q) != 1 (mod p), for each prime q such that q | p-1.
Now, an all-powerful prover P could factor p-1 and give this in the
certificate, but then he must also prove that the q's that divide p-1
are in fact prime, and that he has all of them. To do this, he gives
q1, q2, ..., qn such that
p-1 = q1 q2 ... qn (this can be verified in poly time)
and proves that the qi's are in fact prime.
He proves that each qi is prime by using the same method, until he comes
down to primes small enough that proving primality can be done efficiently
by just trying to divide by each smaller number (or something like that).
A time-complexity analysis shows that all of this can be done in time
polynomial in lg(p). There is actually a good little article explaining
all of this; I could give you the ref tomorrow.
Anton
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************