Cryptography-Digest Digest #604, Volume #14      Wed, 13 Jun 01 10:13:00 EDT

Contents:
  Re: Special promotion: White-Hat Security Arsenal at 40% off on  (Phil Carmody)
  Re: Sophie-Germain Primes for sale (Mark Wooding)
  Beginner's Question (Jschutkeker)
  Re: IV (Mark Wooding)
  Re: Unicity distance and compression for AES (SCOTT19U.ZIP_GUY)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY (Mark Wooding)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY (Mark Wooding)
  Re: Unicity distance and compression for AES (Tim Tyler)
  Re: Alice and Bob Speak MooJoo ("Robert J. Kolker")
  Re: Yarrow PRNG (Eric Lee Green)
  Re: differential cryptanalysis with a new twist? (Mika R S Kojo)

----------------------------------------------------------------------------

From: Phil Carmody <[EMAIL PROTECTED]>
Crossposted-To: alt.security,comp.security.misc
Subject: Re: Special promotion: White-Hat Security Arsenal at 40% off on 
Date: Wed, 13 Jun 2001 12:23:41 GMT

Avi Rubin wrote:
> This book is currently being featured at a special 40% discount
> on Amazon.com, for a limited time.
> 
> http://www.amazon.com/exec/obidos/tg/feature/-/175767/102-9130054-3732109
> 
> > White-Hat Security Arsenal: Tackling the Threats
> >      - with a foreword by Bill Cheswick
> >
> > Paperback - 384 pages (June, 2001)
> > Addison-Wesley ISBN: 0-201-71114-1
> >
> > See http://white-hat.org/ for detailed information.
> >
> > Amazon page:
> > http://www.amazon.com/exec/obidos/ASIN/0201711141
> >
> > Addison Wesley page:
> > http://cseng.aw.com/book/0,3828,0201711141,00.html
> >
> > Feel free to forward this message to any people/mailing lists who may be
> > interested.
> >
> > Avi Rubin

From http://cseng.aw.com/book/0,3828,0201711141,00.html - 

<<<
"White-Hat Security Arsenal ups the ante for the good guys in the arms
race against computer-based
crime. Like a barrage of cruise missiles, Avi's excellent book attains
air superiority by leveraging
smarts and advanced GPS technology to zero in on critical targets.
Intended to educate and inform
information security professionals with a no-nonsense, hold-the-hype
approach to security, this book
is a critical weapon for modern information warriors. If you wear a
white hat and are on the good
guys' team, buy this book. Don't go into battle without it!" 
--Gary McGraw, Ph.D., CTO, Cigital 
>>>

I'm curious what "no-nonsense" and "hold-the-hype" mean after reading
that quote.

I'm not trying to be disrespectful to the book or the author; I'm just
saying that I will certainly wait until I have seen another
(independent) review. Preferably a review with better leveraged smarts. 

Phil

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Sophie-Germain Primes for sale
Date: 13 Jun 2001 12:22:10 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
> David Hopwood wrote:
> > Tom St Denis wrote:
> >
> > > A SG prime is of the form p = 2q + 1, where q itself is prime and of
> > > course p mod 4 = 3.
> > 
> > No. It's not two weeks since I last corrected you on this (message ID
> > <[EMAIL PROTECTED]>).
> > 
> > If p = 2q + 1 for p and q both prime, then q is a Germain prime, and
> > p is a safe prime. Also it is not part of the definition that p = 3
> > (mod 4) (counterexample: p = 5, q = 2, although admittedly that is
> > the only counterexample).
>
> Ok.  Well if you checked the numbers you would realize they are all 3
> mod 4.

Learn to read.

The correction is that, when p = 2 q + 1 with p and q prime, it's *q*
which is the Sophie Germain prime.  We in the crypto community call p a
`safe' prime, though I don't believe it has a name among general
mathematicians.

You're right that if q is odd then p = 2 q + 1 = 3 (mod 4).  That's why
David said that 5 is the only safe prime not congruent to 3 (mod 4).

> When I said "and of course p mod 4 = 3" I meant "of course I made them
> such that ...", although I can see how that could have been misleading.

You couldn't have given nontrivial safe primes any other residue
mod 4.
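
To make the terminology concrete, here is a small Python sketch (an
illustration only, using a naive trial-division primality test, so it is
suitable only for small numbers):

def is_prime(n):
    # naive trial division; fine for small illustrative numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def classify(q):
    # q is a Sophie Germain prime iff q and 2q + 1 are both prime;
    # in that case p = 2q + 1 is the corresponding `safe' prime
    p = 2 * q + 1
    if is_prime(q) and is_prime(p):
        return q, p, p % 4
    return None

for q in range(2, 100):
    result = classify(q)
    if result:
        print("q = %2d (Sophie Germain), p = %3d (safe), p mod 4 = %d" % result)

The only pair it prints with p not congruent to 3 (mod 4) is q = 2, p = 5.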

-- [mdw]

------------------------------

From: [EMAIL PROTECTED] (Jschutkeker)
Date: 13 Jun 2001 12:51:44 GMT
Subject: Beginner's Question


This may be a naive question, but I just finished reading Friedrich Bauer's
badly written book "Decrypted Secrets", and it gave me an idea.  Has anybody
ever tried a cryptanalytic technique based on exhaustion by linguistic
particles?  If so, could you provide me some references so I can see what's
been done and how well it works?

Thanks,
JS

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: IV
Date: 13 Jun 2001 13:10:12 GMT

Tim Tyler <[EMAIL PROTECTED]> wrote:

> Not really.  We've already discussed weaknesses in CTR mode when
> cyphertexts are small.  The idea that CTR mode is as secure as the
> underlying block cypher is essentially a myth - despite the supposed
> proof to this effect - because of this. 

I think you've misunderstood the security notions behind the proof.

The notion used is the `real-or-random' test.  It works like this.  An
adversary is given an oracle which is of one of two types: it might

  * /either/ encrypt the given plaintext under a fixed but secret and
    randomly-chosen key and return the ciphertext, 

  * /or/ encrypt a randomly-selected plaintext of the same length.

The adversary's job is to decide which of these two types of oracles
it's been given.


Formally...

A symmetric cipher is a triple (K, E, D) of polynomial-time algorithms.

  * K is a randomized algorithm which, given a security parameter 1^k,
    returns a string i \in {0, 1}^*.  We write i \in K(1^k).

  * E is a randomized algorithm which, given strings i \in K(1^k) and p
    \in {0, 1}^* returns a string c \in {0, 1}^*.  We write c \in E(i,
    p) or c \in E_i(p).

  * D is a deterministic algorithm which, given i \in K(1^k) and c \in
    E(i, p) returns p.

We have two oracles:

  * O_{E_i}(p) computes and returns c <-- E_i(p).

  * O'_{E_i}(p) selects at random a string p' <-- {0, 1}^{|p|} of the
    same length as p and returns c <-- E_i(p').

The advantage of an adversary A equipped with one of these oracles O,
written A(O), is then

  Adv(A) = Pr(A(O) = 1 | i <-- K(1^k), O = O_{E_i}) -
             Pr(A(O) = 1 | i <-- K(1^k), O = O'_{E_i})

We say that the cipher is (t, q, \epsilon)-secure (in the real-or-random
sense) if no adversary which runs for time t and is allowed q queries to
its oracle has an advantage greater than \epsilon.


The security proof at which you scoff shows how the security of the
CTR-mode construction relates to the security of the underlying block
cipher (specifically, how hard it is to distinguish it from a random
function -- the formalism for this is similar to the above and fairly
dull).

This is an extremely useful result.
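
To see the game in action, here is a toy Python sketch (an illustration
only -- `toy_encrypt' is a deliberately weak, deterministic stand-in, not
a real cipher) of the two oracles and an empirical advantage estimate for
one simple adversary:

import os

def toy_encrypt(key, p):
    # stand-in "cipher": XOR with a fixed 16-byte key; weak and deterministic
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(p))

def real_oracle(key, p):
    # O_{E_i}: encrypt the adversary's own plaintext
    return toy_encrypt(key, p)

def random_oracle(key, p):
    # O'_{E_i}: encrypt a freshly chosen random plaintext of the same length
    return toy_encrypt(key, os.urandom(len(p)))

def adversary(oracle):
    # ask the same question twice: a deterministic cipher answers the same
    # way both times, the random-plaintext oracle almost never does
    p = b"\x00" * 16
    return 1 if oracle(p) == oracle(p) else 0

def advantage(trials=1000):
    real = rand = 0
    for _ in range(trials):
        key = os.urandom(16)
        real += adversary(lambda p: real_oracle(key, p))
        rand += adversary(lambda p: random_oracle(key, p))
    # Adv(A) = Pr[A(O) = 1 | O real] - Pr[A(O) = 1 | O random]
    return real / trials - rand / trials

print(advantage())   # close to 1: the toy cipher is trivially insecure

Note that the adversary exploits determinism: any deterministic encryption
fails this particular test, which is one reason the notion effectively
demands randomized or stateful (e.g. counter-mode) encryption.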


Interestingly, a compressing cipher would fail the real-or-random test.
The adversary supplies compressible plaintexts.  If the ciphertexts are
smaller, it claims `real'; if they are larger, it claims `random'.

More strongly, the same property leads to an attack in the
find-and-guess model (in which the adversary thinks up two plaintexts --
of the same length -- and is returned a ciphertext and asked which of
its plaintexts it corresponds to), simply by choosing two plaintexts,
one of which is compressible and one of which is not, and examining the
length of the ciphertext.  Find-and-guess is a weaker security notion
than real-or-random.
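
A sketch of that length-based attack (an illustration only: zlib plus a
throwaway keystream XOR stand in for the hypothetical compress-then-encrypt
scheme; only the ciphertext length matters to the attack):

import os, zlib

def compress_then_encrypt(p):
    # hypothetical scheme: compress, then XOR with fresh random bytes as a
    # stand-in length-preserving cipher (key handling is irrelevant here,
    # since the attack looks only at the length)
    c = zlib.compress(p)
    return bytes(x ^ k for x, k in zip(c, os.urandom(len(c))))

# find-and-guess style attack: two plaintexts of equal length, one highly
# compressible and one not
p0 = b"A" * 1000            # compressible
p1 = os.urandom(1000)       # incompressible
b = os.urandom(1)[0] & 1    # the challenger's secret bit
ct = compress_then_encrypt(p0 if b == 0 else p1)
guess = 0 if len(ct) < 900 else 1
print("secret bit:", b, "guess:", guess)   # correct essentially always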

-- [mdw]

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unicity distance and compression for AES
Date: 13 Jun 2001 13:24:44 GMT

[EMAIL PROTECTED] wrote in <[EMAIL PROTECTED]>:

>Tim Tyler wrote:
>> 
>> [EMAIL PROTECTED] wrote:
>> : Tim Tyler wrote:
>> :> [EMAIL PROTECTED] wrote:
>[snip]
>> :>
>> :> : What isn't clear to me is how a compression algorithm can be
>> :> : intelligent enough to distinguish "meaningful" from "meaningless"
>> :> : inputs (although it would be easier if the compression algorithm
>> :> : knew the input language). 
>> :>
>> :> Compression algorithms need to do no such thing in order
>> :> for the unicity distance to be increased.
>> :>
>> :> All they really need to do is compress plausible-looking messages
>> :> on average - and face it - if they didn't do that, it would be hard
>> :> to justify calling them compressors in the context of the target
>> :> data. 
>
>I worked through an example that indicates that this isn't true. Whether
>or not the redundancy is reduced depends on how many meaningless
>messages the compressor compresses.

    You failed in your example. You don't know what UNICITY DISTANCE
means.

>Compressor Example
>------------------
>------------------
>
>
>Uncompressed
>mesg.          Prob.   Compression #1  Compression #2  Compression #3
>                       (code) (prob.)  (code) (prob.)  (code) (prob.)
>------------   -----   --------------  --------------  --------------
>0000           1/8     000    2/11     000    4/21     000    4/20
>0001           1/8     001    2/11     001    4/21     001    4/20
>0010           1/8     010    2/11     010    4/21     010    4/20
>0011           1/16    011    1/11     011    2/21     011    2/20
>0100           1/16    100    1/11     100    2/21     100    2/20
>0101           1/16    101    1/11     101    2/21     101    2/20
>0110           1/16    110    1/11     110    2/21
>0111           1/16    111    1/11
>1000           1/16
>1001           1/16
>1010           1/32                    111    1/21     110    1/20
>1011           1/32                                    111    1/20
>1100           1/32
>1101           1/32
>1110           1/32
>1111           1/32
>
>
>
>Compression #1 only compresses high probability ("meaningful") messages.
>Compression #2 compresses one low probability ("meaningless") message.
>Compression #3 compresses two low probability messages.
>
>In reality meaningless messages would have a much lower probability
>but I just picked easy numbers.
>
>
>Using H(m) = -sum_m p_m*log2(p_m)
>
>     H(m)=3.8125 for the uncompressed message space with n=4
>     H(m)=2.9139 for the message space after Compression #1 
>     H(m)=2.869 for the message space after Compression #2 
>     H(m)=2.61 for the message space after Compression #3
>               
>Using the notation in Shannon's "Comm. Theory of Secrecy Sys."
>     
>     D_n = log(G) - H(m) 
>
>where D_n is the total redundancy for n letters of mesg.
>and G is the total number of messages of length n.
>
>D_n = 4 - 3.8125 = 0.1875 for the uncompressed message space (G = 16)
>               
>For Compression #1:
>     D_n = 3 - 2.9139 = 0.086 < 0.1875 so the unicity distance increases
>For Compression #2
>     D_n = 3 - 2.869 = 0.1315 < 0.1875 so unicity distance increases
>     even though not all compressed messages were meaningful (as you
>     said). 
>For Compression #3
>     D_n = 3 - 2.61 = 0.394 > 0.1875 so unicity distance *decreases*
>     even though the majority of compressed messages are meaningful
>     (which is *not* as you said).
>
>
>Correct me if I did something wrong.
>
>

   Sorry, the quoting feature sucks, but here in simple terms is why
you're wrong. You state that the 1/32 messages are the meaningless ones.
The point is that you only encrypt meaningful messages. So if I rescale
your probabilities and change each 1/32 to zero, I now have a message
space containing only meaningful messages and no meaningless data. I
hope this is not too complicated. The next step is that your messages
are not in a very useful form, since you have to describe all possible
messages, before and after compression. Instead of going 0000 to 1111,
you should have done
0
1
00
01
10
11
000
001
010
011
..... and so on till done.

Then when you do your compression, you just change where
the messages are in that list.
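
To put a number on the rescaling, here is a small Python sketch (an
illustration only, reusing the probabilities from the quoted table): zero
out the meaningless 1/32 messages, renormalise, and compute the entropy
over the meaningful messages alone.

from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

# the quoted 4-bit message space: ten "meaningful" messages and six
# "meaningless" ones at probability 1/32 each
all_msgs   = [1/8]*3 + [1/16]*7 + [1/32]*6
meaningful = [1/8]*3 + [1/16]*7

# rescale so that only meaningful messages are ever encrypted
scale = sum(meaningful)
conditioned = [p / scale for p in meaningful]

print(entropy(all_msgs))      # ~3.81 bits, the figure in the quoted example
print(entropy(conditioned))   # ~3.24 bits over the meaningful messages only

That conditioned entropy, not the one taken over all sixteen strings, is
the H(m) that matters when only meaningful messages get encrypted.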

David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
Date: 13 Jun 2001 13:43:09 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

> But measures should have adequate (intuitively reasonable)
> interpretations, I suppose. If a security measure
> says 0 security, then one would 'very naturally' think
> that that means no protection at all, wouldn't one?

This is why we have different notions of security.  There is a
difference between the information-theoretic security provided by the
one-time pad (and perfect secret-sharing systems) and the computational
security provided (by assumption) by most commonly-used symmetric and
asymmetric ciphers.

> On the other hand, I don't think that something that is at the disposal
> of the opponent but requires an eternity to exploit is equivalent to
> 'no information' for the opponent.

The information is there; the ability to extract it is not.

-- [mdw]

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
Date: 13 Jun 2001 13:45:45 GMT

[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> In the case of a OTP, YOU YOURSELF can walk right up to me and GIVE me
> your key, along with a box of chocolates. That does me no good:

You could at least eat the chocolates.  That might do you some good.

-- [mdw]

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Reply-To: [EMAIL PROTECTED]
Date: Wed, 13 Jun 2001 13:53:50 GMT

[EMAIL PROTECTED] wrote:
: Tim Tyler wrote:
:> [EMAIL PROTECTED] wrote:
:> : Tim Tyler wrote:
:> :> [EMAIL PROTECTED] wrote:

:> :> : What isn't clear to me is how a compression algorithm can be intelligent
:> :> : enough to distinguish "meaningful" from "meaningless" inputs (although
:> :> : it would be easier if the compression algorithm knew the input language).
:> :>
:> :> Compression algorithms need to do no such thing in order
:> :> for the unicity distance to be increased.
:> :>
:> :> All they really need to do is compress plausible-looking messages
:> :> on average [...]

: I worked through an example that indicates that this isn't true. Whether
: or not the redundancy is reduced depends on how many meaningless
: messages the compressor compresses.

Your example is wrong then.  I'll explain why shortly...

:> : Someone is going to have to do a better job of explaining this before I can
:> : buy it. [...]

:> : Explain how compression effectively reduces the redundancy [...]
:> 
:> That's what compression *does*.  It makes files shorter, increasing
:> the entropy per bit (entropy remains the same, number of bits in
:> message decreases).  When entropy-per-bit goes up, redundancy goes down.

: I think the entropy refers to the entropy of the message space, which
: would be affected if the number of bits decreases because the message
: space would become smaller.

I was referring to the entropy in an individual message - but the same
holds if you consider the whole message space.  If your compressor
distorts the messages such that their overall entropy is changed, then
it is not behaving as a compressor - but rather is distorting the
messages you transmit.

: my example seems to show that [whether unicity distance increases]
: depends on the number of meaningless messages that are compressed.

[...]

: Uncompressed
: mesg.         Prob.   Compression #1  Compression #2  Compression #3
:                       (code) (prob.)  (code) (prob.)  (code) (prob.)
: ------------  -----   --------------  --------------  --------------
: 0000          1/8     000     2/11    000     4/21    000     4/20
: 0001          1/8     001     2/11    001     4/21    001     4/20
: 0010          1/8     010     2/11    010     4/21    010     4/20    
: 0011          1/16    011     1/11    011     2/21    011     2/20
: 0100          1/16    100     1/11    100     2/21    100     2/20    
: 0101          1/16    101     1/11    101     2/21    101     2/20
: 0110          1/16    110     1/11    110     2/21
: 0111          1/16    111     1/11    
: 1000          1/16
: 1001          1/16
: 1010          1/32                    111     1/21    110     1/20
: 1011          1/32                                    111     1/20
: 1100          1/32
: 1101          1/32
: 1110          1/32
: 1111          1/32

: Compression #1 only compresses high probability ("meaningful") messages.
: Compression #2 compresses one low probability ("meaningless") message.
: Compression #3 compresses two low probability messages.

: In reality meaningless messages would have a much lower probability
: but I just picked easy numbers.

: Using H(m) = -sum_m p_m*log2(p_m)

:       H(m)=3.8125 for the uncompressed message space with n=4
:       H(m)=2.9139 for the message space after Compression #1 
:       H(m)=2.869 for the message space after Compression #2 
:       H(m)=2.61 for the message space after Compression #3
:                       
: Using the notation in Shannon's "Comm. Theory of Secrecy Sys."
:       
:       D_n = log(G) - H(m) 

: where D_n is the total redundancy for n letters of mesg.
: and G is the total number of messages of length n.

: D_n = 4 - 3.8125 = 0.1875 for the uncompressed message space (G = 16)
:                       
: For Compression #1:
:       D_n = 3 - 2.9139 = 0.086 < 0.1875 so the unicity distance increases
: For Compression #2
:       D_n = 3 - 2.869 = 0.1315 < 0.1875 so unicity distance increases
:       even though not all compressed messages were meaningful (as you said).
: For Compression #3
:       D_n = 3 - 2.61 = 0.394 > 0.1875 so unicity distance *decreases*
:       even though the majority of compressed messages are meaningful
:       (which is *not* as you said).

: Correct me if I did something wrong.

Here is the model I think you should have adopted:

  Alice -> [encrypt] -> ...eve... -> [decrypt] -> Bob

Unicity distance is as measured by Eve.

We are interested in the effect of replacing [encrypt] and [decrypt]
by [compress encrypt] and [decrypt decompress] respectively.

What you have done is not /just/ make changes to the stages represented
inside square brackets.

You have made fundamental changes to the set of messages which are sent
and their probabilities.  Your system after the compressor is added can't
even transmit some messages that were transmitted before - and *all* the
probabilities of the messages occurring are changed!

Why should putting in a compressor affect the frequency of the messages
being sent?  It should not.

If you /must/ use a lossy compressor at least set the probability of
messages it can't handle to be 0 initially, or you are comparing different
systems with different message spaces.

Also, do not change the probabilities of the messages occurring, or else
you are comparing entropies in very different systems - and it is no
surprise that your unicity distances are all over the map.
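
To make the point concrete, here is a small Python sketch (an
illustration only; the shortened codeword lengths are hypothetical,
chosen as -log2(p) so a prefix-free code exists): keep the message set
and its probabilities fixed, and let the compressor change only the
encoding lengths.  Entropy stays put, average length drops, and the
per-bit redundancy falls.

from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

# the sixteen 4-bit messages with their *original* probabilities; adding a
# compressor must not change which messages are sent or how often
probs = [1/8]*3 + [1/16]*7 + [1/32]*6
H = entropy(probs)                        # ~3.81 bits, unchanged throughout

# encoding lengths before and after a hypothetical lossless compressor
before = [4] * 16
after  = [3]*3 + [4]*7 + [5]*6

def redundancy_per_bit(lengths):
    avg = sum(p * l for p, l in zip(probs, lengths))
    return (avg - H) / avg                # fraction of each sent bit that is redundant

print(redundancy_per_bit(before))   # ~0.047 for the fixed 4-bit encoding
print(redundancy_per_bit(after))    # 0.0 here, since the probabilities are dyadic

With the redundancy driven down the unicity distance can only go up; in
this dyadic case it is the limiting zero-redundancy situation.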
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: "Robert J. Kolker" <[EMAIL PROTECTED]>
Subject: Re: Alice and Bob Speak MooJoo
Date: Wed, 13 Jun 2001 10:01:21 -0400



David A Molnar wrote:

>
> Is this right?

Yes.



>
>
> Before getting into nasty philosophy of language issues, one thing pops to
> mind. If Eve cannot observe the referents for MooJoo - if these referents
> are inaccessible to her - then *why* would she *want* to listen to Alice
> and Bob? and if she *can*, doesn't that destroy the claimed
> "security-through-inaccessible-referent" ?

If Eve listened in, she has no way of knowing what Alice and
Bob are saying to each other. So the communication between
Alice and Bob is secure. From Eve's p.o.v. she is getting
noise from the A/B link.

Bob Kolker



------------------------------

From: [EMAIL PROTECTED] (Eric Lee Green)
Subject: Re: Yarrow PRNG
Reply-To: [EMAIL PROTECTED]
Date: Wed, 13 Jun 2001 14:02:50 GMT

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

On Wed, 13 Jun 2001 12:00:33 GMT, Tim Tyler <[EMAIL PROTECTED]> wrote:
>Mark Wooding <[EMAIL PROTECTED]> wrote:
>:> 1.  Has anyone reviewed the properties of the PRNG?  Is it as good as
>:>     it claims?
>: The paper claims 160-bit security for Yarrow-160.  If you interpret that
>: to mean that you can't tell the difference between Yarrow-160 and real
>: random data with less than 2^{160} effort then the claim is false.
>: The output stage uses triple-DES in counter mode.  Output is in 64-bit
>: chunks.  If the source were truly random, we'd expect two consecutive
>: outputs to be equal with probability 2^{-64}.  With a counter-mode
>: output, this would never happen at all, except that Yarrow-160 rekeys
>: the cipher every P_G = 10 output blocks. [...]
>
>Yes - the use of a block cypher in CTR mode was always a rather glaring
>problem with Yarrow, IMO.  It's pleasing to see the extent of the problem
>quantified.

The question then is twofold:

1. Can this lead to a possible attack? E.g., if someone knows that a 
  ciphersystem is obtaining key data and challenges via Yarrow, does this 
  in any way compromise the security of the ciphersystem?

2. Could this have been fixed by the simple step of adding yet another
  SHA-1 to the algorithm, at the output, to further "stir" the values
  being given to the user? 

When I designed Ocotillo, a Yarrow-inspired PRNG for Unix, I dumped large
amounts of data both with and without the final "mix" and did some
empirical noodlings of the data. Both ways passed the statistical tests
for randomness, but some subset distributions were troubling without the
mix. With the mix, I couldn't tell any difference between that and 
/dev/random (on Linux). I'm just trying to figure out whether I made any
real difference here, or whether it was just coincidence :-(. 
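
As an aside, the distinguishing property quoted above is easy to see in a
scaled-down simulation. The Python sketch below is a toy model only, not
Yarrow: 16-bit blocks instead of 64-bit ones so the effect shows up in a
quick run, a trivial keyed permutation standing in for triple-DES, and
rekeying every ten blocks as Yarrow does.

import os

MASK = 0xFFFF   # 16-bit blocks (Yarrow uses 64-bit ones; scaled down here
                # so the difference is visible in a quick run)

def toy_perm(key, x):
    # toy keyed permutation: odd-constant multiply mod 2^16, then XOR with
    # the key.  A stand-in for 3DES; the only property the argument needs
    # is that it is a bijection, so equal outputs require equal inputs.
    return ((x * 0x9E37) & MASK) ^ key

def ctr_output(n, rekey_every=10):
    out, key = [], 0
    for ctr in range(n):
        if ctr % rekey_every == 0:
            key = int.from_bytes(os.urandom(2), "big")   # rekey every 10 blocks
        out.append(toy_perm(key, ctr))
    return out

def random_output(n):
    return [int.from_bytes(os.urandom(2), "big") for _ in range(n)]

def consecutive_repeats(blocks):
    return sum(a == b for a, b in zip(blocks, blocks[1:]))

n = 1_000_000
print(consecutive_repeats(ctr_output(n)))     # ~n/(10 * 2^16): about 1 or 2
print(consecutive_repeats(random_output(n)))  # ~n/2^16: about 15

Counter mode can only repeat a consecutive block across a rekey boundary,
so it shows roughly a tenth as many repeats as truly random output at this
block size.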


=====BEGIN PGP SIGNATURE=====
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE7J3EM3DrrK1kMA04RAqFtAJwNOwZ+gJ7M7zeyTu6tdaxyVWZgtwCgiWRp
1nHXNEaCxpBOx06XygkRgYw=
=Wy5K
=====END PGP SIGNATURE=====

------------------------------

From: Mika R S Kojo <[EMAIL PROTECTED]>
Subject: Re: differential cryptanalysis with a new twist?
Date: 13 Jun 2001 16:50:28 +0300

"Tom St Denis" <[EMAIL PROTECTED]> writes:
> <snip>
> 
> I saved this post to disk.  I just got home so I will read over it.  Note
> that my knowledge of algebraic fields and rings (and polynomial fields and
> rings) is a bit loose.
> 
> For example,  is GF(2^x) a way to denote a field of characteristic two
> degree x?  Is "x" the extension degree (if that means anything at all?)

Well, there is a nice way of interpreting GF(2^n). It simply denotes a
field of 2^n elements. Of course, if you read more field theory you
get to know that GF(2^n) is a degree n extension of GF(2). You may
see this intuitively by considering how to build GF(2^n) over GF(2)
using a polynomial basis. In a similar way you can construct all other
(finite) extension fields over GF(2).

For example, if GF(2^n) can be built as an extension of GF(2^m) (which
happens exactly when m divides n), then GF(2^m) sits as an intermediate
field between GF(2) and GF(2^n).

From the notation GF(p^n) the characteristic comes nicely as p
(surprise?). The effect of the characteristic is interesting:
characteristic 0 fields include Q, R and many other quite similar
objects. However, when you take positive characteristic (i.e. non-zero
characteristic, recall that characteristic is always a prime or 0)
then some strange things may happen. For example, you get the
Frobenius map x |--> x^p with its interesting properties and uses.
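
A concrete toy example may help (an illustration only: GF(2^3)
represented as 3-bit polynomials over GF(2), reduced modulo the
irreducible x^3 + x + 1). It checks that the Frobenius map x |--> x^2 is
additive and permutes the field:

MOD = 0b1011   # x^3 + x + 1, an irreducible polynomial over GF(2)

def gf8_mul(a, b):
    # multiply two elements of GF(2^3), each a 3-bit integer whose bit i is
    # the coefficient of x^i; carry-less multiply with reduction mod MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def frobenius(a):
    return gf8_mul(a, a)   # x |--> x^2: the Frobenius map in characteristic 2

# additive ("freshman's dream"): (a + b)^2 = a^2 + b^2, since addition is XOR
assert all(frobenius(a ^ b) == frobenius(a) ^ frobenius(b)
           for a in range(8) for b in range(8))

# and a bijection of the field fixing the prime subfield GF(2) = {0, 1}
assert sorted(frobenius(a) for a in range(8)) == list(range(8))
assert frobenius(0) == 0 and frobenius(1) == 1
print([frobenius(a) for a in range(8)])

The additivity is exactly the "strange thing" characteristic 2 buys you:
the cross term 2ab in (a + b)^2 vanishes.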

I prefer the notation F_{2^n} and F_{2} as this is more widely
accepted among mathematicians. But to be honest, most mathematicians
just write k and tell the reader what kind of a field it is supposed
to be. It is convenient and perhaps elegant.

> Thanks for the reply.  I will reply about your post later tonight after I
> have re-read it.

Take your time. I didn't explain it in as down-to-earth a way as I
perhaps should have. So if you have questions I'll try to give a
better explanation then.

Mika

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
