Cryptography-Digest Digest #775, Volume #12      Mon, 25 Sep 00 23:13:00 EDT

Contents:
  Re: On block encryption processing with intermediate permutations (Tom St Denis)
  Re: A Different Kind of Quantum Computer (John Savard)
  Re: What am I missing? (Matthew Skala)
  Re: What makes a cipher resistant to Differential Cryptanalysis? (Tom St Denis)
  Re: Question on biases in random numbers & decompression (Samuel Paik)
  Re: Question on biases in random-numbers & decompression (Terry Ritter)
  Explanation of LFSR's in Cryptography (Jeff Gonion)
  RC4 from the past (=?ISO-8859-1?Q?Jacques_Th=E9riault?=)
  Re: Tying Up Loose Ends - Correction (Bryan Olson)
  continuous functions and differential cryptanalysis (Tom St Denis)
  NTRU question (actually truncated modular polynomial question) (Benjamin Goldberg)
  Re: continuous functions and differential cryptanalysis (Paul Rubin)

----------------------------------------------------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: On block encryption processing with intermediate permutations
Date: Mon, 25 Sep 2000 23:46:47 GMT

In article <[EMAIL PROTECTED]>,
  Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>
> Tom St Denis wrote:
> >
> >   Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> > >I should very much appreciate comments on the following idea:
> .............
> [snip]
>
> >
> > This only will work when
> >
> > 1)  You both have the same PRNG, possibly key derived, thus the
> > PRNG must be salted, or something
>
> If a PRNG is used (i.e. instead of using sorting), there
> is only one PRNG. In this case I would prefer a secret
> seed that is independent of the key of the block cipher.
> This of course means that one employs more secret
> information, i.e. a longer 'effective' key.
>
> > 2)  The messages are over several blocks long.
>
> Yes.
>
> > Still you have to consider the possibility that some halves are not
> > mixed as well with the others (i.e. disjoint cycles in the
> > shuffling).
> > Consider the halves (paired means they go thru a cycle together)
> >
> > R1: ab cd ef gh
> >
> > R2: ac be df gh
> >
> > R3: ae cf db gh
> >
> > ...gh is never moved...
>
> I understand that gh forms a block. In this case the
> gh block does not profit from the permutation, i.e. the
> block is processed by the original block algorithm
> through all its cycles.
>
> >
> > Obviously that's not likely but low diffusion is a probable
> > consequence over the entire message with this scheme.  It gets
> > worse as the message size increases... (more chance that two halves
> > never interact, or interact an equal amount of times).
>
> I don't think so. If the message is large, the number
> of the parts a, b, etc. becomes large. In that case
> the chance of a pair remaining stuck together at the
> same location should be lower. If a block consists of
> 4 words instead of 2 words, there would be a similar
> favourable effect.

No, my point was that not all halves would affect each other DIRECTLY,
and having every half affect every other is what gives optimum diffusion.

> > It's a neat idea, but not terribly secure I would think.
>
> What would interest me is the question whether employing
> permutation could in general be better than block chaining
> or not. If not, then the scheme would have no value.

OK, consider a 128-bit block cipher where I permute the 16 bytes then
pass the clumps through a 16x16 sbox...

There will be a lot of weak permutations where the diffusion is not
terribly high amongst all words.

Also the PRNG must be reseeded for each block, otherwise you have a
stream cipher, not a block cipher... not terribly useful, since
seeking is a benefit of block ciphers.
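The permute-then-substitute idea can be sketched concretely.  Below is a
minimal sketch of a PRNG-driven byte permutation (a Fisher-Yates shuffle
of the 16 bytes of a 128-bit block); the LCG is only a stand-in for the
key-derived, per-block-reseeded PRNG discussed above and is NOT
cryptographic:

```c
#include <stdint.h>

/* Toy LCG standing in for the key-derived, per-block-reseeded PRNG
   discussed above -- purely illustrative, not cryptographic. */
static uint32_t lcg_state;

static uint32_t lcg_next(void) {
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return lcg_state >> 16;
}

/* Fisher-Yates shuffle of the 16 bytes of a 128-bit block,
   driven by the PRNG seeded fresh for this block. */
void permute_block(uint8_t block[16], uint32_t seed) {
    lcg_state = seed;
    for (int i = 15; i > 0; i--) {
        int j = (int)(lcg_next() % (uint32_t)(i + 1));
        uint8_t t = block[i];
        block[i] = block[j];
        block[j] = t;
    }
}
```

Reseeding per block (as noted above) keeps the construction a block
cipher: the permutation for block k depends only on the key material and
k, so blocks can still be decrypted out of order.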

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: A Different Kind of Quantum Computer
Date: Tue, 26 Sep 2000 00:07:19 GMT

On Mon, 25 Sep 2000 15:37:42 -0700, "A. Melon"
<[EMAIL PROTECTED]> almost wrote, in part:

>Quantum computers can only solve problems in BQP, which is not thought to include
>NP-complete problems.  Now there is a new type of quantum computer 
>being proposed: 

>http://xxx.lanl.gov/abs/quant-ph/9910073 

>The paper claims this could solve NP-complete and sharp-P-complete problems.

>No, this isnt the End Of Cryptography As We Know It (EOCAWKI).  Even if the
>claims are true, it may be impossible to actually build.  Even if it 
>is possible to build, it will be easy to make ciphers that defeat it.  

>For example, make a random, public, invertible, 27x27 sbox, and distribute
>it on CD-ROM.  Encrypt a block by applying AES, then passing it 
>through the sbox (in groups of 27 bits), then applying AES again.  The 
>sbox is only 432MB, so it is easy to distribute. 

Oh, dear. I have visions of Scott27u now.

>The quantum computer 
>would need millions of qbits to crack it.

I wonder. There might be a meet-in-the-middle attack, because the
quantum computer's copy of the CD-ROM would *not* have to be in qbits:
since the CD-ROM is fixed and public, the number of qbits only has to
be the length of the secret key. But the non-qbit size of the problem
might still create problems in avoiding decoherence or something.

>If quantum computers reach 
>that size, switch to a bigger sbox on DVD.  Mass storage should easily 
>grow faster than quantum computers.  

>This new quantum computer would actually be of great benefit to cryptography. 
>Given a set of axioms and a theorem, such a machine could always find 
>a proof of the theorem, if a proof exists that fits into the number of 
>qbits you have.  We might use it to prove P!=NP.  We might use it to 
>prove that AES+sbox really is secure against quantum computers.  We 
>might use it to discover new cryptosystems with provable security and 
>efficiency, along with their proof.  

>Does anyone know whether the claims in that paper are valid?

Now that I've made the text of your posting readable, maybe someone
will answer... 

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (Matthew Skala)
Subject: Re: What am I missing?
Date: 25 Sep 2000 17:23:29 -0700

In article <8qnl7o$f3q$[EMAIL PROTECTED]>,
Lon Willett  <[EMAIL PROTECTED]> wrote:
>a large advantage here; they need only find a few machine
>distinguishable bits to play with, while the compressors would

Note that it's a bad thing for "wild" original audio, such as a user
might record with a microphone, to falsely appear watermarked.  If I sing
into my DVD-Audio recorder and it refuses to record me, I won't buy
another one from that company.  Worse, if I'm a content provider producing
work for hire (i.e. I'm what in a less enlightened time would have been
called an artist) and the equipment refuses to record the original, then
my masters can't make money from my work.

So... if you count "bits" in their information-theoretic definition, it's
reasonable to suppose that each bit represents some feature of the signal
that will be true or false roughly at random, without bias, in wild
original audio.  If it represented some feature that was always or never
present, then it would be lost in lossy compression, or easy to pick off
when modified.

The "don't copy me" bit can't really be just one bit, then; it has to be a
bunch of bits, with the "don't copy me" condition signalled by all those
bits being set appropriately.  How many bits?  That depends on the rate of
false positives you're willing to accept.

If you want your recording device to reject watermarked audio within one
second, and to reject non-watermarked "wild" original audio at most once
in, say, 4096 seconds (a little more than an hour), then your watermark
channel has to be able to carry 12 bits per second when measured as
unbiased bits - more if you want more than a simple "yes/no", less if
you're willing to accept more rejections of unmarked audio or a longer
time to reject marked audio.
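The arithmetic above can be checked directly: if the recorder checks k
unbiased watermark bits per second of wild audio, a chance match occurs
with probability 2^-k per second, i.e. once per 2^k seconds on average.
A trivial sketch (function name is mine, for illustration):

```c
#include <stdint.h>

/* Expected seconds between false positives when a recorder checks
   k unbiased watermark bits per second of wild (unmarked) audio.
   Chance of an accidental match in any one second is 2^-k. */
uint64_t false_positive_interval(unsigned k) {
    return (uint64_t)1 << k;
}
```

With k = 12 this gives 4096 seconds, matching the once-per-hour-and-a-bit
figure above.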
-- 
Matthew Skala
[EMAIL PROTECTED]              I'm recording the boycott industry!
http://www.islandnet.com/~mskala/

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What makes a cipher resistant to Differential Cryptanalysis?
Date: Tue, 26 Sep 2000 00:36:53 GMT

In article <[EMAIL PROTECTED]>,
  Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>
> Tom St Denis schrieb:
> >
> >   Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > >
> > > Tom St Denis wrote:
> > > >
> > > >   Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > The catch is with the term 'random'. The randomness
> > > > > required by an ideal OTP, i.e. the exact theoretical
> > > > > assumptions of it, cannot be absolutely verified for
> > > > > any given sequence purported to be an ideal OTP.
> > > > > That's why an ideal OTP can't be obtained in practice,
> > > > > even if it could exist in the world according to theory.
> > > >
> > > > Are you sure about that?
> > > >
> > > > In the series "111111111111111" what is the next bit?
> > >
> > > I am not sure whether we are talking about the
> > > same thing. Given a source in practice, we have no
> > > way of absolutely confirming that it satisfies the
> > > assumptions for generating an ideal OTP. One can
> > > tell that something does not satisfy being an OTP,
> > > e.g. a source that emits 0 and 1 alternately. That
> > > is, you could give negative answers but never positive
> > > answers.
> >
> > It's sufficient to say that if YOU cannot guess the next bit,
> > then the cipher is secure.
>
> Whether I can guess or not doesn't matter much. The
> opponent may be able to guess, if the source is not
> perfectly random. There is no way to prove perfect
> randomness in practice, as discussed many times in
> this group.

Ah, but something can be practically random.  Take the output of, say,
RC4 that is hashed (gather 30 bytes, then hash them via SHA-1, etc.).  I
would bet 10 bucks that without knowing the RC4 seed you couldn't guess
the output better than 1/2.

Make the RC4 seed 256 bits long ... and voila.

Of course, how do you make a random RC4 seed?  A recursive problem...

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Samuel Paik <[EMAIL PROTECTED]>
Crossposted-To: com.compression
Subject: Re: Question on biases in random numbers & decompression
Date: Tue, 26 Sep 2000 01:08:27 GMT

In article <[EMAIL PROTECTED]>, Benjamin Goldberg
<[EMAIL PROTECTED]> wrote:
> I've been looking for a way to convert a stream of unbiased random
> bits into an unbiased stream of random numbers in an arbitrary
> range, and I think I've hit on an idea that hasn't been suggested
> before:
> Take an arithmetic decoder, and initialize it to believe that the
> values 0, 1, 2 are equiprobable, and no other values will occur.
> Then feed it the stream of random bits.  This should result in an
> unbiased stream of random numbers in the range 0..2.

I wrote (but didn't post!)
> Depends on the details, but the most likely result (as in, the usual
> configuration of an arithmetic coder and an equiprobable model) is
> to produce a single symbol, designated 0, 1, or 2.

Obviously my brain had checked out when I wrote this.

Yes, you could use an arithmetic decoder and a three-symbol
equiprobable model to produce a stream of equiprobable 0s, 1s,
and 2s from an unbiased bitstream.  It would be harder to
produce an unbiased stream of {0,1,2}* directly from a biased
bitstream; I'll have to think about that (however, it is trivial
to produce an unbiased bitstream from a biased bitstream, e.g.
with a von Neumann compensator).
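The von Neumann compensator mentioned above can be sketched as follows:
read the biased (but independent) input bits in non-overlapping pairs,
emit the first bit of each unequal pair, and discard equal pairs.

```c
#include <stddef.h>

/* von Neumann compensator: turn a biased (but independent) bitstream
   into an unbiased one.  Input and output are one bit per byte.
   Pairs: 01 -> emit 0, 10 -> emit 1, 00/11 -> emit nothing.
   Returns the number of output bits written. */
size_t von_neumann(const unsigned char *in, size_t n_in,
                   unsigned char *out) {
    size_t n_out = 0;
    for (size_t i = 0; i + 1 < n_in; i += 2) {
        if (in[i] != in[i + 1])
            out[n_out++] = in[i];  /* unequal pair: first bit is unbiased */
    }
    return n_out;
}
```

If each input bit is independently 1 with probability p, the pairs 01 and
10 occur with equal probability p(1-p), which is why the output is
unbiased regardless of p.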

> Since I think that if I were *starting out* with random
> (equiprobable) 0s, 1s, and 2s, and *en*coded them with
> an arithmetic coder, I *should* get a random bitstream
> (with 0 and 1 equiprobable),

Going this way looks a lot harder to prove.

Sam Paik
an occasional nitwit


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Question on biases in random-numbers & decompression
Date: Tue, 26 Sep 2000 01:20:45 GMT


On Mon, 25 Sep 2000 23:25:39 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt Benjamin Goldberg
<[EMAIL PROTECTED]> wrote:

>Terry Ritter wrote:
>> 
>> On Sun, 24 Sep 2000 21:26:07 GMT, in
>> <[EMAIL PROTECTED]>, in sci.crypt Benjamin Goldberg
>> <[EMAIL PROTECTED]> wrote:
>> 
>> >[...]
>> >Does anyone know if this is right, or close to right?  And if it
>> >isn't, is using an arithmetic (de)coder on the right track to get an
>> >unbiased distribution while discarding a minimum amount of random
>> >data from the bit stream?
>> 
>> For cryptographic use, I think there is some advantage to simply
>> discarding parts of the stream at random, and if that is the way the
>> stream is used, so much the better.
>
>If I'm using a TRNG, especially a slow one, it's better to use as much
>of the stream as possible.  

No, I'd say it's better to use a faster random source so that one can
afford to throw some of it away.  We're probably talking about throwing
away 25 percent; even double that should be affordable.


>Decimation of the stream is good for making
>a PRNG less predictable, but the premise is that I've already got a
>stream that's perfectly unpredictable.

Unfortunately, that premise is a hope and a wish, unprovable in theory
and unverifiable in practice.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

Date: Mon, 25 Sep 2000 20:40:23 -0500
From: [EMAIL PROTECTED] (Jeff Gonion)
Subject: Explanation of LFSR's in Cryptography


I have put together a paper that explains LFSR principles for use in
cryptography. Tom has been nice enough to post it on his website at...

  http://www.geocities.com/tomstdenis/files/lfsr.pdf

This was motivated primarily by the lack of decent online information
about LFSRs pulled together in one place, by seeing the type of questions
commonly posted concerning LFSRs, and by my own frustration when I
started looking for information.

The paper starts off basic, but includes source code for the
Berlekamp-Massey analysis algorithm, instruction in modulo-2 polynomial
math, Golomb's criteria for randomness, and Euler's totient function.
Introductory examples of known-plaintext and correlation attacks are also
given. It is written for those without a serious mathematical background
(you know, in plain English).
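As a small taste of the subject, here is a sketch of one step of a
16-bit Galois-form LFSR using the commonly cited maximal-length
polynomial x^16 + x^14 + x^13 + x^11 + 1 (tap mask 0xB400); from any
nonzero seed the state cycles through all 2^16 - 1 nonzero values:

```c
#include <stdint.h>

/* One step of a 16-bit Galois LFSR with tap mask 0xB400,
   i.e. the maximal-length polynomial x^16+x^14+x^13+x^11+1. */
uint16_t lfsr_step(uint16_t s) {
    uint16_t lsb = s & 1u;   /* output bit shifted out this step */
    s >>= 1;
    if (lsb)
        s ^= 0xB400u;        /* feed the polynomial back in */
    return s;
}

/* Count steps until the state returns to the seed.  For a
   maximal-length polynomial this is 2^16 - 1 = 65535. */
uint32_t lfsr_period(uint16_t seed) {
    uint16_t s = seed;
    uint32_t n = 0;
    do { s = lfsr_step(s); n++; } while (s != seed);
    return n;
}
```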

I hope this will provide a useful resource for those who have questions,
and I welcome suggestions for additions/improvements.

- Jeff Gonion

------------------------------

Subject: RC4 from the past
From: [EMAIL PROTECTED] (=?ISO-8859-1?Q?Jacques_Th=E9riault?=)
Date: Tue, 26 Sep 2000 01:35:37 GMT

I was reading Donald E. Knuth's "The Art of Computer Programming", second
edition, trying to find documentation about checking pseudo-random
sequences, when I saw on page 32 of volume 2 a description of an
algorithm that looks very much like RC4.  In the book it is called
Algorithm B (randomizing by shuffling).

Does anybody have comments?

Jacques Thériault
http://nightbird.n3.net

------------------------------

From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: Tying Up Loose Ends - Correction
Date: Tue, 26 Sep 2000 01:38:52 GMT

Tim Tyler wrote:
> Bryan Olson wrote:
> : Yes, you can work on reducing that constant.  The mistake is
> : pretending it does something to the keyspace.
>
> It doesn't do anything to the keyspace.  This was the point
> of my attempts to distinguish the "effective keyspace"
> from the actual keyspace.
>
> The "effective keyspace" contains the keys that can't be dismissed out
> of hand on a priori grounds - and may need further consideration.

The only attack you considered was exhaustive search. Not
one single key can be dismissed a priori.  Every one
requires a trial decryption.

> I've said if this is likely to rub people up the wrong
> way then they are invited to present an alternative
> concise way of saying operation X "reduces the effective
> keyspace by five bits".

The problem is the meaning, not the wording.  I'd suggest
"Makes testing each key faster most of the time."

[...]
> : No.  The effect of the keyspace is still there. [...]
>
> I don't think I ever said that these keys could be
> discarded without doing any work at all.

It's an intractable amount of work for any reasonable
keyspace.


> In fact, *if* the cypher has been broken, being supplied
> with known plaintext can sometimes rule out keys en mass,
> with practically no work.

Correct.  If we have a short cut solution, we may be able to
eliminate large classes of keys without doing a unit of work
for each key.


[...]
> : If I give you a thousand bits of Blowfish (448-bit key)
> ciphertext and corresponding plaintext, what's the
> "effective keyspace"?
>
> Probably very small, to the point of being practically non-existent.
> The work required to reject each key is about as small as it can
> ever be expected to be, and - in all likelihood - enough information is
> supplied to be able to reject all but the correct key.

So try your search.  If you cannot search a "practically
non-existent" "effective keyspace", then we have a good
indication that your effective keyspace concept is
misguided.


--Bryan
--
email: bolson at certicom dot com


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: continuous functions and differential cryptanalysis
Date: Tue, 26 Sep 2000 02:29:41 GMT

Woohoo I used calc for something.

Theorem.  No continuous [finite] function can ever have a minimum
Probability associated with a differential (over all possible
differentials).

Lemma.  Examine the derivative of function 1/x in GF(2)^n, given as -
1/x^2.  We note that the derivative is not a bijection in the same
field.  This means that two pairs of differences will cause the same
output difference.  For example with 'y = x^2' mod 257, we find that
2^2 = 255^2 = 4.  Therefore this means "at least" two pairs of inputs
to 'F(x) = 1/x' must have the same output difference.

Lemma.  Examine F(x) = g^x mod p, p = prime, g = primitive generator.
If 'x' is even then F'(x) = xg^(x-1) and this cannot be a bijection mod
a prime (I think).  If 'x' is odd then the power is even and cannot be
a bijection (x^2n mod p doesn't work!).

Lemma.  Given any continuous bijective function F(x) = ax^b mod p (note
that inversion is just ax^(p-2) mod p) the derivative cannot be a
function since by the power rule the exponent must be odd (if p is a
prime, otherwise the host function is not a bijection).

Proof.  Therefore it's impossible to have a continuous function which
is a bijection to have a derivative that is a bijection.  If the
derivative is not a bijection then the function cannot have the lowest
DP possible.

I am still thinking about g^x mod p, when 'x' is even... xg^(x-1) is
not a bijection because when x == g, we have g^x and again that's not a
bijection, but if 'g' is odd... if 'g' is odd and x==g we have xg^(x-1)
which is g^x... hmm seems to work...

OK, it's a semi-weak idea, but I know any function of the form

F(x) = ax^b mod p cannot be totally ideal...

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: NTRU question (actually truncated modular polynomial question)
Date: Tue, 26 Sep 2000 02:47:50 GMT

NTRU uses truncated modular polynomials as its primary objects.
I understand how multiplication and addition work, but I do not
understand how inversion works... I can't seem to understand the stuff
on NTRU's web site.

For those who don't want to bother to visit the web site, here are the
addition and multiplication operations:

Definitions:
N : the number of terms;  the order of the poly is N-1.
m : the modulus
% : modular reduction; x = y % m results in values 0..(m-1)
x_i : the ith term of x.

To calculate c = (a + b) % m
for( i = 0 .. N-1 ) c_i = (a_i + b_i) % m

To calculate c = (a * b) % m
for( i = 0 .. N-1 ) c_i = sum(j=0..N-1, a_j * b_((i-j)%N) ) % m
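The multiplication rule above is a cyclic convolution in the ring
Z_m[x]/(x^N - 1); a minimal C sketch of it (with the modular reductions
made explicit) might look like this:

```c
#include <stdint.h>

/* c = a * b in Z_m[x]/(x^N - 1): cyclic convolution of the
   coefficient vectors, each result coefficient reduced mod m. */
void poly_mul(const int *a, const int *b, int *c, int N, int m) {
    for (int i = 0; i < N; i++) {
        long s = 0;
        for (int j = 0; j < N; j++) {
            int k = i - j;
            if (k < 0) k += N;          /* index (i - j) mod N */
            s += (long)a[j] * b[k];
        }
        c[i] = (int)(s % m);
        if (c[i] < 0) c[i] += m;        /* keep result in 0..m-1 */
    }
}
```

As a sanity check, multiplying by the constant polynomial 1 (coefficient
vector {1, 0, ..., 0}) leaves the other operand unchanged, and
multiplying by x rotates the coefficient vector by one place.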

Assuming that the multiplicative inverse function is modinv( a, m ),
I know that if m = x*y (where gcd(x,y)=1), then
        modinv(a, m) = modinv(a, x) * modinv(a, y)
And I think I can even figure out how to get modinv( a, x**n ) where
x is prime, if I already have modinv( a, x )...
But how do I get modinv( a, x )???

An equivalent question seems to be: how do I do long division for
truncated modular polynomials?

If someone could post clear pseudocode or C code, it would be much
appreciated...  (clearer than the stuff on NTRU's web site, that is)

--
... perfection has been reached not when there is nothing left to
add, but when there is nothing left to take away. (from RFC 1925)

------------------------------

From: Paul Rubin <[EMAIL PROTECTED]>
Subject: Re: continuous functions and differential cryptanalysis
Date: 25 Sep 2000 19:59:16 -0700

Tom St Denis <[EMAIL PROTECTED]> writes:

> Woohoo I used calc for something.
> 
> Theorem.  No continuous [finite] function can ever have a minimum
> Probability associated with a differential (over all possible
> differentials).
> 
> Lemma.  Examine the derivative of function 1/x in GF(2)^n, given as -

Um, what's wrong with this sentence?  Hint: do you know what a
derivative is?

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
