Cryptography-Digest Digest #551, Volume #9       Fri, 14 May 99 20:13:04 EDT

Contents:
  Re: Lemming and Lemur (David Wagner)
  Re: Hello I am paper, please read me. (David Wagner)
  Re: On Contextual Strength (Bryan Olson)
  RSA-modulus binary decomposition ([EMAIL PROTECTED])
  Re: Lemming and Lemur: New Block and Stream Cipher ([EMAIL PROTECTED])
  Re: Toy Round Function (David Wagner)
  Europe and USA encryption export restrictions (Nils Zonneveld)
  Re: Europe and USA encryption export restrictions ("Steven Alexander")
  Re: Toy Round Function ([EMAIL PROTECTED])
  Re: Hello I am paper, please read me. ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Lemming and Lemur
Date: 14 May 1999 12:03:08 -0700

In article <7hg90g$er4$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
> In article <7hfhkg$fmi$[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] (David Wagner) wrote:
> > Dan Bernstein has pointed out in private email that there is an easy
> > slide attack [1] on Lemming needing 2^64 known texts.
> 
> It looks more like 2^68 to me, since (P,C) can't be part of a
> useful slid pair unless P[0]=C[0].

Hmm.  I don't see it.

A pair (P,C) (P',C') of known plaintexts is said to be a _slid pair_ 
if P' = F(P).  (F is the 128-bit -> 128-bit round function.)  Slid pairs
should be found after about 2^{64.5} known texts (birthday argument).

Where does the relation P[0] = C[0] come in?

> I don't see how to extend this attack to Lemur,
> which is interesting, since I had thought Lemur was less secure
> than Lemming.

I think the attack can be extended to Lemur, under the known-plaintext
attack model.  You just look for collisions in the internal 128-bit
state variable.  For technical reasons, there might be some increase in
the amount of known texts needed, if you don't refresh the IV very
frequently.

See also the section on WAKE-ROFB in the slide attack paper for more
discussion on how to apply slide attacks to stream ciphers.  Because of
the similarity between Lemur and WAKE-ROFB, those techniques will also
be applicable to Lemur.
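
[Moderator's sketch: the slid-pair definition above can be checked at toy
scale.  Everything here is an assumption except the slide property itself:
a 16-bit state stands in for 128 bits, and F is an arbitrary keyed round
function invented for illustration.]

```python
import random

MASK = 0xFFFF  # toy 16-bit state standing in for Lemming's 128 bits

def F(x, k):
    # An arbitrary keyed round function, invented for illustration only;
    # the slide property below holds for any round function F.
    x = (x + k) & MASK
    x ^= (x << 5) & MASK
    return ((x << 3) | (x >> 13)) & MASK   # 16-bit rotate left by 3

def E(p, k, rounds=20):
    # The cipher is the same keyed round iterated -- exactly the
    # self-similarity a slide attack exploits.
    for _ in range(rounds):
        p = F(p, k)
    return p

random.seed(1)
key = random.randrange(2**16)

# Collect a few times sqrt(2^16) known texts (birthday argument).
plaintexts = random.sample(range(2**16), 1000)
texts = [(p, E(p, key)) for p in plaintexts]
lookup = dict(texts)                       # plaintext -> ciphertext

# (P,C), (P',C') is a slid pair when P' = F(P).  The key is used here
# only to *verify* the definition; a real attacker would instead detect
# slid pairs through the weakness of the single round F.
slid = []
for p, c in texts:
    p2 = F(p, key)
    if p2 in lookup:
        slid.append(((p, c), (p2, lookup[p2])))

# Slide property: E(F(P)) = F(E(P)), hence C' = F(C) for every slid pair.
assert all(c2 == F(c, key) for (_, c), (_, c2) in slid)
print(len(slid), "slid pair(s) found among", len(texts), "known texts")
```

With n-bit state and N known texts, about N^2 / 2^n slid pairs are
expected, which is where the 2^{64.5} figure for a 128-bit state comes
from.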

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Hello I am paper, please read me.
Date: 14 May 1999 12:09:42 -0700

In article <7hgn2u$os6$[EMAIL PROTECTED]>,
Gustaf Dellkrantz  <[EMAIL PROTECTED]> wrote:
> If you look at this you see that when shuffle=0, the first and last
> value stay the same. When shuffle=4 no value stay the same and in all
> other cases one value stays the same.

Note that this is just a corollary of a conjugacy class argument.

Two items are conjugate if they have the same number of cycles of each
length, when written in cycle notation.  Thus, (1 2 3) (4) (5 6) is
conjugate to (1) (2 5 4) (3 6), but not to (5 2 3 1) (4 6).

The number of values that stay the same is the number of cycles of length
1 in the cycle decomposition, and you can use this to classify the value
of the shuffling-bisector.  But you can also extract more information,
using this conjugacy technique: you can count the numbers of cycles of each
length, and use this more powerful classification technique to learn more
information about the value of the shuffling-bisector.

The reason this works is that A^{-1} o R_j o I o A is conjugate to R_j o I,
and we can easily calculate the conjugacy class of R_j o I for each j.
(See my previous post for notation.)
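
[Moderator's sketch: the invariant can be checked numerically.  The
permutations are the examples above written 0-based; the random A stands
in for the unknown shuffle.]

```python
import random

def cycle_type(perm):
    # perm[i] is where i maps; returns the sorted multiset of cycle
    # lengths, which is exactly the conjugacy-class invariant.
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            n, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                n += 1
            lengths.append(n)
    return tuple(sorted(lengths))

def compose(p, q):                       # (p o q)(i) = p(q(i))
    return [p[q[i]] for i in range(len(q))]

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return inv

# The examples from the text, written 0-based:
# (1 2 3)(4)(5 6), (1)(2 5 4)(3 6), (5 2 3 1)(4 6)
p1 = [1, 2, 0, 3, 5, 4]
p2 = [0, 4, 5, 1, 3, 2]
p3 = [4, 2, 0, 5, 1, 3]
assert cycle_type(p1) == cycle_type(p2) == (1, 2, 3)   # conjugate
assert cycle_type(p3) == (2, 4)                        # not conjugate

# A^{-1} o R o A has the same cycle type as R for any permutation A, so
# the cycle type survives the unknown conjugation and leaks information
# about R.
random.seed(0)
A = list(range(6))
random.shuffle(A)
R = p1
assert cycle_type(compose(inverse(A), compose(R, A))) == cycle_type(R)
print("cycle types match:", cycle_type(R))
```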

Did that make any sense?

------------------------------

From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: On Contextual Strength
Date: Fri, 14 May 1999 11:25:11 -0700


John Savard wrote:
> 
> Bryan Olson wrote, in part:
> 
> >I want to reject ciphers if there exists a tractable break,
> >without regard to whether an adversary may know or discover that
> >break.
> 
> >No.  I mean what I wrote.
> 
> I'd want to reject such ciphers too. However, I am helpless to reject
> a cipher for which I do not know the break, or of the break - and my
> adversaries are smarter than I am.
> 
> >If a cipher is strong under the idea I proposed, then _no_
> >adversary can break it since a break does not exist.
> 
> So you do mean this.
> 
> And you are correct - that's a good definition of "strong".
> 
> But we don't know if tractable breaks exist that we don't know about.
> We don't _even_ know if tractable breaks exist that we don't know
> about, but our adversaries know about. We only know about the breaks
> we know.

Which applies equally to both cases, and thus favors neither.

But consider a theorem that shows a tractable break exists but
gives no clue as to what the method is - the non-constructive
proof of insecurity.  In that case the two definitions direct us
differently.  The theorem would not help our adversary break
the cipher, so under the adversary-relative standard, it doesn't
mean the cipher is weak.

> Terry Ritter asks us to consider the possibility our adversaries know
> something we don't.

Or will come to know such a thing within the intelligence life
of the data.

So instead of relying on our assessment of cipher strength - something
our adversaries may know more about than we do - we might rely on
our assessment of our adversaries' capabilities - something they are
absolutely certain to know more about than we do.

> You are now saying that you believe in following an even higher
> standard. I don't think he opposes this higher standard. What he
> opposes is the *lower* standard of assuming a cipher is strong if we,
> ourselves, don't know the break.

I think we have a fundamental difference of opinion as to whether
how hard we look for a weakness can raise that standard.

Of course what matters in practice is the system we actually employ.
The adversary doesn't care whether we assumed it strong or fielded
it without assuming it strong.

> And that lower standard at least _appears_ to be the one followed in
> practice by the conventional academic cryptographic community. Plus a
> fudge factor - I tend to think this is unavoidable, even if it could
> be made a little larger 

Yes, I'd love to have a proof of security, but I don't.  We can
only act on what we know, and I think that means we'd better put
effort into knowing as much as we can.

> - but Mr. Ritter appears to be seeking to
> reject that practice completely.

Precisely what practice?  A complex multi-cipher is itself a
cipher.  If we don't field unproven ciphers, we don't field
the multi-cipher system.

> Well, I think that even his
> multi-cipher proposal is just a way of getting a bigger fudge factor -
> but a _much_ bigger fudge factor without a correspondingly large
> sacrifice of processing time.

If we refuse to field systems of unproven security, then we 
have the OTP and nothing else.  The multi-cipher protocol goes
out with the rest.

Obviously, multiple encryption increases the fudge factor.
Whether it's a good way to increase the fudge factor is 
debatable, but that's for another thread.

My argument is that the many-cipher system is _worse_ for us
and better for the attacker.  I think it's much more likely 
to expose 10% of the data than is a single cipher system, and
unless we intelligently partition the data among the ciphers,
exposing 10% is probably 90% as bad as exposing everything.

I don't see any justification for the idea that we're
fracturing the adversary's effort so he won't be able to break
any significant portion of the traffic.  It seems that we'd
have to assume that our adversary's attacks are applicable
only to individual ciphers, and that's not the way most attacks
we know about work.  We'd have to assume he can't easily tell
whether his attacks apply to a cipher, which seems absurd.

Assumptions about how my adversary attacks ciphers seem worse
than assumptions about what is possible.  At least there's a 
good chance that academic cryptanalysis is pretty good.  It 
really is science, and it really can make progress.  My attacker
may be better at it, but we're looking for the same thing.  
There's nothing fundamental stopping me from knowing what he 
knows.

No research is going to tell me the capabilities of my enemies.
If you disagree with me about the generality of the adversary's
attacks, how can one of us convince the other?  Neither position
is provable or falsifiable.  It's not science; we can't
make progress.


--Bryan

------------------------------

From: [EMAIL PROTECTED]
Subject: RSA-modulus binary decomposition
Date: Fri, 14 May 1999 11:11:13 GMT

Binary decomposition of an RSA modulus.

We use symmetric forms to represent the bit components of an RSA
modulus.  As a result we get a system of linear equations.  This system
is not the same as in linear algebra; I would like to call it an
ordered system of linear equations.  The order is due to the following
rule: if '1+1' appears in equation [i], it can be carried into the next
equation [i+1] as an added '1'.  The following example demonstrates the
approach.

Let m = 37 (hex) = 5 * B be an RSA modulus.  The factors 5 and B (11
decimal) are to be found.  In binary, 37 = 110111, 5 = 0101, B = 1011.
We write the factors in binary as a(3)a(2)a(1)a(0) and b(3)b(2)b(1)b(0).
At this point we know that

(1)                b(4) = b(5) = b(6) = a(4) = a(5) = a(6) = 0.

The system of linear equations then looks as follows:

1 = b(0)a(0);
1 = b(1)a(0)+b(0)a(1);
1 = b(2)a(0)+b(1)a(1)+b(0)a(2);
0 = b(3)a(0)+b(2)a(1)+b(1)a(2)+b(0)a(3);
1 = b(4)a(0)+b(3)a(1)+b(2)a(2)+b(1)a(3)+b(0)a(4);
1 = b(5)a(0)+b(4)a(1)+b(3)a(2)+b(2)a(3)+b(1)a(4)+b(0)a(5);
0 = b(6)a(0)+b(5)a(1)+b(4)a(2)+b(3)a(3)+b(2)a(4)+b(1)a(5)+b(0)a(6);

From the first equation it follows that

(2)                b(0)=a(0)=1.

Using this and (1) we obtain:

1 = b(1)+a(1);
1 = b(2)+b(1)a(1)+a(2);
0 = b(3)+b(2)a(1)+b(1)a(2)+a(3);
1 = b(3)a(1)+b(2)a(2)+b(1)a(3);
1 = b(3)a(2)+b(2)a(3);
0 = b(3)a(3);

From the first equation and (2) it follows that it does not matter
whether b(1)=1 and a(1)=0 or b(1)=0 and a(1)=1.
Our choice is

(3) b(1)=1 and a(1)=0

Then the system can be transformed to:

1 = b(2)+a(2);
0 = b(3)+a(2)+a(3);
1 = b(3)a(2)+a(3);
1 = b(3)a(2)+b(2)a(3);
0 = b(3)a(3);

At this point there are two variants.  It is
easy to see that at each step there can be
no more than two variants.  I suppose
that in each case the logical consistency
of the transformed system can be checked.

Case(a).  b(2)=1 and a(2)=0.

In this case we have b(3)+a(3)=0 and
b(3)a(3)=0, which is impossible.

Case(b). b(2)=0 and a(2)=1.

The system looks as follows:

b(3)+1+a(3)=0;
a(3)=1;
b(3)=1;
b(3)a(3)=0;

It seems that the second equation is wrong.
Not at all: due to the carry rule, this equation
receives a '1' from the first equation, and
logically all is correct.

The found factors are 0101 and 1011.

The challenge is to develop an algorithm that
implements this approach.
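
[Moderator's sketch of one such algorithm, in my own formulation: a
branch-and-prune search over the factor bits.  It carries the same
information as the ordered equations above, since fixing the low i bits
of both factors fixes the product modulo 2^i.]

```python
def factor_bits(m, nbits):
    # Try to write m as a * b where each factor has at most nbits bits.
    # Bits are decided from the least significant end; a partial choice
    # of the low i bits fixes a*b mod 2^i, so inconsistent branches are
    # pruned immediately -- at most two choices survive each step, the
    # "two variants" observation in the text.
    def search(a, b, i):
        if i == nbits:
            return (a, b) if a * b == m and a > 1 and b > 1 else None
        mod = 1 << (i + 1)
        for abit in (0, 1):
            for bbit in (0, 1):
                a2, b2 = a | (abit << i), b | (bbit << i)
                if (a2 * b2) % mod == m % mod:
                    found = search(a2, b2, i + 1)
                    if found:
                        return found
        return None
    return search(0, 0, 0)

print(factor_bits(0x37, 4))   # the worked example: hex 37 = 55 = 5 * B
                              # -> (5, 11)
```

Of course, for cryptographic sizes the number of surviving branches
grows, so this sketch does not by itself threaten real RSA moduli.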

For a better view of this example, please go to

www.online.de/home/aernst/RSA.html

or follow the link 'RSA-null sequrity' at

www.online.de/home/aernst

Regards
Alex


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Lemming and Lemur: New Block and Stream Cipher
Date: Fri, 14 May 1999 11:10:37 GMT

In article <7hgdls$t5u$[EMAIL PROTECTED]>,
  "Craig Clapp" <[EMAIL PROTECTED]> wrote:
>>TO LEMMING ENCRYPT A BLOCK:
>> for (i=0; i<32; i++)
>>     block = rotate(block ^ key[block[0]]);
>
>For the form of key specified, the expression
>     block = rotate(block ^ key[block[0]])
>
>is equivalent to the expression
>     block = shift(block) ^ rotate(key[block[0]]).
>
>where shift() does the same as rotate() but without
>the end-around action.
>
>In this case the rotation can become part of the key
>schedule, or more conveniently you just build the key
>with key[i][15] = perm[i]. Then the round function
>needs only a shift rather than a rotate. On many
>architectures the shift is more efficient than a rotate.

On the Pentium without MMX, it's faster to not use any shifts or
rotates of the block at all.  The loop is repeated a number of
times that is a multiple of the number of bytes in the block, so
it is equivalent to:

    for (i=0; i<32; i++)
        block ^= rot(i,key[block[i]]);

where rot(i,k) rotates k by 8*i bits in the opposite direction
as rotate(), moving k[i] to k[0].
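
[Moderator's sketch: the equivalence is easy to check on a scaled-down
model.  One 32-bit word stands in for the 128-bit block, the key table
is a random stand-in, and the byte-order and rotation-direction
conventions are my assumptions, chosen so the identity holds.]

```python
import random

M32 = 0xFFFFFFFF

def rotr8(x):
    # Per-round rotate of the whole block: byte 1 moves into position 0.
    return ((x >> 8) | (x << 24)) & M32

def rotl(x, b):
    # Rotation in the opposite direction, by b bits.
    b %= 32
    return ((x << b) | (x >> (32 - b))) & M32

def byte(x, i):
    # Byte 0 is the least significant byte (an assumed convention).
    return (x >> (8 * i)) & 0xFF

random.seed(2)
key = [random.randrange(2**32) for _ in range(256)]  # stand-in key table
block = random.randrange(2**32)

# Original form: rotate the whole block each round.
b = block
for i in range(8):                 # 8 rounds: a multiple of the 4 bytes
    b = rotr8(b ^ key[byte(b, 0)])

# Rotation-free form: the block is never shifted; instead XOR in a
# rotated copy of the key word and index a different byte each round.
# With the four rotations of the key table precomputed, no runtime
# shifts or rotates of the block are needed at all.
a = block
for i in range(8):
    a ^= rotl(key[byte(a, i % 4)], 8 * (i % 4))

assert a == b
print("both forms give", hex(b))
```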

If the rotations of the key are precomputed, then there is no
need for any runtime shifts or rotates at all.  On a Pentium
without MMX, the block is stored in 4 32-bit registers, so the
line inside the loop becomes 4 XORs.  There are only 4 bytes
in a 32-bit register, so it is only necessary to precompute 4
different rotations of the key rather than all 32.  This means
that the key expands from 4KB to 16KB.  I coded this on a
Pentium II that had 32KB of cache, so the precomputed, rotated
copies of the key could all fit within cache.

To extract the byte block[i], in 1/4 of the cases you just
AND a copy of a register with FF, in 1/4 of the cases you
shift a copy of a register by 24 bits, and in 1/2 the cases
you must both shift and AND.  These shifts are only applied
to a copy of one register, not the whole block.  There's no
need to ever shift or rotate the entire block.

I used these ideas when I did the timing tests on Lemming,
which is why I said it encrypts at 19 cycles per byte.


LCB



------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Toy Round Function
Date: 14 May 1999 11:39:05 -0700

In article <7hg0d0$91i$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
> for r = 0 to rounds
>    A = A + (((B ^ P[2r]) * S[B >> 22]) ^ P[2r + 1])
>    (A,B) = (B,A)
> next r

If you want to try your own hand at analysis of this cipher, I suggest
looking at what happens if you flip one bit of the input to the F-function.
(Here F(B) = ((B ^ P[2r]) * S[B >> 22]) ^ P[2r+1].)  See if you can find
any high-probability differential characteristics that way.  Then, see if
you can piece them together into a multiple-round characteristic.

Hint: This cipher is extremely vulnerable to differential cryptanalysis.

Or you might try looking at linear approximations, although I've always
found them a little harder to wrap my brain around.
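
[Moderator's sketch of one way to begin such an experiment.  The subkey
P0 and the S-table are random stand-ins for key material, not the real
key schedule, and forcing the S entries odd is my assumption.]

```python
import random
from collections import Counter

M32 = 0xFFFFFFFF
random.seed(3)

# Random stand-ins for one round's key material; S entries are forced
# odd so the multiply is invertible mod 2^32.
P0 = random.randrange(2**32)
S = [random.randrange(2**32) | 1 for _ in range(1024)]

def F(b):
    # P[2r+1] is omitted: a constant XOR on the output cancels in any
    # XOR difference, so it cannot affect the characteristic.
    return ((b ^ P0) * S[b >> 22]) & M32

diffs = Counter()
for _ in range(10000):
    x = random.randrange(2**32)
    diffs[F(x) ^ F(x ^ 1)] += 1    # flip the low input bit

# Flipping bit 0 never changes the top 10 bits, so both inputs select
# the same S entry s, and (u*s) ^ ((u^1)*s) always has its low bit set
# when s is odd.
assert all(d & 1 for d in diffs)
print("distinct output differences:", len(diffs))
print("most common:", diffs.most_common(3))
```

If a few differences dominate the counts, those are candidate one-round
characteristics to try chaining into a multiple-round characteristic.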

------------------------------

From: Nils Zonneveld <[EMAIL PROTECTED]>
Subject: Europe and USA encryption export restrictions
Date: Fri, 14 May 1999 22:03:01 +0200

Hi,

I have to admit that I know very little about encryption, but what I do
know is that today browser software for Europe is equipped only with 40
bit encryption.  Due to USA export restrictions the 128 bit encryption
version is not available.  This makes on-line banking[1] impossible and
cripples e-commerce.

There are in fact European made encryption algorithms that offer more
protection than 40 bits and even go further than 128 bits.  What I do
not understand is why these algorithms are not used in the European
versions of browser software.

That way on-line banking becomes a possibility and e-commerce within
Europe would be encouraged.  This would create an incompatibility
between the encryption algorithms used in European and USA browser
software, but that is the fault of the USA government.

Does anyone have a clue why there is still no decent encryption in
European browser software?  Technically it is a piece of cake to use
European based algorithms.  There seem to be a lot of political
obstructions that hinder the protection of European consumers.


[1] On-line banking in this context means transactions over the
internet instead of a direct dial-in to the bank's computers.


-- 
M.v.g.

Nils Zonneveld
=============================================================================
"Misschien is niets geheel waar, en zelfs dat niet"
(Maybe nothing is completely true and even that is to question)
Multatuli (Eduard Douwes Dekker) - Idee 1

------------------------------

From: "Steven Alexander" <[EMAIL PROTECTED]>
Subject: Re: Europe and USA encryption export restrictions
Date: Fri, 14 May 1999 14:41:17 -0700


The major browsing software is made by American companies.  They cannot
export the browsers with strong encryption whether it is made in the U.S. or
Europe or anywhere else.  In the future, I expect that someone outside the
U.S. will implement 128-bit encryption in Netscape since the source code is
available.  Internet Explorer may never yield stronger encryption outside
the U.S.

-steven



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Toy Round Function
Date: Fri, 14 May 1999 23:03:50 GMT


> If you want to try your own hand at analysis of this cipher, I suggest
> looking at what happens if you flip one bit of the input to the
> F-function.  (Here F(B) = ((B ^ P[2r]) * S[B >> 22]) ^ P[2r+1].)  See
> if you can find any high-probability differential characteristics that
> way.  Then, see if you can piece them together into a multiple-round
> characteristic.
>
> Hint: This cipher is extremely vulnerable to differential
> cryptanalysis.

You mean flipping bits in the P[] and S[] arrays, or in A and B?  I
think the latter, as this will help expose the P array.

Just for clarification, the S and P arrays are not fixed; they are
chosen based on the key.

Anyways, I will try flipping bits, and see what I come up with.  I know
I can crack this (I spent 10 mins writing it...)

Tom
--
PGP public keys.  SPARE key is for daily work, WORK key is for
published work.  The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Hello I am paper, please read me.
Date: Fri, 14 May 1999 23:07:51 GMT


> Note that this is just a corollary of a conjugacy class argument.
>
> Two items are conjugate if they have the same number of cycles of each
> length, when written in cycle notation.  Thus, (1 2 3) (4) (5 6) is
> conjugate to (1) (2 5 4) (3 6), but not to (5 2 3 1) (4 6).
>
> The number of values that stay the same is the number of cycles of
> length 1 in the cycle decomposition, and you can use this to classify
> the value of the shuffling-bisector.  But you can also extract more
> information, using this conjugacy technique: you can count the numbers
> of cycles of each length, and use this more powerful classification
> technique to learn more information about the value of the
> shuffling-bisector.
>
> The reason this works is that A^{-1} o R_j o I o A is conjugate to
> R_j o I, and we can easily calculate the conjugacy class of R_j o I
> for each j.  (See my previous post for notation.)

But you can't classify deck_A since you don't know it.  Any other
shuffling techniques though?

What about Jim's idea of making the sizes of A and B relatively prime?
Is that similar to cascading LFSRs to make them maximal cycle length?

Tom

--
PGP public keys.  SPARE key is for daily work, WORK key is for
published work.  The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
