Cryptography-Digest Digest #957, Volume #10 Sat, 22 Jan 00 23:13:01 EST
Contents:
Cryptanalysis of MYH (David Hopwood)
Re: Cryptanalysis of MYH (David Hopwood)
Re: Calculating A^-1 Mod P (Kent Briggs)
Re: Does RSA use real prime ? (Johnny Bravo)
Re: Combination of stream and block encryption techniques (Mok-Kong Shen)
Re: Transposition over ASCII-coded text (Mok-Kong Shen)
Re: Does RSA use real prime ? (Tom St Denis)
Re: Challenge. (Tom St Denis)
Re: Does RSA use real prime ? ("Hank")
----------------------------------------------------------------------------
Date: Sun, 23 Jan 2000 00:23:49 +0000
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Cryptanalysis of MYH
=====BEGIN PGP SIGNED MESSAGE=====
MYH is broken, in the sense that it is no more secure than the
Pohlig-Hellman variant of the original Maurer-Yacobi scheme.
(Its goal was to provide a better margin of security than a
forward-secret encryption algorithm based on that scheme; that
was the reason for using a subgroup and for most of the
additional complication.)
Here is a sketch of the attack:
e := y;     (or any other subgroup element of order q)
for (i := 1..r) {
    for (j := each prime number) {
        e := e^j mod m;
        if (e = 1) break;
    }
    q_i := j;
    e := y^(q/q_i);
}
(The above finds all of the q_i, in order from largest to
smallest.)
Using the q_i, it is possible to calculate discrete logs in
the subgroup (by raising an element to the power q/q_i,
calculating its discrete log to the base alpha^(q/q_i), which is a
log modulo q_i, for each i, and combining the results using the CRT).
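(For concreteness, a rough Python sketch of that computation for a
squarefree subgroup order; the brute-force inner logarithm and the toy
parameters m = 211, alpha = 2 are placeholders only, not MYH values.)

from math import prod

def dlog_prime_order(h, g, qi, m):
    # smallest x with g^x = h (mod m); brute force is fine for tiny q_i,
    # a real attack would use baby-step/giant-step or Pollard's rho here
    acc = 1
    for x in range(qi):
        if acc == h:
            return x
        acc = (acc * g) % m
    raise ValueError("no logarithm found")

def subgroup_dlog(y, alpha, q_factors, m):
    # Pohlig-Hellman style: solve in each prime-order piece, recombine via CRT
    q = prod(q_factors)
    x = 0
    for qi in q_factors:
        yi = pow(y, q // qi, m)                # project y into the order-q_i subgroup
        gi = pow(alpha, q // qi, m)            # generator of that subgroup
        ri = dlog_prime_order(yi, gi, qi, m)   # = log_alpha(y) mod q_i
        ni = q // qi
        x = (x + ri * ni * pow(ni, -1, qi)) % q   # CRT accumulation (Python 3.8+)
    return x

# Toy check mod 211, where alpha = 2 has order 210 = 2*3*5*7:
m, alpha, q_factors = 211, 2, [2, 3, 5, 7]
assert subgroup_dlog(pow(alpha, 123, m), alpha, q_factors, m) == 123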
The work required to find all of the q_i is
sum[number of primes < q_i: i = 1..r]
exponentiations mod m, which means that the margin between work
done in key pair generation, and work needed for an attack, is
effectively the same as for a p-1 factoring attack against the
Pohlig-Hellman scheme. (In fact, the structure of the attack is
very similar to p-1 factoring, since we are trying to factor
the order of the subgroup.)
It might be possible to salvage some of the ideas behind MYH,
but it seems as though finding a practical way to add a trap-door
to the discrete-log problem is very difficult.
- --
David Hopwood <[EMAIL PROTECTED]>
PGP public key: http://www.users.zetnet.co.uk/hopwood/public.asc
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
"Attempts to control the use of encryption technology are wrong in principle,
unworkable in practice, and damaging to the long-term economic value of the
information networks." -- UK Labour Party pre-election policy document
=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv
iQEVAwUBOIpJtzkCAxeYt5gVAQEszAgA0DLEK5kdqw/0s2vGGm24iam8+zBwBPAx
/sqr0qVwwxOs+fVbLGexvb8yVEBThXYGWdtxn+HRboTKYon+e40rwKFFk++Ph52V
o8klj57lQpxED8rPsS21OxYfxOblgNjbxp+FppwW5CCnZpSvveulBtnyscZE5OPS
xj0YToxBSfGkhLA3EIy/4a4r6KgYreM1O1FnMITJbAT3Jef1QArhodcZ3vaCbLea
riALxKmXEXxoOEO2boNgIW4KY5VJrU1b0ZtyoPEhgMW3D1zGIZV42cpnWueUgkzy
jGyVpva0hxV8K7Q8pARx9nf7MOOiuOX6Io8OVikTHSWxElo95YSxkA==
=dPBo
=====END PGP SIGNATURE=====
------------------------------
Date: Sun, 23 Jan 2000 00:38:06 +0000
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Cryptanalysis of MYH
=====BEGIN PGP SIGNED MESSAGE=====
David Hopwood wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
>
> MYH is broken, in the sense that it is no more secure than the
> Pohlig-Hellman variant of the original Maurer-Yacobi scheme.
[...]
> Here is a sketch of the attack:
>
> e := y;     (or any other subgroup element of order q)
> for (i := 1..r) {
>     for (j := each prime number) {
>         e := e^j mod m;
>         if (e = 1) break;
>     }
>     q_i := j;
>     e := y^(q/q_i);
This line should have been:
    e := y^product[q_k: k = 1..i]
> }
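For concreteness, here is the whole loop with that correction applied, as a
rough Python sketch (the toy modulus m = 211 and its group of order
210 = 2*3*5*7 are placeholders, not actual MYH parameters):

from math import prod

def small_primes(bound):
    # simple sieve; any source of primes in increasing order would do
    sieve = bytearray([1]) * (bound + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i, f in enumerate(sieve) if f]

def factor_order(y, m, bound=10000):
    # Recover the prime factors of the (squarefree) order of y mod m,
    # largest first, using only exponentiations mod m.
    primes = small_primes(bound)
    q_found = []
    while True:
        e = pow(y, prod(q_found), m)   # e := y^(q_1 * ... * q_i)
        if e == 1:
            return q_found             # the whole order has been factored
        for j in primes:
            e = pow(e, j, m)
            if e == 1:
                q_found.append(j)      # j is the largest remaining prime factor
                break
        else:
            raise ValueError("prime bound too small")

# Toy check: 2 has order 210 = 2*3*5*7 mod 211
print(factor_order(2, 211))            # [7, 5, 3, 2]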
- --
David Hopwood <[EMAIL PROTECTED]>
PGP public key: http://www.users.zetnet.co.uk/hopwood/public.asc
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
"Attempts to control the use of encryption technology are wrong in principle,
unworkable in practice, and damaging to the long-term economic value of the
information networks." -- UK Labour Party pre-election policy document
=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv
iQEVAwUBOIpNOTkCAxeYt5gVAQGWoQf/Workf6CC/Y8jVfl0qmUb0xpbEEglolz/
DX+0roEUlsRikZXl3xfKPNilty8aq9N19aZdDZ1VT0z4026O1U4urZXd1IzFNX1E
7o3w/UCJ531o58qmgjFuym7rtYhoyssMFwjCU1/dY5Lrf05BWSQu2Y7SCLUx7yOt
8LUIFgSu/a//VAs14KOGdYGYVlizEF00UxGzER1DveD5vbRQCxaw7p9TxqYmh+Cp
/35ZQTB7khr+Nd7ITvtkCjBfdETeI7xhZDgyThqp1KOkTWMw6KGi8zjerHWEmzI9
db+cVN1AgEVg9GnqGs2efA/IqEnMPn4mPPX5IBROqZoSaOrELLhJwQ==
=7rEw
=====END PGP SIGNATURE=====
------------------------------
From: Kent Briggs <[EMAIL PROTECTED]>
Subject: Re: Calculating A^-1 Mod P
Date: Sun, 23 Jan 2000 01:43:31 GMT
Michael Scott wrote:
> > I think you meant to say Euler, not Euclid.
> >
>
> No, I meant Euclid.
Calculating an inverse a^-1 mod p using a^(p-2) mod p is Euler's
Generalization, according to p. 248 of Schneier's AC.
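In code the two routes give the same answer for a prime modulus; a small
illustrative sketch (not taken from either book):

def inv_fermat(a, p):
    # a^(p-2) mod p; works because a^(p-1) = 1 (mod p) when p is prime
    return pow(a, p - 2, p)

def inv_euclid(a, m):
    # extended Euclidean algorithm; works for any modulus m with gcd(a, m) = 1
    r0, r1, s0, s1 = m, a % m, 0, 1
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if r0 != 1:
        raise ValueError("a is not invertible mod m")
    return s0 % m

assert inv_fermat(7, 101) == inv_euclid(7, 101) == 29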
--
Kent Briggs, [EMAIL PROTECTED]
Briggs Softworks, http://www.briggsoft.com
------------------------------
From: Johnny Bravo <[EMAIL PROTECTED]>
Subject: Re: Does RSA use real prime ?
Date: Sat, 22 Jan 2000 20:48:44 +0000
On Sun, 23 Jan 2000 01:24:36 +0800, "Hank"
<[EMAIL PROTECTED]> wrote:
> I heard from someone that these programs do not use real primes.
Either a number is prime or it isn't; there aren't any fake primes. :)
>Instead, they use numbers which are very possibly real primes on a
>statistical base. Is this true ?
The odds that a prime generated by PGP is not really prime are lower
than the chance that a meteor will crash into the Earth and extinguish all
life on the planet before you can finish reading this reply.
Current sources use Fermat tests to the bases 2, 3, 5, 7, 11, 13 and 17,
as well as making several other strong checks for pseudoprimes (composite
numbers that pass a Fermat test to the base 2). The density of
pseudoprimes for 1024 bit numbers (such as are used in 2048 bit keys) is
between 6.63e-45 and 1.88e-304 and is thought to be around 4.40e-89. The
density of actual 1024 bit prime numbers is about 1.41e-3; the chances of
getting a pseudoprime from a number of 1024 bits picked at random instead
of a real prime are at least 2.13e41 to 1, and are thought to be closer to
3.2e85 to 1.
Ronald Rivest in "Finding Four Million Large Random Primes", in
Advances in Cryptology: Proceedings of Crypto '90 gives a theoretical
argument that the chance of finding a 256-bit non-prime which satisfies
one Fermat test to the base 2 is less than 10^-22. The small divisor test
improves this number, and if the numbers are 512 bits (as needed for a
1024-bit key) the odds of failure shrink to about 10^-44. Thus, he
concludes, for practical purposes *one* Fermat test to the base 2 is
sufficient. The rest of the tests {3..17} are just extra security, along
with the rest of the pseudoprime tests.
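For illustration only (this is not the PGP source), a Fermat probable-prime
check to those bases looks roughly like this in Python:

FERMAT_BASES = (2, 3, 5, 7, 11, 13, 17)

def fermat_probable_prime(n):
    if n < 2:
        return False
    for a in FERMAT_BASES:
        if n % a == 0:
            return n == a            # the small bases themselves are prime
        if pow(a, n - 1, n) != 1:    # a^(n-1) = 1 (mod n) whenever n is prime
            return False
    return True                      # "probably prime"; pseudoprimes slip through rarely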
Best Wishes,
Johnny Bravo
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Combination of stream and block encryption techniques
Date: Sun, 23 Jan 2000 03:37:18 +0100
Terry Ritter wrote:
>
> >>>in sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
> >>>>The point worthy to be repeated is that one need not keep a strict
> >>>>distinction between stream and block ciphers, i.e. there is no sharp
> >>>>boundary between these...
>
> In my response, I described just such distinctions, and the beneficial
> implications to cipher design accruing from such classification.
Let me quote what you wrote there:
I claim there *is* a useful distinction between a cipher which
can work on the minimum possible amount of information
-- a bit -- and one which inherently requires multiple bits
before it can start to operate. Moreover, the distinction helps
clarify various characteristics of cipher operation.
I would like to remark that for a user who writes a message in ASCII
it doesn't matter whether the cipher can start to operate on 1 bit
or not, for he produces his material in units of 8 bits anyway.
I just don't see why that distinction is so useful in practice
(especially in the context of modern technology), except perhaps
for some very special applications. An algorithm can normally be
divided into components, and each component can be given a name and
its working discussed, without any 'necessity' of saying that
the algorithm as a whole belongs to the 'stream' or 'block' category,
I believe.
> In my last message, I gave details about how such a "filtering" stream
> cipher *would* fit into the stream cipher classification, but *not* if
> that classification was based, as you suggested, on the presence of a
> keying sequence. True, my example was a weak cipher, but if you think
> a cipher is only a cipher if it is strong, you might as well forget
> about Simple Substitution entirely. And if you are not ready to
> ignore Simple Substitution in cipher analysis, not seeing my weak
> example as a valid counterexample seems rather inconsistent.
It would indeed be extremely strange (and hence it never came to
my mind) that a classification of cipher types would depend on the
strength of the ciphers. I wonder from which sentences of my previous
writing you came to the 'suspicion' that I could possibly
consider 'a cipher is only a cipher if it is strong'. Could you
please tell me?
My point was that you mentioned 'digital filtering' but didn't
give any concrete details of that technique. Thus I wrote that
'Your information is rather vague for me' and asked you to provide
literature references. Was I wrong in doing that? Unfortunately you
couldn't give the references. So I remarked that in that situation
one should be careful in accepting what the person or persons
claimed to you about that technique. Was that remark a second big
blunder of mine? How could we start to discuss scientifically a
classification with respect to something without knowing anything
concrete about that something?
> If you want a more valid cipher in this category, surely we can talk
> about a cipher in which plaintext shifts into a fairly long shift
> register. The state value in the shift register is partitioned,
> selected, and combined in multiple ways. Then, when the next
> plaintext character shifts in, ciphertext shifts out, and the internal
> operations occur again. This will be reversible if the internal
> combining is reversible. I suppose we might call this "plaintext
> feedforward (PFF)" or something, but that really does not capture the
> flavor of the operation. There is no internal or external keystream
> to recognize, so if we use "keystream" as the differentiating
> property, this must be a *block* cipher, even though the structure can
> cipher bit-by-bit or char-by-char.
Does that shift register operate on a single bit of the plaintext
at a time? I presume it does, since you previously said
that the characteristic of a stream cipher is having a 1-bit width,
if I don't err. If so, then at the place (or places) where the single
bit of the plaintext is operated on, there must be a bit or bits of
the shift register that provide the required interaction with it.
These bits could be considered the key-stream bits, I suppose.
(How these key-stream bits are generated is not an issue here.)
From that viewpoint the scheme could be said to be a stream cipher.
If a set of plaintext bits is simultaneously being worked on
(by corresponding sets of bits of the shift register), it could also
be considered a stream cipher (with units bigger than 1 bit) in
my view. (However, see also the following paragraph.)
Perhaps I should point out that there is generally some flexibility in
considering how many plaintext bits are being operated on together
(at least conceptually), somewhat analogous to the possibility of
looking at an object either macroscopically or microscopically. Where
more than one bit is operated on together, there could be conceivable
situations in which one person argues that the cipher is a stream
cipher and another person argues that the same cipher is a block
cipher. This could be the case when the design makes use of techniques
of both categories. That disagreement would then be analogous to
disputes about religious issues. This once again shows that it
is better not to maintain a sharp distinction between stream and
block ciphers.
To repeat, there are certainly extreme cases where one can claim
that the one is a 'stream cipher' and the other is a 'block cipher'
and nobody could aptly refute that. But there are designs where
the stream and block techniques are so intermixed/interwoven that
it is not very useful to 'forcefully' classify them into one of the
two categories, for that would be an arbitrary and unnatural act.
In fact, I personally don't see much benefit deriving from having
separate categories, other than some pedagogical interest. If one
accepts the viewpoint that it may be beneficial to have an algorithm
that incorporates both of (what are hitherto termed) stream and block
techniques, then I believe it is better for the designer not to
have the notion of two 'different' categories at all. If
I may venture to use an analogy (you might criticize it as
'far-fetched'): Many countries have people of different races,
often also of different colours. If I am doing business with
people of a country X, it is not useful for me to maintain several
categories of my customers of that country according to their
races/colours. As a businessman, whose goal is to make money, all
that I need to consider is that they are from country X (because I
have to deal with the currencies and import/export rules etc. of that
country).
Because of the viewpoint stated above, it is in fact not my 'intention'
to dispute 'strongly' the issue of what IS a stream cipher and what IS
a block cipher. In 'strict' conformity with my own viewpoint I should
actually not have disputed at all, because my viewpoint (on the
assumption that it is correct) logically implies that a distinction
cannot be maintained, and consequently the question of classification
into two separate categories (i.e. the definitions of stream and block
ciphers, together with all logical consequences of that) doesn't
'exist' (for me)!
> Accordingly I suggest that a "keystream" distinction is not useful to
> differentiate "stream" from "block." Further, it is not yet
> particularly useful to differentiate "keystream" from "filtering"
> stream ciphers, since we know so little about the consequences of the
> latter.
And we also know SO little about 'filtering', before we obtain
from the inventors of that encryption technique some concrete details!!
> >I suspect that you misunderstood me a bit. The topic is not to
> >seek diversity, to find more classification categories in the
> >first place, but to 'unify' them, in particular to take away the big
> >boundary between stream encryption and block encryption techniques.
>
> "The topic," as you put it, is the claim that there is no sensible
> distinction between stream and block ciphers, and that claim is false.
Covered above.
> >On the other hand, if people invent a new technique (here digital
> >filtering), then there must be something basically different
> >from the existing one (key stream) and there must be some concrete
> >benefits/advantages that motivated the new design. Aren't these
> >the 'consequences' in the present context?
>
> The point of a distinction between ciphers is to differentiate some
> part of their operation with respect to cryptography. Not having a
> body of work in this area would seem to make implied consequences
> difficult to pin down.
All the operations have names. Further, it is obviously not possible
to say that, for example, XOR is used only in one of the two types of
ciphers. Couldn't one discuss an algorithm only in terms of the
operations involved and never mention the noun 'stream cipher'
or 'block cipher'?
> >Perhaps I may take this opportunity to give what is in my view
> >a (rather trivial/primitive) combination of stream and block
> >encryption techniques. One uses a PRNG to generate keys for a
> >block encryption algorithm. (Cf. my previous proposal to defeat
> >differential analysis through using variable keys for DES.)
>
> This is straightforward and has been mentioned by many people many
> times. Since block ciphers are just large Simple Substitutions, I see
> it as the pseudorandom selection of alphabets in a polyalphabetic
> stream. It is a stream meta-cipher composed of block cipher
> transformations, and -- with an appropriately strong sequence -- is
> certainly stronger than CBC.
> Here we can say "certainly stronger than CBC" because additional work
> is required to resolve the sequence generator, no matter how little
> that work may be. I see this logic as an advantage which accrues to
> systems with multiple layers of different ciphering puzzles. Further,
> hopefully, the information required to penetrate one layer may
> "commit" the attacker (in an information sense) to an approach which
> simply does not admit the solution of the other layers.
Isn't a 'stream meta-cipher composed of block cipher transformations' a
combination of stream and block encryption techniques? So it appears that
you do agree that it is sensible to 'lump' the repertoires of techniques
together. To carry this a step further (logically), one can obviate the
boundary between stream and block ciphers.
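For concreteness, a rough toy sketch in Python of that arrangement (a keyed
PRNG supplying a fresh key to a block transformation for every block); the
'block cipher' here is a deliberately trivial Feistel toy of my own, not DES
and not a serious proposal:

import hashlib, random

def toy_block_encrypt(block64, key):
    # 4-round toy Feistel on a 64-bit block; illustrative only, not secure
    L, R = block64 >> 32, block64 & 0xFFFFFFFF
    for r in range(4):
        f = int.from_bytes(hashlib.sha256(f"{key}:{r}:{R}".encode()).digest()[:4], "big")
        L, R = R, L ^ f
    return (L << 32) | R

def toy_block_decrypt(block64, key):
    L, R = block64 >> 32, block64 & 0xFFFFFFFF
    for r in reversed(range(4)):
        f = int.from_bytes(hashlib.sha256(f"{key}:{r}:{L}".encode()).digest()[:4], "big")
        L, R = R ^ f, L
    return (L << 32) | R

def encrypt_blocks(blocks, master_key):
    rng = random.Random(master_key)            # stand-in for a keyed PRNG
    return [toy_block_encrypt(b, rng.getrandbits(64)) for b in blocks]

def decrypt_blocks(blocks, master_key):
    rng = random.Random(master_key)            # same PRNG seed, same key sequence
    return [toy_block_decrypt(b, rng.getrandbits(64)) for b in blocks]

msg = [0x0123456789ABCDEF, 0x0011223344556677]
assert decrypt_blocks(encrypt_blocks(msg, 42), 42) == msg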
> I suppose the reason sequence-streamed DES is rarely done in practice
> is the same reason that multi-ciphering is rarely done: People don't
> think it is worth the extra trouble.
But people did seem to take the extra trouble, until recently, to
develop techniques like differential cryptanalysis.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Transposition over ASCII-coded text
Date: Sun, 23 Jan 2000 03:52:55 +0100
Douglas A. Gwyn wrote:
>
> Mok-Kong Shen wrote:
> > If that interleaving is very systematic, would you consider it
> > as a (non-trivial) scrambling suitable for encryption purposes?
>
> Error correction and encryption have quite different goals.
Yes, I know that. But the writer who initiated this thread
apparently meant that error correction had something essential
to do with encryption.
M. K. Shen
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Does RSA use real prime ?
Date: Sun, 23 Jan 2000 03:31:57 GMT
In article <[EMAIL PROTECTED]>,
Frank the_root <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Tom St Denis wrote into my screen:
>
> > Yes it's true. Factoring a large N-bit number will take too long for
> > most cases.
>
> How EXACTLY does RSA choose p and q? Does anyone have a good article about
> that? I heard about a good method to find large primes: First, I
> determine the length of p and q (in bits) and then I generate an
> approximation of the rank of the smallest/greatest prime numbers in
> this range, knowing that the number of primes not exceeding an arbitrary
> number c is asymptotically equal to "c / ln c".
>
> a = 2^length / ln 2^length
> b = 2^(length-1) / ln 2^(length-1)
>
> Afterwards, I randomly choose another number "c" in the range ]a,b[. Then I
> start searching for a prime number around "c*ln c". Repeat until two primes
> are found...
>
> Is this the way RSA proceeds?
Basically you do this.
Make an n-bit random number. Set the msb and lsb. Do trial division by
primes up to 256 [or more], then perform a few rounds of Miller-Rabin.
This will weed out composites with very high probability.
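In rough Python, just as a sketch of that recipe (nothing from a real
library):

import random

SMALL_PRIMES = [p for p in range(2, 257)
                if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def miller_rabin(n, rounds=25):
    # standard Miller-Rabin probable-prime test for odd n > 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # definitely composite
    return True                      # probably prime

def random_probable_prime(bits):
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # set msb and lsb
        if any(cand % p == 0 and cand != p for p in SMALL_PRIMES):
            continue                 # trial division by primes up to 256
        if miller_rabin(cand):
            return cand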
> > ... So they use probable primality testers. ...
>
> How do those primality testers work?
Depends on which test. Fermat's little theorem says that a^p mod p = a
always if p is prime, or equivalently a^(p-1) = 1 mod p (for a not a
multiple of p). This can be proved simply by noting that the order of the
multiplicative group [if p is prime] is (p-1) and the exponent p mod (p-1)
= 1, so a^p = a^1 = a... If p is not prime, let's say it's the product of
two primes u and v, then the order of its multiplicative group will be
(u - 1)(v - 1). That means x^((u-1)(v-1)) = 1 mod p for any x coprime to p,
but that's not the same exponent you tested with ...
Generally that test has a 1/2 chance of saying a composite is prime,
but if you do it t times it is a 1 in 2^t chance. Do it 64 times and
you are more likely to win the lottery eight times in a row than find a
composite...
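A quick concrete illustration of the pseudoprime case (341 = 11*31 is the
classic composite that fools base 2):

n = 341
print(pow(2, n - 1, n))   # 1  -> base 2 is fooled (341 is a base-2 pseudoprime)
print(pow(3, n - 1, n))   # 56 -> base 3 reveals that 341 is composite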
Of course MR is much faster ... :)
> > 2^51. Even if they are composite the chances that you decrypt/encrypt
> > messages successfully more than once are way smaller.
>
> And I also heard that if p and q are not primes, n will be factorised
> very quickly.
Not with NFS. With ECM it will work if the factors are small. Maybe
Pollard-Rho as well. I am not sure offhand.
> > See the decryption exponent 'e' is found by taking d^-1 mod (p-1)(q-
> > 1). If either q or p are not prime then 'e' will not be the inverse
> > exponent mod pq.
>
> I don't understand this one very well. Are you talking about the "ed mod
> Phi(n) = 1" relation? What is the link between this and the obscure
> notation "d = e^-1 mod ( Phi(n) )"? Why is there an exponent? And does
> anyone know where a mathematical demonstration of RSA is available? I
> mean, WHY IT WORKS. On the net, I only found pages explaining the
> algorithm itself. Thanks.
Not too hard.
You know that n = pq, the product of two primes. This means the order of
the multiplicative group is n' = (p - 1)(q - 1); this is Euler's totient
function of n. You then pick any e which is relatively prime to n'. It
has to be relatively prime, otherwise there won't be an inverse.
Then you find d such that de = 1 mod n'.
It works in RSA because C = M^e mod n, and M = C^d mod n = M^(de) mod n =
M^1 mod n.
Think of it this way...
e = 3
d = 1/3
M = 5
C=5^3 = 125. M=125^1/3. The cube root of 125 is 5...
Basically the exponent is just the inverse exponent....
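A tiny worked example in Python (toy numbers, far too small for real use):

p, q = 11, 17
n = p * q                    # 187
phi = (p - 1) * (q - 1)      # 160, Euler's totient of n
e = 3                        # must be coprime to phi
d = pow(e, -1, phi)          # 107, the inverse of e mod phi (Python 3.8+)
M = 42
C = pow(M, e, n)             # encryption: C = M^e mod n = 36
assert pow(C, d, n) == M     # decryption: C^d = M^(ed) = M (mod n)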
Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Challenge.
Date: Sun, 23 Jan 2000 03:40:43 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (John Savard) wrote:
> On Sat, 22 Jan 2000 22:35:26 GMT, Tom St Denis <[EMAIL PROTECTED]>
> wrote, in part:
>
> >TAKE A HINT
>
> Normally, I'd agree with the sentiment, but if the postscript is true,
> that this is a "puzzle for amateurs", then, even if to the wrong
> newsgroup, something useful has been contributed.
What created the puzzle? Although I can't cryptanalyze a cipher worth
[explicit word], I think others might enjoy the challenge. Just
posting ciphertext is annoying [fun to flame... oops, you aren't supposed
to know that].
> Had this been a new "unbreakable" cipher, of sufficient complexity
> that it was, say, even comparable to 4-round DES, then of course it
> would be a total waste of time.
Well it's not my forum. They can do as they please.
Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: "Hank" <[EMAIL PROTECTED]>
Subject: Re: Does RSA use real prime ?
Date: Sun, 23 Jan 2000 11:59:26 +0800
> > [...] statistical base. Is this true ?
> >
>
> Yes it's true. Factoring a large N-bit number will take too long for
> most cases. So they use probable primality testers. I wouldn't
> dismiss them so casually. For the most part the chance that your two
> primes in RSA [say as made by PGP] are not prime is less than 1 in
> 2^51. Even if they are composite the chances that you decrypt/encrypt
> messages successfully more than once are way smaller.
>
> See the decryption exponent 'e' is found by taking d^-1 mod (p-1)(q-
> 1). If either q or p are not prime then 'e' will not be the inverse
> exponent mod pq.
>
> Tom
As I know, the encryption/decryption of RSA is based on Fermat's theorem,
which says:
    If p is a prime, then a^(p-1) = 1 (mod p) for any a not divisible by p.   (*)
I don't have a detailed proof of this. And I think it is not a two-way
theorem. That means the relation might hold for some carefully chosen 'a'
even if p is not a prime.
So I wonder how PGP handles conditions in which p, q are not real primes.
Starting a new round of prime hunting? Or carefully selecting 'a' to accord
with relation (*)?
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************