Re: 1024-bit RSA keys in danger of compromise

2002-03-31 Thread Anonymous

Joseph Ashwood writes:
> Bernstein's proposal does have an impact, but I do not believe that 3x the
> key size is necessary.
> I believe Bernstein's proposal results in the necessity of a keysize of
> approximately 1.5 times what was required before.
> I believe that there are further similar advances available to the
> algorithms involved that can push this to approximately 2x.
>
> I have reached these conclusions through a very long thought process that
> involved digging through old textbooks on electrical engineering,

Wow, really, Joe?  You went through a long thought process?  You dug
through old textbooks?  That's very impressive.  At least, it would be
if you were still a schoolboy.

Out here in the real world we have this crazy idea called "evidence".
When someone makes a claim, like that Bernstein's ideas mean that we
need to increase keysize by 1.5, we like to know WHY.  We like to see
some reason why we might believe a claim like this.

Here's a hint, Joe.  Telling people about all the hard work you went
through to come up with these amazing conclusions is WORTHLESS.  You
want to impress us?  We don't want to hear about your tireless hours
studying old text books.  We want to see your results.

If you have determined that we need keys 1.5 times bigger, then show
us why.  If you have a real analysis of what it would take to build
Bernstein's machine, how it would work in a real problem, what the
parameters are which are hidden by Bernstein's o(1) fudge factors,
then prove it.

No one has done this yet.  You have now joined the club of people
who claim to have done the math and determined that his machine will
work, but who for some reason won't print anything about their results.
What is it about the Bernstein machine that leads people to make claims
that they won't back up?




Re: Re: 1024-bit RSA keys in danger of compromise

2002-03-31 Thread Joseph Ashwood

I have given a significant amount of consideration to the very questions raised
in this thread. This consideration has spanned approximately a month. These
are my basic conclusions:

Bernstein's proposal does have an impact, but I do not believe that 3x the
key size is necessary.
I believe Bernstein's proposal results in the necessity of a keysize of
approximately 1.5 times what was required before.
I believe that there are further similar advances available to the
algorithms involved that can push this to approximately 2x.

I have reached these conclusions through a very long thought process that
involved digging through old textbooks on electrical engineering, and a
fundamental assumption that people will only construct these machines when
there is a stimulus to do so. For example, it would not be reasonable for
me to construct one to break 768-bit keys because I have little interest in
the actual data, merely in whether or not the data is secure. Similarly, IBM
would not likely construct one simply because it would be economically more
feasible to dedicate that money towards research. The NSA and similar
organizations are extremely likely to strongly consider building such a
machine because they have the money, and the mandate to do whatever it takes
to gain access to the data encrypted by militaries around the world. Are
these assumptions necessarily correct? In their fundamental form they are
not; Linux is proof of this (people giving their free time to something that
they get effectively nothing out of). However, since we are talking about a
very significant investment of money to make one of usable size, these
assumptions are likely to be approximately correct.

This means that according to my considerations it seems reasonable to
decommission all 512-bit keys immediately (these should have been
decommissioned years ago, but there are still a few floating around), 768-bit
keys should be decommissioned at the earliest realizable opportunity (I
don't believe they are in immediate danger of compromise, but they are
compromisable), 1024-bit keys should now be considered moderately secure in
the immediate future and decommissioned over the next couple of years,
1536-bit keys are for reasonable purposes secure, 2048-bit keys are secure
for all but the most demanding situations, and 4096-bit keys are still
effectively invulnerable.

This of course makes some very blanket assumptions about the desirability of
breaking a specific key. If no one wants to read what's inside, you don't
even really need to encrypt it (note the difference between need and want).
It will still cost a minimum of 10^9 US dollars to break 1024-bit keys.
Considering that most businesses and many governments won't have this value
of information transferred in the next 100 years, the desire to break
1024-bit keys simply isn't there.

Also examine _who_ wants to read your data. If it's just messages back and
forth from your girlfriend/wife/mistress, it's unlikely that even 512-bit
keys will be broken. If you are protecting state secrets, obviously you need
to consider things more carefully, and 4096-bit keys may not even offer
enough security.

As usual there is no one-stop solution for every situation, only more
considerations that need to be made. I welcome any comments on my
conclusions.
Joe




Re: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread V Alex Brennen

On Mon, 25 Mar 2002, Bill Stewart wrote:

> While SSL implementations are mostly 1024 bits these days,
> aren't PGP Diffie-Hellman keys usually 1536 bits?

I think there's a general consensus that the minimum
recommended key size for X9.42 Diffie-Hellman PGP keys
is 1024 bits.  I'm not sure if the standard size is 1536 bits.
I might be wrong, but I don't believe such a key length
standard exists.  I think the only size-related limitation
in X9.42 concerned the size of the prime defining
the Galois field.  I haven't worked with X9.42 before.

There do not appear to be many 1536-bit keys in the global PGP
public keyring (the keys of the synchronized public keyservers).

I count 1,057 in my copy of the ring, or 0.0748% of the
total keys in the ring.

Here is more information about that ring:

http://gnv.us.ks.cryptnet.net/stats.html

Notice the % of keys which are <= 1024 bits.


- VAB
---
V. Alex Brennen
Senior Systems Engineer
IBM Certified Specialist
e-TechServices.com
IBM Business Partner
Bus: 352.246.8553
Fax: 770.216.1877
[EMAIL PROTECTED]
http://www.e-techservices.com/people/vab/




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Kevin Steves

On Thu, 28 Mar 2002, Lucky Green wrote:
:Which brings me to an issue that I hope may be on-topic to this mailing
:list: I would like to be able to enforce that the keys my users use to
:authenticate themselves to my sshd are of a minimum size. Is there a
:config option to sshd that will reject user keys below a minimum size?
:I didn't see anything in the man pages or my first go through the code.

no config option, but this change will be in the next release:

RCS file: /usr/OpenBSD/cvs/src/usr.bin/ssh/auth-rsa.c,v
retrieving revision 1.53
retrieving revision 1.54
diff -u -r1.53 -r1.54
--- src/usr.bin/ssh/auth-rsa.c  2002/03/25 09:21:13 1.53
+++ src/usr.bin/ssh/auth-rsa.c  2002/03/26 23:13:03 1.54
@@ -14,7 +14,7 @@
  */

 #include "includes.h"
-RCSID("$OpenBSD: auth-rsa.c,v 1.53 2002/03/25 09:21:13 markus Exp $");
+RCSID("$OpenBSD: auth-rsa.c,v 1.54 2002/03/26 23:13:03 markus Exp $");

 #include 
 #include 
@@ -77,6 +77,13 @@
u_char buf[32], mdbuf[16];
MD5_CTX md;
int len;
+
+       /* don't allow short keys */
+       if (BN_num_bits(key->rsa->n) < 768) {
+               error("auth_rsa_verify_response: n too small: %d bits",
+                   BN_num_bits(key->rsa->n));
+               return (0);
+       }

/* The response is MD5 of decrypted challenge plus session id. */
len = BN_num_bytes(challenge);
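
for reference, the same test can be written as a tiny standalone helper
against an OpenSSL RSA structure (just a sketch; the helper name and the
768-bit cutoff are only illustrative, and it uses the old direct struct
access to rsa->n just as the code above does):

#include <openssl/rsa.h>
#include <openssl/bn.h>

/* illustrative helper: reject RSA keys whose modulus n is shorter than
 * min_bits; BN_num_bits(n) is the key size in bits */
static int
rsa_key_long_enough(RSA *rsa, int min_bits)
{
        return BN_num_bits(rsa->n) >= min_bits;
}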




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Lucky Green

[OK, let me try this again, since we clearly got off on the wrong foot
here. My apologies for overreacting to Damien's post; I have been
receiving dozens of emails from the far corners of the Net over the last
few days that alternately claimed that I was a stooge of the NSA
because everybody knows that 8k RSA keys can be factored in real-time, or
that 512-bit RSA keys were untouchable since nobody could even perform
an exhaustive search of a 128-bit key space...]

Damien wrote:
> I am disputing that the improvements as presented are 
> practically relevant. Since you saw fit to cross-post to 
> openssh-unix-dev@, which is a list concerned with code (not 
> polemic), that is the context in which I chose to frame my reply.

My post reported on what was announced at an academic cryptographic
conference by a cryptographer who has written peer-reviewed papers on
the design of large-scale cryptographic processing machines in the past
(i.e., how one would in practice build one of Rivest's MicroMint
machines). I believe my relaying these claims was responsible given the
potentially massive security implications for a good part of the
infrastructure. In addition, a reporter for the Financial Times who was
present at the same event announced his intent to write about it as
well.

Nowhere in the post did I make, or intend to make, claims of my own as
to the impact of Bernstein's paper on factoring. I did report my own
reaction to the claims that I witnessed. My reaction to those claims was
to create larger keys. Others may choose to react differently.
Furthermore, I provided those concerned with the new claims with what I
believe is a sound recommendation on how to counter the potential
threat, which was to increase the key size.

[On Nicko's rump session talk that they factored 512-bit keys on the
hardware in their office].
> You offer this aside in the context of an argument about 
> the insufficiency of 1024 bit RSA keys. Surely you don't 
> expect people to believe that you weren't including it to 
> bolster your argument?

To be perfectly honest, the thought that somebody on a mailing list
related to cryptographic software would take my reporting on the news
that somebody factored 512-bit keys on the computers in their office to
imply that this has any bearing on a potential ability to factor
1024-bit keys on purpose-built hardware never even occurred to me.

I really, really meant coincidentally when I wrote coincidentally. The
two pieces of news came within a day of each other, so while reporting
on one, I thought I'd mention the other as well. That's all.

Well, on second thought I suppose there actually is an, albeit removed,
connection between the two: many sites still use 512-bit keys; even if
one is unconcerned about 1024-bit keys being breakable, hopefully those
with 512-bit keys will take the fact that 512-bit keys can be broken by
some office hardware as a reason to upgrade key sizes.

[...]
> Your post is hyperbole because it is very long on verbiage and 
> very short on justification. Large claims require a good 
> amount of proof: If you expect everyone to switch to 2048 bit 
> keys on the basis of your rant alone, you may be disappointed.

I don't really personally care what key sizes others use. For all I
care, others are welcome to employ 4-bit RSA keys, as long as they don't
use those keys to authenticate themselves to any of the machines under
my control.

Which brings me to an issue that I hope may be on-topic to this mailing
list: I would like to be able to enforce that the keys my users use to
authenticate themselves to my sshd are of a minimum size. Is there a
config option to sshd that will reject user keys below a minimum size?
I didn't see anything in the man pages or my first go through the code.

Thanks in advance,
--Lucky




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread A. Melon

> Here's a real question: if you could build a special purpose machine
> to do 1024 bit RSA keys (that is, factor a 1024 bit number), how much
> would that help with discrete logs in a safe prime field?

Solving discrete logs via NFS is structurally similar to factoring.
You start off with a factor base and generate a large number of
relations among the elements in that set.  You then look for a linear
dependency among these relations, which amounts to solving a large matrix.
Since the techniques are similar, Dan Bernstein's ideas probably apply
to both cases.

However there are some significant differences, mostly in the second
phase, the matrix reduction.  The matrix solution in the case of
factoring is over GF(2).  This means that all the elements are 0's and
1's, that "addition" is just XOR, and that "multiplication" is just AND.
This simplifies the problem and Bernstein uses a trick to speed up matrix
multiplication in this field.

Bernstein's insight is that matrix multiplication in GF(2) is essentially
just a matching problem.  When you have two 1's being multiplied together,
you keep them; if either one is a 0, you can eliminate that.  During the
addition, if you have two matching elements, they cancel; if they are
unlike elements, they produce a 1.  So what he does is to put the data
into a giant sorting engine.  This brings the corresponding elements
close together.  Then a couple of passes of these matching rules gives
the result.
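
To make this concrete, here is a toy, serial sketch of the same idea in
C (only an illustration, not Bernstein's mesh of sorting cells; the
names are mine). The nonzero matrix entries are (row, column) pairs,
the vector is the list of columns holding a 1, and sorting brings
matching column indices together, so that a match is the "AND" and
toggling the row's bit is the "XOR":

#include <stdlib.h>

struct entry { int row, col; };

static int cmp_int(const void *a, const void *b)
{
        return *(const int *)a - *(const int *)b;
}

static int cmp_entry_col(const void *a, const void *b)
{
        return ((const struct entry *)a)->col -
            ((const struct entry *)b)->col;
}

/* Compute y = A*x over GF(2).  a[] holds the na nonzero entries of A,
 * xcols[] the nx columns where x has a 1, y[] one byte per row. */
static void matvec_gf2(struct entry *a, int na, int *xcols, int nx,
    unsigned char *y, int nrows)
{
        int i, j, r;

        for (r = 0; r < nrows; r++)
                y[r] = 0;
        /* sort both inputs by column so matching elements meet */
        qsort(a, na, sizeof(a[0]), cmp_entry_col);
        qsort(xcols, nx, sizeof(int), cmp_int);
        /* merge pass: equal columns mean a 1*1 product (the AND);
         * toggling the row's bit is the GF(2) addition (the XOR) */
        for (i = 0, j = 0; i < na && j < nx; ) {
                if (a[i].col < xcols[j])
                        i++;
                else if (a[i].col > xcols[j])
                        j++;
                else {
                        y[a[i].row] ^= 1;
                        i++;
                }
        }
}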

The matrix solution for discrete log is over a larger field, the size of
the large prime that divides p-1; 1023 bits for a strong 1024 bit key.
You don't have just 0's and 1's, and you no longer have simple matching
rules.  The field operations are non-trivial.

This will have several effects.  First, the cells of the solver will have
to be larger; in addition to storing row and column numbers, they have to
store a large numeric value.  Second, the cells will have to include much
more complex logic; instead of just comparing with their neighbors and
looking for matches, they have to do modular multiplication and addition.
And third, the algorithm will change during the addition phase; a series
of values have to be added together, rather than just counting whether
there are an even or odd number.  This probably doesn't increase the
asymptotic number of steps, though.
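
To illustrate the difference in per-cell work (again only a sketch,
using OpenSSL's bignum routines because ~1000-bit values no longer fit
in machine words; the function name is mine), one elementary step of
the discrete log solver is a multiply-accumulate modulo the ~1023-bit
group order q, instead of an AND and an XOR:

#include <openssl/bn.h>

/* acc = (acc + a*x) mod q -- the GF(q) analogue of the single-bit
 * AND/XOR step a factoring cell performs */
static int
cell_mac(BIGNUM *acc, const BIGNUM *a, const BIGNUM *x,
    const BIGNUM *q, BN_CTX *ctx)
{
        BIGNUM *t = BN_new();
        int ok;

        if (t == NULL)
                return 0;
        ok = BN_mod_mul(t, a, x, q, ctx) &&
            BN_mod_add(acc, acc, t, q, ctx);
        BN_free(t);
        return ok;
}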

So the matrix solver will be considerably larger, but if the factoring
version were buildable, the DL version probably could be built as well.
Given the increased complexity of the cell, from doing ANDs of ~40 bit
numbers to doing modular multiplication of ~1000 bit values, a very rough
guess is that it will be 1000 times more expensive.  It may be possible
to compensate for this greater cost by using a smaller factor base.

The first phase in both algorithms involves finding numbers smooth over a
factor base, so again the success of a factoring engine probably implies
that the discrete log machine would work, too.

Summing up, Bernstein's techniques would require some modification for
solving discrete logs, and the details of the asymptotic analysis would
probably not be exactly the same, but broadly speaking the same general
techniques would apply.  If his machine factors 1024 bit numbers at
some reasonable expense, 1024 bit discrete log keys would have to be
considered unsafe as well.




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Tom Holroyd

You know, Lucky, most of the people here have been around the block a
few times, and your previous post is just classic Usenet whinage.
Complaining about punctuation indeed.  Spare us, please.

Look, we've all read the background.  The improvement is a function
f(n) which for large n may approach 3.  What is f(1024)?  I don't
know, do you?  Your original post might have merit if f(1024) is also
close to 3 or more, but it may be very much less.

Here's a real question: if you could build a special purpose machine
to do 1024 bit RSA keys (that is, factor a 1024 bit number), how much
would that help with discrete logs in a safe prime field?

Dr. Tom Holroyd
"I am, as I said, inspired by the biological phenomena in which
chemical forces are used in repetitious fashion to produce all
kinds of weird effects (one of which is the author)."
-- Richard Feynman, _There's Plenty of Room at the Bottom_




mixmaster upgrades? (Re: 1024-bit RSA keys in danger of compromise)

2002-03-27 Thread Adam Back

I think it wouldn't hurt to use 2048 bit RSA keys for anything that
supports them.  I've been using 2048 bit RSA keys with PGP since 1995,
on the assumption that, even given uncertainty about the future of
factoring, doubling the key size can't hurt; it didn't make any
significant difference to message processing time.

Mixmaster is an example of an application which could benefit from
larger key sizes, given the presumed long-term assurances one would
like about its anonymity.  There was some discussion a while ago
about a candidate mixmaster version 3 protocol:

http://www.eskimo.com/~rowdenw/crypt/Mix/draft-moeller-v3-01.txt

I made some comments at the time about a way to reduce the space
overhead of using RSA:

http://archives.seul.org/freehaven/dev/Jun-2000/msg00029.html

by reusing some of the space inside the RSA encrypted block to
transport part of the chained encrypted message as well as the
symmetric keys.  I think this would allow 2048 bit keys without
increasing mixmaster's already 50% overhead of key-exchange relative
to message size (10k for each).
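
To put rough numbers on the space being reused (my assumptions: OAEP-style
padding with SHA-1, which costs about 42 bytes, and a symmetric key plus IV
of a few tens of bytes): a 2048 bit RSA block is 256 bytes, leaving roughly
214 bytes of plaintext capacity, so something like 180 bytes per hop can
carry a chunk of the chained header rather than padding.  That is roughly
why the header size need not grow even though the RSA block doubles.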

The other thing mixmaster really needs is forward secrecy, ideally
end-to-end forward secrecy, but hop-by-hop forward secrecy would be a
start.  Lack of forward-secrecy leaves remailer operators open to a
fair risk of subpoena attack if someone went to the trouble of having
an ISP record the incoming messages.

The other current weak point is DSA signature key sizes maxing out at
1024 bits due to the 160-bit SHA-1 hash output size.  I presume that in
due course NIST will make an extended DSA to go with the new, longer
SHA variants (SHA-256, SHA-384 and SHA-512).

But signature key strengths aren't as important for forward secrecy
as encryption key strengths; you only have to be convinced that
current adversaries can't forge them at the signature size you're
using now.  If at some point in the future, after you've upgraded
your key sets to larger signature keys, someone can go back and forge
old small-key signatures, it's not as significant.

Adam

On Sat, Mar 23, 2002 at 05:42:34PM -0800, Lucky Green wrote:
> [about value of upgrading key sizes, triggered by discussion of
> potential implications of Bernstein's paper].



Re: 1024-bit RSA keys in danger of compromise

2002-03-26 Thread Eric Murray

Here's the distribution of RSA key sizes in SSL servers, as
recorded by my SSL server survey in June 2000 and June 2001

RSA Server Key size

Key bits     2000    2001
2048          .2%     .2%
1024          70%     80%
>= 1000        2%     .7%
>= 768         2%      1%
>  512          -      0%
<= 512        25%     17%



Eric





Re: 1024-bit RSA keys in danger of compromise

2002-03-26 Thread Meyer Wolfsheim

On Mon, 25 Mar 2002, Bill Stewart wrote:

> While SSL implementations are mostly 1024 bits these days,
> aren't PGP Diffie-Hellman keys usually 1536 bits?

The ElGamal encryption keys (Diffie-Hellman is a misnomer in PGP's case)
are usually 2048 bits, though the DSA signing keys are almost always 1024
bits.


-MW-




Re: 1024-bit RSA keys in danger of compromise

2002-03-26 Thread Bill Stewart

At 05:38 PM 03/23/2002 -0800, Lucky Green wrote:
>While the latter doesn't warrant comment, one question to ask
>spokespersons pitching the former is "what key size is the majority of
>your customers using with your security product"? Having worked in this
>industry for over a decade, I can state without qualification that
>anybody other than perhaps some of the HSM vendors would be misinformed
>if they claimed that the majority - or even a sizable minority - of
>their customers have deployed key sizes larger than 1024-bits through
>their organization. Which is not surprising, since many vendor offerings
>fail to support larger keys.

While SSL implementations are mostly 1024 bits these days,
aren't PGP Diffie-Hellman keys usually 1536 bits?




Re: 1024-bit RSA keys in danger of compromise

2002-03-24 Thread Ian Goldberg

In article <00e101c1d2d8$c9768080$c33a080a@LUCKYVAIO>,
Lucky Green <[EMAIL PROTECTED]> wrote:
>The panel, consisting of Ian Goldberg and Nicko van Someren, put forth
>the following rough first estimates:

I'd just like to credit the "O(minutes)" calculation to Nicko;
my own opinion was that:

- We have no reason to believe the asymptotic result applies to "real"
  keylengths (2^1024 <<< infinity)
- The physical properties of such a machine (size, power, cooling, etc.)
  seem implausible to me.

I personally don't intend to be revoking my 1024 bit key at this time.

   - Ian




Re: 1024-bit RSA keys in danger of compromise

2002-03-23 Thread Anonymous

Lucky Green writes:
> The panel, consisting of Ian Goldberg and Nicko van Someren, put forth
> the following rough first estimates:
>
> While the interconnections required by Bernstein's proposed architecture
> add a non-trivial level of complexity, as Bruce Schneier correctly
> pointed out in his latest CRYPTOGRAM newsletter, a 1024-bit RSA
> factoring device can likely be built using only commercially available
> technology for a price range of several hundred million dollars to about
> 1 billion dollars. Costs may well drop lower if one has the use of a
> chip fab. It is a matter of public record that the NSA as well as the
> Chinese, Russian, French, and many other intelligence agencies all
> operate their own fabs.
> ...
> Bernstein's machine, once built, will have power requirements in the MW
> range to operate, but in return will be able to break a 1024-bit RSA or
> DH key in seconds to minutes.

Is there anything written, even a paragraph or two of back of the
envelope scrawls, to justify these numbers?  How big a factor base are
they assuming?  What sieving/factoring method are they using to generate
the relations?  Are they following Bernstein's proposal to abandon
sieving and try to factor values directly using massive parallelism?

Rough calculations on the cryptography list showed that Bernstein's
strategy of replacing sieving with direct smoothness testing is
not feasible.  For a 1024 bit RSA modulus, assuming a factor base of
size approximately 2^36, we need to test approximately 2^89 values for
smoothness in order to get enough relations.  These values are several
hundred bits in length.

Assuming each smoothness test (which amounts to finding all the factors
of each of these values up to 2^36 in size) takes 2^18 cycles, the
total number of cycles in this phase is 2^107.  These are not trivial
algorithms; they involve multi-precision multiplication and will require
substantial complexity for each unit on the chip.  Any problem with a
work factor of 2^107 cycles is effectively impossible even if you've
got a billion dollars to spend: even if that money bought, say, 10^7
custom chips running at 10^9 cycles per second each (about 2^53 cycles
per second in total), 2^107 cycles would take on the order of 2^54
seconds, i.e. hundreds of millions of years.

Understand this: Bernstein only had two ideas in his paper.  The first was
to replace sieving with direct smoothness testing, which is asymptotically
faster (but almost certainly not faster for keys of practical size;
sieving is just too efficient).  The second was to speed up the matrix
solving phase by building a smart computing fabric.

It's possible that you can throw out the first idea and just use the
clever matrix solver, but that's not going to give you the kind of
speedups claimed by his paper (the factor of 3 in modulus length).
It seems very questionable that this change alone would let you solve
RSA keys in "seconds to minutes", unless you already thought that you
could solve them in hours to days using conventional technology of
comparable expense.

It's too bad that Lucky took the precipitous action of revoking his keys
before the community had a chance to perform some peer review of
these claims of feasibility.  We need to see the analysis, rough though
it may be, so we can judge for ourselves whether these remarkable claims
are sound.