Re: More problems with hash functions

2004-08-28 Thread "Hal Finney"
Jerry Leichter writes:
> It all depends on how you define an attack, and how you choose to define your
> security.  I explored the "outer edge":  Distinguishability from a random
> function.  For a random function from {0,1}*->{0,1}^k, we expect to have to do
> 2^k work (where the unit of work is an evaluation of the function) per
> collision.  The collisions are all "independent" - if you've found N, you have
> ... N.  The next one you want still costs you 2^k work.

I don't believe this is correct.  Joux said that for a true random function,
finding a multicollision of degree j, i.e. j distinct values which hash
to the same thing, takes 2^((j-1)*k/j) for a k-bit hash.  He mentioned
a Crypto paper by David Wagner from a few years ago which showed how
to do this.  See http://www.cs.berkeley.edu/~daw/papers/genbday.html .
This means that the work for a (j+1)-collision is about 2^(k/j^2) times
harder than for a j-collision, not 2^k times harder.
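A quick arithmetic check of these exponents (a sketch; constants are ignored and the k = 160 value is just an example, as for SHA-1):

```python
k = 160  # hash output bits, e.g. SHA-1

def log2_work(j: int, k: int) -> float:
    # log2 of Wagner's bound for finding a j-collision
    # in a random k-bit function: 2^((j-1)*k/j)
    return (j - 1) * k / j

for j in (2, 3, 4):
    step = log2_work(j + 1, k) - log2_work(j, k)
    print(f"{j}-collision: 2^{log2_work(j, k):.1f} work; "
          f"one more colliding value costs 2^{step:.1f} times as much")
    # step = k/(j*(j+1)), roughly k/j^2 -- nowhere near the naive factor of 2^k
```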

Now, this is not done by first finding a j-collision and then looking
for a new value that matches; that would, as you say, take 2^k times
the work.  Rather, one collects enough hashes and then matches them up
in an efficient way to find the (j+1)-collision.
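The collect-and-match step is easy to see on a deliberately tiny hash (here SHA-256 truncated to 16 bits, an assumption made only so the search finishes instantly):

```python
import hashlib
from collections import defaultdict

def tiny_hash(data: bytes) -> int:
    # toy 16-bit hash: SHA-256 truncated to two bytes (illustration only)
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

# Collect ~2^((j-1)*k/j) hashes for j=3, k=16 (with some slack), then
# bucket them; no "extend an existing collision" step is ever needed.
buckets = defaultdict(list)
for i in range(8000):
    msg = str(i).encode()
    buckets[tiny_hash(msg)].append(msg)

triple = next(msgs for msgs in buckets.values() if len(msgs) >= 3)
print("3-collision:", triple[:3], "->", hex(tiny_hash(triple[0])))
```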

The wide-internal-path proposal will therefore satisfy the constraints
about multicollisions.  With a 2k bit wide internal path for a k bit hash,
it will take 2^k work to find an internal collision.  With some multiple
of this, Joux's attack finds large multicollisions.  But as the paper by
Wagner shows, you can find arbitrarily large multicollisions in a true
random k-bit function with less than 2^k work.  Since Joux's attack
takes more than this, it does not distinguish this hash construction
from a random function.
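In log2 terms the comparison looks like this (a sketch; I model Joux's cost for a 2^t-collision on a 2k-bit internal pipe as t internal collisions at 2^k work each):

```python
import math

k = 160  # output bits; internal pipe is 2k bits wide
for t in (1, 2, 10):
    j = 2 ** t
    joux = math.log2(t) + k if t > 1 else k   # t internal collisions at ~2^k each
    wagner = (j - 1) * k / j                  # random-function (Wagner) bound, always < 2^k
    print(f"2^{t}-collision: Joux ~2^{joux:.1f}, random function ~2^{wagner:.1f}")
    assert joux >= k > wagner   # Joux never undercuts the random-function cost
```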

Hal

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-28 Thread Jerrold Leichter
| > However ... *any* on-line algorithm falls to a Joux-style attack.  An
| > algorithm with fixed internal memory that can only read its input linearly,
| > exactly once, can be modeled as an FSM.  A Joux-style attack then is:  Find
| > a pair of inputs M1 and M1' that, starting from the fixed initial state, both
| > leave the FSM in state S.  Now find a pair of inputs M2 and M2' that, starting
| > from S, leave the FSM in state S'.  Then for the cost of finding these two
| > collisions, we have *four* distinct collisions as before.  More generally,
| > by just continuing from state S' to S'' and so on, for the cost of k
| > single collision searches, we get 2^k collisions.
|
| That's an interesting point.  One counter example I can offer is my
| earlier suggestion to use a wide path internally between the stages.
| The analysis suggested that if the internal path was twice the width
| of the output of the hash, the Joux attack was avoided.  Basically if
| the hash output width is n, and the internal width is 2n, then it will
| take 2^n to find an internal collision.  And the overall hash strength
| is never more than 2^n against any attack, since you can exhaustively
| invert the function for that effort.  So nothing based on internal
| collisions can weaken a hash built around this principle.
|
| It's not a particularly appealing design strategy due to its apparent
| inefficiency.  But if you're right about the general vulnerability of
| hashes that support incremental hashing, as any useful hash surely must,
| then perhaps it is worth exploring.
It all depends on how you define an attack, and how you choose to define your
security.  I explored the "outer edge":  Distinguishability from a random
function.  For a random function from {0,1}*->{0,1}^k, we expect to have to do
2^k work (where the unit of work is an evaluation of the function) per
collision.  The collisions are all "independent" - if you've found N, you have
... N.  The next one you want still costs you 2^k work.  However ... no
on-line function can actually be this hard, because a Joux-style attack lets
you combine collisions and find them with much less than the expected work.
Because a Joux-style attack works exponentially well, the cost for a very
large number of collisions is arbitrarily small.  Note that this has nothing
to do with the number of internal states:  One can distinguish random on-line
functions (in the sense I defined) from random functions.  (My criterion of
"infinitely many extendible collision pairs" may be too generous; you may need
some kind of minimum density of extendible collision pairs.)

You can certainly "de-rate" such a function by discarding some of the bits
of the output and considering the range of the output to be {0,1}^l, l < k.
But I don't see how that helps:  In the limit, collisions become arbitrarily
cheap, and making collisions easier can only decrease the cost of finding
them.

If the question is how close to the ultimate limit you can get a function
*constructed in a particular way*, then, yes, passing more internal state may
help.  In fact, it's the only thing that can, if you want an on-line
algorithm - you have nothing else available to vary!  A full analysis in this
direction, though, should have *two* security parameters:  The current one,
the size of the output; and the amount of memory required.  After all, MD5
(for example) is only defined on inputs of up to 2^64 bytes.  If I allow 2^64
bytes of internal state, then the "on-line" qualifier becomes meaningless.

-- Jerry



[Muscle] [PATCH] MuscleCard engine for OpenSSL (fwd from mgold@cbnco.com)

2004-08-28 Thread Eugen Leitl
From: Michael Gold <[EMAIL PROTECTED]>
Subject: [Muscle] [PATCH] MuscleCard engine for OpenSSL
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Cc: 
Date: Fri, 27 Aug 2004 16:21:23 -0400
Reply-To: [EMAIL PROTECTED], MUSCLE <[EMAIL PROTECTED]>

I've created a patch to add a MuscleCard engine to OpenSSL 0.9.7d,
allowing it to access smart cards using the MuscleCard API. It is
located at:
http://www.scs.carleton.ca/~mgold/patches/openssl-add-mcard.patch

This engine implements RSA encryption (signing) and decryption using a
private key stored on a MuscleCard-compatible smart card. It has been
tested with a Cyberflex e-gate 32K Java Card running MUSCLE's
CardEdgeApplet (using the MCardPlugin service for PCSC Lite).

Usage example
-

This command will use the MuscleCard engine to create a self-signed
certificate:

openssl req -new -text -sha1 -x509 \
-engine musclecard -keyform engine \
-key "E-Gate 00 00:0:1::/var/ssl/cflex_pub.key" \
-out cacert.pem

The meaning of the key string is as follows:
  Use PCSC Lite reader "E-Gate 00 00"
  Private key 0
  Authenticate with PIN #1, value ""
  Public key is stored in /var/ssl/cflex_pub.key (to export public
key 1 using muscleTool: "exportkey 1 /var/ssl/cflex_pub.key")

- Michael



___
Muscle mailing list
[EMAIL PROTECTED]
http://lists.drizzle.com/mailman/listinfo/muscle


--

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07078, 11.61144    http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net




Re: More problems with hash functions

2004-08-28 Thread bear


On Fri, 27 Aug 2004, Hal Finney wrote:

>Jerry Leichter writes:
>> However ... *any* on-line algorithm falls to a Joux-style attack.

>That's an interesting point.  One counter example I can offer is my
>earlier suggestion to use a wide path internally between the stages.
>The analysis suggested that if the internal path was twice the width
>of the output of the hash, the Joux attack was avoided.

"Avoided" is too strong a term here.  What you're doing is making
the course twice as long because now the runners are twice as fast.
The Joux attack still "works" in principle, in that finding a series
of k block collisions gives 2^k message collisions; all that you're
doing by making the internal path wider is making the block collisions
harder to find.

When you make them harder to find in such a way as to compensate
exactly for the advantage from the Joux attack, you're not "avoiding"
the attack so much as "compensating for" it.

Still, it looks like you're exactly right that that's one good way
to have an online algorithm that the Joux attack doesn't give any
advantage against; if you want a 256-bit hash, you pass at least 512
bits of internal state from one block to the next.

I had another brainstorm about this last night, which was to use
an encryption algorithm for the block function; that way there are
no block-level collisions in the first place.

Unfortunately, that doesn't cut it; even if there are no block-level
collisions on the state passed forward, there can still be collisions
on state passed forward for every two blocks.  Instead of searching
for a collision of internal state in single blocks, the attacker
would then be searching for it in block pairs.  Given the same amount
of internal state passed between blocks, the search would be no more
difficult than it is now; for the price of finding k block-length
collisions, the attacker gets 2^k colliding sequences.
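This two-block effect can be checked on a toy scale: below, the compression function is a 4-round Feistel keyed by a 16-bit state, so for each state it is a permutation of the block and single-block collisions are impossible, yet a small birthday search over block pairs finds collisions immediately (all parameters here are illustrative, not a real design):

```python
import hashlib
from collections import defaultdict

def round_fn(state: int, rnd: int, half: int) -> int:
    data = state.to_bytes(2, "big") + bytes([rnd, half])
    return hashlib.sha256(data).digest()[0]

def compress(state: int, block: int) -> int:
    # 4-round Feistel on a 16-bit block, keyed by the 16-bit state:
    # for a fixed state this is a bijection, so no single-block collisions.
    left, right = block >> 8, block & 0xFF
    for rnd in range(4):
        left, right = right, left ^ round_fn(state, rnd, right)
    return (left << 8) | right

S0 = 0
# Sanity check: single blocks from S0 never collide (bijection).
sample = [compress(S0, b) for b in range(2000)]
assert len(set(sample)) == len(sample)

# Two-block messages: 300*300 pairs into 2^16 states -> birthday collisions.
seen = defaultdict(list)
for b1 in range(300):
    s1 = compress(S0, b1)
    for b2 in range(300):
        seen[compress(s1, b2)].append((b1, b2))
pair = next(v for v in seen.values() if len(v) >= 2)
print("two-block collision:", pair[0], pair[1])
```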

So far, your suggestion of passing twice as much internal state
between blocks looks like the only sound and simple solution yet
proposed.

Bear



Re: Cryptography and the Open Source Security Debate

2004-08-28 Thread lrk
On Wed, Aug 25, 2004 at 03:17:15PM +0100, Ben Laurie wrote:
> lrk wrote:
> 
> >My examination of RSAREF and OpenSSL code was more toward understanding how
> >they handled big numbers. It appears both generate prime numbers which are
> >half the length of the required N and with both of the two most significant
> >bits set to one. This means the ratio R=P/Q (P being the larger prime) is
> >limited to 1 < R < 4/3, so an attacker can learn a little about P and Q
> >by examining N.
> 
> This doesn't sound right to me - OpenSSL, IIRC, sets the top and bottom 
> bits to 1. Of course, all large primes have the bottom bit set to one.

The source of OpenSSL I looked at was part of the FreeBSD distribution.

int BN_rand(BIGNUM *rnd, int bits, int top, int bottom);

BN_rand() generates a cryptographically strong pseudo-random number of
bits bits in length and stores it in rnd. If top is -1, the most
significant bit of the random number can be zero. If top is 0, it is
set to 1, and if top is 1, the two most significant bits of the number
will be set to 1, so that the product of two such random numbers will
always have 2*bits length. If bottom is true, the number will be odd.


It appears this is called with top=1 for RSA primes. OpenSSL may not use
it that way.
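The documented behaviour can be sketched in a few lines (an illustration of the man page above, not OpenSSL's actual code), along with the consequence it describes: with top=1 the product of two b-bit values always has exactly 2b bits.

```python
import random

def bn_rand(bits: int, top: int, bottom: int, rng: random.Random) -> int:
    # Mimics the documented BN_rand() semantics (illustrative only).
    n = rng.getrandbits(bits)
    if top >= 0:
        n |= 1 << (bits - 1)   # top = 0 or 1: force the most significant bit
    if top == 1:
        n |= 1 << (bits - 2)   # top = 1: force the two most significant bits
    if bottom:
        n |= 1                 # force the number odd
    return n

rng = random.Random(42)
for _ in range(100):
    p = bn_rand(512, 1, 1, rng)
    q = bn_rand(512, 1, 1, rng)
    assert (p * q).bit_length() == 1024
print("all 100 products are exactly 1024 bits")
```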



-- 
[EMAIL PROTECTED]



Re: More problems with hash functions

2004-08-28 Thread "Hal Finney"
Jerry Leichter writes:
> However ... *any* on-line algorithm falls to a Joux-style attack.  An
> algorithm with fixed internal memory that can only read its input linearly,
> exactly once, can be modeled as an FSM.  A Joux-style attack then is:  Find
> a pair of inputs M1 and M1' that, starting from the fixed initial state, both
> leave the FSM in state S.  Now find a pair of inputs M2 and M2' that, starting
> from S, leave the FSM in state S'.  Then for the cost of finding these two
> collisions, we have *four* distinct collisions as before.  More generally,
> by just continuing from state S' to S'' and so on, for the cost of k
> single collision searches, we get 2^k collisions.

That's an interesting point.  One counter example I can offer is my
earlier suggestion to use a wide path internally between the stages.
The analysis suggested that if the internal path was twice the width
of the output of the hash, the Joux attack was avoided.  Basically if
the hash output width is n, and the internal width is 2n, then it will
take 2^n to find an internal collision.  And the overall hash strength
is never more than 2^n against any attack, since you can exhaustively
invert the function for that effort.  So nothing based on internal
collisions can weaken a hash built around this principle.

It's not a particularly appealing design strategy due to its apparent
inefficiency.  But if you're right about the general vulnerability of
hashes that support incremental hashing, as any useful hash surely must,
then perhaps it is worth exploring.

Hal Finney



Re: More problems with hash functions

2004-08-28 Thread Jerrold Leichter
| Bear writes:
| > One interesting idea which I came up with and haven't seen a way
| > past yet is to XOR each block with a value computed from its
| > sequence number, then compute the hash function on the blocks in
| > a nonsequential order based on the plaintext of the blocks
| > in their new order.
|
| This is an interesting idea, but it does not fully prevent the Joux
| attack.  This is seen most obviously in the two block case, where
| your method can at most increase the attacker's work by a factor of 2.
| Joux shows how to find a 4-collision using 2*2^(n/2) work, while it should
| take 2^(3n/4) work.  In the case of a 160 bit hash this is the difference
| between 2^81 and 2^120.  Doubling the work to find the 4-collision will
| not do much to address this discrepancy.
|
| Even in the more general case, where Joux puts k blocks together to
| generate 2^k multicollisions, I can see a way to attack your method,
| with an increase in the work by only a factor of k^2
There's a more general problem with the algorithm:  It isn't on-line.  An
implementation requires either an unbounded internal state, or the ability to
re-read the input stream.  The first is obviously impossible, the second a
problem for many common uses of hash functions (in communications, as opposed
to storage).

However ... *any* on-line algorithm falls to a Joux-style attack.  An
algorithm with fixed internal memory that can only read its input linearly,
exactly once, can be modeled as an FSM.  A Joux-style attack then is:  Find
a pair of inputs M1 and M1' that, starting from the fixed initial state, both
leave the FSM in state S.  Now find a pair of inputs M2 and M2' that, starting
from S, leave the FSM in state S'.  Then for the cost of finding these two
collisions, we have *four* distinct collisions as before.  More generally,
by just continuing from state S' to S'' and so on, for the cost of k
single collision searches, we get 2^k collisions.

The nominal security of the FSM is its number of states; for k large enough,
we certainly get "too many" collisions compared to a random function.
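The attack is easy to demonstrate on a toy Merkle-Damgard construction (16-bit state, SHA-256-based compression; the sizes are chosen only so that each collision search is instant):

```python
import hashlib
from itertools import product

def compress(state: int, block: bytes) -> int:
    # toy 16-bit compression function
    data = state.to_bytes(2, "big") + block
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

def find_block_collision(state: int):
    # birthday search from a given state: ~2^8 work for a 16-bit state
    seen = {}
    i = 0
    while True:
        block = i.to_bytes(4, "big")
        out = compress(state, block)
        if out in seen:
            return seen[out], block, out
        seen[out] = block
        i += 1

# k single-collision searches give 2^k colliding messages.
state, stages = 0, []
for _ in range(4):
    b, b2, state = find_block_collision(state)
    stages.append((b, b2))

finals = set()
for choice in product(*stages):       # all 2^4 = 16 block sequences
    s = 0
    for block in choice:
        s = compress(s, block)
    finals.add(s)
print(f"{2 ** len(stages)} distinct messages, {len(finals)} distinct hash value(s)")
```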

What this shows is that our vague notion that a hash function should behave
like a random function can't work.  On-line-ness always kills that.  (In
fact, even given the function random access to its input doesn't help:  An FSM
can only specify a finite number of random "places" to look in the input.
If the FSM can only specify an absolute location, it can't work on arbitrarily
long inputs.  If it can specify a location relative to the current position,
you can re-write the machine as a linear one that gets repeated copies of
blocks of a fixed size.)

This is really just the prefix property writ large.  Define an on-line
function f:{0,1}*->{0,1}^k as one with the property that there exist
infinitely many pairs of strings Si, Si' with the property that, for any S in
{0,1}*, f(Si||S) = f(Si'||S).  (In particular, this is true for S the empty
string, so f(Si) = f(Si').)  Then the most we can expect of an (on-line)
hash function is that it act like a function picked at random from the set of
on-line functions.  (This, of course, ignores all the other desirable
properties - we want to choose from a subset of the on-line functions.)

Getting a security parameter in there is a bit tricky.  An obvious first cut
is to say an on-line function has minimum length n if the shortest Si has
length n.

Actual hash functions don't follow this model exactly.  For example, MD5 needs
512+128+64 bits of internal storage* - the first for the input buffer, all of
whose bits are needed together, the second for the running hash state, the
third for the length.  So there are 2^704 states in the MD5 FSM, but the
output only has 2^128 values - only 128 bits of the "state number" are used.
That in and of itself isn't an issue - it doesn't hurt, in this particular
analysis, to throw bits away in the *final* result.  What does limit it is
that every 512 input bits, the FSM itself logically "mods out" 512 bits of the
state (the input buffer), and ends up in one of "only" 2^192 possible states
(and if we work with "short" strings, then only a small fraction of the 2^64
states of the length field are actually accessible).  Because of this, MD5 can
at best have been chosen from on-line functions of minimum length 128, rather
than those of a minimal length up to 704.  That's why keeping more internal
state *might* help - though you have to be careful, as we've seen, about how
you do it.
-- Jerry

* Actually, you need 3 more bits because the MD5 length is in octets, not
bits. This memory is hidden in any real implementation on byte-oriented
hardware.



Re: How thorough are the hash breaks, anyway?

2004-08-28 Thread Arash Partow
Hello,
No, sorry, I can't understand the logic here.  I think I understand the
maths behind message digests pretty well, and to that point I don't see
how the recent results diminish the current crypto-grade hash
functions in the least.

The researchers have produced an obscure plaintext and provided
another text that produces the same hash value.
But in reality, what people (i.e. attackers) want is something like this:

Attack at 1pm

to become

Attack at 3pm

with a common hash value, and not something like this:

AtTaZk @ Epn

Even if the latter passed the crypto test, i.e. the message digest, it
would not pass literal acceptance by a person.  Now let's consider the
case of binary data: most data nowadays is compressed, then encrypted.
Finding a second text which also decompresses correctly under the
compression algorithm in use and passes the message digest for another
particular text: heck, you'd have better luck finding snow in the middle
of hell.  Also, some people nowadays use multiple digests of the data,
rather like peeling an onion; in that case, adding the compression-related
difficulties etc., it all becomes very nearly impossible.  Possible,
but highly improbable.

To date, attacks on crypto (not the software but the algorithms) have
centred on people implementing the algorithms incorrectly, e.g. weak
primes.  In situations where everything is done by the book, only the
software implementations of the algorithms and the users of the system
remain as the weak links in the chain known as a crypto system.

As a final word I would like to say thank you to the people who
did this research; the results were needed in order to prove a theory.
However, everything should be taken into context.

Arash Partow
__
Be one who knows what they don't know,
Instead of being one who knows not what they don't know,
Thinking they know everything about all things.
http://www.partow.net


Re: Cryptography and the Open Source Security Debate

2004-08-28 Thread Arash Partow
Hello,
I've had a look at the code; the main problems I see are side-channel
attacks.  The implementation is pretty standard: strong primes, proper
fields, etc.  However, no salt!

Key generation, or more precisely the process of key generation, should
be unique every time, regardless of how unique the parameters being
passed into the process are.

I think a few months ago a student from the Weizmann Institute
http://www.wisdom.weizmann.ac.il/~tromer/acoustic/ proposed analysis
of CPU noise as a side-channel attack; the test code was PGP's key
generation.

That said, acoustic attacks are not the only method of side-channel
attack; you also have power and timing attacks.  In theory, if you
could sample all three channels you would be in a much better position
than if you were sampling only one of them.

In short, updates should be made to add salt to the calculations;
simple random operations added at random points during the generation
process would suffice to eliminate the possibility of any of the above
attacks.

Other than that, PGP is pretty much standard, and unless someone
tomorrow comes up with some really whacked-out number theory that
proves futile many of the principles of the mathematics behind today's
crypto systems, I wouldn't worry too much ;)

Arash Partow
__
Be one who knows what they don't know,
Instead of being one who knows not what they don't know,
Thinking they know everything about all things.
http://www.partow.net

Jon Callas wrote:
On 10 Aug 2004, at 5:16 AM, John Kelsey wrote:
So, how many people on this list have actually looked at the PGP key 
generation code in any depth?  Open source makes it possible for 
people to look for security holes, but it sure doesn't guarantee that 
anyone will do so, especially anyone who's at all good at it.


The relevant key generation code can be found in:
libs2/pgpsdk/priv/crypto/pubkey/
(those are backslashes on Windows, of course). The RSA key generation, 
for example is in ./pgpRSAKey.c.

You might also want to look at .../crypto/bignum and .../crypto/random/ 
while you're at it.

There is also high-level code in .../crypto/keys/pgpKeyMan.c for public 
key generation.

Incidentally, none of the issues that lrk brought up (RSA key being made 
from an "easy to factor" composite, a symmetric key that is a weak key, 
etc.) are unique to PGP. This should be obvious, but I have to say it.

Jon


Re: How thorough are the hash breaks, anyway?

2004-08-28 Thread Nicholas Bohm
At 16:09 26/08/2004, Trei, Peter wrote:
>[snip]
>Looking over the recent work on hash collisions, one
>thing that struck me was that they all seem to be 
>attacks on known plaintext - the 'plaintexts' which
>collided were very close to each other,  varying in 
>only a few bits. 
>
>While any weakness is a concern, and I'm not
>going to use any of the compromised algorithms
>in new systems, this type of break seems to be
>of limited utility. 
>
>It allows you (if you're fortunate) to modify a signed
>message and have the signature still check out. 
>However, if you don't know the original plaintext
>it does not seem to allow you construct a second
>message with the same hash.
[snip]

From a lawyer's perspective, it seems worrying that a message into which the
word "not" has been inserted might still have the same hash as the original
(assuming the hash to be a component of an electronic signature).

Regards

Nicholas Bohm

Salkyns, Great Canfield,
Takeley, Bishop’s Stortford CM22 6SX, UK

Phone   01279 871272(+44 1279 871272)
Fax 020 7788 2198   (+44 20 7788 2198)
Mobile  07715 419728(+44 7715 419728)

PGP RSA 1024 bit public key ID: 0x08340015.  Fingerprint:
9E 15 FB 2A 54 96 24 37  98 A2 E0 D1 34 13 48 07
PGP DSS/DH 1024/3072 public key ID: 0x899DD7FF.  Fingerprint:
5248 1320 B42E 84FC 1E8B  A9E6 0912 AE66 899D D7FF  



Re: How thorough are the hash breaks, anyway?

2004-08-28 Thread talli
Ian Grigg writes: 

Daniel Carosone wrote:
There is one application of hashes, however, that fits these
limitations very closely and has me particularly worried:
certificates.  The public key data is public, and it's a "random"
bitpattern where nobody would ever notice a few different bits. 

If someone finds a collision for microsoft's windows update cert (or a
number of other possibilities), and the fan is well and truly buried
in it.
Correct me if I'm wrong ... but once finding
a hash collision on a public key, you'd also
need to find a matching private key, right?
You are not wrong... you can try to find the right private key for your 
collision too...  ;) 

In fact, finding a collision for a public certificate is not like breaking a
single signature; it is like breaking many signatures at the same time. 

Talliann 

iang 


--
I came. I saw. I clicked. 



system reliability -- Re: titles

2004-08-28 Thread Ed Gerck
David Honig wrote:
"Applications can't be any more secure than their
operating system." -Bram Cohen
That sounds cute but I believe it is incorrect. Example: error-
correcting codes. The theory of error-correcting codes allows
information to be coded so that it can be recovered even after
significant corruption. This allows, for example, for
_secret-sharing_ with multiple systems so that no operating
system platform has enough information or enough power to even
allow a compromise. Such an application can be much more secure
than any operating system supporting it.
RAID is another example of a reliable system that is made out
of unreliable parts.
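The secret-sharing idea can be illustrated with the simplest possible scheme, a 2-of-2 XOR split (a sketch of the principle only, not a full threshold scheme):

```python
import secrets

def split(secret: bytes):
    # One share is uniformly random; the other is secret XOR random.
    # Either share alone is statistically independent of the secret,
    # so compromising one host's operating system reveals nothing.
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split(b"launch code")
assert combine(s1, s2) == b"launch code"
print("recovered:", combine(s1, s2))
```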
The human application of these principles is well-known in
information security and also supports the examples above. Humans
are notorious for breaking security systems. Humans are the
wetware equivalent of an operating system. A common solution for
the risk presented by humans is also _secret-sharing_: No person
may have access to classified information unless the person has
the appropriate security clearance and a need-to-know.
What this means is that the search for the "perfect" operating
system as the solution to security is backwards.
Cheers,
Ed Gerck


?splints for broken hash functions

2004-08-28 Thread John Denker
Hi --
I don't fully understand the new attacks on the hash
functions (which I suppose is forgivable, since the
papers don't seem to be available).
But here's a shot in the dark that seems promising at
first glance:
Given a message m, pad it to a length of at least one
block to form M.  Then encrypt M with some standard
algorithm (e.g. AES) using some agreed-upon key to
form E(M).  Then concatenate.  Then calculate the hash,
i.e.
  hash2(M) := hash1[M (+) E(M)].
The conjecture is that hash2() is stronger than hash1().
Note that encryption alone is not enough;  you could
just find a collision pair (X and Y) for the unadorned
hash1 function, then apply the decryption function D()
and then you've got a collision pair D(X) and D(Y) for
the encrypt-then-hash function.
But having both M and E(M) changes the story.  You have
to fiddle bits of M to get a collision in the first
block, and that will mess up whatever you're trying to
do to find collisions in the later block where E(M) is
processed.
===
Here's another splint using the same general idea, but
with less complexity:  calculate the hash once then
prepend that to the message and hash again, i.e.
  hash3(M) := hash1[hash1(M) (+) M]
The idea is that anything you do to M to arrange a
collision in the later blocks will severely mess up
whatever you wanted to do with the first block.
The work for the legit hash user is only slightly more
than doubled, while the work for the attacker is, I
conjecture, increased quite a bit more than that.
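Reading "(+)" as concatenation (the text says "then concatenate"), the two splints might look like this; note the cipher E below is a stand-in keystream built from SHA-1 in counter mode with an illustrative key, since the choice of cipher (e.g. AES) is left open, and the pad-to-one-block step is omitted for brevity:

```python
import hashlib

def hash1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def E(m: bytes, key: bytes = b"agreed-upon key") -> bytes:
    # Stand-in cipher: XOR with a SHA-1 counter-mode keystream
    # (illustrative only; the proposal suggests e.g. AES here).
    stream = b"".join(hash1(key + i.to_bytes(8, "big"))
                      for i in range(len(m) // 20 + 1))
    return bytes(a ^ b for a, b in zip(m, stream))

def hash2(m: bytes) -> bytes:
    return hash1(m + E(m))        # hash1[M (+) E(M)]

def hash3(m: bytes) -> bytes:
    return hash1(hash1(m) + m)    # hash1[hash1(M) (+) M]
```

hash3 roughly doubles the legitimate user's work (the message is hashed twice, once with a 20-byte prefix), matching the cost estimate above.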
=
I find it ironic that only a few weeks ago in the thread
on "the future of security" there seemed to be a general
consensus that the future would not involve much interesting
work on the mathematics of cryptography or cryptographic
primitives.
As Yogi Berra supposedly said, it's hard to make predictions,
especially about the future.


finding key pairs with colliding fingerprints (Re: How thorough are the hash breaks, anyway?)

2004-08-28 Thread Adam Back
You would have to either:

- search for candidate collisions amongst public keys you know the
private key for (bit more expensive)

- factorize the public key after you found a collision

The 2nd one isn't as hard as it sounds, because the public key would be
essentially random and so has a non-negligible chance of having trivial
factors.  (Not a secure backdoor, but it could still create a pretty good
mess in DoS terms if such a key pair were published.)

The latter approach is what I used to create a sample dead-fingerprint
attack on PGP 2.x fingerprints.

http://cypherpunks.venona.com/date/1997/06/msg00523.html

(No need to find a hash collision, even though it's MD5, because it has
another bug: the serialization has multiple candidate inputs.)  So I just
searched through the candidate inputs for one I could factor in a few
minutes.
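The non-negligible chance of trivial factors is easy to quantify: a uniformly random odd integer has a small prime factor surprisingly often (a sketch with a fixed seed for repeatability; the 1024-bit size is just an example):

```python
import random

rng = random.Random(1)
SMALL_PRIMES = [p for p in range(3, 100)
                if all(p % d for d in range(2, p))]

trivially_factored = 0
TRIALS = 500
for _ in range(TRIALS):
    n = rng.getrandbits(1024) | 1            # random odd 1024-bit "public key"
    if any(n % p == 0 for p in SMALL_PRIMES):
        trivially_factored += 1

print(f"{trivially_factored}/{TRIALS} had a prime factor below 100")
# Heuristically (Mertens' theorem) ~75% of random odd integers
# have a prime factor below 100, so a colliding "key" built this
# way is often easy to partially factor.
```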

Adam

On Fri, Aug 27, 2004 at 12:22:26AM +0100, Ian Grigg wrote:
> Correct me if I'm wrong ... but once finding
> a hash collision on a public key, you'd also
> need to find a matching private key, right?
> 
> >If someone finds a collision for microsoft's windows update cert (or a
> >number of other possibilities), and the fan is well and truly buried
> >in it.
