On Wed, Jan 13, 2010 at 5:58 AM, Steffen DETTMER
<steffen.dett...@ingenico.com> wrote:
> Hi,
>
> thank you very much for all your explanations and for giving me
> one more free training :)

Hey, like I said, I believe this information needs to be free to all. :)

> * Kyle Hamilton wrote on Tue, Jan 12, 2010 at 13:33 -0800:
>> > Isn't it a bug in the application when it does not allow me (its
>> > user) to configure it? As far as I know there is no way to tell
>> > Firefox e.g. not to accept 40 bit.
>>
>> about:config, search for 'ssl' and 'tls'.  By default, Firefox
>> 3.0+ disables 40- and 56-bit ciphers, and I know that either
>> Firefox 3.0 or 3.5 disabled SSLv2 by default.  SSLv3 and TLSv1
>> do not use those ciphers.
>
> Ohh great, thanks for this information. I checked that my
> Firefox 3 and Firefox 2 also have 40- and 56-bit ciphers disabled
> and have `security.enable_ssl2 = false'.

It is arguably a bug when an application doesn't allow its TLS
configuration to be modified.  It's DEFINITELY a bug when an
application doesn't allow you to include the certificate chain
necessary to validate the certificate you present to the peer.

>> There is currently no way for even an ideal TLS implementation to
>> detect this issue.  This is why the IETF is working on the Secure
>> Renegotiation Indication TLS extension, which is close to finally
>> being released.
>>
>> > Like having some OpenSSL callback be called reliably on (right
>> > after?) each renegotiation - where a webserver could force to
>> > shutdown the connection if the new parameters are not acceptable?
>>
>> Yes.  Please see SSL_CTX_set_info_callback(3ssl).
>
> hum, now I'm confused, I think your last two answers contradict
> each other...
> If an application can use e.g. SSL_CTX_set_info_callback to
> reliably avoid this, I have to read more on what the IETF is working
> on. If there are webservers `trusting' peers without certificates
> (allowing pre-injection) what should stop people to ignore
> whatever extension as well...

What SSL_CTX_set_info_callback() does is tell you *when* a
renegotiation occurs.  It doesn't tell you what happened before.

In 0.9.8l, Mr Laurie pushed a version that disabled renegotiation
entirely by default.

Regarding "what should stop people from ignoring whatever SRI
extension", it would require the violation of at least 3 "MUST"s and
"MUST NOT"s.  At that point, whatever's being used between the peers
isn't TLS, and it will be very easy to detect that it's malicious.
(Remember: Secure Renegotiation Indicator is a TLS extension, which
means it must be sent by the Client before it can be acknowledged by
the Server.)

> (well, of course in case of the renegotiation attack the main
> point probably is just that no one had this nice idea before :-))

I think that this is just the point when someone decided to exploit an
(in hindsight) glaring flaw, and we've got an entire ecosystem of
developers feverishly trying to solve the problems caused by their
reliance on the feature that has that flaw.

>> > Someone could expect whenever a browser window or `tab' is
>> > operating in some extended validation mode.

As I think I mentioned, nobody ever actually mapped out the precise
semantics of how the green bar is supposed to work.  That is EV's
biggest Achilles' heel... nobody knows what it means, the same way
nobody knew what the lock meant.

> I could imagine that the hyped success of SSL/TLS led to
> weaknesses, because today someone can often hear `we are based on
> SSL/TLS and thus are secure'. Also interesting is when
> specifications require minimum RSA key lengths but don't tell
> anything about certification policies (requirements to CSPs) or
> require AES256 but no certificates (DH)... Which, BTW, in case of
> an MITM has the funny effect that it is cryptographically ensured
> that only the attacker can decrypt the traffic lol

You cannot call an application that uses SSL/TLS "secure" any more
than you can call a network that has a firewall "secure".

It's possible to negotiate an ephemeral key to encrypt the data, then
go back and renegotiate to keep the contents of the certificate
private from anyone except the entity to which it is offered.  This
is, I think, enough to keep the EU privacy directives happy, but I'm
not certain.  (Such a case would require the set_info_callback and
then a control on the channel to initiate a renegotiation, this second
one to require a certificate and proof of ownership of the
certificate.)

And yes, the point is that only the attacker can decrypt the traffic
-- until the client sends its negotiation, and the attacker proxies
it, and the ChangeCipherSpec gets sent.  At that point, the attacker
doesn't know the new keys.

> I think this is a server (configuration) bug but not a TLS bug.
> How can someone assume it would be safe from prefix injection when
> accepting anonymous connections?

...because they didn't realize that the prior session isn't
cryptographically bound to the new session; it's a complete wiping of
the slate.  It is certainly an application-design issue (defense in
depth is not just a buzzword), but it's also a TLS protocol issue as
one of the guarantees that the protocol attempted to provide was
violated.

> It's also strange that SSL + BasicAuth seems to be so common.
> For many web services, users register and receive some
> confirmation mail with a password or alike.
> Why isn't it standard practice that users get a client
> certificate? Of course, this makes it difficult to use internet
> cafe, hotel, or airport computers to log in, but for those who
> use the password manager function anyway there should not be a
> big difference...

If there were a standard for a USB cryptoken, someone could write a
PKCS#11 wrapper around it for every platform that supports USB.

> It would not even be needed to buy a certificate from some CA or
> reasonably authenticate when considering such a special purpose
> (i.e. when the site trust email which is used now, why shouldn't
> it trust certificates, too?).

There are several reasons, most of which revolve around 'ignorance'.
Either ignorance of the vulnerability, or ignorance of the fact that
there's a way around it, or ignorance of the fact that if they want to
be able to provide certificates to their users, they have to lobby the
browser makers.

> But as far as I know Browsers have no simple-to-use CSR
> generation, transferring CSR and importing CRT is more
> complicated than reading a password mail -- but I think this is
> nothing TLS can be blamed for.

Netscape uses SPKAC for its <keygen> tag format.  Microsoft's XEnroll
or CEnroll do something else, something I don't even know.  I know
that if you're on an Active Directory domain with a DC that's also a
CA, there are ways to get certificates in Windows from the CA
automagically.

The best that anyone's come up with thus far is the CSR submit/CRT
download-install.  I think that this is hideous, and it's impossible
to expect that any standard user is going to understand it.  Even with
a HOWTO.

>> > ...IPSec...
>>
>> If you configure an IPsec stack to allow anonymous clients, you
>> can't trust them.  What you can do is trust the security
>> properties of other actions performed *within* those tunnels,
>> since they have different characteristics.
>
> Yes, but when it comes to webservers, anonymous clients are
> trusted...

Yes.  The difference is that in IPSec, the client must announce its
identity first before the server gives it a second glance, while in
TLS the Server must announce its identity before it can even ask the
client for its identity.  (This is an instance of
"policy-set-in-standard", and I am opposed to it.)

>> Because client certificates are too difficult for users to obtain, and
>> once the Secure Renegotiation RFC is published it (appears that it)
>> can be.
>
> but TLS cannot be made responsible that it's difficult to obtain
> certificates (using the existing applications)...

TLS, as has been mentioned, does not require certificates or any other
authentication.  (For an analogy as to how someone can expect that
authentication in-channel is equivalent to authentication of-channel,
please see the OTR protocol at http://cypherpunks.ca/otr/ .  It
describes the process of creating a confidential channel between two
people, then using a secret known between them in the Socialist
Millionaire's Protocol to authenticate the person using the other end
of the channel.)

>> "In theory, there's no difference between theory and reality.  In
>> reality, there is."
>>
>> Technically, TLS is supposed to ensure that the endpoint that you were
>> talking with cannot change without collusion between the initial
>> endpoint and the final endpoint, sharing key and state data.  This
>> guarantee was violated, so they're fixing it.
>
> Ahh ok. Thank you for clarifying that. I thought this was not
> supposed to be ensured (e.g. client certificates could be changed
> during a session technically, but reasonable applications would
> check this and for example would not accept changes in DN or
> maybe only accept key and serial number changes or alike).

Client certificates *can* be changed in the middle of a session, and
if both sides authenticate each other then there's no way for the
prefix-injection attack to succeed.

I'll note that your definition of "reasonable applications" is a
particularly sneaky and snarky way of attempting to impose your ideas
about policy on others, who may or may not need the session identity
to stay the same during the lifetime of the session.  I can think of at
least 2 businesses which use processes that, when translated directly
into the PKI concept, would require an initial certificate (the POS
terminal operator's) and a different certificate (the POS terminal
operator's manager, who might need to open the drawer or authorize a
particular transaction).

> I'm not sure if it is reasonable to use TLS without client
> authentication (client certificates), then putting some client
> authentication (BasicAuth) on top and then expecting /TLS/ to be
> secure (instead of requiring BasicAuth to be secure, which
> certainly is not very strong). I hope I understood correctly and
> that really was what Twitter is using.

TLS is expected to be secure against this kind of attack, even in the
case of single-side authentication.  That's the guarantee that was
violated, and that's why the IETF is fixing it.

> Online banking often also uses no client certificates (and if
> they use client certificates, most banks here then also want chip
> card readers, which IMHO reduces the acceptance horribly because
> this is expensive). However, online banking (hopefully) uses some
> session IDs...

My bank actually sends me an SMS each time I want to do things that
change the balances of my accounts, with a code that I have to enter
to authorize them.  They also go out of their way to ensure that the
only way into the secure side of the system is to go through the login
page, and it doesn't matter if a prefix is injected -- the same page,
with the same input fields, will show no matter what the initial URI
requested is.  It's only after an authentication (which writes into a
log which can be accessed by another server) that it will display
anything else -- and that's done via a 302 redirect to another server
(and if you don't have a valid sessionID, and I think they even verify
it's the same IP, you get bounced back to the login page).

>> Twitter uses HTTP Basic authentication, over TLS.  This is one of the
>> weakest forms of authentication I know of.  However, the twitter
>> attack worked because client certificates are not well-used, and
>> because TLS has a flaw that allowed for the guarantee of "one endpoint
>> unless collusion occurs" to be violated.
>
> Ahh ok, yes.
> I think there are applications (maybe not using TLS but other
> means) that (maybe even automatically) generate some
> cryptographically protected identity (I think a signed random
> number is sufficient to know to talk always with the same entity).

Uh, that's kinda what TLS does.  It signs the channel initiation, and
then after negotiating all the cipher specs it sends ChangeCipherSpec.
The next message after that *must* be Finished, it must be under the
new cipher spec, and it must contain the hash of all the prior
handshake packets that it sent.

I'm not sure what you mean by 'signed random number' -- a shared
secret?  That's not scalable.

>> I want to change this so that client certificates are amazingly
>> simple to obtain,
>
> Yes, I think this sounds like being the fix for the
> renegotiation attack!

This is the *absolute* fix for the renegotiation attack, for *all*
versions of SSL and TLS (presuming that the initial negotiation, or
the initial renegotiation after negotiation of a DHE cipher, is
authenticated by the client without any application data being passed
in the meantime).

> If Browsers had a `Generate ID for this site and upload'-Button
> [gen-CSR] and a means to transfer it and the certificate back,
> which surely is complex to design and develop and opens many
> questions, then this would be a great way. Except for the initial
> setup this would be much stronger than today's passwords (which
> probably usually are much weaker than the applied block cipher).
> Browsers could manage the retrieved certificates similar to
> passwords stored by the password manager. I think this is not as
> X.509 (or X.500 here?) was intended, but I think in practice this
> `trust a foreigner when a common trust anchor tells it' does not
> make much sense when working anonymously (or `pseudonymously', in
> case this forms something like an understandable English term :))
> in register-for-free networks. Maybe we'll see something like this
> in the future?

There are a lot of crypto apologists in the world (me included) who
have stumbled on various cryptosystems' failures.  Ian Grigg, who
maintains http://www.financialcryptography.com/ , has developed a
protocol which would allow banks to know for certain that the keys
they're requesting have been generated on-chip, by having the device
manufacturer cause a keypair to be generated, pulling the public
key, and then generating and imprinting the device's certificate --
which would then be used to sign all requests generated by that chip.

The downside is that there's no protocol to allow a bank (for example)
to specify that it demands hardware security for the key generation
and use process.

>> (Really, please go read RFC 2246.
>
> It seems to be a very well-written specification.
>
> BTW:
> `The fundamental rule is that higher levels must be cognizant of
>  what their security requirements are and never transmit
>  information over a channel less secure than what they require.'
> so no BasicAuth :)

No, it's entirely possible that BasicAuth is acceptable.  Remember,
every realm has different security requirements, and thus different
security policies.  In Twitter's case, it knows that it must receive
updates over a TLS-encrypted HTTP session.

> Yes, I see that it is explicitly forbidden to use NULL/NULL/NULL.

Now, for the mindbender: under what circumstances might it be
appropriate to use NULL-NULL-SHA256?

:)

-Kyle H
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
