On Wed, Jan 13, 2010 at 6:34 AM, Steffen DETTMER
<steffen.dett...@ingenico.com> wrote:
> * aerow...@gmail.com wrote on Tue, Jan 12, 2010 at 12:29 -0800:
>> On Tue, Jan 12, 2010 at 3:12 AM, Steffen DETTMER
>> The problem is this:
>>
>> The attacker makes a connection to a TLS-enabled server,
>> sending no certificate.  It sends a command that, for whatever
>> reason, needs additional privilege (in Apache's case, Directory
>> and Location clauses can require additional security above what
>> the default VirtualHost configuration negotiates).  Then, it
>> just starts proxying packets between the client and the server.
>
> Yeah, strange idea, that. I read that it is common to do so, but I
> wonder why such a feature was implemented in Apache... Did this
> renegotiation-to-increase-security-retroactively idea ever pass
> any security review?
> A pity that such an attack exists, but maybe this just proves
> `never design a crypto system' - sometimes security issues are
> really surprising :)

It was (wrongly) assumed that SSL certificate authentication could be
plugged in at precisely the same level as the other authentication
systems already in place, Basic and Digest.  See below.

>> One way to deal with this would be to enforce that all access
>> to resources on the same webserver require the same "security
>> cloak" -- that is, have the server demand a certificate during
>> the initial negotiation, so that *all* data that comes in is
>> authenticated as being from that identity.
>
> Yes, isn't this how TLS is intended to be used, and isn't the `add a
> certificate based on a directory' just some hack because the
> user interfaces are as they are (and those are passwords and
> BasicAuth when it comes to HTTP/HTTPS)?

TLS is authentication-agnostic.  The server does not need to identify
itself; it's only through common usage that there's any requirement
for servers to have certificates at all.  So the first part isn't
precisely true -- it's not "how TLS is intended to be used", it's "the
best practice for using TLS".
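
In OpenSSL terms, for example, whether a client certificate is
demanded at all -- and whether it is demanded in the very first
handshake -- is purely a policy choice on the server's SSL_CTX.  A
rough sketch, nothing more:

    #include <openssl/ssl.h>

    /* Sketch only: the protocol itself forces neither choice. */
    void configure_client_auth(SSL_CTX *ctx, int require_client_cert)
    {
        if (require_client_cert) {
            /* demand a certificate in the initial handshake and fail
             * the connection if none is presented */
            SSL_CTX_set_verify(ctx,
                               SSL_VERIFY_PEER |
                               SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                               NULL);
        } else {
            /* accept anonymous clients -- the common web default */
            SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, NULL);
        }
    }

The SSL_VERIFY_NONE branch is the 'allow non-authenticated SSL
clients' policy I keep referring to.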

"Require certificate based on the URI requested" (restated from your
'add a certificate based on a directory', because there are many
servers that don't actually operate on directories, instead
virtualizing their URI spaces) is, indeed, a hack, for the reason you
express.  However, this hack was based on an assumed property of SSL
(and later TLS) that was never investigated until last year:  there is
no way for a man in the middle to attack in the presence of mutual
authentication.
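
Mechanically, the hack looks something like this in OpenSSL terms -- a
sketch of the pattern, not mod_ssl's actual code: the server has
already read the request over an anonymous-client session, notices the
resource wants a certificate, and only then forces a second handshake
on the same connection:

    #include <openssl/ssl.h>

    /* Sketch of per-resource renegotiation (not mod_ssl's real code).
     * Called after the request has already been read over a session
     * that was negotiated without a client certificate. */
    int demand_client_cert_now(SSL *ssl)
    {
        /* from this point on, insist on a client certificate */
        SSL_set_verify(ssl,
                       SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                       NULL);

        /* request a new handshake on the established connection */
        if (SSL_renegotiate(ssl) <= 0)
            return -1;
        if (SSL_do_handshake(ssl) <= 0)
            return -1;

        /* a real server also has to drive the handshake to completion
         * (mod_ssl re-enters the accept state to do it); the details
         * vary by OpenSSL version and are omitted here */
        return 0;
    }

The thing to notice is that nothing ties the identity proven in that
second handshake to the bytes the server already accepted before it.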

>> >I thought this data injection attack fails when client
>> >certificates would be used correctly.
>>
>> It does, in the event that the server configuration does not allow for
>> non-client-certificated connections in any manner, in any way, for any
>> purpose.  THAT is "when client certificates are used correctly".
> [...]
>> If you get rid of the 'allow non-authenticated SSL clients'
>> part of the policy on that server, you do away with the
>> problem.
> [...]
>> In fact, if this [client-] certificate is presented during the initial
>> negotiation, it is impossible to perform this MITM attack.
>
> ok. Thank you for clarifying that.
> So as you wrote in the other mail, it is the `common practice'
> that led to the problem.
> The fix the IETF is preparing is maybe less a bugfix than a new
> feature - protecting a bit more, even if authentication is used
> wrongly.

It's definitely a bugfix, as it restores a property the protocol was
originally intended to have -- that there can be no change in either
of the endpoints without collusion (in the case of servers, a shared
session cache across a server farm counts as 'collusion'; in the case
of clients, collusion is typically a lot more difficult, but it's
still possible).

>> In Twitter's case, they don't require certificate
>> authentication.  This means that they will never be able to
>> prevent this attack without disabling renegotiation until the
>> Secure Renegotiation Indicator draft standard is implemented on
>> both the servers and the clients.
>>
>> And yes: the problem is caused by TLS misuse, but the reason
>> for the misuse isn't on the server side.
>
> Yes, the MITM was never able to decrypt the password, but fooled
> the server into doing so and publishing the result. Well... strictly
> speaking I think the bug is that the result (the password) was
> published, which I think is a server bug rather than a TLS bug (it
> just happened because no one had the idea for this exploit in the
> past).

The MITM couldn't decrypt the session, but relied on a property of the
server to cause it to publish the Basic-encoded username/password
combination.  This is a "side effect" of the request, but this side
effect can be completely devastating.
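
To picture it -- everything here is invented for illustration (the
real incident was against Twitter's status-update API) -- the splice
the server ends up parsing looks roughly like this:

    /* Illustrative only: paths, hosts and credentials are made up. */

    /* 1. What the attacker sends over its own handshake, authenticated
     *    as an account it controls, with a deliberately oversized
     *    Content-Length and an unfinished form body: */
    static const char attacker_prefix[] =
        "POST /status/update HTTP/1.0\r\n"
        "Authorization: Basic YXR0YWNrZXI6bGV0bWVpbg==\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "Content-Length: 700\r\n"
        "\r\n"
        "status=";

    /* 2. After the renegotiation the victim's own request -- headers,
     *    "Authorization: Basic dXNlcjpwYXNzd29yZA==" (just
     *    "user:password" in Base64) and all -- is appended as the rest
     *    of that 700-byte body, which the server then publishes. */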

> In summary, there are many aspects that work
> hand-in-hand: TLS does not differentiate the initial negotiation
> from subsequent renegotiations, HTTP `assumes' it is always talking
> to the same peer, and passwords are replayable or sent in clear text
> as with BasicAuth; best probably is to improve TLS AND stop using
> BasicAuth in favor of Digest Authentication AND make a dedicated
> login request/response at the beginning of each connection, in the
> hope that all together increase security best :)

At this time (the IETF is finishing work on the SRI standard), TLS
has no way of communicating which 'epoch' it's on during
renegotiation.  If there were an entry in the protocol header which
said 'I am requesting to negotiate epoch 1' while the client thought
it was negotiating epoch 0, then the client could abort with a
decrypt_error or other alert.  Personally, this is how I would prefer
to see it -- but the TLSEXT RFC came out and mandated that every
additional byte after the ClientHello adhere to its standard.
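
Just to make the idea concrete -- this is purely hypothetical, there
is no such field in TLS today -- the check I'd like to see is no more
than this:

    /* Hypothetical sketch: an epoch counter carried in the hello.
     * Nothing like this exists in TLS; it only illustrates the check
     * described above. */
    static int check_negotiation_epoch(unsigned int hello_epoch,
                                       unsigned int expected_epoch)
    {
        if (hello_epoch != expected_epoch)
            return -1;  /* abort: send decrypt_error or similar alert */
        return 0;       /* epochs agree; proceed with the handshake */
    }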

>> The point I wanted to make in that last paragraph is: the users
>> can't figure out how to generate keys, much less submit
>> requests to CAs.
>
> Maybe something simpler than CAs could be used. Servers already
> store a password for each account, so why shouldn't they be able to
> store a hash/signature for each account? For example, the browser
> creates a self-signed certificate (some simple UI to fill out the DN
> only), the server receives it and stores this signature like a
> password? Or does this count as `design a cryptosystem' already?
> Also the server could send back a signed cert, signed in a way that
> the server recognizes but without any trust for anyone else, like a
> private CA (it would not even need to publish its certificate, if
> any :))?

There's an implementation which uses OpenPGP keys/assertions, called GnuTLS.

Servers do store a hash/signature for each account, but this
essentially puts it into a Directory that the server can access.
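
In OpenSSL terms, the 'store it like a password' idea could look
roughly like this -- a sketch only, and lookup_stored_fingerprint() is
an imaginary application function, not a real API:

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* hypothetical application lookup: fetches the fingerprint that
     * was stored for this account when the user enrolled */
    int lookup_stored_fingerprint(const char *account,
                                  unsigned char *out,
                                  unsigned int *outlen);

    /* Sketch: treat the certificate's digest the way a password hash
     * would be treated -- compare what was presented against what was
     * stored for the account. */
    int cert_matches_account(X509 *presented, const char *account)
    {
        unsigned char md[EVP_MAX_MD_SIZE], stored[EVP_MAX_MD_SIZE];
        unsigned int mdlen = 0, storedlen = 0;

        if (!X509_digest(presented, EVP_sha1(), md, &mdlen))
            return 0;
        if (!lookup_stored_fingerprint(account, stored, &storedlen))
            return 0;

        return mdlen == storedlen && memcmp(md, stored, mdlen) == 0;
    }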

Servers *could* send back signed certificates, and in fact <keygen>
and CEnroll both not only allow for it but expect it when they're in
use.

(And no, this doesn't count as 'designing a cryptosystem'.  What
you're proposing is essentially to allow the client to set its own
public key, and thus trust anchor, upon correct authentication.
Public/private keypairs are first-order identities; the reason CAs
exist is that, without external intervention, it's impossible to know
whether a given keypair really belongs to the person claiming it, and
CAs are *supposed* to strongly bind [using their own private key] the
appropriate details of the individual who presented the public key
for certification.  However, authenticating to the server with a
username/password, and submitting a public key, is more than enough to
be able to issue a certificate related to that username.)
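
On the 'server as its own private CA' side, the issuing step is
genuinely small.  A sketch -- error handling trimmed, and ca_name /
ca_key are just placeholders for whatever the server actually signs
with:

    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* Sketch only: sign a user's submitted public key into a
     * certificate bound to that user's name. */
    X509 *issue_cert_for_user(EVP_PKEY *user_pubkey,
                              X509_NAME *user_name,
                              X509_NAME *ca_name, EVP_PKEY *ca_key)
    {
        X509 *cert = X509_new();
        if (cert == NULL)
            return NULL;

        X509_set_version(cert, 2);                    /* X.509v3 */
        ASN1_INTEGER_set(X509_get_serialNumber(cert), 1);
        X509_gmtime_adj(X509_get_notBefore(cert), 0);
        X509_gmtime_adj(X509_get_notAfter(cert), 60L * 60 * 24 * 365);
        X509_set_subject_name(cert, user_name);
        X509_set_issuer_name(cert, ca_name);
        X509_set_pubkey(cert, user_pubkey);

        if (!X509_sign(cert, ca_key, EVP_sha1())) {
            X509_free(cert);
            return NULL;
        }
        return cert;
    }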

> Using (real) CAs could have disadvantages when it comes to
> privacy or when someone wants to use pseudonyms, I think.

Oh dear gods yes.  I've been trying to get people to see that for
years.  Thank you.

(As I think I mentioned, Wes Kussmaul is working on a project that
provides a different approach to the issue, but we're both hampered by
the lack of decent client certificate generation UIs.)

> Difficult topic. Good to know that experts (like you) are
> working on it.

Since your signature says that you work for a payment solutions
provider, you would not normally be someone (in my eyes) to raise the
privacy concern -- but it suggests that you are in the EU, where the
regulations are stricter.

>> This reduces the provisioning of client certificates, and thus
>> reduces the market.  What's needed is a useful mechanism and
>> interface for managing these things, and nobody's created that
>> yet.
>
> But outside SSL/TLS such things exist...
> Maybe in the TLS world it is so uncommon because public CAs and some
> X.500-style `global directory' ideas are so common?

The only decent keygen UIs I've seen have been in PGP 5i and
WASTE, and possibly in Windows versions of other software such as
ssh-keygen.  Don't rely on the system to give you entropy (Windows
CryptoAPI provides a rather severe lack of it); ask the user to move
the mouse, hit keys, exercise the disks, and otherwise add as much as
you need to be able to generate a secure key.
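
With OpenSSL, that extra user-generated entropy can be stirred in with
RAND_add(); a sketch (the event values are whatever your UI can
harvest):

    #include <openssl/rand.h>

    /* Sketch: mix a user-generated event (mouse position plus a
     * timestamp, say) into the PRNG pool before key generation.
     * Keep the claimed entropy per event conservative. */
    void mix_in_user_event(long x, long y, long timestamp_us)
    {
        long v[3];

        v[0] = x;
        v[1] = y;
        v[2] = timestamp_us;
        RAND_add(v, sizeof(v), 0.25);  /* claim only ~2 bits per event */
    }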

>> I hope I've helped you understand a bit more clearly.  If you
>> have more questions, don't hesitate to ask. :)
>
> Yes, you helped me a lot, it was a great lecture. Thank you very
> much for once again providing me free lessons with so much
> information!!

You are most welcome.  I think that this knowledge, above all else,
*must* be free, if people are to be able to learn how to protect
themselves and their information.

Also, your comments here have spurred me into thinking about different
parts of the problem... for example, a user cannot be a CA under
X.509.  However, there is a different certificate profile which allows
users to issue certificates based on their own credentials, called the
"proxy certificate profile" (defined in RFC 3820 -- a number I wish
they hadn't given it, since it's so easy to lysdexiate into RFC 3280,
the predecessor to the current PKIX spec, RFC 5280).  It might be
useful to issue a user a certificate, and then allow authentication
using proxy certificates issued by that user's certificate, thus
reducing the number of times the private key is actually used --
allowing it to be stored offline, for example, while proxy
certificates issued by it are online and used.
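
For what it's worth, OpenSSL has support for verifying RFC 3820 proxy
certificates -- it just refuses to use them unless the application
asks.  A sketch of the opt-in:

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    /* Sketch: proxy certificates are rejected during chain
     * verification unless this flag is set on the trust store. */
    void allow_proxy_certs(SSL_CTX *ctx)
    {
        X509_STORE *store = SSL_CTX_get_cert_store(ctx);
        X509_STORE_set_flags(store, X509_V_FLAG_ALLOW_PROXY_CERTS);
    }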

It's an interesting thought, anyway.  :)

-Kyle H
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
