Re: impact of client certificates to re-negotiation attack

2010-01-19 Thread Steffen DETTMER
* Kyle Hamilton wrote on Thu, Jan 14, 2010 at 12:03 -0800:
 * Steffen asked...
  ...on this level
[thanks a lot again for all the clarifications: authentication
levels, authentication-agnostic, URI-dependent certificates,
bugfix because missed intention, MITM tricks twitter to decrypt
and disclose, epoch 1 to detect renegotiation, and GnuTLS]

 There's an implementation which uses OpenPGP keys/assertions,
 called GnuTLS.
 [...stores client cert hashes...] 
 (and no, this doesn't count as 'design a cryptosystem'.  What you're
 proposing is essentially to allow the client to set its own public
 key, and thus trust anchor, upon correct authentication.
 public/private keypairs are first-order identities; the reason CAs
 exist is that it's impossible to know whether any given keypair
 actually belongs to the person claiming it without external
 intervention, and CAs are
 *supposed* to strongly bind [using their own private key] the
 appropriate details of the individual who did present the public key
 for certification.  However, authenticating to the Server with a
 username/password, and submitting a public key, is more than enough to
 be able to issue a certificate related to that username.)

ahh ok. This isn't common because it is not so easy to use, yes?

  Using (real) CAs could have disadvantages when it comes to
  privacy or when someone wants to use pseudonyms, I think.
 
 Oh dear gods yes.  I've been trying to get people to see that for
 years.  Thank you.
 
 (As I think I mentioned, Wes Kussmaul is working on a project that
 provides a different approach to the issue, but we're both hampered by
 the lack of decent client certificate generation UIs.)

Even with a UI available I could imagine that it could be
difficult to gain wide acceptance; I think many people know the
term SSL and associate it 1:1 with internet security.

  Difficult topic. Good to know that experts (like you) are
  working on it.
 
 You, since your signature says that you work for a payment
 solutions provider, would not normally be one (in my eyes) to
 raise the privacy concern -- but this suggests that you are in
 the EU, where regulations are more strict.

There are several privacy concerns in payment systems. For
example, error messages may not be transmitted to protect card
holder privacy (sometimes making it difficult to track down some
issues), laws and requirement specs are about how to store data
(like PAN of credit cards). The German electronic purse (pre-paid
card) is even able to do anonymous payment transactions
(requiring a card not bound to any account; such cards are rarely
seen but exist).
Whatever I write here is my personal opinion only and I'm not a
security expert etc #include disclaimers/full_super_safe.h...

  Yes, you helped me a lot; it was a great lecture. Thank you very
  much for once again giving me free lessons with so much
  information!!
 
 You are most welcome.  I think that this knowledge, above all else,
 *must* be free, if people are to be able to learn how to protect
 themselves and their information.
 
 Also, your comments here have spurred me into thinking about different
 parts of the problem... for example, a user cannot be a CA, under
 X.509.

Ohh why not? Forbidden by RFC 5280 or so?
I thought setting the needed basic constraint cA=TRUE would do the
job?  (Beside that, I don't know any public CA that would ever
certify such a certificate, because typically on the web I would
then inherit the trust :-) which maybe just shows that these
chains might be bad in general - just because I trust a CA, this
does not necessarily mean I trust anyone who was able to
authenticate themselves to this CA).
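Nothing in the tooling stops anyone from minting such a certificate themselves. A quick sketch with the openssl CLI (file name and subject invented; on typical installs, `openssl req -x509` even marks the result CA:TRUE by default):

```shell
# Self-signed certificate asserting cA=TRUE in basicConstraints.
# No public CA would sign this for you, but you can issue it yourself.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout user-ca.key -out user-ca.pem \
  -subj "/CN=user-as-ca" 2>/dev/null

# Confirm the extension is present (requires OpenSSL 1.1.1+ for -ext):
openssl x509 -in user-ca.pem -noout -ext basicConstraints
```

As the paragraph above says, the certificate is structurally a CA; the open question is only whether anyone would trust it.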

In theory, wouldn't even a self-signed user certificate be
possible (e.g. when the server maintains a user-password-certhash
database), like you mentioned for GnuTLS (unless I
misunderstood)? A self-signed-only PKI?
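A minimal sketch of that database idea (all names invented): the server keeps, per account, the fingerprint of whatever self-signed certificate the client enrolled, and later just compares fingerprints instead of walking a chain.

```shell
# Client side: make a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout client.key -out client.pem -subj "/CN=steffen" 2>/dev/null

# Server side: store this value next to the username,
# much as it would store a password hash.
openssl x509 -in client.pem -noout -fingerprint -sha256
```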

 However, there is a different certificate profile which allows
 users to issue certificates based on their own credentials, called the
 proxy certificate profile (defined in RFC 3820 -- which I wish they
 hadn't numbered it as, since it's so easy to lysdexiate that to RFC
 3280 -- the predecessor to the current PKIX, RFC5280.)  It might be
 useful to issue a user a certificate, and then allow authentication
 using proxy certificates issued by that user's certificate, thus
 reducing the number of times the private key is actually used --
 allowing it to be stored offline, for example, while proxy
 certificates issued by it are online and used.

ohh yes, and my USB class 3/4 smart card reader's display could
show the proxyPolicy in an appropriate (for me verifiable) form,
and I could decide to enter my PIN or to cancel. I could generate
a 1-year-valid myspace cert stored on my laptop and a
10-minute-valid one for my banking account, or even one limited to
a single transaction (including amount and destination account
name in the proxyPolicy, to allow me to verify it on a trusted
class 3/4 device). Cool.

Or having a cell phone application. 
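The issuance flow described above can be played through with the openssl CLI (every file name and policy string here is invented; OpenSSL's x509v3 config syntax understands the RFC 3820 proxyCertInfo extension, though the CLI can't easily express a 10-minute lifetime, so -days 1 stands in):

```shell
# Long-lived user certificate; this private key is the one that
# could live offline / on the smart card.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout user.key -out user.pem -subj "/CN=alice" 2>/dev/null

# Short-lived proxy keypair and request; RFC 3820 proxies append a
# CN to the issuing certificate's subject DN.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout proxy.key -out proxy.csr -subj "/CN=alice/CN=proxy" 2>/dev/null

# The proxyPolicy carrying, say, a single-transaction restriction.
printf 'proxyCertInfo=critical,language:id-ppl-anyLanguage,policy:text:one-payment-only\n' \
  > proxy.ext

# The user certificate itself signs the proxy certificate - no CA involved.
openssl x509 -req -in proxy.csr -CA user.pem -CAkey user.key \
  -set_serial 1 -days 1 -extfile proxy.ext -out proxy.pem 2>/dev/null

openssl x509 -in proxy.pem -noout -subject
```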

Re: impact of client certificates to re-negotiation attack

2010-01-14 Thread Kyle Hamilton
On Wed, Jan 13, 2010 at 6:34 AM, Steffen DETTMER
steffen.dett...@ingenico.com wrote:
 * aerow...@gmail.com wrote on Tue, Jan 12, 2010 at 12:29 -0800:
 On Tue, Jan 12, 2010 at 3:12 AM, Steffen DETTMER
 The problem is this:

 The attacker makes a connection to a TLS-enabled server,
 sending no certificate.  It sends a command that, for whatever
 reason, needs additional privilege (in Apache's case, Directory
 and Location clauses can require additional security above what
 the default VirtualHost configuration negotiates).  Then, it
 just starts proxying packets between the client and the server.

 Yeah, a strange idea, that. I read that it is common to do so, but I
 wonder why such a feature was implemented in Apache... Did this
 renegotiation-to-increase-security-retroactively idea ever pass
 any security approval?
 A pity that such an attack exists, but maybe this just proves
 `never design a crypto system' - sometimes security issues are
 really surprising :)

It was (wrongly) assumed that SSL certificate authentication could be
plugged in at precisely the same level as the other authentication
systems already in place, Basic and Digest.  See below.

 One way to deal with this would be to enforce that all access
 to resources on the same webserver require the same security
 cloak -- that is, have the server demand a certificate during
 the initial negotiation, so that *all* data that comes in is
 authenticated as being from that identity.

 Yes, isn't this how TLS is intended to be used and the `add a
 certificate based on a directory' just some hack because the
 user interfaces are as they are (and that are passwords and
 BasicAuth when it comes to HTTP/HTTPS)?

TLS is authentication-agnostic.  The server does not need to identify
itself, it's only through common usage that there's any requirement
for servers to have certificates at all.  So the first part isn't
precisely true -- it's not how TLS is intended to be used, it's the
best practice for using TLS.

Requiring a certificate based on the URI requested (restated from your
'add a certificate based on a directory', because there are many
servers that don't actually operate on directories, instead
virtualizing their URI spaces) is, indeed, a hack, for the reason you
express.  However, this hack was based on an assumed property of SSL
(and later TLS) that was never investigated until last year: that there
is no way for a man in the middle to attack in the presence of mutual
authentication.

 I thought this data injection attack fails when client
 certificates would be used correctly.

 It does, in the event that the server configuration does not allow for
 non-client-certificated connections in any manner, in any way, for any
 purpose.  THAT is when client certificates are used correctly.
 [...]
 If you get rid of the 'allow non-authenticated SSL clients'
 part of the policy on that server, you do away with the
 problem.
 [...]
 In fact, if this [client-] certificate is presented during the initial
 negotiation, it is impossible to perform this MITM attack.

 ok. Thank you for clarifying that.
 So as you wrote in the other mail, it is the `common practice'
 that led to the problem.
 The fix IETF is preparing maybe is less a bugfix than a new
 feature - protecting a bit more, even if authentication is used
 wrongly.

It's definitely a bugfix, as it is fixing a property of the protocol
which was originally intended -- that there can be no change in either
of the endpoints without collusion (in the case of servers, a shared
session cache across a server farm is 'collusion'; in the case of
clients, it's typically a lot more difficult to collude, but it's
still possible).

 In Twitter's case, they don't require certificate
 authentication.  This means that they will never be able to
 prevent this attack without disabling renegotiation until the
 Secure Renegotiation Indicator draft standard is implemented on
 both the servers and the clients.

 And yes: the problem is caused by TLS misuse, but the reason
 for the misuse isn't on the server side.

 Yes, the MITM was never able to decrypt the password, but fooled
 the server into doing so and publishing the result. Well... strictly
 speaking, I think the bug is that the result (the password) was
 published, which I think is a server bug but not a TLS bug (one
 which just surfaced because no one had the idea for this exploit
 before).

The MITM couldn't decrypt the session, but relied on a property of the
server to cause it to publish the Basic-encoded username/password
combination.  This is a side effect of the request, but this side
effect can be completely devastating.

 In summary, there are many aspects that work
 hand-in-hand: TLS does not differentiate the initial negotiation from
 subsequent renegotiations, HTTP `assumes' it is always talking to the
 same peer, and passwords are replayable or clear text as with
 BasicAuth; best probably is to improve TLS AND stop using
 BasicAuth in favor of Digest Authentication AND make a 

Re: impact of client certificates to re-negotiation attack

2010-01-13 Thread Steffen DETTMER
* aerow...@gmail.com wrote on Tue, Jan 12, 2010 at 12:29 -0800:
 On Tue, Jan 12, 2010 at 3:12 AM, Steffen DETTMER 
 The problem is this:
 
 The attacker makes a connection to a TLS-enabled server,
 sending no certificate.  It sends a command that, for whatever
 reason, needs additional privilege (in Apache's case, Directory
 and Location clauses can require additional security above what
 the default VirtualHost configuration negotiates).  Then, it
 just starts proxying packets between the client and the server.

Yeah, a strange idea, that. I read that it is common to do so, but I
wonder why such a feature was implemented in Apache... Did this
renegotiation-to-increase-security-retroactively idea ever pass
any security approval?
A pity that such an attack exists, but maybe this just proves
`never design a crypto system' - sometimes security issues are
really surprising :)

 One way to deal with this would be to enforce that all access
 to resources on the same webserver require the same security
 cloak -- that is, have the server demand a certificate during
 the initial negotiation, so that *all* data that comes in is
 authenticated as being from that identity.

Yes, isn't this how TLS is intended to be used and the `add a
certificate based on a directory' just some hack because the
user interfaces are as they are (and that are passwords and
BasicAuth when it comes to HTTP/HTTPS)?

 I thought this data injection attack fails when client
 certificates would be used correctly.
 
 It does, in the event that the server configuration does not allow for 
 non-client-certificated connections in any manner, in any way, for any 
 purpose.  THAT is when client certificates are used correctly.
[...]
 If you get rid of the 'allow non-authenticated SSL clients'
 part of the policy on that server, you do away with the
 problem.
[...]
 In fact, if this [client-] certificate is presented during the initial 
 negotiation, it is impossible to perform this MITM attack.

ok. Thank you for clarifying that.
So as you wrote in the other mail, it is the `common practice'
that led to the problem.
The fix IETF is preparing maybe is less a bugfix than a new
feature - protecting a bit more, even if authentication is used
wrongly.

 In Twitter's case, they don't require certificate
 authentication.  This means that they will never be able to
 prevent this attack without disabling renegotiation until the
 Secure Renegotiation Indicator draft standard is implemented on
 both the servers and the clients.
 
 And yes: the problem is caused by TLS misuse, but the reason
 for the misuse isn't on the server side.

Yes, the MITM was never able to decrypt the password, but fooled
the server into doing so and publishing the result. Well... strictly
speaking, I think the bug is that the result (the password) was
published, which I think is a server bug but not a TLS bug (one
which just surfaced because no one had the idea for this exploit
before).

In summary, there are many aspects that work
hand-in-hand: TLS does not differentiate the initial negotiation from
subsequent renegotiations, HTTP `assumes' it is always talking to the
same peer, and passwords are replayable or clear text as with
BasicAuth; best probably is to improve TLS AND stop using
BasicAuth in favor of Digest Authentication AND make a dedicated
login request/response at the beginning of each connection, in the
hope that all of this together increases security best :)

 The point I wanted to make in that last paragraph is: the users
 can't figure out how to generate keys, much less submit
 requests to CAs.

Maybe something simpler than CAs could be used. Servers already
store a password for each account, so why shouldn't they be able to
store a hash/signature for each account? For example, the browser
creates a self-signed certificate (some simple UI to fill out the
DN only), and the server receives it and stores this sig like a
password? Or does this count as `designing a cryptosystem' already?
Also, the server could send back a signed cert, signed in a way
that the server recognizes but without any trust for anyone else,
like a private CA (it would not even be needed to publish its
certificate, if any :))?

Using (real) CAs could have disadvantages when it comes to
privacy or when someone wants to use pseudonyms, I think.
Difficult topic. Good to know that experts (like you) are
working on it.

 This reduces the provisioning of client certificates, and thus
 reduces the market.  What's needed is a useful mechanism and
 interface for managing these things, and nobody's created that
 yet.

But outside SSL/TLS such things exist...
Maybe in the TLS world it is so uncommon because public CAs and
some X.500-style `global directory ideas' are so common?

 I hope I've helped you understand a bit more clearly.  If you
 have more questions, don't hesitate to ask. :)

Yes, you helped me a lot; it was a great lecture. Thank you very
much for once again giving me free lessons with so much
information!!

oki,

Steffen


Re: impact of client certificates to re-negotiation attack (was: Re: Re-negotiation handshake failed: Not accepted by client!?)

2010-01-12 Thread aerowolf

Responses inline. :)

On Tue, Jan 12, 2010 at 3:12 AM, Steffen DETTMER steffen.dett...@ingenico.com 
wrote:

Hi,

thank you too for the detailed explanation. But the impact on
the client certificates (and its correct validation etc) is not
clear to me (so I ask inline in the second half of this mail).

* Kyle Hamilton wrote on Mon, Jan 11, 2010 at 14:28 -0800:

The most succinct answer is this: the server and client
cryptographically only verify the session that they renegotiate
within at the time of initial negotiation, and never verify
that they're the same two parties that the session was between
at the time of renegotiation.

[...]

The worst thing about this attack is that it provides no means
for either the client or server to detect it.


As I understood it, TLS implementations are not responsible for
authorizing a peer, just for cryptographically ensuring that its
claimed identification is authentic.


This is correct.  The problem is that there is an MITM capability that the 
protocol allows for, which it should not have, and the reason is that the prior 
cryptographic session is not cryptographically authenticated by the new 
credentials.


This is the same question I asked in my other reply to David's
mail:

Is it by design impossible to make an application using
(optionally a future version of) OpenSSL to verify that e.g. the
`Subject' of the peer's certificate remains constant (and
authorized for the requested service)?


No, for a couple reasons.

1) The IETF has issued a Last Call for the renegotiation-indication proposed 
standard.
2) It is possible to register a callback function in the SSL or SSL_CTX object, 
which will iterate every certificate that is presented during a renegotiation.

#1 means that the next version of OpenSSL will almost certainly fix the 
prefix-injection attack scenario.

#2 means that you can see *all* of the certificates that the peer presents, and 
the return value of the function tells the library what to do now that you've 
seen it -- continue the connection, or abort it.
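Inside the library that lives in C (the callback registered on the SSL or SSL_CTX object); at its core, though, the check Steffen is after boils down to "did the peer's certificate change between handshakes?". A toy stand-in for that comparison (all file names and subjects invented; real code would do this inside the verify callback):

```shell
# Two self-signed certs stand in for 'cert seen at the first handshake'
# vs 'cert seen at renegotiation'.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout first.key -out first.pem -subj "/CN=legitimate-peer" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout second.key -out second.pem -subj "/CN=someone-else" 2>/dev/null

first=$(openssl x509 -in first.pem -noout -subject)
second=$(openssl x509 -in second.pem -noout -subject)

# The callback's return value would continue or abort the connection:
if [ "$first" = "$second" ]; then
  echo "subject unchanged - continue"
else
  echo "subject changed - abort the connection"
fi
```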


The client will receive the server's correct certificate, the
same way it expects to. The server will receive either the
client's correct certificate or no certificate (as the client
decides) the same way it expects to. There is no way to
identify this attack at the TLS protocol level.


But how can this be? Wouldn't the TLS protocol level need to see
and verify the client certificate? On renegotiation, are no callbacks
with the new subject possible, to check whether this client
certificate (authenticated by the TLS implementation) is authorized?


The problem is this:

The attacker makes a connection to a TLS-enabled server, sending no 
certificate.  It sends a command that, for whatever reason, needs additional 
privilege (in Apache's case, Directory and Location clauses can require 
additional security above what the default VirtualHost configuration 
negotiates).  Then, it just starts proxying packets between the client and the 
server.

When the server receives the request for a higher-authentication-required
resource, it sends a HelloRequest to initiate a renegotiation.  The packets, as
they are proxied as-is to the client, are not modified by the attacker in any
way.  This means that the renegotiation is going to negotiate for a
client certificate, and the client isn't going to know that there's any problem
in doing so.

One way to deal with this would be to enforce that all access to resources on the same 
webserver require the same security cloak -- that is, have the server demand 
a certificate during the initial negotiation, so that *all* data that comes in is 
authenticated as being from that identity.


I thought this data injection attack fails when client
certificates would be used correctly.


It does, in the event that the server configuration does not allow for 
non-client-certificated connections in any manner, in any way, for any purpose.  THAT is 
when client certificates are used correctly.


Does it fail or is it possible?


This attack fails in the presence of client certificates if and only if there 
is no means to make a connection with no authentication, submit data, and then 
have the server request additional authentication.


But then an attacker needs a valid client certificate, right?


If the server requires a valid client certificate on initial negotiation, and 
the attacker has a valid client certificate, then the attacker is limited to 
the privileges authorized to that identity and would have the same access 
privileges as a legitimate user.  This attack is used and is useful in the 
event that different resources on the same server require different levels of 
authentication, *including no authentication*.  If you get rid of the 'allow 
non-authenticated SSL clients' part of the policy on that server, you do away 
with the problem.
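In Apache mod_ssl terms, getting rid of the 'allow non-authenticated SSL clients' part means moving the requirement from per-Directory scope up to the virtual host, so the certificate is demanded in the initial handshake. A sketch, not a complete configuration (the CA file path is a placeholder):

```apache
<VirtualHost *:443>
    SSLEngine on
    # Demand a client certificate in the initial handshake for everything,
    # instead of renegotiating per <Directory>/<Location>:
    SSLVerifyClient require
    SSLVerifyDepth  2
    SSLCACertificateFile /path/to/client-ca.pem
</VirtualHost>
```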


But then in turn, by definition, a user presenting a valid client
certificate (and who is authorized) is not an attacker but a