Can TLS be used securely, or is it flawed by design, not allowing secure use? (was: Re: Re-negotiation handshake failed: Not accepted by client!?)
Hi,

thank you for your detailed explanations. The main thing I still have not understood is whether TLS by design enforces the `bad behavior', meaning TLS cannot be used securely at all by anyone, or whether TLS merely does not enforce secure use, meaning that TLS relies on application code implementing and using it correctly and reasonably.

I moved this topic up so that this question comes first (or almost first, right after this paragraph) in the hope of getting an answer to it; the rest of this mail unfortunately got so long that it probably won't be read, because it likely is a waste of time :(

[moved up from the end]

* David Schwartz wrote on Mon, Jan 11, 2010 at 09:06 -0800:
> > If this were true, it would mean that the information Firefox shows
> > when clicking the lock icon tells nothing about the data I am about
> > to send; at most it can tell about the past, how the page was
> > loaded, and not even that reliably, because maybe it changed for
> > the last part of the page. Where is the mistake in my thinking?
>
> Correct, and to the extent TLS permits a renegotiation to reduce the
> security parameters without confirming the intention to reduce those
> parameters at the current level, TLS is broken.

Is TLS broken, or is it just used in an unreasonable way? With OpenSSL, for example, I could configure it to accept only SHA1 and 3DES (or stronger), and then I would be safe against renegotiation down to some 40-bit cipher. Isn't it a bug in the application when it does not allow me (its user) to configure this? As far as I know there is no way to tell Firefox, for instance, not to accept 40 bit.

> That is, if the two ends negotiate 1,024-bit RSA and 256-bit AES,
> then an attacker should not be able to renegotiate a lower (or
> different) security within that connection without having to break
> either 1,024-bit RSA, 256-bit AES, or one of the hard algorithms
> inside TLS itself (such as SHA1). TLS permitted an attacker to do
> this, and so was deemed broken.
Is it possible for an application to use an ideal TLS implementation in a way that at least detects this? For example, by having some OpenSSL callback be called reliably on (right after?) each renegotiation, where a web server could force the connection to shut down if the new parameters are not acceptable?

[original start of mail]

* David Schwartz wrote on Mon, Jan 11, 2010 at 09:06 -0800:
> > I think since TLS should be considered a layer, its payload should
> > not make any assumptions about it (or vice versa). But the moment
> > some application `looks at the TLS state' and tries to associate
> > this information with some data in some buffer, I think it makes a
> > mistake.
>
> Well then TLS is basically useless. A secure connection whose
> properties I cannot trust is not particularly useful. If I receive
> foo over the connection and cannot ever know where the middle o came
> from, what can I do with the foo? Answer -- nothing.

I think `trust' is not an absolute but a relative thing. Someone may trust RapidSSL-certified 40-bit-key connections enough to log in to MySpace via her own WiFi network, but not at all enough for online banking. Someone could expect to be told whenever a browser window or `tab' is operating in some extended-validation mode.

If a server uses TLS to authenticate a client, a client certificate is needed. If the server delegates both authentication and authorization to TLS (which means that such a server or HTTPS server port could not be used without a valid client certificate), then, as far as I understood, no renegotiation attack would be possible. Did I understand that correctly (or could client-certificate-based authentication be attacked as well)? As far as I understood, the renegotiation attack is based on the fact that the server does not TLS-authenticate the client but relies on the assumption that it will talk to the same client during the whole connection / HTTP session.
(I assume that the server is able to detect when a client certificate changes within a renegotiation, if a client certificate change is possible at all, and is able to refuse that. So if a server does not perform client authentication [using client certificates as TLS offers them], the server cannot know where the middle o came from.) I think that with TLS client authentication this type of renegotiation attack would not work, or would at least be detectable, because the client certificate's Subject/CN or key changes. Is this correct?

> > When using HTTP over IPsec, I think no one ever had the idea to
> > open or block URLs based on the currently used IPsec certificate...
>
> I'm not sure I get the point of your analogy.

Ohh sorry, I hope I wasn't too confused. Or I'm just wrong. Unfortunately I need a longer explanation to try to tell what I mean: similar, as I understood it, is the idea of requiring a client certificate on an existing connection when the higher-layer protocol or application level demands that, for example when HTTP-browsing a specific subdirectory. As I understand it, this is like trying to add security retroactively.
Re: Can TLS be used securely, or is it flawed by design, not allowing secure use? (was: Re: Re-negotiation handshake failed: Not accepted by client!?)
Responses inline, again. :)

On Tue, Jan 12, 2010 at 2:53 AM, Steffen DETTMER <steffen.dett...@ingenico.com> wrote:

> The main thing I still have not understood is whether TLS by design
> enforces the `bad behavior', meaning TLS cannot be used securely at
> all by anyone, or whether TLS merely does not enforce secure use,
> meaning that TLS relies on application code implementing and using
> it correctly and reasonably.

You cannot simply add a security module and expect that your system is secure. The entire application, from the ground up, needs to be security-aware. This means that TLS cannot enforce that it is used correctly -- it can only negotiate keys between itself and its peer, and enforce that the cryptographic signatures on the packets are correct to prevent any mid-stream injection. (It was supposed to prevent any pre-stream injection, too, but the Apache configuration options make it possible to violate that constraint at the request of the application.)

> I moved this topic up so that this question comes first (or almost
> first, right after this paragraph) in the hope of getting an answer
> to it; the rest of this mail unfortunately got so long that it
> probably won't be read :(
>
> [moved up from the end]
>
> * David Schwartz wrote on Mon, Jan 11, 2010 at 09:06 -0800:
> > > If this were true, it would mean that the information Firefox
> > > shows when clicking the lock icon tells nothing about the data I
> > > am about to send; at most it can tell about the past, how the
> > > page was loaded, and not even that reliably, because maybe it
> > > changed for the last part of the page. Where is the mistake in
> > > my thinking?
> >
> > Correct, and to the extent TLS permits a renegotiation to reduce
> > the security parameters without confirming the intention to reduce
> > those parameters at the current level, TLS is broken.
>
> Is TLS broken, or is it just used in an unreasonable way?
> With OpenSSL, for example, I could configure it to accept only SHA1
> and 3DES (or stronger), and then I would be safe against
> renegotiation down to some 40-bit cipher. Isn't it a bug in the
> application when it does not allow me (its user) to configure this?
> As far as I know there is no way to tell Firefox, for instance, not
> to accept 40 bit.

about:config, search for 'ssl' and 'tls'. By default, Firefox 3.0+ disables 40- and 56-bit ciphers, and I know that either Firefox 3.0 or 3.5 disabled SSLv2 by default. SSLv3 and TLSv1 do not use those ciphers.

> That is, if the two ends negotiate 1,024-bit RSA and 256-bit AES,
> then an attacker should not be able to renegotiate a lower (or
> different) security within that connection without having to break
> either 1,024-bit RSA, 256-bit AES, or one of the hard algorithms
> inside TLS itself (such as SHA1). TLS permitted an attacker to do
> this, and so was deemed broken.

No, TLS did not allow an attacker to renegotiate to lower security. TLS didn't even allow a version-rollback attack to force the use of weaker ciphers. TLS *is* allowed to renegotiate to a different level of authentication, and even though I can't think of any time when it's ever used, it is possible to reduce the level of cloaking. What happened is that TLS permitted an attacker to inject arbitrary bytes at the beginning of a connection, and the HTTP servers compounded this problem by allowing different security cloaks on different resources within the same server space.

> Is it possible for an application to use an ideal TLS implementation
> in a way that at least detects this?

There is currently no way for even an ideal TLS implementation to detect this issue. This is why the IETF is working on the Secure Renegotiation Indication TLS extension, which is close to finally being released.

> For example, by having some OpenSSL callback be called reliably on
> (right after?) each renegotiation, where a web server could force
> the connection to shut down if the new parameters are not
> acceptable?

Yes.
Please see SSL_CTX_set_info_callback(3ssl).

> [original start of mail]
>
> * David Schwartz wrote on Mon, Jan 11, 2010 at 09:06 -0800:
> > > I think since TLS should be considered a layer, its payload
> > > should not make any assumptions about it (or vice versa). But
> > > the moment some application `looks at the TLS state' and tries
> > > to associate this information with some data in some buffer, I
> > > think it makes a mistake.
> >
> > Well then TLS is basically useless. A secure connection whose
> > properties I cannot trust is not particularly useful. If I receive
> > foo over the connection and cannot ever know where the middle o
> > came from, what can I do with the foo? Answer -- nothing.
>
> I think `trust' is not an absolute but a relative thing. Someone may
> trust RapidSSL-certified 40-bit-key connections enough to log in to
> MySpace via her own WiFi network, but not at all enough for online
> banking.

I define 'trust' as: you expose some point of vulnerability to the peer, with the belief and expectation that they will not abuse that vulnerability.

> Someone could expect whenever a browers