Responses inline, again. :)

On Tue, Jan 12, 2010 at 2:53 AM, Steffen DETTMER
<steffen.dett...@ingenico.com> wrote:
> The main thing I still have not understood is whether TLS by design
> enforces the `bad behavior', meaning TLS cannot be used securely
> at all by anyone,
> - or -
> if TLS just does not enforce secure use, meaning that TLS
> relies on application code implementing and using it
> correctly and reasonably.

You cannot simply "add a security module" and expect that your system
is secure.  The entire application, from the ground up, needs to be
security-aware.  This means that TLS cannot enforce that it is used
correctly -- it can only negotiate keys between itself and its peer,
and enforce that the cryptographic signatures on the packets are
correct to prevent any mid-stream injection.  (It was supposed to
prevent any pre-stream injection, too, but the Apache configuration
options make it possible to violate that constraint at the request of
the application.)

> I moved this topic up to have this question first (or almost
> first, following right after this paragraph) in the hope of getting
> an answer to it; the rest of this mail unfortunately got so long
> that it probably cannot be read - sorry if it's a waste of time :(
>
>
>
> [move up from the end]
> * David Schwartz wrote on Mon, Jan 11, 2010 at 09:06 -0800:
>> > If this would be true, this means the information firefox shows
>> > up when clicking the lock icon does not tell anything about the
>> > data I will send; at most it can tell about the past, how the
>> > page was loaded, but not reliably, because maybe it changed for
>> > the last part of the page.
>> >
>> > Where is my mistake in thinking?
>>
>> Correct, and to the extent TLS permits a renegotiation to
>> reduce the security parameters without confirming the intention
>> to reduce those parameters at the current level, TLS is broken.
>
> Is TLS broken or is it just used in an unreasonable way?
>
> With OpenSSL, for example, I could configure it to accept only
> SHA1 and 3DES (or stronger) and I would be safe from renegotiating
> down to some 40-bit cipher.
>
> Isn't it a bug in the application when it does not allow me (its
> user) to configure it? As far as I know there is, e.g., no way to
> tell Firefox not to accept 40 bit.

about:config, search for 'ssl' and 'tls'.  By default, Firefox 3.0+
disables 40- and 56-bit ciphers, and I know that either Firefox 3.0 or
3.5 disabled SSLv2 by default.  SSLv3 and TLSv1 do not use those
ciphers.

>> That is, if the two ends negotiate 1,024-bit RSA and 256-bit
>> AES, then an attacker should not be able to renegotiate a lower
>> (or different) security within that connection without having
>> to break either 1,024-bit RSA, 256-bit AES, or one of the hard
>> algorithms inside TLS itself (such as SHA1). TLS permitted an
>> attacker to do this, and so was deemed broken.

No, TLS did not allow an attacker to renegotiate to a lower security
level.  TLS didn't even allow for a "version rollback" attack to force
the use of weaker ciphers.

TLS *is* allowed to renegotiate to a different level of
authentication, and even though I can't think of any time when it's
ever used, it's possible to reduce the level of cloaking. What
happened is that TLS permitted an attacker to inject arbitrary bytes
at the beginning of a connection, and the HTTP servers compounded this
problem by allowing different security cloaks on different resources
within the same server space.

> Is it possible for an application to use an ideal TLS
> implementation in a way to at least detect this?

There is currently no way for even an ideal TLS implementation to
detect this issue.  This is why the IETF is working on the Secure
Renegotiation Indication TLS extension, which is close to finally
being released.

> Like having some OpenSSL callback be called reliably on (right
> after?) each renegotiation - where a webserver could force a
> shutdown of the connection if the new parameters are not acceptable?

Yes.  Please see SSL_CTX_set_info_callback(3ssl).
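
OpenSSL's info callback tells C code when a handshake (including a renegotiation) has completed; the application-level counterpart is to re-check the negotiated parameters after every handshake and drop the connection if they fall below local policy.  Here's a minimal sketch of that check in Python, shaped around the tuple that ssl.SSLSocket.cipher() returns; the 112-bit threshold and suite names are my own illustration, not something from this thread:

```python
MIN_SECRET_BITS = 112  # example policy threshold (an assumption)

def acceptable(cipher_info):
    """Decide whether a negotiated ciphersuite meets local policy.

    `cipher_info` has the shape of ssl.SSLSocket.cipher():
    (name, protocol_version, secret_bits).  A server would call this
    after every (re)negotiation and close the connection on False.
    """
    if cipher_info is None:                 # no handshake completed yet
        return False
    name, protocol, bits = cipher_info
    if "NULL" in name or name.startswith("EXP-"):
        return False                        # never accept null/export suites
    return bits is not None and bits >= MIN_SECRET_BITS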

> [original start of mail]
> * David Schwartz wrote on Mon, Jan 11, 2010 at 09:06 -0800:
>> > I think since TLS should be considered a layer, its payload
>> > should not make any assumptions about it (or vice versa). But in the
>> > moment some application `looks to the TLS state' and tries to
>> > associate this information to some data in some buffer, I think
>> > it makes a mistake.
>>
>> Well then TLS is basically useless. A secure connection whose
>> properties I cannot trust is not particularly useful. If I
>> receive "foo" over the connection and cannot ever know where
>> the middle "o" came from, what can I do with the "foo"? Answer
>> -- nothing.
>
> I think `trust' is not an absolute but a relative thing. Someone
> may trust rapidSSL certified 40 bit key connections as sufficient
> to log into myspace via her own Wifi network, but not at all
> sufficient for online banking.

I define 'trust' as: you expose some point of vulnerability to the
peer, with the belief and expectation that they will not abuse that
vulnerability.

> Someone could expect to be told whenever a browser window or
> `tab' is operating in some extended validation mode.

Personal opinion here: EV was what Verisign *used* to require and
provide, back in 1995 when they were the first and only CA that
Netscape included in Navigator betas.

The problem is that SSL and TLS are *far* too useful and general to
require the services of a public, commercial CA.

> If a server uses TLS to authenticate a client, a client
> certificate is needed. If the server delegates the authentication
> and authorisation both to TLS (which means, that such a server or
> HTTPS server port could not be used without a valid client
> certificate), as far as I understood no renegotiation attack
> would be possible.

This is correct *only* if mutual authentication is done on the initial
negotiation.  Otherwise, the server accepts an anonymous connection,
receives anonymous bytes that translate into a request for a resource
that's protected, sends a renegotiation request to the client, the
client provides its certificate, and the anonymously-injected data --
in the case of Twitter, a prefix of an entire header list -- is
processed as though it's under the security cloak of the client.
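
The safe configuration described above -- demanding the client certificate on the initial handshake rather than renegotiating for it later -- looks roughly like this at the API level.  This is a sketch using Python's stdlib ssl module rather than the OpenSSL C API discussed in this thread, and the certificate file names are placeholders:

```python
import ssl

# Server-side context that requires a client certificate on the *initial*
# handshake, so no anonymous bytes are ever accepted and later "upgraded"
# to the client's security cloak by a renegotiation.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED   # refuse clients without a certificate

# Hypothetical credential paths -- supply your own:
# ctx.load_cert_chain("server.pem")            # server key + certificate
# ctx.load_verify_locations("client-ca.pem")   # CA that issued client certs
```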

> Did I understand correctly (or could client-certificate-based
> authentication be attacked as well)?

See above.

> As far as I understood, the renegotiation attack is based on the fact
> that the server does not TLS-authenticate the client but
> relies on the assumption that it will talk to the same client
> during the whole connection / HTTP session.

Correct.

> (I assume that the server is able to detect when a client
> certificate changes within renegotiation, if a client
> certificate change is possible at all - and is able to refuse
> that. So if a server does not perform client authentication [using
> client certificates as TLS offers] the server cannot know where
> the middle "o" came from.).

See SSL_CTX_set_verify(3ssl).

> I think with TLS-authentication this type of renegotiation attack
> won't work or would at least be detectable because the client
> certificate's Subject/CN or key changes.
>
> Is this correct?

Client keys and certificates can expire in the middle of a session.
If the server is set up to do so, it will request a renegotiation at
that point, and will expect at the very least a new certificate and
quite possibly a new identity-verification key.  The DN (not Common
Name, the Distinguished Name) can stay the same, while referring to a
completely different principal, if they're issued through different
trust anchors.  (Client certificates can also expire if the
intermediate CA that issued them expires.)

>> > When using HTTP over IPSec, I think no one ever had the idea to
>> > open or block URLs based on the currently used IPSec
>> > certificate...
>>
>> I'm not sure I get the point of your analogy.
>
> ohh sorry. I hope I wasn't too confused. Or I'm just wrong.
> Unfortunately I need a longer explanation trying to tell what I
> mean:
>
> Similar, as I understood it, is the idea of requiring a
> client certificate on an existing connection when the higher
> layer protocol or higher application level demands that, for
> example, when HTTP browsing a specific subdirectory. As I
> understand it, this is like trying to add security retroactively.

Precisely.

> Similar for IPSec. If I configured my IPSec client/stack to
> accept anonymous clients I could not trust connections to them, even
> if over this insecure (not authenticated) connection some
> authentication was transmitted - especially when it is even
> possible to replay this authentication means. For example, even if
> a valid high-secure SSH login happened over this insecure (not
> authenticated) IPSec tunnel, it cannot be concluded from this
> that the IPSec tunnel itself is secure and authenticated. It is
> not.

Certainly, but that wasn't the point that I thought you were making...
I thought you meant that "IPsec stacks that perform certificate
authentication have never been used to allow or block URLs."  This is
false.

If you configure an IPsec stack to allow anonymous clients, you can't
trust them.  What you can do is trust the security properties of other
actions performed *within* those tunnels, since they have different
characteristics.

> So with TLS, why do webservers assume the tunnel (layer) is
> secure and authenticated if they received e.g. a password via it?

Because client certificates are too difficult for users to obtain; and
once the Secure Renegotiation RFC is published, it appears that the
tunnel actually can be trusted in that way.

> Isn't this mixing levels/layers/responsibilities in a forbidden way?

"In theory, there's no difference between theory and reality.  In
reality, there is."

Technically, TLS is supposed to ensure that the endpoint that you were
talking with cannot change without collusion between the initial
endpoint and the final endpoint, sharing key and state data.  This
guarantee was violated, so they're fixing it.

> I have to admit that I did not study the `twitter attack', but I
> assume it works because the client authentication, probably some
> password or even an HTTP protocol AuthType mechanism, is not bound
> to any random number or other client/session identification. By
> this, I assume, it also would be open to replay attacks (e.g. by a
> client-local trojan horse or so).

Twitter uses HTTP Basic authentication, over TLS.  This is one of the
weakest forms of authentication I know of.  However, the twitter
attack worked because client certificates are not well-used, and
because TLS has a flaw that allowed for the guarantee of "one endpoint
unless collusion occurs" to be violated.

> The analogy I see here is that just as a valid SSH connection over
> an unauthenticated IPSec tunnel does not mean all other things via
> this tunnel are also secure, so a valid password over an
> unauthenticated TLS connection also does not mean the whole HTTPS
> session is secure.

This is correct.  However, theory must bend before reality: This is
the way things are.

I want to change this so that client certificates are amazingly simple
to obtain, as does Wes Kussmaul, the founder of the Osmio project
(http://www.osmio.org/).  He wants to change it in a way that is
orthogonal to how I want to, but we have determined that our systems
can coexist on the same network and there's no reason that our work to
improve the certificate user interface must diverge.  (ANY improvement
to the standard certificate UIs is welcome.)

>> > It seems it is, so what do I miss / where is my mistake in
>> > thinking?
>>
>> The mistake is in thinking that any security protocol is useful
>> as a security measure on end A if the security parameters can
>> be changed by end B at any time with no notification to higher
>> levels on end A.
>
> Ohh, is it? But when such a change is possible, couldn't an
> attacker then by some means fool us into using very weak security
> parameters, for instance by MITM or injection or whatever, to
> renegotiate a weak cipher and brute-force it - or even disable
> security?    (as far as I know, theoretically SSL can be operated
> without encryption using a null cipher)

Nope.  TLS provides a resistance to "version rollback" attacks, where
an attacker tries to force the protocol negotiated to be an older,
compatibility version which only supports lesser key sizes than the
client and server both support.

In theory, TLS *can* be operated with a null cipher and null
authentication, but it can *never* (except under initial negotiation)
have a null message authentication code.  (The state of NULL-NULL-NULL
is illegal to negotiate or to enter, but it is the only valid initial
state of the connection.)

In reality, TLS clients disable low-security keysizes, as do TLS
servers.  (Again, visit about:config to configure your Firefox.
Bolded entries are user-set, and non-bolded entries are defaults.)

>> > I could find myself suddenly using a 40 bit export grade or
>> > even a NULL chipher to a different peer (key) without any
>> > notice!

Okay.  Here's the problem we're having.  You are assuming that all
keysizes are always available. They are not, according to the mutual
security policies of the server and the client.

The keysizes and the ciphers used are chosen from the intersection:
(client-supported set) ∩ (server-supported set).  This means that the
only ciphers that can be chosen are the ones that the client and the
server both support.  The server determines, from the list of ciphers
sent as being accepted by the client, the best option according to its
own security policy.  (Typically I include '@STRENGTH' in my cipher
definition list under OpenSSL, so that the highest-grade ciphersuite
that the client also supports is chosen -- @STRENGTH tells the cipher
list parser to reorder the available ciphers from strongest to
weakest.)

And Firefox disables NULL ciphers, 40-bit ciphers, and 56-bit ciphers.
Thus, they're not in the set that the client reports.  Since they're
not there, they cannot be part of the intersection, and thus are
excluded from consideration.
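
The selection rule just described can be sketched without any TLS library: take the intersection of what the client offers and what the server supports, then apply the server's preference (here, strongest first, like @STRENGTH).  The suite names and bit counts below are illustrative only:

```python
# Toy model of ciphersuite selection.  A suite is usable only if both
# sides support it; the server then picks its own preferred one.
SERVER_SUPPORTED = {"AES256-SHA": 256, "AES128-SHA": 128, "DES-CBC3-SHA": 112}

def choose_cipher(client_offer):
    common = [c for c in client_offer if c in SERVER_SUPPORTED]
    if not common:
        return None                  # handshake failure: no shared suite
    return max(common, key=lambda c: SERVER_SUPPORTED[c])
```

Because a client that never offers an export-grade suite keeps it out of the intersection, no attacker can force the connection onto one.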

(Really, please go read RFC 2246.  The specification goes into precise
detail as to how things are selected.  Also, please go get Wireshark
and run it on your local system, capturing the initial phases of an
initial negotiation.  Everything up to and including ChangeCipherSpec
is unencrypted, due to the NULL-NULL-NULL initial cipher state.  You
can then figure out what your client is offering.)

>> That could be argued to be a bug. Ideally, a renegotiation
>> should not be permitted to reduce the security parameters
>> unless, at absolute minimum, the intention to renegotiate is
>> confirmed at both ends using at least the security level
>> already negotiated.
>
> ahh ok. So I assume I'm not the only one who dislikes that
> firefox displays a lock icon on 40 bit connections :)

Firefox disallows 40-bit connections, and 56-bit connections, because
those are "export grade".  Specifically, in 2000 the United States
relaxed its export controls on cryptography (which had already moved
from the ITAR to the Commerce Department's Export Administration
Regulations) so that open-source cryptography can be freely
transmitted to and from the US.  SSLv3 came out in 1996, TLSv1 came
out in 1999, TLSv1.1 came out in 2006, and TLSv1.2 came out in 2008.
Firefox is an open-source project, as is NSS (its cryptographic and
TLS implementation).  This means that they are not required to limit
themselves to 40- or 56-bit, and the version of Firefox that I'm
writing within (3.5.7) disables anything lower than 112-bit by default
(3DES is still supported, which offers an effective strength of about
112 bits).

If the user wants to override the defaults, that's the user's fault
and the user's problem.

> (ohh thanks if anyone made it through all my long text)

You're welcome. :)

-Kyle H
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
