Context:

Python has a long and sad history of getting connection security
right. Modern versions of Python (>=2.7.9 and >=3.6) have a vastly
better story. But software often needs to handle running on older
versions of Python in the wild, or else connection security could be
compromised. I'm trying to understand the security implications of the
interaction between Python <2.7.9 and TLS so I don't inadvertently roll
out insecure software.

The way you specify the desired TLS protocol version (an API heavily
inspired by OpenSSL's) is to pass a protocol constant along with
additional options to control ciphers, protocol features (like
compression), etc. If you want to require TLS 1.2+, you use SSLv23 and
then mask out the older protocols, e.g. ssl.OP_NO_SSLv2 |
ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1.
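
For example (a minimal sketch assuming Python >= 2.7.9, where
ssl.SSLContext and all of these OP_NO_* constants exist and the
underlying OpenSSL is new enough to define them; certificate and
hostname handling omitted):

    import ssl

    # Start from SSLv23, i.e. "negotiate the highest mutually supported
    # version", then mask out everything older than TLS 1.2.
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.options |= (ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3 |
                    ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1)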

Python versions before 2.7.9 lacked the controls necessary to ensure
optimal security. For example, Python didn't expose constants to force
TLS versions >1.0. Instead, you had to use PROTOCOL_TLSv1 (the latest
available constant) and force TLS 1.0. Or you used SSLv23 (masking out
SSL v2 and v3, of course) and hoped the underlying crypto library could
negotiate TLS >1.0.
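
Roughly, the two pre-2.7.9 choices look like this (just a sketch; the
host/port are placeholders and certificate validation is omitted):

    import socket
    import ssl

    # Option 1: pin the connection to exactly TLS 1.0, the newest
    # protocol constant these versions expose.
    sock = socket.create_connection(("example.org", 443))
    tls10_sock = ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1)

    # Option 2: request SSLv23 and let OpenSSL negotiate upward. There
    # is no OP_NO_* masking on these versions, so keeping SSL 2/3 out
    # depends on the cipher string and/or the server's configuration.
    sock2 = socket.create_connection(("example.org", 443))
    auto_sock = ssl.wrap_socket(sock2, ssl_version=ssl.PROTOCOL_SSLv23)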

The Problem:

I'm very naive about how TLS libraries are implemented and how the TLS
handshake works. But it seems to me that software establishing secure
connections can generally perform pre- or post-filtering. In
"pre-filtering," the ClientHello message only advertises
ciphers/protocols that we want to use. In "post-filtering," you
advertise a more liberal list of ciphers and, depending on the
negotiation results/security, you either continue or drop the
connection.

Again, I'm naive, but it feels like pre-filtering is better because it
eliminates surface area for e.g. downgrade attacks. However - and this is
where the problem resides - Python <2.7.9 doesn't exactly give you the
requisite tools for adequate pre-filtering. Since the constants aren't
there, you have to use PROTOCOL_SSLv23 and "hope" that a TLS >1.0
connection is established.

Question:

Python exposes the negotiated TLS protocol version and cipher info
after the TLS handshake (the results of OpenSSL's SSL_get_version() and
SSL_get_current_cipher() functions). So it is possible to examine these
values to determine whether to proceed with the connection. My question
is: what are the dangers or concerns in doing so? I'm assuming there's
a surface area of downgrade-type attacks in play. But I'm not sure of
the specifics.
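
To make that concrete, the kind of post-handshake check I have in mind
is something like the sketch below. (The function name and allowed list
are made up; SSLSocket.version() only exists on newer releases, so on
the older interpreters in question you may only have the cipher() tuple
to work with.)

    import ssl

    def enforce_minimum_tls(ssl_sock, allowed=("TLSv1.2",)):
        # ssl_sock is an already-handshaken SSLSocket.
        protocol = ssl_sock.version()          # SSL_get_version()
        name, proto, bits = ssl_sock.cipher()  # SSL_get_current_cipher()
        if protocol not in allowed:
            ssl_sock.close()
            raise ssl.SSLError("negotiated %s; dropping connection"
                               % protocol)
        return name, bits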

e.g. on Python <2.7.9, the best we can do is use PROTOCOL_SSLv23 and
"hope" the underlying crypto library is able to negotiate TLS >1.0. But
this will advertise protocols and ciphers for TLS 1.0+ in the
ClientHello. I don't think this is ideal: I'd prefer not to advertise
client support for TLS 1.0 (and even 1.1) at all if there is no
intention of speaking these older (and known vulnerable) protocols.

If you aren't able to limit the advertisement of the TLS 1.0 and 1.1
protocols from the client, is it safe to validate the TLS-level
security from the negotiated protocol and cipher info? Is the TLS
protocol version itself sufficient, or does it need to be supplemented
with e.g. a "safe" list of ciphers?