>> [openssl-dev@openssl.org - Wed Apr 25 00:33:54 2012]:
>>
>> Hi,
>>
>> 1.0.0 had this:
>> /* SSL_OP_ALL: various bug workarounds that should be rather harmless.
>>  *             This used to be 0x000FFFFFL before 0.9.7. */
>> #define SSL_OP_ALL                                      0x80000FFFL
>>
>> 1.0.1 now has:
>> #define SSL_OP_NO_TLSv1_1                               0x00000400L
>> #define SSL_OP_ALL                                      0x80000BFFL
>>
>> So that basically means that applications built with the
>> 1.0.0 headers disable TLS v1.1 support.  This causes a
>> problem talking to something that supports TLS 1.1 but
>> doesn't support TLS 1.2.
>>
> 
> Which is a problem for OpenSSL clients which will advertise TLS 1.2
> support then choke if the server tries TLS 1.1. OpenSSL servers should
> work though and end up negotiating TLS 1.0 if a client advertises
> support for TLS 1.1.

Yes, and this is the essential point. It's inappropriate for a client
to have "holes" in its capability vector: for example, to announce
TLS1.2 capability, refuse TLS1.1, and expect TLS1.0 to work out. This,
by the way, is not specific to a 1.0.0 application dynamically linked
with the 1.0.1 library. So even though it was reported as an
incompatibility between 1.0.0 and 1.0.1, it can and should just as well
be considered a genuine *post-1.0.0 bug*: specifically, that setting
SSL_OP_NO_TLSv1_1 *alone* (be it the current value or a reassigned one)
is prone to failure. The capability vector should be contiguous, e.g.
TLS1.2 through TLS1.0, TLS1.0 through SSL3, etc.
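
To make the "hole" concrete, here's a minimal sketch [not from either
source tree; the constants are copied from the headers quoted above] of
what a 1.0.0-built client effectively tells the 1.0.1 library:

    /* The 1.0.0 value of SSL_OP_ALL happens to include the bit that
     * 1.0.1 assigned to SSL_OP_NO_TLSv1_1, so a 1.0.0-built client
     * linked with the 1.0.1 library refuses TLS1.1 while still
     * announcing TLS1.2. */
    #include <stdio.h>

    #define OP_ALL_100        0x80000FFFL /* SSL_OP_ALL as of 1.0.0 */
    #define OP_NO_TLSv1_1_101 0x00000400L /* SSL_OP_NO_TLSv1_1 in 1.0.1 */

    int main(void)
    {
        long options = OP_ALL_100; /* what a 1.0.0 application passes */

        printf("TLS1.2: enabled\n"); /* no 1.0.0 SSL_OP_ALL bit disables it */
        printf("TLS1.1: %s\n",
               (options & OP_NO_TLSv1_1_101) ? "disabled" : "enabled");
        printf("TLS1.0: enabled\n");
        /* -> TLS1.2 yes, TLS1.1 no, TLS1.0 yes: non-contiguous vector */
        return 0;
    }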

This brings us to the following question. Should SSL_OP_NO_TLSv1_X
disable everything above it, or everything below it? The suggestion is
to disable everything above *if* there is something enabled below.
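
Here is a sketch of the suggested semantics [the version indexing and
the helper function are hypothetical, for illustration only, not actual
library code]:

    #include <stdio.h>

    /* Hypothetical version indices: 0 = SSL3, 1 = TLS1.0,
     * 2 = TLS1.1, 3 = TLS1.2 */
    #define NVER 4

    /* no[i] != 0 means the application set the SSL_OP_NO_* flag for
     * version i; enabled[] receives the effective capability vector */
    static void effective_versions(const int no[NVER], int enabled[NVER])
    {
        int i, lowest = -1;

        for (i = 0; i < NVER; i++) {
            enabled[i] = !no[i];
            if (lowest < 0 && !no[i])
                lowest = i; /* lowest version left enabled */
        }

        /* suggested rule: a disabled version drags down everything
         * above it, *if* there is something enabled below it */
        for (i = 0; i < NVER; i++)
            if (no[i] && lowest >= 0 && lowest < i) {
                int j;
                for (j = i; j < NVER; j++)
                    enabled[j] = 0;
                break;
            }
    }

    int main(void)
    {
        /* SSL_OP_NO_TLSv1_1 alone: vector becomes SSL3..TLS1.0 */
        int no[NVER] = { 0, 0, 1, 0 }, enabled[NVER], i;

        effective_versions(no, enabled);
        for (i = 0; i < NVER; i++)
            printf("version %d: %s\n", i,
                   enabled[i] ? "enabled" : "disabled");
        return 0;
    }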

This has a side effect that might be considered counter-intuitive. If a
[1.0.1] application passes SSL_OP_NO_TLSv1 *alone*, it would end up
negotiating SSLv3 [provided it's not compiled with OPENSSL_NO_SSL3 and
OPENSSL_NO_SSL2], while the programmer may well have meant to disable
TLS1.0 in favor of *higher* versions, not *lower* ones. But note that it
works this way in 1.0.0 and prior versions. With this in mind, is it
reasonable to demand that a [post-1.0.0] application programmer who
wants to end up above TLS1.0 pass
SSL_OP_NO_TLSv1|SSL_OP_NO_SSLv3|SSL_OP_NO_SSLv2, not SSL_OP_NO_TLSv1
alone?
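
In code, under that demand, a post-1.0.0 application that wants TLS1.1
and up would look something like this [the function name is mine;
SSL_OP_NO_SSLv2/SSL_OP_NO_SSLv3 are the existing option names]:

    #include <openssl/ssl.h>

    /* ctx is assumed to come from SSL_CTX_new(SSLv23_client_method());
     * disabling TLS1.0 *and* everything below it leaves a contiguous
     * TLS1.1-and-up capability vector */
    static void restrict_to_tls11_and_up(SSL_CTX *ctx)
    {
        SSL_CTX_set_options(ctx,
                SSL_OP_NO_TLSv1 | SSL_OP_NO_SSLv3 | SSL_OP_NO_SSLv2);
    }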

If we can agree on this and fix the bug accordingly, then we can ask
ourselves whether it's actually necessary to reassign SSL_OP_NO_TLSv1_1
at all. What if we don't? Then 1.0.0's SSL_OP_ALL would disable TLS1.2
*and* TLS1.1 [it sets the 0x400 bit, i.e. SSL_OP_NO_TLSv1_1, which under
the suggested rule also drags down TLS1.2]. That would fix the
interoperability problem caused by the "hole" in the client capability,
but would prevent a 1.0.0 application from taking advantage of the more
secure protocols. A trade-off. As a 1.0.0 application is not in a
position to expect anything above TLS1.0, the trade-off can just as well
be resolved in favor of interoperability. Note that there is no such
trade-off in the 1.0.1 application context, because 1.0.1's SSL_OP_ALL
doesn't disable anything above TLS1.0.
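
The bit arithmetic behind this, as a quick check [values from the
quoted headers]:

    #include <stdio.h>

    int main(void)
    {
        long all_100 = 0x80000FFFL;   /* SSL_OP_ALL in 1.0.0 */
        long all_101 = 0x80000BFFL;   /* SSL_OP_ALL in 1.0.1 */
        long no_tls1_1 = 0x00000400L; /* SSL_OP_NO_TLSv1_1 in 1.0.1 */

        /* 1.0.1 carved the new bit out of the old SSL_OP_ALL... */
        printf("0x%08lX\n", all_100 & ~no_tls1_1); /* 0x80000BFF */
        /* ...so 1.0.1's SSL_OP_ALL leaves TLS1.1/TLS1.2 alone, while
         * 1.0.0's still sets the SSL_OP_NO_TLSv1_1 bit */
        printf("%d %d\n", (all_100 & no_tls1_1) != 0,
                          (all_101 & no_tls1_1) != 0); /* prints: 1 0 */
        return 0;
    }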

Note that I'm not suggesting we leave SSL_OP_NO_TLSv1_1 as is. I'm only
saying we *might* as well do that [leave it as is, that is]. Reassigning
it would allow us to squeeze in a flag controlling the choice between
the 0/n and 1/n-1 record splits ;-)


