On Thu, Dec 12, 2019 at 6:51 AM Hubert Kario <hka...@redhat.com> wrote:

> On Wednesday, 11 December 2019 18:06:19 CET, David Benjamin wrote:
> > On Wed, Dec 11, 2019 at 9:22 AM Ilari Liusvaara <ilariliusva...@welho.com>
> > wrote:
> >
> >> On Wed, Dec 11, 2019 at 02:21:48PM +0100, Hubert Kario wrote:
> >>> On Saturday, 7 December 2019 11:20:17 CET, Ilari Liusvaara wrote:
> >>>>
> >>>> One test I just tried:
> >>>>
> >>>> - Smartcard capable of raw RSA.
> >>>> - OpenSC PKCS#11 drivers.
> >>>> - Firefox ESR 68
> >>>> - Server supports TLS 1.3 (Accept RSA PKCS#1v1.5 client signatures is
> >>>>   enabled[2]).
> >>>>
> >>>> Result: Failed. Client hits internal error code
> >>>> SEC_ERROR_LIBRARY_FAILURE [3].
> >>>
> >>> That doesn't match my understanding of how NSS works – AFAIK, NSS (and,
> >>> as such, Firefox) will try both raw RSA and rsa-pss signatures with the
> >>> token, depending on what kind of algorithms the token advertises.
> >>>
> >>> I think the issue was the old version of OpenSC, new versions can do
> >>> rsa-pss
> >>> with rsa-raw:
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1595626
> >>> https://github.com/OpenSC/OpenSC/pull/1435
> >>
> >> Ok, upgrading OpenSC to git master (0.20.0-rc34-2-gee78b0b8) makes
> >> client certificates in TLS 1.3 in Firefox work with that card (it works
> >> even if "accept RSA PKCS#1 v1.5 client signatures" is disabled on the
> >> server side).
> >>
> >> There is apparently no release with the fix; one needs 0.20-rcX or a
> >> recent git master.
> >>
> >
> > Chrome likewise tries to polyfill PSS support out of raw RSA when the
> > underlying keys support it, but PSS support is still a problem. In
> > particular, I believe TPM 1.2 can neither do RSA-PSS nor polyfill it with
> > raw padding. (Oddly, the spec does reference OAEP, but signing is only
> > PKCS#1 v1.5.) TPM 2.0 can do PSS, but hardware lifecycles are long.
> > Between the negotiation ordering and the client certificate privacy flaw
> > fixed in TLS 1.3, simply saying "no TLS 1.3 for those keys" is
> > problematic. Thus, the draft. It's true that it adds some transitional
> > codepoints to TLS 1.3, but the point of TLS 1.3 was not switching to PSS.
> > That's a minor bonus on top of *much* more important changes.
> >
> > Most properties negotiated by TLS can be unilaterally updated by the
> > TLS-related components of a system. This is great because it means we can
> > deploy TLS 1.3's improvements. The long-term credentials are one of the
> > big exceptions here and, indeed, we didn't just make TLS 1.3 mandate
> > Ed25519. We wanted to maintain continuity with existing RSA keys, but
> > since it was possible to switch them to RSA-PSS we went ahead and did
> > that. Sadly, it appears that last point can be more true for server keys
> > than client keys. :-(
>
> If TLS 1.2 was looking insecure, I would be with you on this one. But
> given that TLS 1.2 can be configured to be as secure as TLS 1.3, I think
> introducing weak points to TLS 1.3, weak points we will have to live with
> for the next decade, if not two, is counter-productive and will only delay
> deployment of RSA-PSS capable HSMs. Not allowing PKCS#1 v1.5 in TLS 1.3
> puts actual pressure to replace that obsolete hardware, without exposing
> users to unnecessary risk.
>

For client certificates, TLS 1.2 cannot be configured to be as secure as
TLS 1.3. The client certificates are sent in the clear.
https://tma.ifip.org/wordpress/wp-content/uploads/2017/06/tma2017_paper2.pdf

You mentioned elsewhere that one can renegotiate, but renegotiation doesn't
work. It introduces a host of other problems, from DoS risks, to changing
the security state of a connection after the fact, to general layering
issues with an awkward public API. Some HTTP servers block renegotiation
altogether (I believe NGINX does), and some TLS stacks don't support
renegotiation as a server at all (BoringSSL and Go).

More importantly, renegotiation is not supported with HTTP/2, which is an
important improvement for performance and server load (fewer connections).
Now, the HTTP/2 spec does say the following:

> An endpoint MAY use renegotiation to provide confidentiality protection
> for client credentials offered in the handshake, but any renegotiation
> MUST occur prior to sending the connection preface. A server SHOULD
> request a client certificate if it sees a renegotiation request
> immediately after establishing a connection.

However, I'm not aware of anyone implementing this, and I do not think this
suggestion actually works. Chrome does not accept it as a client at all and
will fail the connection, nor can I see any way to accept it. An HTTP/2
client speaks as soon as the handshake completes and does not know whether
the server is going to do this. By the time the client receives a
HelloRequest, it may have sent zero, one, or many HTTP requests. This means
the fundamental ambiguities with a multiplexing protocol, as well as the
handshake/app-data interleave problems, are immediately present.
Conversely, on the server side, the client may have sent arbitrarily many
HTTP requests before it processes the HelloRequest, so the server must
either buffer them all (a DoS risk) or process them unauthenticated
(changing the security state mid-connection, which does not have the same
semantics as authentication in the initial handshake).

David
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
