Hello,

Recently I've become aware that "TLS fingerprinting" is a thing. I
understand it has been deployed by Google, Cloudflare, Apple and others to
(at a guess) "authenticate" TLS clients. For example, I understand that
certain Google Android cloud services refuse to interop with anything that
doesn't look like openssl-1.1.1 with a specific configuration. That is very
disappointing.

GREASE is ineffective at defeating fingerprinting -- and that was never its
goal -- because fingerprinting processes ("JA3" seems popular?) actively
strip out the published GREASE code points whenever they encounter them.
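
To illustrate, here is a rough Python sketch of the JA3-style normalisation
I mean. It is not the real JA3 code, just my reading of how it behaves, and
the code points in the example are made up:

    import hashlib

    # RFC 8701 GREASE code points: 0x0A0A, 0x1A1A, ..., 0xFAFA
    GREASE = {0x0A0A + 0x1010 * i for i in range(16)}

    def ja3_like(version, ciphers, extensions, groups, point_formats):
        """Join the fields JA3-style, drop GREASE values, then hash."""
        def keep(values):
            return "-".join(str(v) for v in values if v not in GREASE)
        s = ",".join([str(version), keep(ciphers), keep(extensions),
                      keep(groups), keep(point_formats)])
        return hashlib.md5(s.encode("ascii")).hexdigest()

    # Two hellos that differ only in their GREASE values collapse to the
    # same fingerprint:
    a = ja3_like(771, [0x1A1A, 4865, 4866], [0x2A2A, 0, 10], [29, 23], [0])
    b = ja3_like(771, [0xFAFA, 4865, 4866], [0x0A0A, 0, 10], [29, 23], [0])
    assert a == b

So randomising the GREASE values buys nothing against this kind of matching.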

But it seems the spirit of GREASE was to keep the protocol from rusting
shut, and here we have another case of the protocol being defined by
implementation rather than by the standard: TLS implementations now have to
reproduce a fingerprint accidentally established by other implementations,
or risk interop failure. We now receive defect reports asking us to
determine and identically reproduce the behaviour of old OpenSSL versions.
This is a terrible situation -- cipher suites, extensions, and named groups
have all rusted shut.

I'd like to ask whether there is any latent knowledge in the WG:

1. why do extensions not have a defined order (such as ascending code point
order) to reduce this as a source of fingerprinting? I'm aware of the
specific ordering constraint on the TLS1.3 pre_shared_key extension (its
binders mean it has to come last), but as far as I know that is the only
time extension ordering has ever been specified; a rough sketch of what I
mean is below, after question 2. It seems odd that TLS1.3 brought such a
variety of privacy improvements, but nothing here.

2. what's going on with appendix E.3 of RFC8446? There is a reference there
to [HCJC16], but the text doesn't address the main fingerprinting findings
of [HCJC16] at all.
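
For (1), this is roughly the sort of rule I had in mind -- a toy sketch
only, with the extension bodies obviously made up:

    PRE_SHARED_KEY = 41  # must stay last because of the binders (RFC 8446, 4.2.11)

    def canonical_extension_order(extensions):
        """Sort (code_point, body) pairs ascending, keeping pre_shared_key last."""
        psk  = [e for e in extensions if e[0] == PRE_SHARED_KEY]
        rest = sorted((e for e in extensions if e[0] != PRE_SHARED_KEY),
                      key=lambda e: e[0])
        return rest + psk

    hello = [(43, b"..."), (0, b"example.com"), (10, b"..."),
             (41, b"binders"), (13, b"...")]
    print([cp for cp, _ in canonical_extension_order(hello)])
    # -> [0, 10, 13, 43, 41]

Every client following such a rule would present the same extension order
for the same extension set, which would at least remove ordering as a
distinguisher, even if the set itself still fingerprints.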

Thanks,
Joe