This is good work, but we should be wary of getting too excited about TTLB and then declaring performance solved. Ultimately, TTLB simply dampens the impact of post-quantum by mixing in the (handshake-independent) time to do the bulk transfer. The question is whether that reflects our goals.
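
To make that dampening concrete, here is a rough back-of-the-envelope sketch in Python. Every number (handshake cost, PQ overhead, transfer time) is hypothetical, chosen only to show the shape of the effect, not taken from the paper:

    # Back-of-the-envelope: how TTLB dilutes a fixed handshake cost.
    # Every number below is hypothetical, purely for illustration.
    handshake_ms = 100.0    # assumed classical handshake + request time (~2 RTTs at 50 ms)
    pq_overhead_ms = 15.0   # assumed extra cost from larger PQ handshake messages
    transfer_ms = 500.0     # assumed time to move the response body

    ttfb_slowdown = pq_overhead_ms / handshake_ms                  # ~15%
    ttlb_slowdown = pq_overhead_ms / (handshake_ms + transfer_ms)  # ~2.5%
    print("TTFB slowdown: %.1f%%" % (100 * ttfb_slowdown))
    print("TTLB slowdown: %.1f%%" % (100 * ttlb_slowdown))

The same 15 ms of extra handshake cost reads as a 15% regression through a TTFB lens and a 2.5% regression through a TTLB lens; the transfer time is doing all of the dampening.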

In the end, the thing that matters is overall application performance, which can be complex to measure because you actually have to try that application. Metrics like TTLB, TTFB, etc., are isolated to one connection and thus easier to measure, without having to check each application one by one. But they're only valuable insofar as they predict overall application performance. For TTLB, both the magnitude and the desirability of the dampening effect are application-specific:

If your goal is transferring a large file on the backend, such that you really only care when the operation completes, then yes, TTLB is a good proxy for overall application performance. You just care about throughput in that case. Moreover, in such applications, if you are transferring a lot of data, the dampening effect not only reflects reality but is also larger.
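
Continuing the hypothetical numbers from the sketch above, the relative TTLB impact of the same fixed overhead shrinks as the transfer grows:

    # Same hypothetical 15 ms overhead on a 100 ms handshake, varying transfer time.
    for transfer_ms in (100.0, 1000.0, 10000.0):
        slowdown = 15.0 / (100.0 + transfer_ms)
        print("transfer %6.0f ms -> TTLB slowdown %.2f%%" % (transfer_ms, 100 * slowdown))
    # -> 7.50%, 1.36%, 0.15%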

However, interactive, user-facing applications are different. There, TTLB
is a poor proxy for application performance. For example, on the web,
performance is determined more by how long it takes to display a meaningful
webpage to the user. (We often call this the time to "first contentful
paint".) Now, that is a very high-level metric that is impacted by all
sorts of things, such as whether this is a repeat visit, page structure,
etc. So it is hard to immediately translate that back down to TLS. But it
is frequently much closer to the TTFB side of the spectrum than the TTLB
side. And indeed, we have been seeing impacts from PQ on our high-level
metrics on mobile.

There's also a pretty natural intuition for this: since there is much more
focus on latency than throughput, optimizing an interactive application
often involves trying to reduce the amount of traffic on the critical path.
The more the application does so, the less accurate TTLB's dampening effect
is, and the closer we trend towards TTFB. (Of course, some optimizations in
this space involve making fewer connections, etc. But the point here was to
give a rough intuition.)

On Thu, Mar 7, 2024 at 2:58 PM Deirdre Connolly <durumcrustu...@gmail.com>
wrote:

> "At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web
> (MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a
> metric for assessing the total impact of data-heavy, quantum-resistant
> algorithms such as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our
> paper shows that the new algorithms will have a much lower net effect on
> connections that transfer sizable amounts of data than they do on the TLS
> 1.3 handshake itself."
>
>
> https://www.amazon.science/blog/delays-from-post-quantum-cryptography-may-not-be-so-bad
>
> ¹
> https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections/