Re: [TLS] Time to first byte vs time to last byte

2024-03-07 Thread Martin Thomson
Hi Panos,

I realize that TTLB might correlate well for some types of web content, but 
it's important to recognize that lots of web content is badly bloated (if you 
can tolerate the invective, this is a pretty good look at the situation, with 
numbers: https://infrequently.org/series/performance-inequality/).

I don't want to call out your employer's properties in particular, but at over 
3M and with relatively few connections, handshakes really don't play much into 
page load performance.  That might be typical, but just being typical doesn't 
mean that it's a case we should be optimizing for.

The 72K page I linked above looks very different.  There, your paper shows a 
20-25% hit on TTLB.  TTFB is likely more affected due to the way congestion 
controllers work and the fact that you never leave slow start.
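
As a rough back-of-the-envelope sketch (my own illustration, assuming an 
initial congestion window of 10 segments of 1460 bytes, doubling each round 
trip, and no loss), the number of slow-start rounds a response needs is:

    # Rough illustration only: rounds needed to deliver a response when the
    # connection never leaves slow start (initcwnd = 10 segments of 1460
    # bytes, cwnd doubles every RTT, no loss).
    def slow_start_rounds(total_bytes, mss=1460, initcwnd=10):
        cwnd, sent, rounds = initcwnd * mss, 0, 0
        while sent < total_bytes:
            sent += cwnd
            cwnd *= 2
            rounds += 1
        return rounds

    print(slow_start_rounds(72_000))      # 3 rounds for a ~72K response
    print(slow_start_rounds(3_000_000))   # 8 rounds for a ~3M page

A ~72K response is still inside the first few rounds of slow start, so extra 
handshake bytes and round trips are a visible fraction of the exchange, while 
a multi-megabyte page mostly hides them in the bulk transfer.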

Cheers,
Martin

On Fri, Mar 8, 2024, at 13:56, Kampanakis, Panos wrote:
> Thx Deirdre for bringing it up. 
> 
> David,
> 
> ACK. I think the overall point of our paper is that application 
> performance is more closely related to PQ TTLB than PQ TTFB/handshake. 
> 
> Snippet from the paper
> 
> > Google’s PageSpeed Insights [12] uses a set of metrics to measure 
> the user experience and webpage performance. The First Contentful Paint 
> (FCP), Largest Contentful Paint (LCP), First Input Delay (FID), 
> Interaction to Next Paint (INP), Total Blocking Time (TBT), and 
> Cumulative Layout Shift (CLS) metrics include this work’s TTLB along 
> with other client-side, browser application-specific execution delays. 
> The PageSpeed Insights TTFB metric measures the total time up to the 
> point the first byte of data makes it to the client. So, PageSpeed 
> Insights TTFB is like this work’s TTFB/TLS handshake time with 
> additional network delays like DNS lookup, redirect, service worker 
> startup, and request time.
> 
> Specifically about the Web, TTLB (as defined in the paper) is directly 
> related to FCP, LCP, FID, INP, TBT, CLS, which are 6 of the 7 metrics 
> in Google’s PageSpeed Insights. We don’t want to declare that TTLB is 
> the ultimate metric, but intuitively, I think it is a better indicator 
> when it comes to application performance than TTFB. 
> 
> That is not meant to understate the importance of the studies on 
> handshake performance, which were crucial for identifying the best 
> performing new KEMs and signatures. Nor is it meant to understate the 
> importance of slimming down PQ TLS 1.3 handshakes as much as possible.
> 
> Side note about Rob’s point: 
> We have not collected QUIC TTLB data yet, but I want to say that the 
> paper’s TTLB experimental results could more or less be extended to 
> QUIC by subtracting one RTT. OK, I don’t have experimental measurements 
> to prove it yet. So I will only make this claim and stop until I have 
> more data.
> 
> 
> 
> From: TLS  On Behalf Of David Benjamin
> Sent: Thursday, March 7, 2024 3:41 PM
> To: Deirdre Connolly 
> Cc: TLS@ietf.org
> Subject: RE: [EXTERNAL] [TLS] Time to first byte vs time to last byte
> 
>
> 
> This is good work, but we need to be wary of getting too excited about 
> TTLB, and then declaring performance solved. Ultimately, TTLB simply 
> dampens the impact of postquantum by mixing in the 
> (handshake-independent) time to do the bulk transfer. The question is 
> whether that reflects our goals. 
> 
> Ultimately, the thing that matters is overall application performance, 
> which can be complex to measure because you actually have to try that 
> application. Metrics like TTLB, TTFB, etc., are isolated to one 
> connection and thus easier to measure, and without checking each 
> application one by one. But they're only as valuable as they are 
> predictors of overall application performance. For TTLB, both the 
> magnitude and desirability of the dampening effect are application-specific:
> 
> If your goal is transferring a large file on the backend, such that you 
> really only care when the operation is complete, then yes, TTLB is a 
> good proxy for application system performance. You just care about 
> throughput in that case. Moreover, in such applications, if you are 
> transferring a lot of data, the dampening effect not only reflects 
> reality but is larger.
> 
> However, interactive, user-facing applications are different. There, 
> TTLB is a poor proxy for application performance. For example, on the 
> web, performance is determined more by how long it takes to display a 
> meaningful webpage to the user. (We often call this the time to "first 
> contentful paint".) Now, that is a very high-level metric that is 
> impacted by all sorts of things, such as whether this is a repeat 
> visit, page structure, etc. So it is hard to immediately translate that 
> back down to TLS. But it is frequently much 

Re: [TLS] Time to first byte vs time to last byte

2024-03-07 Thread Kampanakis, Panos
Thx Deirdre for bringing it up.

David,

ACK. I think the overall point of our paper is that application performance is 
more closely related to PQ TTLB than PQ TTFB/handshake.

Snippet from the paper

> Google’s PageSpeed Insights [12] uses a set of metrics to measure the user 
> experience and webpage performance. The First Contentful Paint (FCP), Largest 
> Contentful Paint (LCP), First Input Delay (FID), Interaction to Next Paint 
> (INP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS) metrics 
> include this work’s TTLB along with other client-side, browser 
> application-specific execution delays. The PageSpeed Insights TTFB metric 
> measures the total time up to the point the first byte of data makes it to 
> the client. So, PageSpeed Insights TTFB is like this work’s TTFB/TLS 
> handshake time with additional network delays like DNS lookup, redirect, 
> service worker startup, and request time.

Specifically about the Web, TTLB (as defined in the paper) is directly related 
to FCP, LCP, FID, INP, TBT, CLS, which are 6 of the 7 metrics in Google’s 
PageSpeed Insights. We don’t want to declare that TTLB is the ultimate metric, 
but intuitively, I think it is a better indicator when it comes to application 
performance than TTFB.

That is not meant to understate the importance of the studies on handshake 
performance, which were crucial for identifying the best performing new KEMs 
and signatures. Nor is it meant to understate the importance of slimming down 
PQ TLS 1.3 handshakes as much as possible.

Side note about Rob’s point:
We have not collected QUIC TTLB data yet, but I want to say that the paper’s 
TTLB experimental results could more or less be extended to QUIC by subtracting 
one RTT. OK, I don’t have experimental measurements to prove it yet. So I will 
only make this claim and stop until I have more data.



From: TLS  On Behalf Of David Benjamin
Sent: Thursday, March 7, 2024 3:41 PM
To: Deirdre Connolly 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] Time to first byte vs time to last byte




This is good work, but we need to be wary of getting too excited about TTLB, 
and then declaring performance solved. Ultimately, TTLB simply dampens the 
impact of postquantum by mixing in the (handshake-independent) time to do the 
bulk transfer. The question is whether that reflects our goals.

Ultimately, the thing that matters is overall application performance, which 
can be complex to measure because you actually have to try that application. 
Metrics like TTLB, TTFB, etc., are isolated to one connection and thus easier 
to measure, and without checking each application one by one. But they're only 
as valuable as they are predictors of overall application performance. For 
TTLB, both the magnitude and desirability of the dampening effect are 
application-specific:

If your goal is transferring a large file on the backend, such that you really 
only care when the operation is complete, then yes, TTLB is a good proxy for 
application system performance. You just care about throughput in that case. 
Moreover, in such applications, if you are transferring a lot of data, the 
dampening effect not only reflects reality but is larger.

However, interactive, user-facing applications are different. There, TTLB is a 
poor proxy for application performance. For example, on the web, performance is 
determined more by how long it takes to display a meaningful webpage to the 
user. (We often call this the time to "first contentful paint".) Now, that is a 
very high-level metric that is impacted by all sorts of things, such as whether 
this is a repeat visit, page structure, etc. So it is hard to immediately 
translate that back down to TLS. But it is frequently much closer to the TTFB 
side of the spectrum than the TTLB side. And indeed, we have been seeing 
impacts from PQ to our high-level metrics on mobile.

There's also a pretty natural intuition for this: since there is much more 
focus on latency than throughput, optimizing an interactive application often 
involves trying to reduce the amount of traffic on the critical path. The more 
the application does so, the less accurate TTLB's dampening effect is, and the 
closer we trend towards TTFB. (Of course, some optimizations in this space 
involve making fewer connections, etc. But the point here was to give a rough 
intuition.)

On Thu, Mar 7, 2024 at 2:58 PM Deirdre Connolly 
mailto:durumcrustu...@gmail.com>> wrote:
"At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web 
(MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a metric 
for assessing the total impact of data-heavy, quantum-resistant algorithms such 
as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our paper shows that 
the new algorithms will have a much 

Re: [TLS] Next steps for key share prediction

2024-03-07 Thread Watson Ladd
On Thu, Mar 7, 2024 at 2:56 PM David Benjamin  wrote:
>
> Hi all,
>
> With the excitement about, sometime in the far future, possibly transitioning 
> from a hybrid, or to a to-be-developed better PQ algorithm, I thought it 
> would be a good time to remind folks that, right now, we have no way to 
> effectively transition between PQ-sized KEMs at all.
>
> At IETF 118, we discussed draft-davidben-tls-key-share-prediction, which aims 
> to address this. For a refresher, here are some links:
> https://davidben.github.io/tls-key-share-prediction/draft-davidben-tls-key-share-prediction.html
> https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-key-share-prediction-00
> (Apologies, I forgot to cut a draft-01 with some of the outstanding changes 
> in the GitHub, so the link above is probably better than draft-00.)
>
> If I recall, the outcome from IETF 118 was two-fold:
>
> First, we'd clarify in rfc8446bis that the "key_share first" selection 
> algorithm is not quite what you want. This was done in 
> https://github.com/tlswg/tls13-spec/pull/1331
>
> Second, there was some discussion over whether what's in the draft is the 
> best way to resolve a hypothetical future transition, or if there was another 
> formulation. I followed up with folks briefly offline afterwards, but an 
> alternative never came to fruition.
>
> Since we don't have another solution yet, I'd suggest we move forward with 
> what's in the draft as a starting point. (Or if this email inspires folks to 
> come up with a better solution, even better! :-D) In particular, whatever the 
> rfc8446bis guidance is, there are still TLS implementations out there with 
> the problematic selection algorithm. Concretely, OpenSSL's selection 
> algorithm is incompatible with this kind of transition. See 
> https://github.com/openssl/openssl/issues/22203

Is that asking whether or not we want adoption? I want adoption.
>
> Given that, I don't see a clear way to avoid some way to separate the old 
> behavior (which impacts the existing groups) from the new behavior. The draft 
> proposes to do it by keying on the codepoint, and doing our future selves a 
> favor by ensuring that the current generation of PQ codepoints are ready for 
> this. That's still the best solution I see right now for this situation.
>
> Thoughts?

I think letting the DNS signal also be an indicator that the server
implements the correct behavior would be a good idea.

Sincerely,
Watson


Astra mortemque praestare gradatim



[TLS] Next steps for key share prediction

2024-03-07 Thread David Benjamin
Hi all,

With the excitement about, sometime in the far future, possibly
transitioning from a hybrid, or to a to-be-developed better PQ algorithm, I
thought it would be a good time to remind folks that, right now, *we have
no way to effectively transition between PQ-sized KEMs at all*.

At IETF 118, we discussed draft-davidben-tls-key-share-prediction, which
aims to address this. For a refresher, here are some links:
https://davidben.github.io/tls-key-share-prediction/draft-davidben-tls-key-share-prediction.html
https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-key-share-prediction-00
(Apologies, I forgot to cut a draft-01 with some of the outstanding changes
in the GitHub, so the link above is probably better than draft-00.)

If I recall, the outcome from IETF 118 was two-fold:

First, we'd clarify in rfc8446bis that the "key_share first" selection
algorithm is not quite what you want. This was done in
https://github.com/tlswg/tls13-spec/pull/1331

Second, there was some discussion over whether what's in the draft is the
best way to resolve a hypothetical future transition, or if there was
another formulation. I followed up with folks briefly offline afterwards,
but an alternative never came to fruition.

Since we don't have another solution yet, I'd suggest we move forward with
what's in the draft as a starting point. (Or if this email inspires folks
to come up with a better solution, even better! :-D) In particular,
whatever the rfc8446bis guidance is, there are still TLS implementations
out there with the problematic selection algorithm. Concretely, OpenSSL's
selection algorithm is incompatible with this kind of transition. See
https://github.com/openssl/openssl/issues/22203
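
To make that concrete, here is a minimal sketch of the two server behaviors 
(illustrative Python with made-up group names; this is not the draft's 
procedure and not OpenSSL's code):

    # Illustrative only: contrast a "key_share first" server with one that
    # selects by its own preference across supported_groups.
    SERVER_PREFERENCE = ["new_pq_group", "x25519_mlkem768", "x25519"]

    def select_key_share_first(key_shares, supported_groups):
        # Problematic: only key_shares is consulted, so the client's
        # prediction effectively overrides the server's preferences.
        for group in key_shares:
            if group in SERVER_PREFERENCE:
                return group, False                  # no HelloRetryRequest
        return None, False

    def select_by_preference(key_shares, supported_groups):
        # Better: pick by server preference over supported_groups, and send
        # a HelloRetryRequest if the chosen group has no key_share yet.
        for group in SERVER_PREFERENCE:
            if group in supported_groups:
                return group, group not in key_shares
        return None, False

    print(select_key_share_first(["x25519"], ["x25519", "new_pq_group"]))
    # ('x25519', False): both sides support the newer group but never use it
    print(select_by_preference(["x25519"], ["x25519", "new_pq_group"]))
    # ('new_pq_group', True): preferred group chosen at the cost of an HRR

With the first behavior deployed, the pair never moves off the predicted group 
even though both support the newer one, which is the kind of transition failure 
the draft is trying to avoid.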

Given that, I don't see a clear way to avoid *some* way to separate the old
behavior (which impacts the existing groups) from the new behavior. The
draft proposes to do it by keying on the codepoint, and doing our future
selves a favor by ensuring that the current generation of PQ codepoints are
ready for this. That's still the best solution I see right now for this
situation.

Thoughts?

David


Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread D. J. Bernstein
Bas Westerbaan writes:
> We think it's worth it now, but of course we're not going to keep
> hybrids around when the CRQC arrives.

I think this comment illustrates an important ambiguity in the "CRQC"
terminology. Consider the scenario described in the following paragraph
from https://blog.cr.yp.to/20240102-hybrid.html:

   Concretely, think about a demo showing that spending a billion
   dollars on quantum computation can break a thousand X25519 keys.
   Yikes! We should be aiming for much higher security than that! We
   don't even want a billion-dollar attack to be able to break _one_ key!
   Users who care about the security of their data will be happy that we
   deployed post-quantum cryptography. But are the users going to say
   "Let's turn off X25519 and make each session a million dollars
   cheaper to attack"? I'm skeptical. I think users will need to see
   much cheaper attacks before agreeing that X25519 has negligible
   security value.

It's easy to imagine the billion-dollar demo being important as an
advertisement for the quantum-computer industry but having negligible
impact on cryptography:

   * Hopefully we'll have upgraded essentially everything to
 post-quantum crypto before then.

   * It's completely unclear that the demo should or will prompt users
 to turn off hybrids.

   * On the attack side, presumably real attackers will have been
 carrying out quantum attacks before the public demo happens.

For someone who understands what "CRQC" is supposed to mean: Is such a
demo "cryptographically relevant"? Is the concept of relevance broad
enough that Google's earlier demonstration of "quantum supremacy" also
counts as "cryptographically relevant", so CRQCs are already here?

---D. J. Bernstein



Re: [TLS] [EXT] Re: ML-KEM key agreement for TLS 1.3

2024-03-07 Thread Blumenthal, Uri - 0553 - MITLL
I would like to see Deirdre’s request satisfied, and a full number assigned. 

Regards,
Uri

> On Mar 7, 2024, at 09:19, Salz, Rich  
> wrote:
> 
> 
> Back to the topic at hand. I think it'd be very bad if we'd have a codepoint for 
> pure ML-KEM before we have a codepoint for an ML-KEM hybrid. Process wise, I 
> think that's up to the designated experts of the IANA registry.
>  
> Currently the TLS designated experts really only look at the request itself, 
> without larger context: is the ALPN valid, is the requested protocol number 
> available, is the documentation freely available and so on.  Section 15 of 
> https://datatracker.ietf.org/doc/draft-ietf-tls-rfc8447bis/ changes that a 
> bit.
>  
> So if Deirdre requests a code point right now, we’d probably reject it but 
> that could be appealed somehow. Once the RFC is out, we could then see if 
> there’s WG consensus or if it’s still a work-in-progress, and assign full 
> number or provisional or tell her to use the private range.
>  
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls




Re: [TLS] Time to first byte vs time to last byte

2024-03-07 Thread David Benjamin
This is good work, but we need to be wary of getting too excited about
TTLB, and then declaring performance solved. Ultimately, TTLB simply
dampens the impact of postquantum by mixing in the (handshake-independent)
time to do the bulk transfer. The question is whether that reflects our
goals.
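
To put toy numbers on that dampening (the numbers are made up, purely to show 
the arithmetic, and are not from the paper):

    # Toy numbers only: a fixed handshake cost plus a PQ-induced delta,
    # mixed into transfers of different sizes.
    rtt = 0.05                    # seconds
    handshake = 2 * rtt           # TCP + TLS setup before the first request
    pq_extra = 0.01               # extra time attributed to the bigger handshake

    for transfer in (0.02, 0.5, 5.0):             # tiny page ... big download
        ttfb_hit = pq_extra / handshake
        ttlb_hit = pq_extra / (handshake + transfer)
        print(f"transfer={transfer}s  TTFB +{ttfb_hit:.1%}  TTLB +{ttlb_hit:.1%}")

The same absolute delta is a fixed relative hit on TTFB but a shrinking one on 
TTLB as the transfer grows; that shrinkage is all the dampening is.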

Ultimately, the thing that matters is overall application
performance, which can be complex to measure because you actually have to
try that application. Metrics like TTLB, TTFB, etc., are isolated to one
connection and thus easier to measure, and without checking each
application one by one. But they're only as valuable as they are predictors
of overall application performance. For TTLB, both the magnitude and
desirability of the dampening effect are application-specific:

If your goal is transferring a large file on the backend, such that you
really only care when the operation is complete, then yes, TTLB is a good
proxy for application system performance. You just care about throughput in
that case. Moreover, in such applications, if you are transferring a lot of
data, the dampening effect not only reflects reality but is larger.

However, interactive, user-facing applications are different. There, TTLB
is a poor proxy for application performance. For example, on the web,
performance is determined more by how long it takes to display a meaningful
webpage to the user. (We often call this the time to "first contentful
paint".) Now, that is a very high-level metric that is impacted by all
sorts of things, such as whether this is a repeat visit, page structure,
etc. So it is hard to immediately translate that back down to TLS. But it
is frequently much closer to the TTFB side of the spectrum than the TTLB
side. And indeed, we have been seeing impacts from PQ to our high-level
metrics on mobile.

There's also a pretty natural intuition for this: since there is much more
focus on latency than throughput, optimizing an interactive application
often involves trying to reduce the amount of traffic on the critical path.
The more the application does so, the less accurate TTLB's dampening effect
is, and the closer we trend towards TTFB. (Of course, some optimizations in
this space involve making fewer connections, etc. But the point here was to
give a rough intuition.)
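
For reference, here is a minimal sketch of what the two per-connection metrics 
mean operationally (an illustration only, not the paper's measurement setup, 
which also has to control for DNS, connection reuse, congestion state, and so 
on):

    # Minimal sketch: time the TLS handshake, the first byte of the HTTP
    # response (TTFB), and the last byte (TTLB) on a single connection.
    import socket, ssl, time

    def measure(host, path="/"):
        ctx = ssl.create_default_context()
        start = time.monotonic()
        with socket.create_connection((host, 443)) as tcp:
            with ctx.wrap_socket(tcp, server_hostname=host) as tls:
                handshake = time.monotonic() - start
                tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                            "Connection: close\r\n\r\n".encode())
                tls.recv(4096)                     # first response bytes
                ttfb = time.monotonic() - start
                while tls.recv(4096):              # drain until server closes
                    pass
                ttlb = time.monotonic() - start
        return handshake, ttfb, ttlb

    print(measure("example.com"))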

On Thu, Mar 7, 2024 at 2:58 PM Deirdre Connolly 
wrote:

> "At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web
> (MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a
> metric for assessing the total impact of data-heavy, quantum-resistant
> algorithms such as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our
> paper shows that the new algorithms will have a much lower net effect on
> connections that transfer sizable amounts of data than they do on the TLS
> 1.3 handshake itself."
>
>
> https://www.amazon.science/blog/delays-from-post-quantum-cryptography-may-not-be-so-bad
>
> ¹
> https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections/
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


Re: [TLS] Time to first byte vs time to last byte

2024-03-07 Thread Rob Sayre
I agree the efficiency concerns are generally overstated, but this study
should have measured QUIC etc, since the web pages will have all sorts of
awful performance problems. But the thing you have to watch out for is when
someone in the datacenter steps on the power cord or something (or DNS is
wrong, etc). Then, you get a stampede of clients reconnecting, and there
you really do care about how expensive the handshake is.

thanks,
Rob

On Thu, Mar 7, 2024 at 11:58 AM Deirdre Connolly 
wrote:

> "At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web
> (MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a
> metric for assessing the total impact of data-heavy, quantum-resistant
> algorithms such as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our
> paper shows that the new algorithms will have a much lower net effect on
> connections that transfer sizable amounts of data than they do on the TLS
> 1.3 handshake itself."
>
>
> https://www.amazon.science/blog/delays-from-post-quantum-cryptography-may-not-be-so-bad
>
> ¹
> https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections/
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


[TLS] Time to first byte vs time to last byte

2024-03-07 Thread Deirdre Connolly
"At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web
(MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a
metric for assessing the total impact of data-heavy, quantum-resistant
algorithms such as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our
paper shows that the new algorithms will have a much lower net effect on
connections that transfer sizable amounts of data than they do on the TLS
1.3 handshake itself."

https://www.amazon.science/blog/delays-from-post-quantum-cryptography-may-not-be-so-bad

¹
https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections/


Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread D. J. Bernstein
Here's a chart I sent CFRG a few weeks ago of recent claims regarding
the exponent, including memory-access costs, of attacks against the most
famous lattice problem, namely the "shortest-vector problem" (SVP):

   * November 2023: 0.396, and then 0.349 after an erratum:
     https://web.archive.org/web/20231125213807/https://finiterealities.net/kyber512/

   * December 2023: 0.349, or 0.329 in 3 dimensions:
     https://web.archive.org/web/20231219201240/https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/Kyber-512-FAQ.pdf

   * January 2024: 0.311, or 0.292 in 3 dimensions:
     https://web.archive.org/web/20240119081025/https://eprint.iacr.org/2024/080.pdf

I then wrote: "Something is very seriously wrong when the asymptotic
security level claimed three months ago for SVP---as part of a chorus of
confident claims that these memory-access costs make Kyber-512 harder to
break than AES-128---is 27% higher than what's claimed today."
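
(For anyone checking the arithmetic: the 27% presumably compares the November 
2023 exponent with the January 2024 exponent from the chart above. The 
dimension used below is purely hypothetical, only to show the sensitivity.)

    old, new = 0.396, 0.311          # claimed exponents, Nov 2023 vs Jan 2024
    print((old - new) / new)         # ~0.273, i.e. roughly 27% higher
    dim = 400                        # hypothetical SVP dimension, for scale
    print(old * dim, new * dim)      # 158.4 vs 124.4 bits in the exponent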

This sort of dramatic instability in security analyses is exciting for
cryptographers, and one of the perennial scientific attractions of
lattice-based cryptography. It's also a security risk. The right way to
handle this tension is to treat these cryptosystems _very_ carefully.
The wrong way is to try to conceal the instability.

John Mattsson writes:
> https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/Kyber-512-FAQ.pdf

That's the December 2023 document above. There are many problems with
that document, but the most obvious is that the document claims a much
higher exponent for the "cost of memory access" than the January 2024
document. This is not some minor side issue: the December 2023 document
labels this cost as an "important consideration" and spends pages
computing the exponent.

One wonders why NIST didn't issue a prompt statement either admitting
error or disputing the January 2024 document. That document was posted
almost two full months ago. The document is on the list of accepted
papers for NIST's next workshop, but accepting a paper (1) isn't a
statement of endorsement and (2) doesn't tell readers "Please disregard
the fundamentally flawed December 2023 statement".

> https://keymaterial.net/2023/11/18/kyber512s-security-level/

See https://blog.cr.yp.to/20231125-kyber.html for comments on that.

---D. J. Bernstein



Re: [TLS] TLS Digest, Vol 236, Issue 15

2024-03-07 Thread Wang Guilin

I do agree with Eric that hybrid solutions may live for a very long time.

Moreover, IMO, it could even become a popular way to increase crypto-agility, 
since it can tolerate weak or serious security flaws introduced by algorithm 
design, key-size selection, and/or software coding, under the assumption that 
at least one of the two or more component algorithms in the hybrid solution is 
still secure.

A performance downgrade may be the cost we have to pay for this higher 
security assurance.

So, plenty of discussion will be very helpful.

On the other hand, if it is hard to make a decision, specifying both hybrid 
and pure ML-KEM could be good as well. Namely, we would offer several choices 
so that users around the world can gradually select what they like. Time will 
tell.

Guilin




Wang Guilin
Mobile: +65-86920345
Email: wang.gui...@huawei.com

From: tls-request 
To: tls 
Date: 2024-03-07 01:35:03
Subject: TLS Digest, Vol 236, Issue 15


Today's Topics:

  1. Re: ML-KEM key agreement for TLS 1.3 (Eric Rescorla)
  2. Re: ML-KEM key agreement for TLS 1.3 (Deirdre Connolly)





Message: 1
Date: Wed, 6 Mar 2024 09:20:20 -0800
From: Eric Rescorla 
To: Deirdre Connolly 
Cc: "TLS@ietf.org" 
Subject: Re: [TLS] ML-KEM key agreement for TLS 1.3
Message-ID: 
Content-Type: text/plain; charset="utf-8"

On Wed, Mar 6, 2024 at 8:49 AM Deirdre Connolly  wrote:

> > Can you say what the motivation is for being "fully post-quantum" rather
> than hybrid?
>
> Sure: in the broad scope, hybrid introduces complexity in the short-term
> that we would like to move off of in the long-term - for TLS 1.3 key
> agreement this is not the worst thing in the world and we can afford it,
> but hybrid is by design a hedge, and theoretically a temporary one.
>

My view is that this is likely to be the *very* long term.

I'm open to being persuaded, but at the moment, I don't think there is
anywhere near enough confidence in any of the PQ algorithms to confidently
use it standalone, which means we're going to see a lot of hybrid
deployment sooner rather than later. This also means that we're going to
have a long tail of clients and servers which only do hybrid and not
PQ-only, so that complexity is baked in for quite some time to come.


> In the more concrete scope, FIPS / CNSA 2.0 compliance guidelines
> 
> currently are a big 'maybe' at best for 'hybrid solutions', and the
> timetables for compliant browsers, servers, and services are to exclusively
> use FIPS 203 at level V (ML-KEM-1024) by 2033. I figure there will be
> demand for pure ML-KEM key agreement, not hybrid (with no questions that
> come along with it of whether it's actually allowed or not).
>

I'm honestly not moved by this very much. IETF should form its own opinion
about the security of algorithms, not just take whatever opinions are
handed down from NIST. If that means that IETF doesn't standardize what
NIST wants, then NIST is free to register its own codepoints and try to
persuade implementors to take them.

So I think the question here should be focused on "what level of confidence
would IETF need to specify ML-KEM standalone at Proposed Standard with
Recommended=Y".

-Ekr


> Relatedly, the currently adopted -hybrid-design
> 
> outlines several combinations of ECDH and KEM, and allows computing the
> ECDH share once and sharing it between an ECDH share and a hybrid ECDH+KEM
> share, but there is no equivalent for just using a KEM on its own, and
> computing its shared secret once and advertising it as both standalone and
> in a hybrid share. So I think defining these standalone ML-KEM
> `NamedGroup`s also 'draws the rest of the owl' implied by -hybrid-design.
>
>> On Wed, Mar 6, 2024 at 10:12 AM Eric Rescorla  wrote:
>
>> Deirdre, thanks for submitting this. Can you say what the motivation is
>> for being "fully post-quantum" rather than hybrid?
>>
>> Thanks,
>> -Ekr
>>
>>
>>
>> On Tue, Mar 5, 2024 at 6:16 PM Deirdre Connolly 
>> 

Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread Salz, Rich
Back to the topic at hand. I think it'd be very bad if we'd have a codepoint for 
pure ML-KEM before we have a codepoint for an ML-KEM hybrid. Process wise, I 
think that's up to the designated experts of the IANA registry.

Currently the TLS designated experts really only look at the request itself, 
without larger context: is the ALPN valid, is the requested protocol number 
available, is the documentation freely available and so on.  Section 15 of 
https://datatracker.ietf.org/doc/draft-ietf-tls-rfc8447bis/ changes that a bit.

So if Deirdre requests a code point right now, we’d probably reject it but that 
could be appealed somehow. Once the RFC is out, we could then see if there’s WG 
consensus or if it’s still a work-in-progress, and assign full number or 
provisional or tell her to use the private range.



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread Eric Rescorla
On Thu, Mar 7, 2024 at 1:47 AM Dennis Jackson  wrote:

> On 07/03/2024 03:57, Bas Westerbaan wrote:
>
> We think it's worth it now, but of course we're not going to keep hybrids
> around when the CRQC arrives.
>
> Sure, but for now we gain substantial security margin* against
> implementation mistakes, advances in cryptography, etc.
>
> On the perf/cost side, we're already making a large number of sub-optimal
> choices (use of SHA-3, use of Kyber in TLS rather than a CPA scheme,
> picking 768 over 512, etc), we can easily 'pay' for X25519 if you really
> wanted. I think if handshake cycles really mattered then we'd have shown
> RSA the door much more quickly [1].
>
> Best,
> Dennis
>
> * As in, actual security from combination of independent systems, not the
> mostly useless kind from using over-size primitives.
>
In a world where there is a CRQC, there are two distinct costs to
continuing to support hybrids:

1. The computational cost of actually doing X25519 (or whatever)
2. The software complexity cost of the code to do the hybrid and to
negotiate it.

From the perspective of an implementation, the computational cost scales with
the number of handshakes it has to do with the hybrid rather than pure ML-KEM.
The complexity cost, however, is constant up to the point where you can remove
it entirely. Because of the highly centralized structure of the TLS browser and
server ecosystem, these timelines can be very different: it's relatively fast
to get to high levels of deployment so that most handshakes are "new", but it
can be quite slow to eliminate the last "old-only" peers.

So, the question I have is whether having a code point for pure ML-KEM now
advances the project of deprecating hybrids at the point where the X25519 part
isn't doing anything meaningful (again, assuming that point eventually comes).
My sense is that it largely does so if fielded implementations are willing to
do pure ML-KEM, because, as above, it's that quantity that dominates the
decision of whether you can remove the hybrid code entirely. Personally, I'd
still be quite uncomfortable with allowing pure ML-KEM negotiation in a major
product. If others feel the same way, I'm not quite sure what it gets us, other
than saving us the fairly small amount of specification effort of doing the
pure version.

It's of course worth noting that a CRQC might be very far in the future and we
might get better PQ algorithms by that point, in which case we'd never deploy
pure ML-KEM.

-Ekr


> [1] https://blog.cloudflare.com/how-expensive-is-crypto-anyway
>
>
> Best,
>
>  Bas
>
> On Thu, Mar 7, 2024 at 1:56 AM Dennis Jackson  40dennis-jackson...@dmarc.ietf.org> wrote:
>
>> I'd like to understand the argument for why a transition back to single
>> schemes would be desirable.
>>
>> Having hybrids be the new standard seems to be a nice win for security
>> and pretty much negligible costs in terms of performance, complexity and
>> bandwidth (over single PQ schemes).
>>
>> On 07/03/2024 00:31, Watson Ladd wrote:
>> > On Wed, Mar 6, 2024, 10:48 AM Rob Sayre  wrote:
>> >> On Wed, Mar 6, 2024 at 9:22 AM Eric Rescorla  wrote:
>> >>>
>> >>>
>> >>> On Wed, Mar 6, 2024 at 8:49 AM Deirdre Connolly <
>> durumcrustu...@gmail.com> wrote:
>> > Can you say what the motivation is for being "fully post-quantum"
>> rather than hybrid?
>>  Sure: in the broad scope, hybrid introduces complexity in the
>> short-term that we would like to move off of in the long-term - for TLS 1.3
>> key agreement this is not the worst thing in the world and we can afford
>> it, but hybrid is by design a hedge, and theoretically a temporary one.
>> >>>
>> >>> My view is that this is likely to be the *very* long term.
>> >>
>> >> Also, the ship has sailed somewhat, right? Like Google Chrome,
>> Cloudflare, and Apple iMessage already have hybrids shipping (I'm sure
>> there many more, those are just really popular examples). The installed
>> base is already very big, and it will be around for a while, whatever the
>> IETF decides to do.
>> > People can drop support in browsers fairly easily especially for an
>> > experimental codepoint. It's essential that this happen: if everything
>> > we (in the communal sense) tried had to be supported in perpetuity, it
>> > would be a recipe for trying nothing.
>> >
>> >> thanks,
>> >> Rob
>> >>
>> >> ___
>> >> TLS mailing list
>> >> TLS@ietf.org
>> >> https://www.ietf.org/mailman/listinfo/tls
>> > ___
>> > TLS mailing list
>> > TLS@ietf.org
>> > https://www.ietf.org/mailman/listinfo/tls
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>

Re: [TLS] Proposal: a TLS formal analysis triage panel

2024-03-07 Thread Jonathan Hoyland
I'd be happy to help work on something like this, but it might make more
sense to come present at UFMRG.

One of the goals of the Research Group is to try and bring together experts
and IETFers.

Rather than adding formal process, having a low stakes way of engaging with
the formal methods community even just to say "this doesn't invalidate the
current proofs" might be easier.

Regards,

Jonathan

On Wed, 6 Mar 2024, 02:27 Deirdre Connolly, 
wrote:

> > it's unclear to me whether this review would be a hard requirement to
> pass WGLC. Let's say a document makes it to that stage, and it is sent to
> the triage panel, but the panel never produces a formal analysis of it.
> (This could happen for example if the researchers don't find the extension
> at hand interesting enough, they're volunteering to help so I wouldn't
> blame them for picking what they want to work on.) In that hypothetical
> scenario, does the document proceed without formal analysis, or is it
> blocked?
>
> Indeed; the interaction with the panel would be in two phases: any changes
> that are being proposed by/to the WG would have a preliminary triage of
> whether such changes _should_ have formal analysis, and of what scope/type.
> This would probably be nicely triggered by an adoption call. Ideally we'd
> have a sense from the panel of whether the proposed changes would entail a
> significant amount of formal analysis work from the get-go, or not. This
> preliminary triage can help inform the adoption call discussion.
>
> If the proposal is adopted, and the working group has received a formal
> analysis triage from the panel, and accepts the general scope of work to be
> a requirement / blocker before moving to WGLC, then it is. We then work
> with the panel to select the researchers/whomever to conduct the analysis,
> which may entail rounds of back and forth if the document changes over
> time.
>
> There may be changes that, on triage from the panel, are 'easy' and may
> not require updated analysis at all, or very little. The WG may agree to
> adopt the document and agree to proceed to WGLC _without_ formal analysis,
> with the implicit understanding that the adopted document and the one
> approaching WGLC will not have significantly diverged from each other. If
> we are worried about this, we can implement a sort of 'last chance' review
> with the panel to make sure we aren't missing something on such a document
> before actually triggering the WGLC.
>
> This is sort of what I had in mind, but am of course welcome to
> suggestions or changes. 
>
> On Tue, Mar 5, 2024 at 9:12 PM David Schinazi 
> wrote:
>
>> Hi Deirdre,
>>
>> Thanks for this, I think this is a great plan. From the perspective of
>> standards work, more formal analysis is always better, and this seems like
>> a great way to motivate such work.
>>
>> That said, it's unclear to me whether this review would be a hard
>> requirement to pass WGLC. Let's say a document makes it to that stage, and
>> it is sent to the triage panel, but the panel never produces a formal
>> analysis of it. (This could happen for example if the researchers don't
>> find the extension at hand interesting enough, they're volunteering to help
>> so I wouldn't blame them for picking what they want to work on.) In that
>> hypothetical scenario, does the document proceed without formal analysis,
>> or is it blocked?
>>
>> Thanks,
>> David
>>
>> On Tue, Mar 5, 2024 at 5:38 PM Deirdre Connolly 
>> wrote:
>>
>>> A few weeks ago, we ran a WGLC on 8773bis, but it basically came up
>>> blocked because of a lack of formal analysis of the proposed changes. The
>>> working group seems to be in general agreement that any changes to TLS 1.3
>>> should not degrade or violate the existing formal analyses and proven
>>> security properties of the protocol whenever possible. Since we are no
>>> longer in active development of a new version of TLS, we don't necessarily
>>> have the same eyes of researchers and experts in formal analysis looking at
>>> new changes, so we have to adapt.
>>>
>>> I have mentioned these issues to several experts who have analyzed TLS
>>> (in total or part) in the past and have gotten tentative buy-in from more
>>> than one for something like a 'formal analysis triage panel': a rotating
>>> group of researchers, formal analysis experts, etc, who have volunteered to
>>> give 1) a preliminary triage of proposed changes to TLS 1.3¹ and _whether_
>>> they could do with an updated or new formal analysis, and 2) an estimate of
>>> the scope of work such an analysis would entail. Such details would be
>>> brought back to the working group for discussion about whether the proposed
>>> changes merit the recommended analysis or not (e.g., a small, nice-to-have
>>> change may actually entail a fundamentally new security model change,
>>> whereas a large change may not deviate significantly from prior analysis
>>> and be 'cheap' to do). If the working group agrees to proceed, the formal
>>> analysis 

Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread Dennis Jackson

On 07/03/2024 03:57, Bas Westerbaan wrote:

We think it's worth it now, but of course we're not going to keep 
hybrids around when the CRQC arrives.


Sure, but for now we gain substantial security margin* against 
implementation mistakes, advances in cryptography, etc.


On the perf/cost side, we're already making a large number of 
sub-optimal choices (use of SHA-3, use of Kyber in TLS rather than a CPA 
scheme, picking 768 over 512, etc), we can easily 'pay' for X25519 if 
you really wanted. I think if handshake cycles really mattered then we'd 
have shown RSA the door much more quickly [1].


Best,
Dennis

* As in, actual security from combination of independent systems, not 
the mostly useless kind from using over-size primitives.


[1] https://blog.cloudflare.com/how-expensive-is-crypto-anyway



Best,

 Bas

On Thu, Mar 7, 2024 at 1:56 AM Dennis Jackson 
 wrote:


I'd like to understand the argument for why a transition back to single
schemes would be desirable.

Having hybrids be the new standard seems to be a nice win for security
and pretty much negligible costs in terms of performance, complexity and
bandwidth (over single PQ schemes).

On 07/03/2024 00:31, Watson Ladd wrote:
> On Wed, Mar 6, 2024, 10:48 AM Rob Sayre  wrote:
>> On Wed, Mar 6, 2024 at 9:22 AM Eric Rescorla  wrote:
>>>
>>>
>>> On Wed, Mar 6, 2024 at 8:49 AM Deirdre Connolly
 wrote:
> Can you say what the motivation is for being "fully
post-quantum" rather than hybrid?
 Sure: in the broad scope, hybrid introduces complexity in the
short-term that we would like to move off of in the long-term -
for TLS 1.3 key agreement this is not the worst thing in the world
and we can afford it, but hybrid is by design a hedge, and
theoretically a temporary one.
>>>
>>> My view is that this is likely to be the *very* long term.
>>
>> Also, the ship has sailed somewhat, right? Like Google Chrome,
Cloudflare, and Apple iMessage already have hybrids shipping (I'm
sure there many more, those are just really popular examples). The
installed base is already very big, and it will be around for a
while, whatever the IETF decides to do.
> People can drop support in browsers fairly easily especially for an
> experimental codepoint. It's essential that this happen: if
everything
> we (in the communal sense) tried had to be supported in
perpetuity, it
> would be a recipe for trying nothing.
>
>> thanks,
>> Rob
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread John Mattsson
True, Classic McEliece is not possible with the current length restrictions. 
FrodoKEM does not seem to be getting an open-access standard. Cryptographic 
algorithm standards behind paywalls are a cybersecurity risk. I have seen 
several implementations that claim to follow a paywalled standard but in 
reality seem to have been implemented from Wikipedia and skip essential 
security considerations and requirements. If any European country wants to use 
FrodoKEM, it should drive FrodoKEM in CFRG, or publish the specification 
itself. An alternative conservative solution would be to combine ML-KEM with 
HQC/BIKE and x25519.

Secret and proprietary security protocols are much, much worse. Rob Sayre 
mentioned iMessage in an earlier post. I think Apple is the worst offender in 
deploying secret and proprietary protocols to billions of users. The distance 
between their privacy marketing (privacy is a human right) and what is 
delivered by the secret iMessage and AirDrop protocols is astonishing, to say 
the least.
https://www.rollingstone.com/politics/politics-features/whatsapp-imessage-facebook-apple-fbi-privacy-1261816/
https://arstechnica.com/security/2024/01/hackers-can-id-unique-apple-airdrop-users-chinese-authorities-claim-to-do-just-that/

Cheers,
John Preuß Mattsson

From: TLS  on behalf of Ilari Liusvaara 

Date: Wednesday, 6 March 2024 at 17:46
To: TLS@ietf.org 
Subject: Re: [TLS] ML-KEM key agreement for TLS 1.3
On Wed, Mar 06, 2024 at 04:25:16PM +, John Mattsson wrote:
> I think TLS should register all algorithm variants standardized by
> NIST. That means ML-KEM-512, ML-KEM-768, and ML-KEM-1024. And in
> the future a subset of HQC/BIKE/Classic McEliece.

Just as a note, supporting Classic McEliece is not possible at all due to
the key size exceeding hard TLS 1.3 limit.

Even FrodoKEM, which seems to be quite widely viewed as "next step up"
from likes of ML-KEM, has painfully large keys. But at least those do
not bust any hard limits.
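
For concreteness, the hard limit here is the 2-byte length prefix of the
key_exchange field in a KeyShareEntry, i.e. 65535 bytes; a rough sketch with
approximate, illustrative public-key sizes:

    # key_exchange in a TLS 1.3 KeyShareEntry is opaque<1..2^16-1>, so 65535
    # bytes is the ceiling.  Public-key sizes below are approximate.
    TLS13_KEY_EXCHANGE_MAX = 2**16 - 1

    approx_public_key_bytes = {
        "ML-KEM-768": 1184,
        "FrodoKEM-976": 15632,
        "mceliece348864": 261120,   # smallest Classic McEliece parameter set
    }

    for name, size in approx_public_key_bytes.items():
        ok = "fits" if size <= TLS13_KEY_EXCHANGE_MAX else "exceeds the limit"
        print(f"{name}: {size} bytes, {ok}")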




-Ilari



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread John Mattsson
Hi,

Bas Westerbaan wrote:
>I think it'd be very bad if we'd have a codepoint for pure ML-KEM before we have 
>a codepoint for an ML-KEM
>hybrid. I think that's up to the designated experts of the IANA registry.

Agree. We plan to use hybrid key exchange as the default, but would like to 
offer pure ML-KEM to customers that want it. As Deirdre states, the FIPS / 
CNSA 2.0 compliance guidelines are a big 'maybe' at best for hybrid solutions. 
Being able to offer CNSA 2.0 compliant TLS is essential for many companies. I 
would like to see standards-track ML-KEM as well as standards-track ML-DSA, 
just like in IPSECME and LAMPS.

Deirdre Connolly wrote:
>My current draft does not include ML-KEM-512, mostly because there seems to be 
>alignment around
>ML-KEM-768 being ~equivalent to say X25519 or P-256 ECDH in terms of security 
>level. I'm not married
>strongly to excluding it but that was kind of the thinking.

I don't think there is any such alignment. NIST's latest assessment is that 
“the most plausible values for the practical security of Kyber512 against 
known attacks are significantly higher than that of AES128”. Ericsson agrees 
with that assessment, and so does Sophie Schmieg (Google).
https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/Kyber-512-FAQ.pdf
https://keymaterial.net/2023/11/18/kyber512s-security-level/

Cheers,
John Preuß Mattsson

From: TLS  on behalf of Deirdre Connolly 

Date: Thursday, 7 March 2024 at 05:37
To: Orie Steele 
Cc: Bas Westerbaan , TLS@ietf.org 

Subject: Re: [TLS] ML-KEM key agreement for TLS 1.3
> Isn't support for the component mandatory to support the hybrid anyway?

Strictly speaking, not necessarily: I could see support for X-Wing or another 
hybrid key agreement as a standalone unit, both from a software dependency 
perspective and a protocol API perspective. Whether that works in a long term 
that also supports the standalone component algorithms is another question.

On Wed, Mar 6, 2024, 11:30 PM Orie Steele  wrote:
Does the argument about hybrid code points first generalize to all PQ Code 
points?

Is it equally true of hybrid signatures?

I don't understand why registering composite components first wouldn't be 
assumed.

Isn't support for the component mandatory to support the hybrid anyway?

Let's assume a CRQC drops tomorrow: why did we not register ML-KEM first?

Assume it never drops: you still needed to implement ML-KEM to use the hybrid.

If the goal is to prohibit ML-KEM without a traditional component, just 
register it as prohibited.

OS

On Wed, Mar 6, 2024, 10:10 PM Bas Westerbaan <40cloudflare@dmarc.ietf.org> wrote:
Back to the topic at hand. I think it'd be very bad if we'd have a codepoint for 
pure ML-KEM before we have a codepoint for an ML-KEM hybrid. Process wise, I 
think that's up to the designated experts of the IANA registry.

Best,

 Bas


On Wed, Mar 6, 2024 at 3:16 AM Deirdre Connolly 
mailto:durumcrustu...@gmail.com>> wrote:
I have uploaded a preliminary version of ML-KEM for TLS 
1.3  
and have a more fleshed 
out
 version to be uploaded when datatracker opens. It is a straightforward new 
`NamedGroup` to support key agreement via ML-KEM-768 or ML-KEM-1024, in a very 
similar style to 
-hybrid-design.

It will be nice to have pure-PQ options (that are FIPS / CNSA 2.0 compatible) 
ready to go when users are ready to use them.

Cheers,
Deirdre