In yesterday’s working group meeting we had a bit of a discussion of the impact 
of post-quantum key-exchange sizes on TLS and related protocols like QUIC. As 
we neglected to put Kyber’s key sizes in our slide deck (unlike those of the 
signature schemes), I thought it would be a good idea to get Kyber’s actual 
numbers onto the mailing list.
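
For reference, the round-3 Kyber sizes are:

    Kyber512:  public key  800 bytes, ciphertext  768 bytes
    Kyber768:  public key 1184 bytes, ciphertext 1088 bytes
    Kyber1024: public key 1568 bytes, ciphertext 1568 bytes

(The shared secret is 32 bytes in each case.)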

Before we dive into applying the NIST algorithms to TLS 1.3, let us first 
consider our security goals and recalibrate accordingly.

 

That’s always a good idea.

 

I have the luxury of having zero users, so I can completely redesign my 
architecture if need be. But the deeper I have got into Quantum Computer 
Cryptanalysis (QCC) and PQC, the less that appears to be necessary.

 

Respectfully disagree – though if all you have to protect is your TLS-secured 
purchases from Amazon.com, I’d concede the point.

 

First off, let us level-set. Nobody has built a QC capable of QCC, and it is 
highly unlikely anyone will in the next decade.

 

You cannot know that, and “professional opinions” vary widely.

 

So, what we are concerned about today is data we exchange today being 
cryptanalyzed in the future. 

We can argue about that if people want, but can we at least agree that our #1 
priority should be confidentiality?

 

Yes and yes.

 

So the first proposal I have is to separate our concerns into two parts with 
different timelines:

 

#1 Confidentiality: we should aim to deliver a standards-based proposal by 2025.

 

If we (well, some of us) have data today that must not be disclosed a decade 
from now, then we do not have the luxury of waiting until 2025. Otherwise, see 
above.

 

#2 A fully QCC-hardened spec before 2030.

 

If by “fully…” you mean “including PQ digital signatures”, I’d probably agree.

 

That immediately reduces our scope to confidentiality. QCC of signature keys is 
irrelevant as far as the priority is concerned. At the very least, TLS can wait 
for the results of round 4 before diving into signatures.

 

TLS probably can… The impact of PQ signatures seems mostly to be in the size of 
certificates and such; no conceptual difference (unlike KEMs)…

 

[This is not the case for the Mesh, as I need a mechanism that enables me to 
upgrade from my legacy base to a PQC system. The WebPKI should probably give 
some thought to these concerns as well. We should probably be talking about 
deploying PQC root keys, but that is not in scope for TLS.]

 

No idea – therefore, no comment.

 

My second observation is that all we have at this point is the output of the 
NIST competition, and that is not a KEM. No, sorry: NIST has not approved a 
primitive to which we can pass a private key and receive that key back wrapped 
under the specified public key. What NIST actually tested was a function to 
which we pass a public key and get back a shared secret generated by the 
function, plus a blob that decrypts to that key.
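
For concreteness, here is the shape of the function in a toy Python sketch. 
This is an insecure placeholder (the internals are made up, and pk equals sk 
purely so the round trip works); it shows only the calling convention, not 
Kyber itself:

    import hashlib
    import os

    def keygen():
        sk = os.urandom(32)
        pk = sk  # insecure stand-in; a real KEM has a trapdoor here
        return pk, sk

    def encapsulate(pk):
        # The function, not the caller, picks the shared secret.
        # There is no way to pass in a key of your own choosing.
        r = os.urandom(32)
        ciphertext = bytes(a ^ b for a, b in zip(r, pk))
        shared_secret = hashlib.sha256(r).digest()
        return shared_secret, ciphertext

    def decapsulate(sk, ciphertext):
        r = bytes(a ^ b for a, b in zip(ciphertext, sk))
        return hashlib.sha256(r).digest()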

 

The output of NIST PQC is exactly a KEM. And it’s fully specified.

 

NIST did not approve 'KYBER'; at least, it has not done so yet.

 

NIST did – what it has not done is finalize the spec, which requires public 
review. Some people conjecture that Kyber will not need many changes to become 
a “full” standard.

 

Since we won't have that report within the priority timeline, I suggest people 
look at the function NIST tested, which is a non-interactive key-establishment 
protocol. If you want to use Kyber instead of the NIST output, you are going to 
have to wait for the report before we can start the standards process.

 

I don’t think I understood that, but, offhand, I disagree.

 

My third observation is that people are looking at how to replace existing 
modules in the TLS exchange rather than asking where the most appropriate point 
to deploy PQC is. I have faced the exact same problem in the Mesh, and I have a 
rather bigger problem because the Mesh is all about threshold cryptography and 
there is no NIST threshold PQC algorithm yet.

 

The TLS protocol includes derivation of “session” keys. Currently it employs 
“pre-quantum” asymmetric crypto. That has to be replaced by PQ asymmetric 
crypto. That’s the most appropriate (and the only) point to deploy PQC. I’ve 
no clue about the Mesh, so exclude the Mesh from my comment.

 

The solution I am currently working with is to treat QCC as being at the same 
level as a single defection. So if Alice has established a separation of the 
decryption role between Bob and the Key Service, both have to defect (or be 
breached) for disclosure to occur. Until I get threshold PQC, I am going to 
have to accept a situation in which the system remains secure against QCC, but 
only if the key service does not defect.
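
To illustrate the separation, a toy XOR split (this is not the Mesh’s actual 
threshold scheme, and the names are mine):

    import os

    def split(key: bytes):
        # Alice gives one share to the Key Service and one to Bob.
        service_share = os.urandom(len(key))
        bob_share = bytes(a ^ b for a, b in zip(key, service_share))
        return service_share, bob_share

    def recombine(service_share: bytes, bob_share: bytes) -> bytes:
        # Disclosure requires both shares, i.e. both parties must
        # defect or be breached.
        return bytes(a ^ b for a, b in zip(service_share, bob_share))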

 

Skipping the above.

 

Applying the same principles to TLS, we actually have two key agreements in 
which we might employ PQC:

 

1) The primary key exchange

2) The forward secrecy / ephemeral / rekey

 

Looking at most of the proposals, they seem to drop the PQC scheme into the 
ephemeral rekeying. That is one way to do it, but does the threat of QCC really 
justify the performance impact of doing that?

 

First, I don’t see a performance impact from that. PQC KEMs are pretty fast; 
the main cost is in exchanging much larger bit blobs (for scale, an X25519 key 
share is 32 bytes, while a Kyber768 public key plus ciphertext is 1184 + 1088 
bytes). Second – if today’s data will maintain its value into 2030+, then 
definitely yes; otherwise – who cares.

 

PQC-hardening the initial key exchange should suffice, provided that we fix the 
forward-secrecy exchange so that it includes the entropy from the primary. This 
would bring TLS in line with Noise and with best practice. It would be a change 
to the TLS key exchange, but one that corrects an oversight in the original 
forward-secrecy mechanism.
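
Roughly what I mean, in the spirit of Noise’s key chaining (a sketch, with the 
names and the HMAC construction being my own choices):

    import hashlib
    import hmac
    import os

    def mix_key(chain: bytes, input_key_material: bytes) -> bytes:
        # Fold fresh entropy into the chain without discarding what
        # is already in it.
        return hmac.new(chain, input_key_material, hashlib.sha256).digest()

    pqc_initial_secret = os.urandom(32)   # stand-in: PQC primary exchange
    classical_ephemeral = os.urandom(32)  # stand-in: later (EC)DH rekey

    chain = mix_key(b"\x00" * 32, pqc_initial_secret)
    # The rekey adds fresh entropy but keeps the PQC-protected secret
    # in the chain, so a quantum attacker still has to break the primary.
    chain = mix_key(chain, classical_ephemeral)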

 

If your rekey depends on the initial key values and/or uses only classical 
crypto – how can you provide forward secrecy?

 

TNX 
