[TLS]Re: Curve-popularity data?

2024-06-07 Thread Scott Fluhrer (sfluhrer)
Does X448 actually provide "a huge advantage" in security (practically 
speaking) over X25519?

On a classical computer, the best known attack against X25519 requires circa 
2^126 point addition operations - that is generally accepted as being 
infeasible.  Attacking X448 requires far more point addition operations - 
however, all that means is that it is also infeasible.

On a Quantum Computer, an attack on X448 requires perhaps 5 times as many 
Quantum operations and 70% more Qubits than an attack on X25519.  This is nice, 
but (given that we know little about the eventual cost of either Qubit 
operations or Qubits themselves) I cannot characterize it as "huge" - it may 
very well be the case that once we have a Quantum Computer that is 
large/reliable enough to attack X25519, it is only a minor effort to scale it 
upwards to attack X448.
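For intuition, the classical figures can be reproduced from the subgroup orders: Pollard's rho needs roughly sqrt(pi*n/4) point additions against a prime-order subgroup of order n. A back-of-the-envelope sketch (the subgroup orders are the standard RFC 7748 values; the helper name is mine):

```python
import math

# Prime subgroup orders for X25519 and X448 (RFC 7748).
N_X25519 = 2**252 + 27742317777372353535851937790883648493
N_X448 = 2**446 - 0x8335dc163bb124b65129c96fde933d8d723a70aadc873d6d54a7bb0d

def rho_cost_bits(n: int) -> float:
    """log2 of the expected Pollard-rho cost, ~sqrt(pi*n/4) point additions."""
    return 0.5 * (math.log2(n) + math.log2(math.pi / 4))

print(round(rho_cost_bits(N_X25519)))  # -> 126
print(round(rho_cost_bits(N_X448)))    # -> 223
```

Both are far beyond feasible, which is the point of the paragraph above: 2^223 is "more secure" than 2^126 only on paper.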

From: John Mattsson 
Sent: Friday, June 7, 2024 11:52 AM
To: Hubert Kario 
Cc: D. J. Bernstein ; tls@ietf.org
Subject: [TLS]Re: Curve-popularity data?

>so we should also remove X448, as it's much slower than X25519?

That does not follow from what I said. I think the important metric is 
performance/security. P-256 has basically no benefits compared to X25519 
(except a few more bits of security), but these are quite irrelevant compared 
to the huge advantages in implementation security provided by X25519.
I do think IETF should move to X448 instead of P-384 and P-521 for use cases 
wanting a higher security level than P-256/X25519.

Cheers,
John
From: Hubert Kario <hka...@redhat.com>
Date: Friday, 7 June 2024 at 17:41
To: John Mattsson <john.matts...@ericsson.com>
Cc: D. J. Bernstein <d...@cr.yp.to>, tls@ietf.org
Subject: Re: [TLS] Curve-popularity data?
On Friday, 7 June 2024 17:29:35 CEST, John Mattsson wrote:
> Hubert Kario wrote:
>>Such small differences in performance should absolutely have no effect on
>>IETF selecting an algorithm or not.
>
> I completely disagree. As long as people argue that we need
> symmetric rekeying and reuse of key share because the ephemeral
> key exchange algorithms are slow, I think the performance
> differences illustrated in this thread are a very strong
> argument why IETF should choose one algorithm over another.
> Symmetric rekeying and reuse of key shares lowers
> confidentiality and privacy in the face of pervasive
> surveillance. Also, I would not call the differences small at
> all.

so we should also remove X448, as it's much slower than X25519?

please, be reasonable

> Cheers,
> John Preuß Mattsson
>
> https://www.rfc-editor.org/rfc/rfc7258
> https://datatracker.ietf.org/doc/html/rfc7624
>
> From: Hubert Kario <hka...@redhat.com>
> Date: Friday, 7 June 2024 at 16:36
> To: D. J. Bernstein <d...@cr.yp.to>
> Cc: tls@ietf.org
> Subject: [TLS]Re: Curve-popularity data?
>
> On Friday, 7 June 2024 15:54:17 CEST, D. J. Bernstein wrote:
>> Hubert Kario writes:
>>> Fedora 39, openssl-3.1.1-4.fc39.x86_64, i7-10850H
>>> x25519 derive shared secret: 35062.2 op/s
>>> P-256 derive shared secret: 22741.1 op/s
>>
>> The Intel Core i7-10850H microarchitecture is Comet Lake. To see numbers
>> from current code, I tried the script below on a Comet Lake (Intel Core
>> i3-10110U with Turbo Boost disabled, so 2.1GHz where your i7-10850H can
>> boost as high as 5.1GHz; Debian 11; gcc 10.2.1). The script uses shiny
>> new OpenSSL 3.2.2 (released on Tuesday), and a beta OpenSSL "provider"
>> for lib25519. The script prints results without and with the provider.
>> The results with the provider are a better predictor of the user's
>> ultimate costs than obsolete code; consider, e.g., AWS announcing in
>>
>>
>> https://iacr.org/submit/files/slides/2024/rwc/rwc2024/38/slides.pdf

[TLS]Re: [EXTERNAL] Re: Curve-popularity data?

2024-06-05 Thread Scott Fluhrer (sfluhrer)
On the other hand, I was at a recent conference, and asked an NSA 
representative (William Layton, Cryptographic Solutions Technical Director) 
about “hybrid crypto”, and he responded that the NSA was “ambivalent” (his 
word); that they’d prefer straight ML-KEM/ML-DSA, but if the industry insisted 
on hybrid crypto, they’d go along.

Hence, I don’t believe that CNSA 2.0 is taking as hard a stand as the 
summary states…

From: John Mattsson 
Sent: Wednesday, June 5, 2024 11:52 AM
To: Watson Ladd ; Andrei Popov
Cc: Blumenthal, Uri - 0553 - MITLL ; Scott Fluhrer (sfluhrer) ; tls@ietf.org
Subject: Re: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?

>I really do not understand this argument, given that the DoD has explicitly 
>said they aren't doing that.

I think it is a bit unclear what CNSA 2.0 says:
- They say that CNSA 2.0 compliant browsers need to support ML-KEM-1024 by 2025.
- The also say that “NSA does not approve using pre-standardized or 
non-FIPS-validated CNSA 2.0 algorithms (even in hybrid modes)”.

I don’t see how you can fulfill both of these, given the estimates for when 
FIPS-validated implementations will be ready. My five cents would be that 
P-384+ML-KEM-1024 is CNSA 1.0 compliant until ML-KEM gets FIPS validated, and 
then it becomes CNSA 2.0 compliant.

https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF
https://media.defense.gov/2022/Sep/07/2003071836/-1/-1/0/CSI_CNSA_2.0_FAQ_.PDF
Cheers,
John

From: Watson Ladd <watsonbl...@gmail.com>
Date: Wednesday, 5 June 2024 at 17:39
To: Andrei Popov <andrei.po...@microsoft.com>
Cc: Blumenthal, Uri - 0553 - MITLL <u...@ll.mit.edu>, Scott Fluhrer (sfluhrer) 
<sfluh...@cisco.com>, John Mattsson <john.matts...@ericsson.com>, tls@ietf.org
Subject: Re: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?
On Wed, Jun 5, 2024 at 8:38 AM Andrei Popov 
<Andrei.Popov=40microsoft@dmarc.ietf.org> wrote:
>
> This is my understanding too, and I believe a lot of deployments limited to 
> P384 will want to use a P384-based hybrid, at least “in transition”. The 
> duration of this transition could be years…

I really do not understand this argument, given that the DoD has
explicitly said they aren't doing that.

>
>
>
> Cheers,
>
>
>
> Andrei
>
>
>
> From: Blumenthal, Uri - 0553 - MITLL <u...@ll.mit.edu>
> Sent: Wednesday, June 5, 2024 7:59 AM
> To: Scott Fluhrer (sfluhrer) <sfluhrer=40cisco@dmarc.ietf.org>; John 
> Mattsson <john.mattsson=40ericsson@dmarc.ietf.org>; tls@ietf.org
> Subject: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?
>
>
>
> CNSA 1.0 requires P-384 or RSA-3072, and does not allow P-256.
>
>
>
> CNSA 2.0 requires ML-KEM, and does not approve any of the ECC curves. But 
> there’s a “transition period”, during which P-384 could presumably be used.
>
> --
>
> V/R,
>
> Uri
>
>
>
>
>
> From: Scott Fluhrer (sfluhrer) <sfluhrer=40cisco@dmarc.ietf.org>
> Date: Wednesday, June 5, 2024 at 09:54
> To: John Mattsson <john.mattsson=40ericsson@dmarc.ietf.org>, tls@ietf.org
> Subject: [EXT] [TLS]Re: [EXTERNAL] Re: Curve-popularity data?
>
> If we’re talking about CNSA, well CNSA 2.0 insists on ML-KEM-1024 (and would 
> prefer that alone) – I had been assuming that could be better handled by the 
> ML-KEM-only draft…
>
>
>
> From: John Mattsson <john.mattsson=40ericsson@dmarc.ietf.org>
> Sent: Wednesday, June 5, 2024 1:56 AM
> To: tls@ietf.org
> Subject: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?
>
>
>
> Andrei Popov wrote:
>
> >CNSA requires P384, so we’ll also need a hybrid that includes this EC.
>
>
>
> Yes, I am not sure about the statement that P-256 is required. The 
> requirement for FIPS in the next few years should be one of the NIST 
> P-curves. I think P-384 is the most required of the NIST P-curves.
>
>
>
> Scott Fluhrer wrote:
> >I believe that it is unreasonable to expect that a single combination would 
> >satisfy everyone’s needs.
>
> Yes, that is completely unreasonable.

[TLS]Re: [EXTERNAL] Re: Curve-popularity data?

2024-06-05 Thread Scott Fluhrer (sfluhrer)
If we’re talking about CNSA, well CNSA 2.0 insists on ML-KEM-1024 (and would 
prefer that alone) – I had been assuming that could be better handled by the 
ML-KEM-only draft…

From: John Mattsson 
Sent: Wednesday, June 5, 2024 1:56 AM
To: tls@ietf.org
Subject: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?

Andrei Popov wrote:
>CNSA requires P384, so we’ll also need a hybrid that includes this EC.

Yes, I am not sure about the statement that P-256 is required. The requirement 
for FIPS in the next few years should be one of the NIST P-curves. I think 
P-384 is the most required of the NIST P-curves.

Scott Fluhrer wrote:
>I believe that it is unreasonable to expect that a single combination would 
>satisfy everyone’s needs.
Yes, that is completely unreasonable. TLS is MUCH larger than the Web. There 
will clearly be registrations for combinations of most current curves 
(P-curves, X-curves, Brainpool, SM, GOST) with most PQC KEMs (ML-KEM, BIKE/HQC, 
Classic McEliece, FrodoKEM, perhaps future isogeny-based schemes - isogenies 
were the hottest topic at Eurocrypt this year). European countries say that 
hybrids will be a must for a long time.

Cheers,
John

From: Andrei Popov <Andrei.Popov=40microsoft@dmarc.ietf.org>
Date: Wednesday, 5 June 2024 at 07:24
To: Eric Rescorla <e...@rtfm.com>, Stephen Farrell <stephen.farr...@cs.tcd.ie>
Cc: tls@ietf.org
Subject: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?
CNSA requires P384, so we’ll also need a hybrid that includes this EC.

Cheers,

Andrei

From: Eric Rescorla <e...@rtfm.com>
Sent: Monday, June 3, 2024 12:53 PM
To: Stephen Farrell <stephen.farr...@cs.tcd.ie>
Cc: Loganaden Velvindron <logana...@gmail.com>; Andrei Popov 
<andrei.po...@microsoft.com>; Salz, Rich <rs...@akamai.com>; tls@ietf.org
Subject: Re: [TLS]Re: [EXTERNAL] Re: Curve-popularity data?




On Mon, Jun 3, 2024 at 11:55 AM Stephen Farrell 
<stephen.farr...@cs.tcd.ie> wrote:

I'm afraid I have no measurements to offer, but...

On 03/06/2024 19:05, Eric Rescorla wrote:
> The question is rather what the minimum set of algorithms we need is. My
>   point is that that has to include P-256. It may well be the case that
> it needs to also include X25519.

Yep, the entirely obvious answer here is we'll end up defining at least
x25519+PQ and p256+PQ. Arguing for one but not the other (in the TLS
WG) seems pretty pointless to me. (That said, the measurements offered
are as always interesting, so the discussion is less pointless than
the argument:-)

Yes, this seems correct to me.

-Ekr




Cheers,
S.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: Curve-popularity data?

2024-06-04 Thread Scott Fluhrer (sfluhrer)
I would disagree; it does have implications on the TLS protocol.

This working group does make the call as to which combinations it would like to 
specify in an RFC and generate TLS code points for; be it:


  *   P256 + ML-KEM-768
  *   X25519 + ML-KEM-768
  *   Some other combination

And, as it would be reasonable to try to minimize the change to existing 
implementations, it would appear to be reasonable to enquire about the support 
for P256 vs X25519 (in addition to how well they would satisfy other 
requirements, such as compliance and user trust).

As for my two cents (US):

  *   I don’t personally see how relevant it is how often P256 vs X25519 are 
used – if they are both supported by an implementation, then it is plausible to 
assume (lacking further information about the implementation) that an update to 
that implementation would be equally easy in both cases.
  *   Having P256 + ML-KEM-768 on the list would make my employer happier, for 
FIPS compliance reasons.
  *   I believe that it is unreasonable to expect that a single combination 
would satisfy everyone’s needs, hence it would certainly be reasonable to 
allocate multiple code points for different combinations.


From: Richard Barnes 
Sent: Tuesday, June 4, 2024 2:57 PM
To: Salz, Rich 
Cc: Dennis Jackson ; tls@ietf.org
Subject: [TLS]Re: Curve-popularity data?

This WG does not get to decide which hybrids will exist or be standardized, 
unless it has implications on the TLS protocol, which it does not.

--RLB

On Tue, Jun 4, 2024 at 2:51 PM Salz, Rich <rs...@akamai.com> wrote:
I urge the chairs to call cloture on this thread.  There is nothing relevant 
for the working group here.

I think that is premature.  Yes, there is a lot of noise, but it was only one 
or two days ago that reasons for hybrids with both P256 and X25519 were given.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS] A suggestion for handling large key shares

2024-03-18 Thread Scott Fluhrer (sfluhrer)
Recently, Matt Campagna emailed the hybrid KEM group (Douglas, Shay and me) 
about a suggestion about one way to potentially improve the performance (in the 
'the server hasn't upgraded yet' case), and asked if we should add that 
suggestion to our draft.  It occurs to me that this suggestion is equally 
applicable to the pure ML-KEM draft (and future PQ drafts as well); hence 
putting it in our draft might not be the right spot.

Here's the core idea (Matt's original scenario was more complicated):


  *   Suppose we have a client that supports both P-256 and P256+ML-KEM.  What 
the client does is send a key share for P-256, and also indicate support for 
P256+ML-KEM.  Because we're including only the P256 key share, the client hello 
is short
  *   If the server supports only P256, it accepts it, and life goes on as 
normal.
  *   If the server supports P256+ML-KEM, what Matt suggested is that, instead 
of accepting P256, it sends a HelloRetryRequest selecting P256+ML-KEM.  We then 
continue as expected and end up negotiating things in 2 round trips.

Hence, the non-upgraded scenario has no performance hit; the upgraded scenario 
does (because of the second round trip), but we're transmitting more data 
anyways (and the client could, if it communicates with the server again, lead 
off with the proposal that was accepted last time).

Matt's suggestion was that this should be a SHOULD in our draft.

My questions to you: a) do you agree with this suggestion, and b) if so, where 
should this SHOULD live?  Should it be in our draft?  The ML-KEM draft as well 
(assuming there is one, and it's not just a codepoint assignment)?  Another RFC 
about how to handle large key shares in general (sounds like overkill to me, 
unless we have other things to put in that RFC)?

Thank you.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-11 Thread Scott Fluhrer (sfluhrer)
I can’t say I agree with this argument.

If we have a combiner with a proof that “if either of the primitives we 
combine meets security property A, then the output of the combiner meets 
security property B”, and we have proofs that both our primitives meet 
security property A, then doesn’t that mean that our system has a proof that 
it meets security property B?  Wouldn’t that proof still apply even if one of 
our primitives fails due to some cryptanalytic attack?

Wouldn’t that also mean that if we have several primitives that all have proofs 
of security property A, we could mix-and-match as convenient, and that we don’t 
need to generate N^2 proofs to handle each of the combinations?  [Note: I’m not 
arguing that we should have this level of flexibility; only that we could]

As an analogy, consider the current TLS 1.3 situation.  There are multiple key 
agreements allowable (DH and ECDH), multiple ways to do authentication (PSK and 
certificate), multiple signature types (RSA and ECDSA), multiple data 
protection algorithms (GCM, ChaCha20).  For some reason, they don’t feel the 
need to prove each specific combination separately, but instead show that the 
various primitives meet some security assumptions, and go from there…


From: CFRG  On Behalf Of Orie Steele
Sent: Thursday, January 11, 2024 2:58 PM
To: Sophie Schmieg 
Cc: Salz, Rich ; Karolin Varner 
; Peter C ; IRTF CFRG 
; Bas Westerbaan ; 
 ; Kampanakis, Panos 

Subject: Re: [CFRG] [TLS] X-Wing: the go-to PQ/T hybrid KEM?

Hybrids by their very nature are the explosion.

If there will only ever be X-Wing, I think it's fine to not make it generic 
(since we admit that it is a special case, not an instance of a generic).

However, if B-Wing (brainpool + kyber) and P-Wing (p curve + kyber) also end up 
getting made, we never stopped the explosion, and we made it harder to evaluate 
the security properties, and we delayed the rollout against harvest and 
decrypt... for the cases where X-Wing could not fit.

Yes, we will need proofs for all those other hybrids, sounds like that will 
keep people busy for a while... It feels like promising false hope to say that 
making X-Wing not generic will stop all that other work from happening... If 
anything, making X-Wing generic will reduce the cost of doing the work, that 
seems inevitable at this point.

I do think it's important for this to not end up as "crabs in a bucket", where 
each candidate holds the others back, and then they all get cooked together.

If arguing over generic's causes that, I suggest we not make generics a 
requirement.

OS



On Thu, Jan 11, 2024 at 1:35 PM Sophie Schmieg 
<40google@dmarc.ietf.org> wrote:
I very much appreciate having a concrete hybrid scheme that is intentionally 
not generic. This avoids the explosion of ciphertext suites that would 
otherwise occur, and allows for better compatibility of libraries. Fixing the 
key sizes to ML-KEM 768 and X25519 is aligned with our preferred choices as 
well, and further increases interoperability.

On Thu, Jan 11, 2024 at 9:31 AM Salz, Rich 
<40akamai@dmarc.ietf.org> wrote:
I'm going to echo Bas to highlight that X-Wing is not generic to any IND-CCA 
KEM, it is a particular primitive construction based on the internal 
construction of ML-KEM in particular:

I don’t think it’s our place to try to shoe-horn everything into one construct. 
 Particularly when we are in the experimentation phase of things.

If people want to have ML-KEM as a material in their composites, it sounds like 
they might want to learn from X-Wing, but not try to chop them to fit into that 
one keyhole, as it were.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


--

Sophie Schmieg | Information Security Engineer | ISE Crypto | 
sschm...@google.com

___
CFRG mailing list
c...@irtf.org
https://mailman.irtf.org/mailman/listinfo/cfrg


--



ORIE STEELE
Chief Technology Officer
www.transmute.industries

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Key Update for TLS/DTLS 1.3

2024-01-05 Thread Scott Fluhrer (sfluhrer)
Here's an issue I see with postquantum exchanges; Kyber (and most other 
postquantum key exchanges) would have an issue with the current format.  There 
are distinct 'initiate key shares' and 'response key shares', and they're not 
interchangeable; a 'response key share' must be generated for a specific 
'initiate key share'.

Now, it would be possible to extend ExtendedKeyUpdate to include a flag stating 
whether this is a request for new keys, or a response (distinguishing the two 
cases); that would be a relatively small change.

However, what happens if both sides happen to issue ExtendedKeyUpdate at the 
same time?

With DH, it's not an issue; they could ignore the flag, and they would both 
accept the other's ExtendedKeyUpdate as a response to their own, and both 
update the keys in the same way.

However, with Kyber, if we issue an 'initiate key share' and get an 'initiate 
key share' in response, we can't generate keys.

One possibility would be if there is a tie-breaker between the two sides (such 
as 'who had the lexically larger key share)'; the loser of that tie-breaker 
would discard his original ExtendedKeyUpdate, and reissue another one (which is 
a response to the other side's)?
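The proposed tie-breaker is simple enough to sketch; the comparison rule (lexicographic order of the raw key-share bytes) is the one suggested above, and everything else here is illustrative:

```python
# Illustrative tie-breaker for simultaneous ExtendedKeyUpdate initiations:
# the side with the lexically smaller key share discards its own update
# and responds to the peer's instead.  Not a real TLS implementation.

def resolve_collision(my_share: bytes, peer_share: bytes) -> str:
    if my_share > peer_share:      # lexicographic comparison of raw bytes
        return "win: keep own initiate; peer responds to it"
    if my_share < peer_share:
        return "lose: discard own initiate; respond to the peer's"
    return "equal shares: protocol error"

print(resolve_collision(bytes.fromhex("02ab"), bytes.fromhex("01ff")))
```

Because both sides evaluate the same deterministic comparison, exactly one of them discards its initiate, which resolves the Kyber-style asymmetry between initiate and response key shares.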

I believe this idea would extend to DTLS as well as TLS.

A bit kludgy (and definitely adding complexity); however I also believe that it 
would be short-sighted to ignore postquantum crypto at this point.

From: TLS  On Behalf Of Tschofenig, Hannes
Sent: Thursday, January 4, 2024 6:42 AM
To: TLS List 
Subject: [TLS] Key Update for TLS/DTLS 1.3

Hi all,

we have just submitted a draft that extends the key update functionality of 
TLS/DTLS 1.3.
We call it the "extended key update" because it performs an ephemeral 
Diffie-Hellman as part of the key update.

The need for this functionality surfaced in discussions in a design team of the 
TSVWG. The need for it has, however, already been discussed years ago on the 
TLS mailing list in the context of long-lived TLS connections in industrial IoT 
environments.
Unlike the TLS 1.3 Key Update message, which is a one-shot message, the 
extended Key Update message requires a full roundtrip.

Here is the link to the draft:
https://datatracker.ietf.org/doc/draft-tschofenig-tls-extended-key-update/

I am curious what you think.

Ciao
Hannes

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

2023-11-09 Thread Scott Fluhrer (sfluhrer)
We had that argument several IETF's ago (IETF 105?), and the clear consensus of 
the working group was that explicit named hybrid combinations (e.g. one for 
ML-KEM-512 + X25519) was the way to go.

Do we want to reopen that argument?  Now, I was on the other side (and I still 
think it would be a better engineering decision, given the right negotiation 
mechanism), but if it delays actual deployment, I would prefer if we didn't.

From: TLS  On Behalf Of John Mattsson
Sent: Thursday, November 9, 2023 3:48 AM
To: Sophie Schmieg ; tls@ietf.org
Subject: Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

Hi,

Everybody seem to agree that hybrids should be specified. Looking in my crystal 
ball, I predict that registering hybrids as code points will be a big mess with 
way too many opinions and registrations similar to the TLS 1.2 cipher suites. 
The more I think about it, the more I think TLS 1.3 should standardize a 
generic solution for combining two or more key shares.

My understanding of what would be needed:

- New "split_key_PRF" extension indicating that client supports split-key PRF.

- When "split_key_PRF" is negotiated the server may chose more than one 
group/key share.

  struct {
      NamedGroup selected_groups<0..2^16-1>;
  } KeyShareHelloRetryRequest;

  struct {
      KeyShareEntry server_shares<0..2^16-1>;
  } KeyShareServerHello;

- When "split_key_PRF" is negotiated HKDF-Expand(Secret, HkdfLabel, Length) is 
replaced by a split-key PRF(Secret1, Secret2, ... , HkdfLabel, Length)

I think the current structure that the TLS server makes the decisions on 
"groups" and "key shares" should be kept.

There was a short discussion earlier on the list
https://mailarchive.ietf.org/arch/msg/tls/Z-s8A0gZsRudZ9hW4VoCsNI9YUU/


Sophie Schmieg <sschm...@google.com> wrote:
"Our stated intention is to move to Kyber once NIST releases the standard"
"I do not think it makes a lot of sense to have multiple schemes based on 
structured lattices in TLS, and Kyber is in my opinion the superior algorithm."

I agree with that.

Cheers,
John Preuß Mattsson



From: TLS <tls-boun...@ietf.org> on behalf of Sophie Schmieg 
<sschmieg=40google@dmarc.ietf.org>
Date: Thursday, 9 November 2023 at 08:40
To: tls@ietf.org
Subject: Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?
> > On 8 Nov 2023, at 8:34, Loganaden Velvindron <logana...@gmail.com> wrote:
> >
> > I support moving forward with hybrids as a proactively safe deployment
> > option. I think that supporting
> > only Kyber for KEX  is not enough. It would make sense to have more options.
> >
> > Google uses NTRU HRSS internally:
> > https://cloud.google.com/blog/products/identity-security/why-google-now-uses-post-quantum-cryptography-for-internal-comms
> >
> > If Google decides to use this externally, how easy would it be to get
> > a codepoint for TLS ?
> As easy as writing it up in a stable document (may or may not be an 
> Internet-draft) and asking IANA for a code point assignment.
>
> It can be done in days, if needed.
>
>  Yoav

Just to clarify a few things about our internal usage of NTRU-HRSS: This is for 
historic reasons.

Our stated intention is to move to Kyber once NIST releases the standard, see 
e.g. my talk at PQCrypto [1], where I go into some detail on this topic.
Long story short, we had to choose a candidate well before even NIST's round 3 
announcement, and haven't changed since; changing a ciphersuite, while 
relatively straightforward, is not free, so we would like to avoid doing it 
twice in a year.
The only security consideration that went into the decision for NTRU was that 
we wanted to use a structured lattice scheme, with NTRU being chosen for 
non-security related criteria that have since materially changed.
I do not think it makes a lot of sense to have multiple schemes based on 
structured lattices in TLS, and Kyber is in my opinion the superior algorithm.

[1] https://www.youtube.com/watch?v=8PYYM3G7_GY


--
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

2023-11-07 Thread Scott Fluhrer (sfluhrer)
Is it sizable?  I have talked to enough people who feel the need to say “yes”.

The other thing to consider is the cost.  If it is essentially free, I believe 
we can make a reasonable case to add it, even if the benefit is only moderate.  
If it is costly, then we really need to consider if it is worth it.

As for the costs, here is what I can see:


  *   Additional computation: ECDH is fairly efficient, and so the cost there 
is reasonable
  *   Additional bandwidth: ECDH is ridiculously small compared to Kyber (which 
is what we’d be using anyways), and so that comes close to being ignorable
  *   Additional complexity: I’m assuming that, in the intermediate term, most 
implementations will need to implement ECDH (for backwards compatibility) in 
addition to Kyber, and so the requirement to implement that is not actually an 
addition.  The other complexity is the need to compute the Kyber and ECDH 
shared secret; there are a number of proposed options (ranging from “just hash 
the two together” to fancier constructions designed to address various subtle 
issues), however even in the most complex proposal, it’s still not that bad.
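The simplest of those proposed combiners ("just hash the two together") can be sketched in a few lines. Deployed combiners, such as the one in the TLS hybrid key exchange draft, concatenate the secrets inside the TLS key schedule and bind transcript context, so treat this only as the bare idea:

```python
import hashlib

# Minimal "just hash the two together" combiner for an ECDH shared secret
# and a KEM shared secret.  The label argument is my own illustrative
# domain separator, not part of any specification.

def combine(ecdh_ss: bytes, kem_ss: bytes, label: bytes = b"hybrid") -> bytes:
    return hashlib.sha256(label + ecdh_ss + kem_ss).digest()

shared = combine(b"\x11" * 32, b"\x22" * 32)
print(shared.hex())
```

Even this toy version shows the property being discussed: the output is unpredictable as long as either input secret is.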

Is there a cost I’m missing (or did I mischaracterize one of them)?

If there is some demand and the cost is reasonable (albeit not “essentially 
free”), I don’t see a reason not to include it as an option.

From: Yoav Nir 
Sent: Tuesday, November 7, 2023 11:36 AM
To: Scott Fluhrer (sfluhrer) 
Cc: Watson Ladd ; Kris Kwiatkowski 
; Bas Westerbaan ; TLS List 

Subject: Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

For signatures or keys in something like a certificate, I understand how you 
would want to have both the PQ and classical keys/sigs in the same structure, 
so satisfy those who want the classical algorithm and those who prefer the 
post-quantum.

For key exchange? For the most part a negotiation is good enough, no?  To 
justify a hybrid key exchange you need people who are both worried about 
quantum computers and worried about cryptanalysis or the new algorithms, but 
are willing to bet that those things won’t happen at the same time. Or at 
least, within the time where the generated key still matters.

I’m sure it’s not an empty set of people, but is it sizable?



On 7 Nov 2023, at 10:29, Scott Fluhrer (sfluhrer) 
<sfluhrer=40cisco@dmarc.ietf.org> wrote:

The problem with the argument “X trusts Kyber, so we don’t need hybrid” (where 
X can be “NIST” or “the speaker”) is that trust, like beauty, is in the eye of 
the beholder.  Just because NIST (or any other third party) is comfortable with 
just using Kyber (or Dilithium) does not mean that everyone does.

As long as there are a number of users that don’t quite trust fairly new 
algorithms, there will be a valid demand for using those new algorithms with 
older ones (which aren’t postquantum, but we are moderately confident that are 
resistant to conventional cryptanalysis).

From: TLS <tls-boun...@ietf.org> On Behalf Of Watson Ladd
Sent: Monday, November 6, 2023 2:44 PM
To: Kris Kwiatkowski <k...@amongbytes.com>
Cc: Bas Westerbaan <bas=40cloudflare@dmarc.ietf.org>; TLS List <TLS@ietf.org>
Subject: Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

Why do we need FIPS hybrids? The argument for hybrids is that we don't trust 
the code/algorithms that's new. FIPS certification supposedly removes that 
concern so can just use the approved PQ implementation.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls



Re: [TLS] whitepaper from ambit inc

2023-08-16 Thread Scott Fluhrer (sfluhrer)
Why would TLS require triple AES?

If you’re worried that Grover’s attack reduces the strength of AES-256 to 128 
bits, well, yes it does – unless we are extremely impatient.  If the attacker 
insists that the attack succeeds before, say, the Sun turns into a red giant, 
running Grover’s on a single Quantum Computer doesn’t work – and running it in 
parallel enough to reduce it to something practical drastically reduces the 
savings that Grover’s gives us.
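A rough back-of-envelope sketch of this point (assumptions: one quantum operation per nanosecond, serial Grover against AES-256 costing about 2^128 iterations, and the standard result that distributing Grover over p machines only buys a factor of sqrt(p)):

```python
import math

NS_PER_YEAR = 365.25 * 24 * 3600 * 1e9  # nanoseconds in a year

# Serial Grover against AES-256: ~2^128 iterations at 1 op/ns.
serial_years = 2**128 / NS_PER_YEAR

# To finish within 10 years, each of p machines searches a 2^256/p
# subspace in sqrt(2^256 / p) iterations, so we need
# sqrt(2^256 / p) <= budget  =>  p >= 2^256 / budget^2.
budget_iters = 10 * NS_PER_YEAR
machines = 2**256 / budget_iters**2

# roughly 2^73 years serially, or roughly 2^140 machines in parallel
print(f"serial Grover: ~2^{math.log2(serial_years):.0f} years")
print(f"machines for a 10-year attack: ~2^{math.log2(machines):.0f}")
```

Either way the attack stays far outside any plausible budget, which is the reason the "Grover halves AES-256" framing does not translate into a practical threat.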

And, in any case, we shouldn’t be obsessed with making sure that all the 
primitives we use have precisely the same security strength – it is quite 
sufficient if they are all ‘secure’, and AES-256 certainly meets that criterion 
for any plausible attacker (hence, for any practical meaning of ‘secure’).

From: TLS  On Behalf Of 
bingma2022=40skiff@dmarc.ietf.org
Sent: Sunday, July 23, 2023 4:46 AM
To: tls@ietf.org
Subject: [TLS] whitepaper from ambit inc


https://www.ambit.inc/pdf/KyberDrive.pdf It says "Kyber-1024 is known to have 
254 bits of classical security and 230 bits of quantum security (core-SVP 
hardness)." So a future version of TLS may require triple 256-bit AES: because 
of the meet-in-the-middle attack, it requires three different 256-bit AES keys. 
Furthermore, consider whether to use post-quantum RSA (even if NIST said it 
does NOT guarantee quantum resistance) for hybrid TLS, because pqRSA provides 
a much higher security level against classical computers. 
https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/round-1/submissions/PostQuantum_RSA_Enc.zip
The document says "pqRSA provides much higher pre-quantum security levels than 
most post-quantum proposals." In conclusion, Kyber1024 is more secure than AES 
against quantum computers, but triple 256-bit AES is more secure than Kyber1024 
against classical computers, so hybrid TLS handshakes may need post-quantum RSA 
(even though it is NOT post-quantum). The NSA still has NOT approved ChaCha20 
for its cipher suites.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] CRYSTALS Kyber and TLS

2023-06-19 Thread Scott Fluhrer (sfluhrer)
I do not believe that Müller is correct - we do not intend to use the Kyber CPA 
public key encryption interface, but instead the Kyber CCA KEM interface.  And, 
with that interface, the client does contribute to the shared secret:

The shared secret that Kyber KEM (round 3) generates on success is:

KDF( G( m || H(pk)) || H(c) )

where:
- m is the hash of a value that the server selects
- pk is the public key selected by the client
- c is the server's keyshare
- H is SHA3-256, G is SHA3-512, and KDF is SHAKE-256
Note that this formula includes a value (pk) that is selected solely by the 
client; hence we cannot say that this value contains only values selected by 
the server.
(reference: algorithms 8, 9 of the round 3 Kyber submission)

[Minor note: I believe that for the final FIPS version of Kyber, H(pk) will be 
replaced by pk - this does not change this argument]
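Schematically, the round-3 derivation can be sketched with Python's standard hashes. This is a simplification: real Kyber splits G's 64-byte output into K-bar and encryption randomness and feeds only K-bar into the KDF, and the `m`, `pk`, `c` values below are placeholder byte strings rather than real Kyber objects.

```python
import hashlib

def kyber_r3_ss(m: bytes, pk: bytes, c: bytes) -> bytes:
    # H = SHA3-256, G = SHA3-512, KDF = SHAKE-256 (round-3 Kyber);
    # G's first 32 bytes are K-bar, the rest are encryption coins
    k_bar = hashlib.sha3_512(m + hashlib.sha3_256(pk).digest()).digest()[:32]
    return hashlib.shake_256(k_bar + hashlib.sha3_256(c).digest()).digest(32)

# pk (chosen by the client) changes the output, so the secret is
# not determined by server-chosen values alone
ss1 = kyber_r3_ss(b"m" * 32, b"client-pk-A", b"ct")
ss2 = kyber_r3_ss(b"m" * 32, b"client-pk-B", b"ct")
assert ss1 != ss2
```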


Now, our proposal is not specific to Kyber; it could be used with other key 
exchange mechanisms that do not share this property.  That property could be 
introduced by using a stronger shared secret combiner (which, for example, may 
include key shares from both sides into the key derivation function) - that was 
suggested in Yokohama and (I believe) is still under consideration by the 
working group.


Müller also goes on to suggest a two round key exchange - I do not believe that 
introducing such a change to the existing TLS protocol would be warranted.

> -Original Message-
> From: Stephan Müller 
> Sent: Monday, June 19, 2023 4:24 AM
> To: TLS List ; dsteb...@uwaterloo.ca; Scott Fluhrer (sfluhrer)
> ; shay.gue...@gmail.com
> Subject: CRYSTALS Kyber and TLS
> 
> Hi,
> 
> Post-quantum computing cryptographic algorithms are designed and
> available for use. Considering that the Kyber algorithm is going to be
> mandated by US authorities in the future as a complete replacement for
> asymmetric key exchange and agreement, a proposal integrating Kyber into
> TLS is specified with [1].
> 
> This proposal, however, has one central shortcoming: only the TLS server
> contributes to the security strength of the shared secret generated by Kyber.
> This shortcoming can be solved with a slightly improved approach where the
> client and the server both independent of each other contribute to the
> security of the communication channel where the channel even retains its
> security when one side has insufficient entropy.
> 
> The entire analysis and the suggested proposal to address the outlined issue
> is provided with [2]. I would like to share this proposal to contribute to the
> discussion how Kyber can be applied to TLS.
> 
> [1] https://www.ietf.org/archive/id/draft-ietf-tls-hybrid-design-06.txt
> 
> [2] http://www.chronox.de/papers/TLS_and_Kyber_analysis.pdf
> 
> Ciao
> Stephan
> 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Consensus call on codepoint strategy for draft-ietf-tls-hybrid-design

2023-05-11 Thread Scott Fluhrer (sfluhrer)
My opinion: since NIST has announced that “Kyber768 Round 3 != the final NIST-
approved version”, we should keep codepoint 0x6399 with its current meaning, 
and allocate a fresh one when NIST does publish the Kyber FIPS draft (which is 
likely, but not certain, to be the final FIPS-approved version…)

From: TLS  On Behalf Of Kampanakis, Panos
Sent: Thursday, May 11, 2023 10:16 AM
To: Bas Westerbaan ; Christopher Wood 

Cc: tls@ietf.org
Subject: Re: [TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design

Great!

So to clarify, when Kyber gets ratified as MLWE_KEM or something like that, 
will we still be using 0x6399 in the keyshare when we are negotiating? Or is  
0x6399 just a temporary codepoint for Kyber768 Round 3 combined with X25519?


From: TLS mailto:tls-boun...@ietf.org>> On Behalf Of Bas 
Westerbaan
Sent: Wednesday, May 10, 2023 3:09 PM
To: Christopher Wood mailto:c...@heapingbits.net>>
Cc: tls@ietf.org
Subject: RE: [EXTERNAL][TLS] Consensus call on codepoint strategy for 
draft-ietf-tls-hybrid-design



FYI IANA has added the following entry to the TLS Supported Groups registry:

Value: 25497
Description: X25519Kyber768Draft00
DTLS-OK: Y
Recommended: N
Reference: [draft-tls-westerbaan-xyber768d00-02]
Comment: Pre-standards version of Kyber768

Please see
https://www.iana.org/assignments/tls-parameters

On Mon, May 1, 2023 at 11:59 AM Christopher Wood 
mailto:c...@heapingbits.net>> wrote:
It looks like we have consensus for this strategy. We’ll work to remove 
codepoints from draft-ietf-tls-hybrid-design and then get experimental 
codepoints allocated based on draft-tls-westerbaan-xyber768d00.

Best,
Chris, for the chairs

> On Mar 28, 2023, at 9:49 PM, Christopher Wood 
> mailto:c...@heapingbits.net>> wrote:
>
> As discussed during yesterday's meeting, we would like to assess consensus 
> for moving draft-ietf-tls-hybrid-design forward with the following strategy 
> for allocating codepoints we can use in deployments.
>
> 1. Remove codepoints from draft-ietf-tls-hybrid-design and advance this 
> document through the process towards publication.
> 2. Write a simple -00 draft that specifies the target variant of 
> X25519+Kyber768 with a codepoint from the standard ranges. (Bas helpfully did 
> this for us already [1].) Once this is complete, request a codepoint from 
> IANA using the standard procedure.
>
> The intent of this proposal is to get us a codepoint that we can deploy today 
> without putting a "draft codepoint" in an eventual RFC.
>
> Please let us know if you support this proposal by April 18, 2023. Assuming 
> there is rough consensus, we will move forward with this proposal.
>
> Best,
> Chris, Joe, and Sean
>
> [1] https://datatracker.ietf.org/doc/html/draft-tls-westerbaan-xyber768d00-00

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New Version Notification for draft-mattsson-tls-compact-ecc-00.txt

2023-01-17 Thread Scott Fluhrer (sfluhrer)
It looks good; just one comment:

The current draft says (section 3.2)
A full validation according to Section 5.6.2.3.3 of
   [SP-800-56A] can be achieved by also checking that 0 ≤ x < p and that
   y^2 ≡ x^3 + a ⋅ x + b (mod p)
(emphasis added).

I believe such a validation check should be mandatory.  For the curves listed 
in the draft, the corresponding twists are not of prime order; hence someone 
injecting an invalid curve point has some advantage at recovering the peer’s 
secret value (and hence if an implementation reuses ECDHE private values, that 
gives them some advantage at recovering the keys for other sessions).

Yes, this is not a major point (it relies on the device under attack reusing 
private values, which it is not supposed to do); on the other hand, the 
expense is fairly minimal (just computing y^2, since you've already computed 
x^3 + a⋅x + b, plus the additional complexity needed to perform the check), 
hence I believe it is warranted.
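For illustration, the full check is only a few lines (a sketch using the published P-256 parameters; a production implementation would perform it on the decoded point with side-channel care):

```python
# NIST P-256 domain parameters (SEC 2 / FIPS 186-4)
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def validate_point(x: int, y: int) -> bool:
    # SP 800-56A 5.6.2.3.3: range checks, then the curve equation
    # y^2 == x^3 + a*x + b (mod p)
    if not (0 <= x < p and 0 <= y < p):
        return False
    return (y * y - (x * x * x + a * x + b)) % p == 0

# the P-256 base point passes; a tampered y-coordinate fails
gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
assert validate_point(gx, gy) and not validate_point(gx, gy + 1)
```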

Alternatively, you can mandate a safe curve (e.g. X25519), which is immune to 
this sort of attack.

From: TLS  On Behalf Of John Mattsson
Sent: Monday, January 16, 2023 4:45 PM
To: TLS@ietf.org
Subject: [TLS] FW: New Version Notification for 
draft-mattsson-tls-compact-ecc-00.txt

Hi,

I wrote a draft specifying new optimal fixed-length encodings for ECDHE and 
ECDSA with the NIST P-curves. This seems to be exactly what is needed for cTLS. 
The new encodings are defined as a subset of the old encodings, which hopefully 
makes them easy to implement.

Cheers,
John

From: internet-dra...@ietf.org 
mailto:internet-dra...@ietf.org>>
Date: Monday, 16 January 2023 at 22:38
To: John Mattsson 
mailto:john.matts...@ericsson.com>>, John Mattsson 
mailto:john.matts...@ericsson.com>>
Subject: New Version Notification for draft-mattsson-tls-compact-ecc-00.txt

A new version of I-D, draft-mattsson-tls-compact-ecc-00.txt
has been successfully submitted by John Preuß Mattsson and posted to the
IETF repository.

Name:   draft-mattsson-tls-compact-ecc
Revision:   00
Title:  Compact ECDHE and ECDSA Encodings for TLS 1.3
Document date:  2023-01-16
Group:  Individual Submission
Pages:  9
URL:
https://www.ietf.org/archive/id/draft-mattsson-tls-compact-ecc-00.txt
Status: https://datatracker.ietf.org/doc/draft-mattsson-tls-compact-ecc/
Html:   
https://www.ietf.org/archive/id/draft-mattsson-tls-compact-ecc-00.html
Htmlized:   
https://datatracker.ietf.org/doc/html/draft-mattsson-tls-compact-ecc


Abstract:
   The encodings used in the ECDHE groups secp256r1, secp384r1, and
   secp521r1 and the ECDSA signature algorithms ecdsa_secp256r1_sha256,
   ecdsa_secp384r1_sha384, and ecdsa_secp521r1_sha512 have significant
   overhead and the ECDSA encoding produces variable-length signatures.
   This document defines new optimal fixed-length encodings and
   registers new ECDHE groups and ECDSA signature algorithms using these
   new encodings.  The new encodings reduce the size of the ECDHE groups
   by 33, 49, and 67 bytes and the ECDSA algorithms by an average of
   7 bytes.  These new encodings also work in DTLS 1.3 and are
   especially useful in cTLS.
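The claimed ECDHE savings line up with dropping the SEC1 format byte and the y-coordinate. A quick sanity check, assuming uncompressed points today and fixed-length x-only encodings in the draft:

```python
def ecdhe_saving(field_bytes: int) -> int:
    uncompressed = 1 + 2 * field_bytes  # 0x04 || X || Y
    x_only = field_bytes                # fixed-length x-coordinate
    return uncompressed - x_only

# P-256, P-384, and P-521 use 32-, 48-, and 66-byte field elements
print([ecdhe_saving(n) for n in (32, 48, 66)])  # [33, 49, 67]
```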




The IETF Secretariat
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-18 Thread Scott Fluhrer (sfluhrer)
> -Original Message-
> From: TLS  On Behalf Of Martin Thomson
> Sent: Wednesday, August 17, 2022 7:05 PM
> To: tls@ietf.org
> Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design
> 
> On Sat, Aug 13, 2022, at 04:13, Scott Fluhrer (sfluhrer) wrote:
> > Well, if we were to discuss some suggested hybrids (and we now know
> > the NIST selection), I would suggest these possibilities:
> >
> > - X25519 + Kyber512
> > - P256 + Kyber512
> > - X448 + Kyber768
> > - P384 + Kyber768
> 
> Any specific pairs of primitives should be specified in a different document 
> to
> this one.

Actually, that was our original intention with this draft - to specify the 
framework, and to have other documents specify the actual pairs.  However, I 
believe that the sense of the working group is that they want this draft to 
start with a limited number of options (and people, please correct me if I'm 
wrong).

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-17 Thread Scott Fluhrer (sfluhrer)
So that we get an initial answer to this (so we can put it into the draft - of 
course, we can debate what's in the draft...)

Illari suggested:

X25519+Kyber768
P384+Kyber768

Well, I would suggest adding in

X25519+Kyber512

For those situations where we need to limit the message size (perhaps DTLS and 
QUIC).

Is the working group happy with that?

> -Original Message-
> From: TLS  On Behalf Of Ilari Liusvaara
> Sent: Saturday, August 13, 2022 11:12 AM
> To: TLS@ietf.org
> Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design
> 
> On Fri, Aug 12, 2022 at 06:13:38PM +0000, Scott Fluhrer (sfluhrer) wrote:
> > Again, this is late, however Stephen did ask this to be discussed in the
> working group, so here we go:
> >
> > > -Original Message-
> > > From: TLS  On Behalf Of Stephen Farrell
> > > Sent: Saturday, April 30, 2022 11:49 AM
> > > To: Ilari Liusvaara ; TLS@ietf.org
> > > Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design
> > >
> > >
> > > Hiya,
> > >
> > > On 30/04/2022 10:05, Ilari Liusvaara wrote:
> > > > On Sat, Apr 30, 2022 at 01:24:58AM +0100, Stephen Farrell wrote:
> > > >> - section 5: IMO all combined values here need to have
> > > >> recommended == "N" in IANA registries for a while and that needs
> > > >> to be in this draft before it even gets parked. Regardless of
> > > >> whether or not the WG agree with me on that, I think the current
> > > >> text is missing stuff in this section and don't recall the WG
> > > >> discussing that
> > > >
> > > > I think that having recommended = Y for any combined algorithm
> > > > requires NIST final spec PQ part and recommended = Y for the
> > > > classical part (which allows things like x25519 to be the classical 
> > > > part).
> > > >
> > > > That is, using latest spec for NISTPQC winner is not enough. This
> > > > implies recommended = Y for combined algorithm is some years out
> > > > at the very least.
> > >
> > > I agree, and something like the above points ought be stated in the
> > > draft after discussion in the WG.
> >
> > Section 5 is 'IANA considerations', and would be where we would list
> > the various supported hybrids, which we don’t at the moment.
> >
> > Well, if we were to discuss some suggested hybrids (and we now know
> > the NIST selection), I would suggest these possibilities:
> >
> > - X25519 + Kyber512
> > - P256 + Kyber512
> > - X448 + Kyber768
> > - P384 + Kyber768
> 
> I would take:
> 
> X25519+Kyber768
> P384+Kyber768
> 
> The reason for taking Kyber768 is because the CRYSTALS team recommends
> it. The reason for taking P384 is because it is CNSA-approved, so folks that
> need CNSA can use that.
> 
> Of course, that is likely to bust packet size limits. I do not think that is 
> an issue in TLS, but DTLS and QUIC might be another matter entirely (in 
> theory DTLS and QUIC can handle it just fine; practice might be another 
> matter entirely). And if such problems are there, it is good to know about 
> those... This stuff is experimental.
> 
> 
> > Of course, it's possible that NIST will tweak the definition of Kyber;
> > that's just a possibility we'll need to live with (and wouldn't change
> > what hybrid combinations we would initially define)
> 
> I would think such changes would just mean the interim post-quantum kex is
> not compatible with the final one. Not that big of deal, there are tens of
> thousands of free codepoints. If an implementation needs both, it can
> probably share vast majority of the code.
> 
> 
> 
> -Ilari
> 
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-12 Thread Scott Fluhrer (sfluhrer)
Again, responding to old emails...

> -Original Message-
> From: TLS  On Behalf Of Stephen Farrell
> Sent: Friday, April 29, 2022 8:25 PM
> To: TLS@ietf.org
> Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design
> 
> - section 2: if "classic" DH were broken, and we then depend on a PQ-KEM,
> doesn't that re-introduce all the problems seen with duplicating RSA private
> keys in middleboxes? If not, why not? If so, I don't recall that discussion in
> the WG (and we had many mega-threads on RSA as abused by MITM folks so
> there has to be stuff to be said;-)

Actually, unless the client uses a PQ-KEM private key permanently, no one can 
do a MITM.  It is similar to Diffie-Hellman: the client picks a key share; the 
server picks a response key share; they both derive the same shared secret.

The draft allows (but does not encourage) the reuse of KEM private values (and 
while it must limit the reuse to what the specification of the KEM allows, in 
practice, that's not a restriction).  Should we modify the draft to forbid 
reuse?  Kyber public/private key generation is fast enough to make this 
practical.

Looking through the TLS 1.3 RFC, I don’t see any text addressing the reuse of 
ECDHE private values; is that implicit by the definition of DHE?  I do see in 
the text "If fresh (EC)DHE keys are used for each connection, then the output 
keys are forward secret."; that wording would imply the possibility of not 
using a fresh (EC)DHE key for each exchange...

> 
> - similar to the above: if PQ KEM public values are like RSA public keys, how
> does the client know what value to use in the initial, basic 1-RTT 
> ClientHello?
> (sorry if that's a dim question:-) If the answer is to use something like a 
> ticket
> (for a 2nd connection) then that should be defined here I'd say, if it were to
> use yet another SVCB field that also ought be defined (or at least hinted 
> at:-)

Actually, it's the client that selects the KEM public key.


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-12 Thread Scott Fluhrer (sfluhrer)
Again, this is late, however Stephen did ask this to be discussed in the 
working group, so here we go:

> -Original Message-
> From: TLS  On Behalf Of Stephen Farrell
> Sent: Saturday, April 30, 2022 11:49 AM
> To: Ilari Liusvaara ; TLS@ietf.org
> Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design
> 
> 
> Hiya,
> 
> On 30/04/2022 10:05, Ilari Liusvaara wrote:
> > On Sat, Apr 30, 2022 at 01:24:58AM +0100, Stephen Farrell wrote:
> >> - section 5: IMO all combined values here need to have recommended ==
> >> "N" in IANA registries for a while and that needs to be in this draft
> >> before it even gets parked. Regardless of whether or not the WG agree
> >> with me on that, I think the current text is missing stuff in this
> >> section and don't recall the WG discussing that
> >
> > I think that having recommended = Y for any combined algorithm
> > requires NIST final spec PQ part and recommended = Y for the classical
> > part (which allows things like x25519 to be the classical part).
> >
> > That is, using latest spec for NISTPQC winner is not enough. This
> > implies recommended = Y for combined algorithm is some years out at
> > the very least.
> 
> I agree, and something like the above points ought be stated in the draft
> after discussion in the WG.

Section 5 is 'IANA considerations', and would be where we would list the 
various supported hybrids, which we don’t at the moment.

Well, if we were to discuss some suggested hybrids (and we now know the NIST 
selection), I would suggest these possibilities:

- X25519 + Kyber512
- P256 + Kyber512
- X448 + Kyber768
- P384 + Kyber768

I don't see the point of including finite field groups.  I would hope to hold 
off on national curves, such as Brainpool and the GOST curves (although they're 
likely to be forced on us anyways).  I personally see Kyber1024 as overkill (of 
course, if you disagree, please say so).

Of course, it's possible that NIST will tweak the definition of Kyber; that's 
just a possibility we'll need to live with (and wouldn't change what hybrid 
combinations we would initially define)
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-hybrid-design

2022-08-12 Thread Scott Fluhrer (sfluhrer)
Sorry for the late response; I was going through old emails and came across 
this; I thought it warranted a response

> -Original Message-
> From: TLS  On Behalf Of Ilari Liusvaara
> Sent: Saturday, April 30, 2022 5:05 AM
> To: TLS@ietf.org
> Subject: Re: [TLS] WGLC for draft-ietf-tls-hybrid-design
> 
> I don't think compression method like ECH uses would work here.
> 
> However, I did come up with compression method:
> 
> 1) Sub-shares in CH may just be replaced by a group id (two octets).
>The replacements can be deduced from length of the whole share.
> 2) First sub-share copies from first octets of share for the designated
>group.
> 3) Second sub-share copies from last octets of share for the designated
>group.
> 
> This can be decoded regardless of whether the server knows what the referenced 
> groups are. The compression can also never run into a loop, as recursive 
> references are not allowed.
> 
> 
> So for example, if one wants to send x25519, p256, x25519+saber and
> p256+saber, one can do that as:
> 
> - x25519:  (32+4 octets)
> - p256:  (65+4 octets)
> - x25519+saber:  (2+992+4 octets)
> - p256+saber:  (2+2+4 octets)
> 
> Total overhead is 22 octets. 16 for 4 groups, and 6 for the compression 
> itself.

That sort of thing is possible.  However, it was my understanding that the 
working group wanted a simple proposal; one with minimal changes to the TLS 
architecture.  The current draft, which treats the hybrid as a single atomic 
group, meets that.

This compression protocol would require something on the server side to parse 
through the compressed key shares to extract the desired shares (and, of 
course, handle it if the shares were not present).  We'd also need something to 
distinguish between when exactly one key share is presented (that is, the 
current protocol) and when multiple key shares are given.  And, for consistency, we'd 
have the server use the same TLV format for its hybrid keyshares if it selects 
a hybrid group.
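As I read Ilari's scheme above, the server-side expansion could be sketched as follows (a hypothetical helper, assuming two-octet big-endian group ids and that the sub-share lengths are known from the negotiated groups):

```python
def expand_sub_share(wire: bytes, index: int, offered: dict, sub_len: int) -> bytes:
    """Expand one sub-share of a hybrid key share.

    A two-octet sub-share is a back-reference to another offered
    group's full share: the first sub-share (index 0) copies its
    leading octets, the second (index 1) its trailing octets.
    """
    if len(wire) == 2:  # reference, not literal key material
        ref = offered[int.from_bytes(wire, "big")]
        return ref[:sub_len] if index == 0 else ref[-sub_len:]
    return wire

# hypothetical: a hybrid share references the standalone x25519
# share (group id 29) instead of repeating its 32 bytes
offered = {29: bytes(range(32))}
assert expand_sub_share(b"\x00\x1d", 0, offered, 32) == offered[29]
```

The ambiguity between a literal two-byte sub-share and a reference is resolved, per Ilari's note, by the length of the whole share.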

So, the advantage of this compressed design is if the client's proposal was 
close (e.g. it wanted to negotiate either P256+Kyber or x25519+Kyber), it 
wouldn't have to guess - it could include all the alternatives, and as long as 
the server accepted either one, the negotiation would proceed; with the current 
draft design, the client would have to guess (and if it guessed wrong, we'd 
take an additional round trip for the HRR).

The disadvantage of this design is a bit of additional complexity.

Does the working group have a strong opinion about this?

> 
> -Ilari
> 
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Before we PQC... Re: PQC key exchange sizes

2022-08-07 Thread Scott Fluhrer (sfluhrer)
> -Original Message-
> From: TLS  On Behalf Of Blumenthal, Uri - 0553 -
> MITLL
> Sent: Sunday, August 7, 2022 1:32 PM
> To: Phillip Hallam-Baker 
> Cc: TLS@ietf.org
> Subject: Re: [TLS] Before we PQC... Re: PQC key exchange sizes
> 
> > > I thought a Quantum Annoyance was someone who keeps banging on
> about
> > > imaginary attacks that don't exist as a means of avoiding having to
> > > deal with actual attacks that have been happening for years without
> being addressed.
> >
> > That is a little unfair but only a little.
> 
> I don't think Quantum "Annoyance" makes any sense at all. It's only
> annoying to implementers.

Actually, we came up with the concept while evaluating PAKEs for the CFRG, and 
in that context, it makes sense.  For some PAKEs, if we assume that the 
adversary has the ability to compute one discrete log, all that would gain him 
is the ability to check whether one particular password was used in a recorded 
exchange; hence, if computing a discrete log is costly (which is likely to be 
the case for the first generation of Quantum Computers), you're still "mostly 
safe".

In contrast, with other PAKEs, computing one discrete log would allow you to 
break any implementation of that PAKE parameter set globally - that is about as 
'un-annoying' as you can possibly get.

We saw this disparity, and the term 'Quantum Annoyance' was coined to express 
it.

Now, with key exchanges, it is somewhat less applicable.  However, if computing 
a few thousand discrete logs allows you to put together a usable factor base, 
well, perhaps that would indicate that 'finite field DH with a common modulus' 
is less 'quantum annoying' (in the above sense) than (say) ECC...

> 
> > I have seen references to a 'NIST' slide insisting that we should not
> > use hybrid schemes and I completely disagree with them.

(The above comment was by PHP)

Hmmm, I had thought I tracked just about everything NIST said about 
postquantum, and I don't recall that.  In any case, I don't believe that anyone 
is taking that advice; initially, just about everyone is suggesting to combine 
postquantum with classical (ECC or RSA).  And, since this is the TLS working 
group, I would point out that the current TLS postquantum draft does do hybrid.

> 
> > First, do no harm: At this point it is very clear that the risk of a
> > Laptop on a Weekend breaking Kyber is rather higher than anyone
> > building a QCC capable computer in the next decade.
> > So, what is not going to happen is a system in which a break of Kyber
> > results in a break of TLS.

Again, that's why we're planning on hybrid; to break the privacy of TLS, you 
would need to break both Kyber (or NTRU; I'll spout off on that if you're 
interested) and (say) X25519.  Hence, what we are proposing is no less secure 
than what we are currently doing now.
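The "break both" property comes from feeding both shared secrets into a single derivation. An illustrative combiner only: draft-ietf-tls-hybrid-design actually concatenates the shared secrets and runs them through the normal TLS 1.3 key schedule rather than a standalone hash.

```python
import hashlib

def hybrid_secret(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # recovering the output requires knowing BOTH inputs: breaking
    # only Kyber (or only X25519) leaves the other input unknown
    return hashlib.sha256(ss_classical + ss_pq).digest()

k = hybrid_secret(b"\x01" * 32, b"\x02" * 32)
assert k != hybrid_secret(b"\x00" * 32, b"\x02" * 32)  # classical part matters
assert k != hybrid_secret(b"\x01" * 32, b"\x00" * 32)  # PQ part matters
```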

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Before we PQC... Re: PQC key exchange sizes

2022-08-05 Thread Scott Fluhrer (sfluhrer)
Now, we have done some initial work on postquantum extensions for TLS for 
privacy; the (now expired, soon to be refreshed) draft 
https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/

Might I suggest that any comments you make be in reference to that draft?  I 
don’t mind if you disagree with the draft (that’s rather the point of an IETF 
draft, to see if someone can suggest better ideas), but we have already 
discussed some of these issues in the working group.

In any case, I have some responses inline

From: TLS  On Behalf Of Phillip Hallam-Baker
Sent: Friday, August 5, 2022 2:54 PM
To: Thom Wiggers 
Cc:  
Subject: [TLS] Before we PQC... Re: PQC key exchange sizes

Before we dive into applying the NIST algorithms to TLS 1.3 let us first 
consider our security goals and recalibrate accordingly.

I have the luxury of having 0 users. So I can completely redesign my 
architecture if needs be.

I would disagree; after all, we do have existing TLS 1.3 implementations.  I 
believe it is important to avoid unnecessary changes, for two reasons:

  *   To avoid disrupting the existing implementations (and make it easier to 
upgrade to postquantum)
  *   To keep the existing TLS 1.3 security proofs valid (after all, a large 
point of the TLS 1.3 design was to be provable; it would appear shortsighted to 
discard that)
Now, if there were a real need to rearchitect things to be postquantum, well, 
we’ll have to live with it.  However, at least for the security goal of 
privacy, I don’t see the need.

But the deeper I have got into Quantum Computer Cryptanalysis (QCC), the 
less that appears to be necessary.

First off, let us level set. Nobody has built a QC capable of QCC and it is 
highly unlikely anyone will in the next decade.
So what we are concerned about today is data we exchange today being 
cryptanalyzed in the future. We can argue about that if people want, but can we 
at least agree that our #1 priority should be confidentiality?
Agreed; that’s what we conclude in the draft.

So the first proposal I have is to split our concerns into two parts with 
different timelines:

#1 Confidentiality, we should aim to deliver a standards based proposal by 2025.
#2 Fully QCC hardened spec before 2030.

That immediately reduces our scope to confidentiality. QCC of signature keys is 
irrelevant as far as the priority is concerned. TLS can wait for the results of 
round 4 before diving into signatures at the very least.
Of course, none of the round 4 candidates are signature schemes :-)
On a less snide note, a ‘fully QCC hardened spec’ would most likely depend on 
postquantum certificates; we’re working on that in the lamps working group…

[This is not the case for the Mesh as I need a mechanism that enables me to 
upgrade from my legacy base to a PQC system. The WebPKI should probably give 
some thought to these concerns as well. We should probably be talking about 
deploying PQC root keys but that is not in scope for TLS.]

Second observation is that all we have at this point is the output of the NIST 
competition and that is not a KEM. No sorry, NIST has not approved a primitive 
that we can pass a private key to and receive that key back wrapped under the 
specified public key. What NIST actually tested was a function to which we pass 
a public key and get back a shared secret generated by the function and a blob 
that decrypts to the key.

NIST did not approve 'KYBER'; at least, it has not done so yet. The only 
primitive we have at this point is what NIST actually tested. Trying to extract 
the Kyber function from that code and use it independently is not kosher for a 
standards based protocol. The final report might well provide that function but 
it might not and even if it does, the commentary from the cryptographers 
strongly suggests that any use of the inner function is going to be accompanied 
by a lot of caveats.

I’m trying to figure out what you’re saying; at least one of us is confused.  
NIST asked for a Key Encapsulation Mechanism (KEM), and Kyber meets that 
definition (which is essentially what you describe: both sides get a shared 
secret).  
That is the functionality that TLS needs, and is the functionality that NIST 
(and others) evaluated.  Yes, there are internal functions within Kyber; no one 
is suggesting those be used directly.  And, yes, NIST might tweak the precise 
definition of Kyber before it is formally approved; any such tweak would be 
minor (and there might not be any at all); if they do make such a change, it 
should not be difficult to modify any draft we put out to account for that 
change.

Since we won't have that report within the priority timeline, I suggest people 
look at the function NIST tested which is a non-interactive key establishment 
protocol. If you want to use Kyber instead of the NIST output, you are going to 
have to wait for the report before we can start the standards process.

Third observation is that people are looking at how to replace 

Re: [TLS] Revised hybrid key exchange draft

2022-01-11 Thread Scott Fluhrer (sfluhrer)


From: TLS  On Behalf Of Eric Rescorla
Sent: Tuesday, January 11, 2022 4:01 PM
To: Douglas Stebila 
Cc:  
Subject: Re: [TLS] Revised hybrid key exchange draft

…
With that said, defense in depth is good. Does it help to have just a 
structured input, e.g.,

opaque KeyInput<0..2^16-1>;

struct {
   KeyInput inputs<2..2^16-1>;
} KeyScheduleInput;

I don’t believe that the structured input idea would do much to frustrate the 
attack.  The attack relies on the attacker controlling the initial part of the 
concatenated shared secret, and finding a collision that applies when one of 
the bytes of the shared secret he does not know takes a specific value.

What the structured input would do is set certain bytes of the initial part of 
the concatenated shared secret to known (to the attacker) values; that is, 
instead of the concatenated shared secret being:

  XX XX XX … XX XX XX YY YY … YY YY YY  (XX are the bytes from the weak key 
exchange, YY being the ones from the strong one),
it would look like:
  02 00 20 00 XX XX XX … XX XX XX 20 00 YY YY … YY YY YY  (this example assumes 
both key inputs are 32 bytes long)

As long as he can find collisions where those specific bytes are fixed to those 
set by the structured inputs in both blocks (that is, with the 02 00 20 00 XX 
XX XX … XX XX XX 20 00 pattern, with the differences being confined to the XX 
bytes), then the attack can proceed exactly as normal.  Now, allowing the 
attacker less flexibility when setting up the collision would (one would 
expect) make finding collisions harder; however we can’t assume it makes them 
impractical.

What would address the attack would be to pad the KeyInput (should they be of 
variable length) to a fixed maximum length (for example, the longest that key 
exchange algorithm can output).
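The padding countermeasure described above can be sketched in a few lines. The helper names and the list-of-(secret, max-length) calling convention here are illustrative assumptions, not anything from a draft; the point is only that left-padding each shared secret to its algorithm's fixed maximum length gives every input a constant-size slot in the concatenation:

```python
def pad_secret(secret: bytes, max_len: int) -> bytes:
    """Left-pad a shared secret with zeros to the algorithm's declared
    maximum length, so each concatenated input occupies a fixed-size slot."""
    if len(secret) > max_len:
        raise ValueError("secret longer than declared maximum")
    return secret.rjust(max_len, b"\x00")

def key_schedule_input(secrets_with_max: list) -> bytes:
    """Concatenate padded shared secrets; slot boundaries are now fixed
    regardless of each secret's actual length."""
    return b"".join(pad_secret(s, m) for s, m in secrets_with_max)

# A 31-byte DH secret (leading zero stripped) and a 32-byte PQ secret
# still land at fixed offsets:
ffdh = bytes(31)            # stand-in for a stripped-leading-zero DH secret
pq = bytes([1]) * 32        # stand-in for a fixed-length PQ shared secret
out = key_schedule_input([(ffdh, 32), (pq, 32)])
assert len(out) == 64
```

This also illustrates why the countermeasure addresses the TLS 1.2 variable-length DH issue mentioned next: after padding, a secret with stripped leading zeros encodes to the same length as any other.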

This would also address a weakness someone found in TLS 1.2, where the DH 
shared secret was effectively variable length (as the leading 0’s were stripped 
off), which could lead to a timing attack.  Now, the attack that used that 
weakness relied on other properties of DH – however, it could also be used as 
precedent to forbid variable-length shared secrets.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] DTLS 1.2 and 1.3: HS message reassembly prior to processing

2021-11-06 Thread Scott Fluhrer (sfluhrer)
There are a number of postquantum algorithms (e.g. NTRU, Falcon, Dilithium) 
that have considerably smaller key shares/signatures - we're talking about 
the 1k-2k range.  It sounds reasonable that an MCU implementation might 
want to consider those algorithms, if they are more suitable for their 
deployment model.

-Original Message-
From: TLS  On Behalf Of Achim Kraus
Sent: Saturday, November 6, 2021 6:22 AM
To: Hanno Becker 
Cc: tls@ietf.org
Subject: Re: [TLS] DTLS 1.2 and 1.3: HS message reassembly prior to processing

Hi Hanno,

 > Note also that in the context of Post-Quantum Crypto, we're sometimes
 > talking about key material >100Kb - this is an issue for MCUs.

I didn't say this is impossible; it's more that, in my opinion, for DTLS
1.2 it doesn't pay off. Considering 100k, I guess that will require a more 
general update of the RFCs, not just that MUST. For IoT it may also be valid 
to assume that such large public keys will be shared ahead of time by other 
means.

best regards
Achim

Am 06.11.21 um 09:18 schrieb Hanno Becker:
> Hey Achim,
>
> Thanks for the quick reply!
>
> Actually, for TLS, you can do the same: Process handshake messages 
> piece by piece (ordered, this time), without full reassembly. I'm not 
> aware that the TLS spec forbids that, or does it?
>
> For Post-Quantum Crypto, streaming implementations of schemes with 
> very large key materials are a thing, see e.g. SPHINCS or McEliece [1].
> However, those are only of value for (D)TLS if the (D)TLS stack 
> forwards data to the handshake layer prior to full reassembly -- 
> again, both in TLS and DTLS.
>
> You're right that in DTLS the situation is even harder, because 
> fragments might be received out of order. But that doesn't mean 
> there's no way of potentially processing them out of order -- it very 
> much depends on the data. E.g. if you receive a huge matrix which 
> you'd like to perform a matrix-vector multiplication with, you can do 
> that entry by entry -- so long as you know the offset of the data you 
> received, which you do of course.
>
> Note also that in the context of Post-Quantum Crypto, we're sometimes 
> talking about key material >100Kb - this is an issue for MCUs.
>
> I think a MUST like this should have a justification. If there's none, 
> then IMO it should be left out for the benefit of implementation flexibility.
>
> Cheers,
> Hanno
>
> [1]:
>
> Johannes Roth and Evangelos Karatsiolis and Juliane Krämer
>
> "Classic McEliece Implementation with Low Memory Footprint",
> https://eprint.iacr.org/2021/138 ,
>
> --
> --
> *From:* Achim Kraus 
> *Sent:* Saturday, November 6, 2021 7:36 AM
> *To:* Hanno Becker 
> *Cc:* tls@ietf.org 
> *Subject:* Re: [TLS] DTLS 1.2 and 1.3: HS message reassembly prior to 
> processing Hi Hanno,
>
>   > Can someone explain the underlying rationale?
>
> I can only guess, that this makes the processing of the handshake 
> messages equal to TLS. So it's separating the layers (record layer - 
> handshake layer).
>
>   > It seems that in the context of very large key material or 
> certificate
>   > chains (think e.g. PQC), gradual processing of handshake messages
>   > (where possible) is useful to reduce RAM usage.
>   > Is there a security risk in doing this?
>
> I'm not sure if such an approach really pays off. Consider that 
> sometimes the fragments may be reordered or single fragments are 
> missing. Under such conditions, collecting the fragments is a 
> solution which makes receiving the complete message more probable.
> For me, if someone decides to go with x509, then please provide the RAM.
> That RAM may be used only temporarily; later it may be used for 
> application payload processing. So I don't think this should really 
> be an issue.
>
>   > It would also be useful for stateless handling of fragmented
>   > ClientHello messages. I'm sure this was discussed before but
>   > I don't remember where and who said it, but a server 
> implementation
>   > could peek into the initial fragment of a ClientHello, check if it
>   > contains a valid cookie, and if so, allocate state for subsequent 
> full
>   > reassembly. That wouldn't be compliant with the above MUST, 
> though,
>   > as far as I understand it.
>
> How do you want to calculate the cookie? According to:
>
> https://datatracker.ietf.org/doc/html/rfc6347#section-4.2.1
> 
>
> Cookie = HMAC(Secret, Client-IP, Client-Parameters)
>
> So, which Client-Parameters are included?
> For me, stateless proce

Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS

2021-07-30 Thread Scott Fluhrer (sfluhrer)
> Was it wrong to generate server-side DH parameters?

The problem is that it is hard for the client to distinguish between a 
well-designed server and a server that isn't as well written and selects 
the DH group in a naïve way.

For example, if the server just selects a random prime and a random generator 
value, well, that has a good probability of leaking quite a bit of information 
about the private exponents; leak enough, and the shared secret may be 
recoverable.  This is not obvious to someone new to the field; it is also very 
hard for the client to detect.

Now, as I mentioned in the WG meeting, it would be possible to detect if the 
server proposes a safe prime (it's not especially cheap, being several times as 
expensive as the rest of the DH operations, but it's possible), and that would 
prevent most of the problems that can happen (exception: if the server proposes 
an SNFS-friendly modulus, say, one with a very simple binary representation - 
that would reduce the security noticeably).  Of course, this works only if the 
legacy servers you are talking about actually do use safe primes...

-Original Message-
From: TLS  On Behalf Of Viktor Dukhovni
Sent: Friday, July 30, 2021 2:57 PM
To: tls@ietf.org
Subject: Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods 
in TLS

On Fri, Jul 30, 2021 at 05:14:08AM +, Peter Gutmann wrote:

> >The only other alternative is to define brand new TLS 1.2 FFDHE 
> >cipher code points that use negotiated groups from the group list.  
> >But it is far from clear that this is worth doing given that we now have 
> >ECDHE, X25519 and X448.
> 
> There's still an awful lot of SCADA gear that does FFDHE, and that's 
> never going to change from that.  The current draft as it stands is 
> fine, in fact it seems kinda redundant since all it's saying is "don't 
> do things that you should never have been doing in the first place", 
> but I assume someone needs to explicitly say that.  No need to go beyond that.

Can you explain what you mean by "don't do things that you should never have 
been doing in the first place"?

There are quite a few deployments that generate local strong (Sophie Germain 
prime) DH parameters.  These would break if the draft sails through as-is, and 
there's no mechanism for the client to inform the legacy server that its 
would-be choice of DH parameters is not acceptable.

Was it wrong to generate server-side DH parameters?

-- 
Viktor.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS Opaque

2021-04-01 Thread Scott Fluhrer (sfluhrer)

On Tue, Mar 30, 2021 at 9:39 PM Joseph Salowey 
mailto:j...@salowey.net>> wrote:

There is at least one question on the list that has gone unanswered for some 
time [1].

[1] https://mailarchive.ietf.org/arch/msg/tls/yCBYp10QuYPSu5zOoM3v84SAIZE/

I've found most of the OPAQUE drafts are pretty confusing / incorrect / or 
typo'd when it comes to lines like these. Describing these calculations seems 
difficult in ASCII, so I don't fault anyone for making mistakes here. The 
authors have also been pretty responsive in adding test vectors and such.

If the answer is “it’s a typo”, that’s fine – I agree that RFCs are a horrid 
format for expressing equations.  However, it would be good to state what the 
correct relationship is here (and possibly update the draft with the corrected 
versions).


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Comments on draft-friel-tls-eap-dpp-01

2021-03-08 Thread Scott Fluhrer (sfluhrer)
Again, last minute reviews...

It would appear that the exact computations that both the client and the server 
need to perform must be explicitly spelled out, as there are several 
possibilities.

Here is the one I could see that appear to have the security properties that 
you appear to be looking for:

Variable names:
g - Well known group generator
h - The secret generator that is private to the client and the server
z - The secret value known to the client; g^z = h
x - The client's ephemeral DH private value
y - The server's ephemeral DH private value

Client keyshare:
This is the value g^x

When the server receives this, he selects y (and retrieves the value h); he 
then transmits (as his keyshare) the value:
h^y
and stirs the value (g^x)^y into his KDF

When the client receives this (h^y), he computes:
(h^y) ^ (x z^-1)
(where z^-1 is the modular inverse of z modulo the group order), and stirs that 
value into his KDF.

With this protocol, it appears that the client needs to know not only h, but 
also the value z.  However, this really needs to be spelled out (and run past 
the CFRG to check for subtle issues)
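The algebra above can be checked mechanically: the server's KDF input is (g^x)^y = g^(xy), and the client recovers the same value because (h^y)^(x·z^-1) = (g^(zy))^(x·z^-1) = g^(xy), since z·z^-1 = 1 modulo the group order. A toy verification over a small prime-order subgroup (all parameters chosen only for illustration; requires Python 3.8+ for the modular inverse via three-argument pow):

```python
p, q, g = 23, 11, 2        # toy safe prime; g generates the order-q subgroup
z = 7                      # client's secret value; h = g^z is also held by the server
h = pow(g, z, p)
x = 4                      # client's ephemeral DH private value
y = 9                      # server's ephemeral DH private value

client_keyshare = pow(g, x, p)             # client sends g^x
server_keyshare = pow(h, y, p)             # server sends h^y
server_kdf = pow(client_keyshare, y, p)    # server stirs (g^x)^y into its KDF

z_inv = pow(z, -1, q)                      # z^-1 modulo the group order
client_kdf = pow(server_keyshare, (x * z_inv) % q, p)

assert client_kdf == server_kdf            # both equal g^(x*y)
```

This confirms the stated requirement: the client must know z itself (to form x·z^-1), not merely h.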
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Comment on draft-sullivan-tls-opaque-00

2021-03-08 Thread Scott Fluhrer (sfluhrer)
I am glad that someone in the working group is looking at this.  However, as I 
reviewed this before the wg meeting, I was completely puzzled by this text 
(from section 6.1):

3DH

   C computes K = H(g^y ^ PrivU || PubU ^ x || PubS ^ PrivU || IdU || IdS )
   S computes K = H(g^x ^ PrivS || PubS ^ y || PubU ^ PrivS || IdU || IdS )

Obviously these needs to be the same for an honest client-server pair.  I can't 
see where the above variables are defined in the doc; I would assume that the 
meanings are:


  *   x, y are the private values from the ephemeral DH operation, and are 
randomly selected for each exchange.
  *   PrivU, PubU, PrivS, PubS are static values from the Opaque record.

However, if that's the case, I can't see how that could work; for one, g^y ^ 
PrivU and g^x ^ PrivS would be different values, and so differing values would 
be stirred into the Master Secret.  In addition, I can't see how PubU ^ x 
(where PubU and x would appear to be client specific) could be expected to be 
the same as PubS ^ y (as both those values would be server specific).

What am I missing?
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] DH generator 2 problem?

2020-10-08 Thread Scott Fluhrer (sfluhrer)
> -Original Message-
> From: TLS  On Behalf Of Michael D'Errico
> Sent: Thursday, October 08, 2020 1:54 PM
> To: TLS List 
> Subject: [TLS] DH generator 2 problem?
> 
> Using finite-field Diffie-Hellman with a generator of 2 is probably not the 
> best
> choice.  Unfortunately all of the published primes (RFCs 2409, 3526, and
> 7919) use 2 for the generator.  Any other generator would likely be (not sure
> how much?) more secure.

No, that is known not to be true.

In particular, if you can compute discrete logs to the base 2, you can compute 
discrete logs to any base (except in the cases where 2 generates an anomalously 
small subgroup, which is not the case in the above groups).

Here's how it works; suppose you were given the problem of solving the discrete 
log problem g^x = h, for some g, h.  Then, if you can solve discrete logs to 
base 2, you would solve these two problems:

2^y = g
2^z = h

Once you have solved those two problems, then you have x = z * y^-1 mod (p-1).

It's a little more complex if g, h is not in the subgroup that 2 generates, but 
not that much more (unless, as above, the size of that subgroup is far smaller 
than p-1).

> 
> The problem is that 2^X consists of a single bit of value 1 followed by a huge
> string of zeros.  When you then reduce this modulo a large prime number,
> there will be a pattern in the bits which may help an attacker discern the
> value of X.  This is further helped by the fact that all of the published 
> primes
> have 64 bits of 1 in the topmost and bottom-most bits.
> In addition, the larger published primes are very similar to the shorter ones,
> the shorter ones closely matching truncated versions of the larger primes.
> 
> If you were to manually perform the modulo-P operation yourself, you
> would add enough zeros to the end of P until the topmost bit is just to the
> right of the 1 bit from 2^X, and then you'd subtract.  This bit pattern will
> always be the same, no matter the value of X.  In particular, the top 64 bits
> disappear since they're all one.  Continuing the mod-P operation, you adjust
> the number of zeros after the prime P and then subtract again, reducing the
> size of the operand.  The pattern of bits again will be the same, regardless 
> of
> the value of X, the only difference being the number of trailing zeros.

Actually, for these groups, the value of 2^x mod p can take on (p-1)/2 
different values; there is no chance that the bit pattern will be trapped in 
some cul-de-sac, as you appear to be suggesting...

> 
> I have not looked at the cyclic patterns which happen as you do this, but I
> wouldn't be surprised to find that the "new" primes based on e (RFC 7919)
> have easier-to-spot bit patterns than those based on pi.

I would be surprised; do you have some reason that would suggest why bits 
derived from the binary expansion of 'e' would be somehow qualitatively 
different from bits derived from the binary expansion of 'pi'?

> 
> This is speculation of course.

Might I suggest you learn a bit of number theory to go along with your 
speculation?

> 
> Should we define some new DH parameters which use a different
> generator?  Maybe the primes are fine

If the prime is fine, so is the generator...

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Requesting working group adoption of draft-stebila-tls-hybrid-design

2020-02-21 Thread Scott Fluhrer (sfluhrer)
> -Original Message-
> From: TLS  On Behalf Of Russ Housley
> Sent: Friday, February 21, 2020 2:29 PM
> To: Martin Thomson 
> Cc: IETF TLS 
> Subject: Re: [TLS] Requesting working group adoption of draft-stebila-tls-
> hybrid-design
> 
> 
> 
> > On Feb 12, 2020, at 6:22 PM, Martin Thomson 
> wrote:
> >
> > On Thu, Feb 13, 2020, at 10:01, Carrick Bartle wrote:
> >> I'm brand new to the IETF, so please forgive me if I'm totally off
> >> base here, but my understanding is that Informational RFCs are
> >> explicitly not recommendations (let alone mandates)?
> >
> > This would of course be information, but my comment was about phrasing.
> This document comes off as being quite prescriptive, where it doesn't really
> need to be.  Absent actual algorithms, it's just a set of guidelines.  That's
> reflected in its Informational status, but it would be better if the verbiage
> also reflected that more clearly.
> >
> > To address Stephen's comment at the same time: I think that we can
> publish an RFC on this before the competition completes if it is just a
> framework.  That might in fact make standardizing the one true composite
> scheme easier.
> 
> I do not agree.  I do not think the WG should adopt this draft.
> 
> The CFRG has stated a position that the IETF should wait for the NIST
> standardization process to be complete.

I disagree with Russ's statement that what the CFRG has stated actually applies 
to this draft.  This draft does not specify what postquantum algorithms should 
be used (which is what the CFRG is talking about).  What it tries to address is 
"once we have an approved algorithm, how do we integrate it into TLS".  Surely 
it would be better to get that preliminary work out of the way first, rather 
than waiting for the NIST process to conclude, and then start spending the time 
working on the integration process.

> There are at least two approaches
> to mixing symmetric keying material into the TLS 1.3 key schedule for
> information that needs to be protected for the next few decades.  (The two
> that I know about are draft-ietf-tls-tls13-cert-with-extern-psk and draft-
> vanrein-tls-kdh.) These approaches make existing key establishment
> techniques secure even if a quantum computer gets developed as long as
> the symmetric key is not disclosed.  In my opinion, those techniques will hold
> us until the NIST standardization process finishes.

Symmetric keys solve the problem, but are usable only in scenarios where you 
have a pre-existing relationship between the client and the server.  It doesn't 
work in a more general "web-surfing" type scenario.

> 
> I do not understand the goal of mixing (EC)DH with one of candidate
> algorithms.  We do not know enough about the candidate algorithms in the
> NIST process.  If the goal is to add quantum resistance, we do not have
> enough information to pick well.

And, again, the draft does not attempt to pick one.

>  If the goal is to learn about mixing in
> general, then I question the project altogether.  Once the NIST
> standardization process is complete, we can simply use the selected
> algorithm without mixing.

I can see someone disagreeing.  Whatever postquantum algorithm NIST selects, it 
will be a relatively recent one (and hence one that hasn't gone through as much 
cryptographic scrutiny as one would like) [1].  And, given that slapping on 
an ECDH exchange is relatively cheap (both in computation and especially in 
bandwidth), I can see someone wanting to put it in there as a safety measure, 
in case that the postquantum algorithm turns out to have a weakness to 
classical computers.

In any case, even if we ignore mixing, there are questions that need to be 
considered for a postquantum algorithm (which doesn't behave precisely like 
DH).  For example, to what extent should we insist that fresh key shares be 
used each time?  Should we require the key exchange mechanism to have "CCA" 
security (which DH does, as long as you perform the proper validation tests), 
or is "CPA" security sufficient?  Should we attempt to support key exchanges 
with huge (e.g. a Megabyte) public keys?

Those questions can be considered even before we learn which postquantum 
algorithms we will eventually use.


[1]: Exception: McEliece has been around a long time, and has picked up enough 
scrutiny for us to trust it.  However, it's also the one with Megabyte keys, 
and even if we were to support it, it's likely to be only rarely used; the 
issue will still remain with whatever other postquantum key exchange we also 
support

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] DH security issue in TLS

2019-12-03 Thread Scott Fluhrer (sfluhrer)
See SRF

From: TLS  On Behalf Of Pascal Urien
Sent: Tuesday, December 03, 2019 5:16 PM
To: tls@ietf.org
Subject: [TLS] DH security issue in TLS

I wonder if g**x, with x = (p-1)/2, is checked in current TLS 1.2 
implementations?

In RFC https://tools.ietf.org/html/rfc7919
"Negotiated Finite Field Diffie-Hellman Ephemeral Parameters for Transport 
Layer Security (TLS)"

"Traditional finite field Diffie-Hellman has each peer choose their secret 
exponent from the range [2, p-2].
Using exponentiation by squaring, this means each peer must do roughly 
2*log_2(p) multiplications,
twice (once for the generator and once for the peer's public key)."

Not True !!!
Even for p a safe prime (i.e., p = 2*q+1 with p and q prime, q a Sophie Germain 
prime), the secret exponent x = (p-1)/2 is a security issue since:

g**xy = 1   with y an even integer
g**xy = g**x   for y an odd integer

SRF: actually, g**xy  = 1 in both cases, as g**x = 1 (for the g, p values 
specified in RFC7919); this is easily seen as all listed p values are safe 
primes, and in all cases, g=2 and p=7 mod 8.

In any case, why would that be a security issue?  If both sides are honest (and 
select their x, y values honestly), the probability of one of them selecting 
(p-1)/2 as their private value is negligible (even if our selection logic 
allowed that as a possible value – it generally doesn’t).  If we have two 
honest parties with an adversary replacing one of the side’s key share with 
g**(p-1)/2, well, the protocol transmits signatures of the transcript, and so 
that’ll be detected. If you have an honest side negotiating with a dishonest 
one, well, the dishonest one could select (p-1)/2 as its private value – 
however, they could also run the protocol honestly (and learn the shared secret 
and the symmetric keys, which are usually the target), and there’s nothing the 
protocol can do about that.

Now, if an honest party reused their private values for multiple exchanges, a 
similar observation would allow an adversary to obtain a single bit of the 
private value.  He would do that by performing an exchange with the honest 
party mostly honestly, selecting a value x as his private value, but instead of 
transmitting g**x as his key share, he would transmit -g**x.  Then, the shared 
value that the honest party would derive is:

  g**xy  with y an even integer
  -g**xy  with y an odd integer

The adversary can compute both these values, and determine which is being used 
later in the protocol.

So, the adversary can learn a single bit of the private value (which doesn’t 
translate to him learning any bit of the shared secret, much less the symmetric 
keys) – however, he cannot leverage this to learn anything else of the private 
key.  I do not believe that a single bit is worth worrying about.  And, again, 
if we generate a fresh DH private value for every exchange (which we encourage 
people to do for PFS), even that single bit doesn’t apply to any other exchange.



If p is not a safe prime (like in RFC 5114) other issues occur


Pascal
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

2019-07-30 Thread Scott Fluhrer (sfluhrer)
> -Original Message-
> From: Stephen Farrell 
> Sent: Tuesday, July 30, 2019 3:53 PM
> To: Scott Fluhrer (sfluhrer) ; Watson Ladd
> 
> Cc: TLS List 
> Subject: Re: [TLS] Options for negotiating hybrid key exchanges for
> postquantum
> 
> 
> I'm neutral as to how we represent this stuff for the moment as I think it's
> too early to tell until we get closer to the end of the algorithms 
> competition.

I'm of the opposite opinion; I think it is important to get this settled before 
(or at the time) the algorithm competition ends.  I really wouldn't want to see 
us wait for NIST to settle on (say) SIKE and NewHope, and then have us spend 
another year or two debating on how to integrate them into our protocols.  
Instead, I would rather spend the year or two now (when we're not on the 
critical path).

Now, there are certainly things we don't know yet about the results of the 
competition (how many algorithms, what types of parameter sets, what sizes of 
key shares do they have); however (based on the current round 2 submissions) we 
can certainly have some informed suspicions...

> 
> That said, I do want to second this...
> 
> On 30/07/2019 19:41, Scott Fluhrer (sfluhrer) wrote:
> > Here is one opinion (mine, but I'm pretty sure it is shared by
> > others): the various NIST candidates are based on hard problems that
> > were only recently studied (e.g. supersingular isogenies, Quasicyclic
> > codes), or have cryptanalytic methods that are quite difficult to
> > fully assess (e.g. Lattices).  Even after NIST and CFRG have blessed
> > one or more of them, it would seem reasonable to me that we wouldn't
> > want to place all our security eggs in that one basket.  We currently
> > place all our trust in DH or ECDH; however those have been studied for
> > 30+ years - we are not there yet for most of the postquantum
> > algorithms.
> >
> > Hence, it seems reasonable to me that we give users the option of
> > being able to rely on multiple methods.
> The only person with whom I've spoken who said he'd plan to deploy some
> of this soon is a VPN operator who explicitly wanted to start early and use >1
> PQ scheme (3-4 is what he
> said) plus a current scheme. His expectation was that that'd settle down to
> one PQ scheme, or one PQ and a current one, in time, but that time may be a
> decade after he'd like to start.
> 
> So, to the extent it matters, count me as a +1 for supporting that.
> 
> Cheers,
> S.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] SIKE vs SIDH (was Options for negotiating hybrid key exchanges for postquantum)

2019-07-30 Thread Scott Fluhrer (sfluhrer)
This is a side issue (we’re probably not going to talk about which postquantum 
primitives to use for another two years), but:


  *   What do you see as the advantage of SIDH?  The server can’t arbitrarily 
select the shared secret (at least, if we select the KEM version of SIKE); what 
specific advantage were you thinking of?
  *   The security proof of TLS 1.3 (at least, the ones I’ve seen) assume that 
the key exchange is CCA secure [1].  SIDH is not; I would prefer to stay with a 
key exchange that meets the assumptions of the proof, and barring an improved 
security proof that makes weaker assumptions, that means SIKE rather than SIDH.

[1] For people with real lives and who do not obsess over the minutiae of 
cryptography, CCA for a key exchange means that the private key is secure, even 
if the attacker is allowed to submit an arbitrary (possibly malformed) key 
share.  The conventional wisdom is that you don’t need this level of 
protection for ephemeral key exchanges (where we generate a fresh private key 
for each exchange; we allow him to learn the shared secret for the exchange he 
is a party of, and learning the private key for this exchange tells him nothing 
about any other exchange); however the current proofs for TLS 1.3 make this 
assumption, even if you use a private key only once.

From: Blumenthal, Uri - 0553 - MITLL 
Sent: Tuesday, July 30, 2019 3:41 PM
To: Panos Kampanakis (pkampana) ; Scott Fluhrer (sfluhrer) 
;  
Subject: Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

One more thing: I would expect to use SIDH rather than SIKE.

Because to emulate the security advantages of DH, you’d have to run two SIKE’s 
– one in each direction.


From: TLS mailto:tls-boun...@ietf.org>> on behalf of 
"Panos Kampanakis (pkampana)" mailto:pkamp...@cisco.com>>
Date: Tuesday, July 30, 2019 at 3:37 PM
To: "Scott Fluhrer (sfluhrer)" mailto:sfluh...@cisco.com>>, 
"mailto:tls@ietf.org>>" mailto:tls@ietf.org>>
Subject: Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

+1 for option 2. The combinatoric explosion and complexity of 1 is unnecessary. 
I expect that just a few, conservative, acceptably efficient, classical+PQ 
combinations need to be standardized and used. Combining a classical algo with 
more than one postquantum algorithm in a key exchange does not seem practical 
based on the PQ candidates’ key sizes and performance.

Panos


From: TLS mailto:tls-boun...@ietf.org>> On Behalf Of 
Scott Fluhrer (sfluhrer)
Sent: Tuesday, July 30, 2019 11:21 AM
To: mailto:tls@ietf.org>> mailto:tls@ietf.org>>
Subject: [TLS] Options for negotiating hybrid key exchanges for postquantum

During the physical meeting in Montreal, we had a discussion about postquantum 
security, and in particular, on how one might want to negotiate several 
different ‘groups’ simultaneously (because there might not be one group that is 
entirely trusted, and I put ‘groups’ in scare quotes because postquantum key 
exchanges are typically not formed from a Diffie-Hellman group).

At the meeting, there were two options presented:

Option 1: as the supported group, we insert a ‘hybrid marker’ (and include an 
extension that lists which combination each hybrid marker stands for).
For example, the client might list in his supported groups 
hybrid_marker_0 and hybrid_marker_1, and there would be a separate extension 
that lists hybrid_marker_0 = X25519 + SIKEp434 and hybrid_marker_1 = X25519 + 
NTRUPR653.  The server would then look up the meanings of hybrid_marker_0 and 1 
in the extension, and then compare that against his security policy.
In this option, we would ask IANA to allocate code points for the various 
individual postquantum key exchanges (in this example, SIKEp434 and NTRUPR653), 
as well a range of code points for the various hybrid_markers.

Option 2: we have code points for all the various combinations that we may want 
to support; hence IANA might allocate a code point X25519_SIKEp434 and another 
code point for X25519_NTRUPR653.  With this option, the client would list 
X25519_SIKEp434 and X25519_NTRUPR653 in their supported groups.
In this option, we would ask IANA to allocate code points for 
all the various combinations that we want to allow to be negotiated.
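The two negotiation shapes can be sketched as data structures. Everything here is hypothetical — the marker names, code-point strings, and helper functions are illustrative assumptions, not anything defined by IANA or a draft:

```python
# Option 1: the client sends opaque hybrid markers in supported_groups,
# plus a mapping extension saying what each marker stands for.
hybrid_marker_map = {
    "hybrid_marker_0": ("X25519", "SIKEp434"),
    "hybrid_marker_1": ("X25519", "NTRUPR653"),
}

def server_accepts_option1(client_markers, mapping, policy):
    """Resolve each marker via the extension, then check local policy."""
    for marker in client_markers:
        combo = mapping.get(marker)
        if combo is not None and combo in policy:
            return marker
    return None

# Option 2: every allowed combination gets its own named-group code point,
# so matching is a plain set intersection in client preference order.
def server_accepts_option2(client_groups, policy):
    for group in client_groups:
        if group in policy:
            return group
    return None

policy1 = {("X25519", "SIKEp434")}
assert server_accepts_option1(["hybrid_marker_1", "hybrid_marker_0"],
                              hybrid_marker_map, policy1) == "hybrid_marker_0"
assert server_accepts_option2(["X25519_NTRUPR653", "X25519_SIKEp434"],
                              {"X25519_SIKEp434"}) == "X25519_SIKEp434"
```

The sketch also shows the trade-off argued below: option 1 needs one extra indirection but only O(algorithms) code points, while option 2 needs a code point per combination.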

I would like to make an argument in favor of option 1:


-  It is likely that not everyone will be satisfied with “X25519 plus 
one of a handful of specific postquantum algorithms”; some may prefer another 
elliptic curve (for example, x448) or perhaps even a MODP group (I have talked 
to people who do not trust ECC); in addition, other people might not trust a 
single postquantum algorithm, and may want to rely on both (for example) SIKE 
and NewHope (which are based on very different hard problems).  With option 2, 
we could try to anticipate all the common combinations (such as 
P384_SIKEp434

Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

2019-07-30 Thread Scott Fluhrer (sfluhrer)
I believe that one important property (of either of the options I listed) is a 
nice fallback if an enhanced client talks to an older server.  In both cases, 
the server will see a series of named groups that it doesn’t know (which it 
will ignore), and possibly an extension it doesn’t know (which it will 
ignore); the server will accept either a named group that it does understand 
(if the client did propose a traditional group as a fall back), or it will come 
to the correct conclusion that the two sides have no mutually acceptable 
security policy.

It is not clear if the proposal you outlined shares this property; do you 
duplicate a payload that an unenhanced server would assume only occurs once?

From: TLS  On Behalf Of Andrei Popov
Sent: Tuesday, July 30, 2019 2:48 PM
To: David Benjamin ; Watson Ladd 
Cc: TLS List 
Subject: Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

Given these options, I also prefer option 2, for some of the same reasons.

For my understanding though, why not have the client advertise support for 
hybrid-key-exchange (e.g. via a “flag” extension) and then KeyShareServerHello 
can contain two KeyShareEntries (essentially, using the same format as 
KeyShareClientHello)? This would solve the Cartesian product issue.

Cheers,

Andrei

From: TLS mailto:tls-boun...@ietf.org>> On Behalf Of 
David Benjamin
Sent: Tuesday, July 30, 2019 11:24 AM
To: Watson Ladd mailto:watsonbl...@gmail.com>>
Cc: TLS List mailto:tls@ietf.org>>
Subject: Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

I think this underestimates the complexity cost of option 1 to the protocol and 
implementations. Option 1 means group negotiation includes entire codepoints 
whose meaning cannot be determined without a parallel extension. This compounds 
across everything which interacts with named groups, impacting everything from 
APIs to config file formats to even UI surfaces. Other uses of NamedGroups are 
impacted too. For instance, option 2 fits into draft-ietf-tls-esni as-is. 
Option 1 requires injecting hybrid_extension into ESNI somehow. Analysis must 
further check that every use, say, incorporates this parallel lookup table into 
transcript-like measures.

The lesson from TLS 1.2 code points is not combined codepoints vs. split ones. 
Rather, the lesson is to avoid interdependent decisions:

* Signature algorithms in TLS 1.2 were a mess because the ECDSA codepoints 
required cross-referencing against the supported curves list. The verifier 
could not express some preferences (signing SHA-512 with P-256 is silly, and 
mixing hash+curve pairs in ECDSA is slightly off in general). As analogy to 
option 1's ESNI problem, we even forgot to allow the server to express curve 
preferences. TLS 1.3 combined signature algorithm considerations into a single 
codepoint to address all this.

* Cipher suites in TLS 1.2 were a mess because they were half-combined and 
half-split. TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 said to use some ECDHE key 
exchange, but you need to check if you have a NamedGroup in common first. It 
said to use ECDSA, but you need to check signature algorithms (which themselves 
cross-reference curves) first. Early drafts of TLS 1.3 had it even worse, where 
a TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 full handshake morphed into 
TLS_ECDHE_PSK_WITH_AES_128_GCM_SHA256 on resumption. Thus, TLS 1.3 cipher 
suites negotiate solely AEAD + PRF hash.

In fairness to TLS 1.2, some of this was a consequence of TLS 1.2's evolution 
over time as incremental extensions over SSL 3.0. And sometimes we do need to 
pay costs like these. But hybrid key exchanges fit into the NamedGroup "API" 
just fine, so option 2 is the clear answer. Code points are cheap. Protocol 
complexity is much more expensive.

It's true that standards are often underspecified. This means the IETF should 
finish the job, not pass all variations through. RSA-PSS is a clear example of 
what to avoid. It takes more bytes to merely utter "RSA-PSS with SHA-256 and 
usual parameters" in X.509 than to encode an entire ECDSA signature! We should 
not define more than a handful of options, regardless of the encoding.

On Tue, Jul 30, 2019 at 12:18 PM Watson Ladd 
mailto:watsonbl...@gmail.com>> wrote:

On Tue, Jul 30, 2019, 8:21 AM Scott Fluhrer (sfluhrer) 
mailto:sfluh...@cisco.com>> wrote:
During the physical meeting in Montreal, we had a discussion about postquantum 
security, and in particular, on how one might want to negotiate several 
different ‘groups’ simultaneously (because there might not be one group that is 
entirely trusted, and I put ‘groups’ in scare quotes because postquantum key 
exchanges are typically not formed from a Diffie-Hellman group).

At the meeting, there were two options presented:

Option 1: as the supported group, we insert a ‘hybrid marker’ (and include an 
extension that lists which combination the hybrid mark

Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

2019-07-30 Thread Scott Fluhrer (sfluhrer)


From: Watson Ladd  

> On Tue, Jul 30, 2019, 8:21 AM Scott Fluhrer (sfluhrer) 
> <mailto:sfluh...@cisco.com> wrote:
>> During the physical meeting in Montreal, we had a discussion about 
>> postquantum security, and in particular, on how one might want to negotiate 
>> several different ‘groups’ simultaneously (because there might not be one 
> group that is entirely trusted, and I put ‘groups’ in scare quotes because 
>> postquantum key exchanges are typically not formed from a Diffie-Hellman 
>> group).
>> 
>> At the meeting, there were two options presented:
 >> 
>> Option 1: as the supported group, we insert a ‘hybrid marker’ (and include 
> an extension that lists which combination the hybrid marker stands for)
>>    For example, the client might list in his supported groups 
>>hybrid_marker_0 and hybrid_marker_1, and there would be a separate extension 
>>that lists hybrid_marker_0 = X25519 + SIKEp434 and hybrid_marker_1 = X25519 + 
>>NTRUPR653.  The server would then look up the meanings of hybrid_marker_0 and 
>>1 in the extension, and then compare that against his security policy.
>>In this option, we would ask IANA to allocate code points for the 
>> various individual postquantum key exchanges (in this example, SIKEp434 and 
> NTRUPR653), as well as a range of code points for the various hybrid_markers.
 >>
>> Option 2: we have code points for all the various combinations that we may 
>> want to support; hence IANA might allocate a code point X25519_SIKEp434 and 
>> another code point for X25519_NTRUPR653.  With this option, the client would 
>> list X25519_SIKEp434 and X25519_NTRUPR653 in their supported groups.
>>     In this option, we would ask IANA to allocate code points 
>> for all the various combinations that we want to allow to be negotiated. 
>> 
>
> Are people actually going to use hybrid encryption post NIST? The actual 
> experimental deployments today have all fit option 2, and hybrids are 
> unlikely in the future. 

It sounds like you are questioning, not between option 1 or option 2, but 
instead whether we need either of them at all.  Those are both methods of 
negotiating multiple keygroups; it appears that you don’t see any need for such 
an option.

Perhaps we need to have such a debate.  Here is one opinion (mine, but I'm 
pretty sure it is shared by others): the various NIST candidates are based on 
hard problems that were only recently studied (e.g. supersingular isogenies, 
quasi-cyclic codes), or have cryptanalytic methods that are quite difficult to 
fully assess (e.g. lattices).  Even after NIST and CFRG have blessed one or 
more of them, it would seem reasonable to me that we wouldn't want to place all 
our security eggs in that one basket.  We currently place all our trust in DH 
or ECDH; however those have been studied for 30+ years - we are not there yet 
for most of the postquantum algorithms.

Hence, it seems reasonable to me that we give users the option of being able to 
rely on multiple methods.

>
> My objection to 1 is it gets very messy. Do we use only the hybrids we both 
> support? What if I throw a bunch of expensive things together? No reason we 
> need a hybrid scheme! 

Actually, I personally don't see the messiness; if the client wants to propose 
X25519 + NTRUPR653, he places hybrid_marker_1 in the supported groups list, and 
adds hybrid_marker_1 = X25519 + NTRUPR653 to his hybrid group extension.  If 
the server sees hybrid_marker_1 in the client's supported group list, he looks 
for the definition in the hybrid group extension, and processes the policy 
accordingly.  The logic in both directions would appear (at least to me) to be 
reasonably simple (and it doesn’t get more complex if we feel the need to 
negotiate three or more key exchanges).

As for the expense, that is for the user to judge.  If the user decides that he 
is willing to pay for a series of expensive key exchanges, that should be his 
decision to make.  Option 1 gives the user the ability to negotiate a series of 
expensive (but perhaps more trusted) algorithms, it doesn't mandate that the 
user do so.

And, as for the need for a hybrid scheme (either option 1, option 2 or something 
else), I do believe that there will be a demand for it, even after NIST and 
CFRG have given their blessing, as there will be users who will not fully trust 
a new scheme that was just endorsed (but who do want some protection against 
future quantum computers).

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Options for negotiating hybrid key exchanges for postquantum

2019-07-30 Thread Scott Fluhrer (sfluhrer)
During the physical meeting in Montreal, we had a discussion about postquantum 
security, and in particular, on how one might want to negotiate several 
different 'groups' simultaneously (because there might not be one group that is 
entirely trusted, and I put 'groups' in scare quotes because postquantum key 
exchanges are typically not formed from a Diffie-Hellman group).

At the meeting, there were two options presented:

Option 1: as the supported group, we insert a 'hybrid marker' (and include an 
extension that lists which combination the hybrid marker stands for)
For example, the client might list in his supported groups 
hybrid_marker_0 and hybrid_marker_1, and there would be a separate extension 
that lists hybrid_marker_0 = X25519 + SIKEp434 and hybrid_marker_1 = X25519 + 
NTRUPR653.  The server would then look up the meanings of hybrid_marker_0 and 1 
in the extension, and then compare that against his security policy.
In this option, we would ask IANA to allocate code points for the various 
individual postquantum key exchanges (in this example, SIKEp434 and NTRUPR653), 
as well as a range of code points for the various hybrid_markers.

Option 2: we have code points for all the various combinations that we may want 
to support; hence IANA might allocate a code point X25519_SIKEp434 and another 
code point for X25519_NTRUPR653.  With this option, the client would list 
X25519_SIKEp434 and X25519_NTRUPR653 in their supported groups.
In this option, we would ask IANA to allocate code points for 
all the various combinations that we want to allow to be negotiated.

I would like to make an argument in favor of option 1:


  *   It is likely that not everyone will be satisfied with "X25519 plus one 
of a handful of specific postquantum algorithms"; some may prefer another 
elliptic curve (for example, x448) or perhaps even a MODP group (I have talked 
to people who do not trust ECC); in addition, other people might not trust a 
single postquantum algorithm, and may want to rely on both (for example) SIKE 
and NewHope (which are based on very different hard problems).  With option 2, 
we could try to anticipate all the common combinations (such as 
P384_SIKEp434_NEWHOPE512CCA), however that could very well end up as a lot of 
combinations.
  *   There are likely to be several NIST-approved postquantum key exchanges, 
and each of those key exchanges are likely to have a number of supported 
parameter sets (if we take the specific postquantum key exchange as analogous 
to the ECDH protocol, the "parameter set" could be thought of as analogous to 
the specific elliptic curve, and it modifies the key share size, the 
performance and sometimes the security properties).  In fact, one of the NIST 
submissions currently has 30 parameter sets defined.  Hence, even if NIST 
doesn't approve all the parameter sets (or some of them do not make sense for 
TLS in any scenario), we might end up with 20 or more different key 
exchange/parameter set combinations that do make sense for some scenario that 
uses TLS (be it in a traditional PC client/server, a wireless client, two 
cloud devices communicating or an IoT device).
  *   In addition, we are likely to support additional primitives in the 
future; possibly National curves (e.g. Brainpool), or additional Postquantum 
algorithms (or additional parameter sets to existing ones).  Of course, once we 
add that code point, we'll need to add the additional code points for all the 
combinations that it'll make sense in (very much like we had to add a number of 
ciphersuites whenever we added a new encryption algorithm into TLS 1.2).

It seems reasonable to me that the combination of these two factors is likely 
to cause us (should we select option 2) to define a very large number of code 
points to cover all the various options that people need.

Now, this is based on speculation (both of the NIST process, and additional 
primitives that will be added to the protocol), and one objection I've heard is 
"we don't know what's going to happen, and so why would we make decisions based 
on this speculation?"  I agree that we lack knowledge; however it seems 
to me that a lack of knowledge is an argument in favor of selecting the more 
flexible option (which, in my opinion, is option 1, as it allows the 
negotiation of combinations of key exchanges that the WG has not anticipated).

My plea: let's not repeat the TLS 1.2 ciphersuite mess; let's add an extension 
that keeps the number of code points we need to a reasonable bound.

The costs of option 1?

  *   It does increase the complexity on the server a small amount (I'm not a 
TLS implementor, however it would seem to me to be only a fairly small amount)
  *   It may increase the size of the client hello a small amount (on the other 
hand, because it allows us to avoid sending duplicate key shares, it can also 
reduce the size of the client hello as well, depending on what's actually 

[TLS] Comments on draft-stebila-tls-hybrid-design-00

2019-03-28 Thread Scott Fluhrer (sfluhrer)
First of all, I would say that this is excellent work, and I would support 
making this a working group item.

As for my comments (both on the document itself, and my opinions on 
alternatives that the document lists):


  *   2.1. Negotiation of the use of hybridization in general and component 
algorithms specifically?

One point that the current doc does not address explicitly are the costs 
relevant to the negotiation method; that is, how do we judge that solution A is 
better than solution B.  From my view point, the (somewhat conflicting) goals 
that the solution should attempt to achieve are:

 *   Simplicity; in terms of ease of implementing it correctly (and hard to 
accidentally get it "almost right"), ease in testing that an implementation is 
correct (especially, does not accept any proposal outside the security policy), 
and the ease of documenting it clearly in an eventual RFC
 *   Simplicity 2; the hybridization solution should not complicate the 
protocol in the case that you don't need to do hybridization (even if what 
you're proposing is a Nextgen algorithm).
 *   Backwards compatibility; we'll need to live with older clients and 
older servers (which don't implement this extension) for a long time; we need 
to ability to either (depending on our security policy) fall back to only 
traditional algorithms, or abort the negotiation cleanly.
 *   Performance; being fairly cheap in terms of the messages to be 
transmitted, and the amount of processing required
 *   Completeness; being able to negotiate the most likely security 
policies (and in a manner that is consistent with the performance goal)
 *   Extensibility; being able to handle likely future requirements (new 
algorithms, etc) without excessive redesign.
These goals are somewhat nebulous (e.g. what are the "most likely security 
policies?  What sorts of future requirements are likely?), however I believe we 
should write them down.

  *   3.2. How many component algorithms to combine?

My opinion: unless allowing three or more algorithms has significant cost (even 
if we are using only two), I would strongly prefer an alternative that could 
scale to three or more.  The whole point of this design is that we don't fully 
trust any one algorithm; I believe that there will be people who would not want 
to rely on two partially trusted algorithms, but would prefer to use three (or 
possibly more).

  *   3.3.1. (Shares-concat) Concatenate key shares

My concern with this is that there may be algorithms with variable key share 
size (I don't know of any right now, but 'extensibility'); if we do this, we 
would want internal length markers.

  *   3.4.2. (Comb-XOR) XOR keys and then KDF

This one makes me nervous.  Yes, for practical key agreement protocols, this is 
probably safe; however to prove it, it would appear that you'd have to make 
additional assumptions on the key agreement protocols.  It would appear to be 
safer to use one of the alternatives.  And, if you're concerned with the extra 
processing time running the KDF, I expect that the KDF time to be insignificant 
compared to the time taken by the key agreement protocol.
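The shares-concat concern (variable-length shares needing internal length markers, from 3.3.1) and the concat-then-KDF alternative to Comb-XOR can be sketched together. This is a minimal illustration, not what the draft specifies: HKDF-Extract with SHA-256 and a zero salt is an assumed, invented choice here.

```python
import hashlib
import hmac
import struct

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) instantiated with SHA-256.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def combine_shared_secrets(secrets):
    """Concatenate shares with 2-byte internal length markers, then
    feed the result through a KDF.  The length prefixes ensure two
    different lists of variable-length shares can never serialize to
    the same input keying material."""
    ikm = b"".join(struct.pack(">H", len(s)) + s for s in secrets)
    return hkdf_extract(b"\x00" * 32, ikm)
```

The length prefixes are doing real work: without them, the pairs (b"ab", b"c") and (b"a", b"bc") would concatenate identically and yield the same key.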

  *   5.1. Active security

As for whether CPA is good enough, well, one of the overall goals of TLS 1.3 is 
PFS.  If we are negotiating a fresh set of key shares each time, then CPA works 
(the issue with CPA protocols is that an invalid key share will allow the 
attacker to learn some information about the private key; however if we discard 
the private key each time, the attacker gains no benefit).  On the other hand, 
a valid concern would be if an implementor decides to optimize the protocol by 
reusing key shares.  Also, see 'Failures' below for another argument for CCA.

  *   5.3. Failures

Yes, some postquantum algorithms can fail between two honest parties (with 
probabilities ranging from 2**-40 to 2**-256).  On such a failure, the two 
sides would create distinct session keys, and so the encrypted portions of the 
Server Hello would not decrypt properly, and so there would not be a security 
issue here.  Three thoughts:

  1.  small failure probabilities might not happen in practice at all; if we 
assume 2**39 TLS devices (about 100 for every person), and each device made an 
average of 2**30 TLS connections over its lifetime, then we probably will never 
see even one failure anywhere over the lifetime of the protocol if we had a 
security failure probability significantly less than 2**-69
  2.  TCP errors; a similar failure would happen if there was a TCP error 
transmitting the key_share (actually, any part of the client or server hello); 
if the key exchange failure rate is significantly smaller than this, we have 
not affected the reliability of the protocol.
  3.  If we do go only with CCA alternatives, these always have very small 
(2**-128 or less) failure rates (because a failure does generally leak some 
information about the private key, and so designers a
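The back-of-the-envelope arithmetic in point 1 above can be checked directly (the device and connection counts are the message's own illustrative figures, not measurements):

```python
# Expected lifetime key-exchange failures, using the illustrative numbers
# from the message: 2**39 devices, 2**30 connections each.
devices = 2**39            # roughly 100 TLS devices per person
conns_per_device = 2**30   # average lifetime connections per device
total_connections = devices * conns_per_device   # = 2**69

# Expected failures at a few per-connection failure probabilities.
for log2_p in (-40, -70, -128):
    expected = total_connections * 2.0 ** log2_p
    print(f"failure rate 2^{log2_p}: ~{expected:.3g} expected failures")
```

At a rate of 2**-40 we would expect hundreds of millions of failures, at 2**-70 about one coin-flip's worth over the protocol's entire lifetime, and at 2**-128 effectively zero, which matches the claim that rates significantly below 2**-69 never manifest in practice.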

Re: [TLS] AdditionalKeyShare Internet-Draft

2017-04-19 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Douglas Stebila
> Sent: Monday, April 17, 2017 2:24 PM
> To: 
> Subject: [TLS] AdditionalKeyShare Internet-Draft
> 
> Dear TLS mailing list,
> 
> We have posted an Internet-Draft
> https://tools.ietf.org/html/draft-schanck-tls-additional-keyshare-00
> for using an additional key share in TLS 1.3.  The intended use case is to
> provide support for transitional key exchange mechanisms in which both a
> pre-quantum algorithm (e.g., ECDH) and a post-quantum algorithm are used.
> (Google's experiment with New Hope in 2016 had such an arrangement.) Our
> draft replicates the functionality of the KeyShare extension in a new
> extension called AdditionalKeyShare. Like KeyShare, the client's
> AdditionalKeyShare contains a vector of KeyShareEntry structs. The server
> can respond with a single matching KeyShareEntry in the AdditionalKeyShare
> extension of its ServerHello. The resulting additional shared secret is 
> included
> in the TLS key schedule after the ECDH shared secret.
> 
> While the motivation for our Internet-Draft is to facilitate the transition to
> post-quantum algorithms, our Internet-Draft does not specify any post-
> quantum algorithms.

We have a draft with similar goals; draft-whyte-qsh-tls13 .  It also tries to 
achieve Quantum-safeness through multiple key exchange mechanisms; however in 
terms of protocol, it works somewhat differently.  You have the 'normal' key 
exchange (as exists in the protocol), and a second one on the side.  We allow 
the client to define a hybrid group (which consists of several key exchanges 
done in parallel).  One of our goals was to stay within the existing TLS 
architecture as much as possible (and thus limiting the changes needed to the 
TLS state machine and parsing logic); things such as modifying the key 
derivation mechanism (such as you do) were considered to be too large.

It may be worth your while to go through our draft...

> 
> There are a couple of items for discussion related to this draft:
> 
> - We only provide a mechanism for a single AdditionalKeyShare, thus leading
> to
>   the session establishing at most PSK + ECDHE + 1 additional shared secret.  
> Is
>   there a value in even more shared secrets than that? Will someone want
>   to include more than one post-quantum algorithm?  If so, our draft could be
>   adapted to have AdditionalKeyShare1, AdditionalKeyShare2, etc., but we
> did not
>   want to add that complexity unless there was desire for it.

Our draft allows for that naturally.

As for the need, well, I expect that some will want it.  We don't have any 
postquantum key exchanges that are both practical and really well trusted (at 
least, to the same extent that (EC)DH is); I would expect that some people will 
want to spread their risk and do (for example) x25519 + Frodo + SIDH.

> 
> - TLS 1.3 allows the client to restrict the use of PSKs that they provide in
>   ClientHello through the "psk_key_exchange_modes" extension. The client
> may,
>   for instance, request that the PSK only be used in a PSK+(EC)DHE mode, so
> as
>   to ensure that the resumed session has forward secrecy.  It is unclear the
>   best way to reconcile this with support for multiple key shares; we outline
>   some possibilities in Section 4 of our Internet-Draft, and we welcome input.
> 
> We have also created a pull request to TLS 1.3 draft 19 which includes a
> clarification on how additional secrets are to be included in the TLS key
> schedule.
> https://github.com/jschanck/tls13-spec
> 
> John and Douglas
> 
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] The alternative idea I had for token buckets.

2017-03-28 Thread Scott Fluhrer (sfluhrer)
Sorry, I wasn't aware that unlinkability was a requirement...

> -Original Message-
> From: Martin Thomson [mailto:martin.thom...@gmail.com]
> Sent: Tuesday, March 28, 2017 11:51 AM
> To: Scott Fluhrer (sfluhrer)
> Cc: 
> Subject: Re: [TLS] The alternative idea I had for token buckets.
> 
> On 28 March 2017 at 10:48, Scott Fluhrer (sfluhrer) 
> wrote:
> > The server recovers E_K(R) because the client sent it (along with i and the
> protected message).  It recovers R because it also knows K.
> 
> So E_K(R) is sent directly?  That would link packets.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] The alternative idea I had for token buckets.

2017-03-28 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: Martin Thomson [mailto:martin.thom...@gmail.com]
> Sent: Tuesday, March 28, 2017 11:46 AM
> To: Scott Fluhrer (sfluhrer)
> Cc: 
> Subject: Re: [TLS] The alternative idea I had for token buckets.
> 
> On 28 March 2017 at 10:41, Scott Fluhrer (sfluhrer) 
> wrote:
> > E_K(R); that is, R is encrypted with the server's long term key.
> >
> > (I meant to specify that...)
> 
> 
> OK, so how does the server recover E_K(R)?  The point here is that it doesn't
> know R.

The server recovers E_K(R) because the client sent it (along with i and the 
protected message).  It recovers R because it also knows K.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] The alternative idea I had for token buckets.

2017-03-28 Thread Scott Fluhrer (sfluhrer)
E_K(R); that is, R is encrypted with the server's long term key.

(I meant to specify that...)

> -Original Message-
> From: Martin Thomson [mailto:martin.thom...@gmail.com]
> Sent: Tuesday, March 28, 2017 11:37 AM
> To: Scott Fluhrer (sfluhrer)
> Cc: 
> Subject: Re: [TLS] The alternative idea I had for token buckets.
> 
> I'm sorry, but I don't understand this proposal.  I'm losing you when you say
> E(R) without specifying the key that you are using.
> 
> On 28 March 2017 at 10:21, Scott Fluhrer (sfluhrer) 
> wrote:
> > Here’s how it would work:
> >
> >
> >
> > -  The server has a long term secret key K, which it never gives out
> >
> > -  When the server wants to give a token to a client, it picks a
> > random value R, and securely gives the client the values R and E_K(R)
> >
> > -  When the client wants to use the token, it picks a value i, and
> > computes the key Hash( R || i).  It uses that key to protect the
> > message, and also sends the server the values E(R) and i
> >
> > -  The server decrypts the value E(R) to recover R, it computes
> > Hash( R || i) to recover the message key, and then decrypts the
> > message
> >
> >
> >
> > That way, the server doesn’t have to send the client N different
> > tokens…
> >
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
> >
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] The alternative idea I had for token buckets.

2017-03-28 Thread Scott Fluhrer (sfluhrer)
Here's how it would work:


-  The server has a long term secret key K, which it never gives out

-  When the server wants to give a token to a client, it picks a random 
value R, and securely gives the client the values R and E_K(R)

-  When the client wants to use the token, it picks a value i, and 
computes the key Hash( R || i).  It uses that key to protect the message, and 
also sends the server the values E(R) and i

-  The server decrypts the value E(R) to recover R, it computes Hash( R 
|| i) to recover the message key, and then decrypts the message

That way, the server doesn't have to send the client N different tokens...
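A runnable sketch of the scheme above. The XOR "cipher" standing in for E_K is deterministic and not secure; it exists only so the round trip is checkable, and every concrete primitive choice (SHA-256, key sizes) is an assumption for illustration.

```python
import hashlib
import hmac
import os

def E(K: bytes, R: bytes) -> bytes:
    # Stand-in for a real block cipher E_K: XOR with an HMAC-derived pad.
    # Deterministic and NOT secure -- illustration only.
    pad = hmac.new(K, b"token-pad", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(R, pad))

D = E  # an XOR pad is its own inverse, so decryption reuses E

def msg_key(R: bytes, i: bytes) -> bytes:
    return hashlib.sha256(R + i).digest()  # Hash(R || i)

# Server issues a token: picks random R, gives the client R and E_K(R).
K = os.urandom(32)   # server's long-term secret, never given out
R = os.urandom(32)
token_enc = E(K, R)  # the client stores (R, E_K(R))

# Client uses the token: picks i, derives the message key,
# and sends (E_K(R), i) alongside the protected message.
i = os.urandom(8)
client_key = msg_key(R, i)

# Server recovers R from E_K(R) using K, then re-derives the same key.
server_key = msg_key(D(K, token_enc), i)
assert client_key == server_key
```

Note that because each use picks a fresh i, the server issues one (R, E_K(R)) pair rather than N tokens, which is exactly the saving the message describes.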
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Question about unrecognized extension types in the TLS 1.3 client hello message

2017-01-30 Thread Scott Fluhrer (sfluhrer)
My question: in TLS 1.3, if the client inserts an extension of a type that the 
server does not recognize, how must the server behave?  Is it required that the 
server just ignore the extension, or can it take some other action (e.g. ignore 
the client hello)?

Background (why I'm asking): one of the things we've been doing is seeing how 
we might retrofit postquantum security into TLS 1.3; I know that the WG does 
not want to address this now, however I believe it will eventually; ideally, we 
could later create an RFC on how to do this within TLS 1.3 ( without having to 
come up with TLS 1.4).

The specific subtask we're looking at is how a postquantum key exchange (and a 
nonpostquantum one) can be used to generate keys.  Yes, I know that's been 
proposed before; I just want to make sure that it's actually kosher by the 
rules of TLS 1.3.  One goal that we have is to be able to have backwards 
compatibility with TLS 1.3 implementations that don't know about these 
post-quantum extensions.  One of the things we're looking at is having the 
client include an extension that would have some of the data; we would set 
things up so that if the server ignores the extension, the protocol acts 
"correctly" (that is, if the client and the server are both willing to use the 
same group, they'll interoperate, if not, then the connection will fail because 
both sides don't share a group in common).

So, a key requirement of this specific type extension is that the server 
ignores an extension it doesn't recognize.  We could do it without adding an 
extension; however that gets rather uglier.

I've been going through the TLS 1.3 draft (draft-ietf-tls-tls13-18), and there 
doesn't appear to be any MUST statements that says that the server ignores 
extensions it doesn't recognize.  There's a statement that a client MUST abort 
if it gets an extension it doesn't expect, but there's no similar language for 
the server.  Presumably, the server is supposed to be silent about zero length 
extensions from the client (as the draft states that the client sends a zero 
length extension for any type that it doesn't need to send, but is willing to 
receive in reply), however the extensions I'm asking about will not have zero 
length.

Is it the intention of the WG that the client is able to insert extensions into 
the client hello that the server might not expect?  If it is, could the next 
version of the draft insert a MUST statement to that effect?

Thank you.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] SHA-3 in SignatureScheme

2016-09-01 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Hubert Kario
> Sent: Thursday, September 01, 2016 2:17 PM
> To: Benjamin Kaduk
> Cc: 
> Subject: Re: [TLS] SHA-3 in SignatureScheme
> 
> On Thursday, 1 September 2016 12:43:31 CEST Benjamin Kaduk wrote:
> > On 09/01/2016 12:38 PM, Hubert Kario wrote:
> > > The SHA-3 standard is already published and accepted[1], shouldn't
> > > TLSv1.3 include signatures with those hashes then?
> >
> > Why does it need to be part of the core spec instead of a separate
> document?
> 
> because: we also are adding RSA-PSS to TLSv1.2 in this document, I don't see
> why it needs to be delayed. Finally, TLSv1.2 added SHA-2 just like that, it 
> was
> not tacked on later.

IIRC, SHA-2 was a special case; SHA-1 was demonstrated to be cryptographically 
weaker than expected and so we needed to have a secure alternative ASAP.

SHA-3 is not like that; there's no evidence that suggests that SHA-2 is 
weak; the only incentive to implement SHA-3 is "well, it is a standard, and 
so we might as well support it".

IMHO, how SHA-2 was handled should be viewed as an exception, not a rule for 
how we should proceed in the future...




Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: Paterson, Kenny [mailto:kenny.pater...@rhul.ac.uk]
> Sent: Tuesday, July 12, 2016 1:17 PM
> To: Dang, Quynh (Fed); Scott Fluhrer (sfluhrer); Eric Rescorla; tls@ietf.org
> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
> 
> Hi
> 
> On 12/07/2016 18:04, "Dang, Quynh (Fed)"  wrote:
> 
> >Hi Kenny,
> >
> >On 7/12/16, 12:33 PM, "Paterson, Kenny" 
> wrote:
> >
> >>Finally, you write "to come to the 2^38 record limit, they assume that
> >>each record is the maximum 2^14 bytes". For clarity, we did not
> >>recommend a limit of 2^38 records. That's Quynh's preferred number,
> >>and is unsupported by our analysis.
> >
> >What is problem with my suggestion even with the record size being the
> >maximum value?
> 
> There may be no problem with your suggestion. I was simply trying to make it
> clear that 2^38 records was your suggestion for the record limit and not ours.
> Indeed, if one reads our note carefully, one will find that we do not make any
> specific recommendations. We consider the decision to be one for the WG;
> our preferred role is to supply the analysis and help interpret it if people
> want that. Part of that involves correcting possible misconceptions and
> misinterpretations before they get out of hand.
> 
> Now 2^38 does come out of our analysis if you are willing to accept single key
> attack security (in the indistinguishability sense) of 2^{-32}. So in that 
> limited
> sense, 2^38 is supported by our analysis. But it is not our recommendation.
> 
> But, speaking now in a personal capacity, I consider that security margin to 
> be
> too small (i.e. I think that 2^{-32} is too big a success probability).

To be clear, this probability is that an attacker would be able to take a huge 
(4+ Petabyte) ciphertext, and a compatibly sized potential (but incorrect) 
plaintext, and with probability 2^{-32}, be able to determine that this 
plaintext was not the one used for the ciphertext (and with probability 
0.9767..., know nothing about whether his guessed plaintext was correct 
or not).

I'm just trying to get people to understand what we're talking about.  This is 
not "with probability 2^{-32}, he can recover the plaintext".
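The sizes being discussed here are easy to check with back-of-the-envelope arithmetic (my own calculation, not a figure from the AE-bounds paper itself):

```python
# Sanity-check the quantities in the discussion above: 2^38 records of
# 2^14 bytes each is 2^52 bytes of ciphertext (the "4+ Petabyte" figure),
# and the distinguishing advantage being debated is 2^-32.
RECORD_LIMIT = 2 ** 38          # suggested record limit
RECORD_SIZE = 2 ** 14           # maximum TLS record payload in bytes

total_bytes = RECORD_LIMIT * RECORD_SIZE
petabytes = total_bytes / 10 ** 15

print(f"total ciphertext: 2^52 bytes = {petabytes:.1f} PB")   # 4.5 PB
print(f"distinguishing advantage: 2^-32 = {2 ** -32:.2e}")    # 2.33e-10
```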


> 
> Regards,
> 
> Kenny



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Scott Fluhrer (sfluhrer)
Actually, a more correct way of viewing the limit would be 2^52 encrypted data 
bytes. To come to the 2^38 record limit, they assume that each record is the 
maximum 2^14 bytes.  Of course, at a 1Gbps rate, it'd take over a year to 
encrypt that much data...
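The "over a year" figure above checks out (my own arithmetic):

```python
# Time to encrypt 2^52 bytes (the 2^38-record limit at maximum record
# size) over a sustained 1 Gbps link.
total_bits = (2 ** 52) * 8      # 2^52 bytes of data
rate_bps = 1e9                  # 1 Gbps
seconds = total_bits / rate_bps
days = seconds / 86400

print(f"{days:.0f} days = {days / 365:.2f} years")   # 417 days = 1.14 years
```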

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Dang, Quynh (Fed)
> Sent: Tuesday, July 12, 2016 11:12 AM
> To: Paterson, Kenny; Dang, Quynh (Fed); Eric Rescorla; tls@ietf.org
> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
> 
> Hi Kenny,
> 
> The indistinguishability-based security notion in the paper is a stronger
> security notion than the (old) traditional confidentiality notion.
> 
> 
> (*) Indistinguishability notion (framework) guarantees no other attacks can
> be better than the indistinguishability bound. Intuitively, you can't attack if
> you can't even tell two things are different or not. So, being able to say two
> things are different or not is the minimal condition to lead to any attack.
> 
> The traditional confidentiality definition is that knowing only the ciphertexts,
> the attacker can't know any content of the corresponding plaintexts with a
> greater probability than some value and this value depends on the particular
> cipher. Of course, the maximum amount of data must not be more than
> some limit under a given key which also depends on the cipher.
> 
> For example, with counter mode AES_128, let's say encrypting 2^70 input
> blocks with a single key. With the 2^70 ciphertext blocks alone (each block is
> 128 bits), I don't think one can find out any content of any of the plaintexts.
> The chance for knowing any block of the plaintexts is
> 1/(2^128) in this case.
> 
> I support the strongest indistinguishability notion mentioned in (*) above,
> but in my opinion we should provide good description to the users.
> That is why I support the limit around 2^38 records.
> 
> Regards,
> Quynh.
> 
> On 7/12/16, 10:03 AM, "Paterson, Kenny" 
> wrote:
> 
> >Hi Quynh,
> >
> >This indistinguishability-based security notion is the confidentiality
> >notion that is by now generally accepted in the crypto community.
> >Meeting it is sufficient to guarantee security against many other forms
> >of attack on confidentiality, which is one of the main reasons we use it.
> >
> >You say that an attack in the sense implied by breaking this notion
> >does not break confidentiality. Can you explain what you mean by
> >"confidentiality", in a precise way? I can then try to tell you whether
> >this notion will imply yours.
> >
> >Regards
> >
> >Kenny
> >
> >On 12/07/2016 14:04, "TLS on behalf of Dang, Quynh (Fed)"
> > wrote:
> >
> >>Hi Eric and all,
> >>
> >>
> >>In my opinion, we should give better information about data limit for
> >>AES_GCM in TLS 1.3 instead of what is current in the draft 14.
> >>
> >>
> >>In this paper: http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf,  what
> >>is called confidentiality attack is the known plaintext
> >>differentiality attack where  the attacker has/chooses two plaintexts,
> >>send them to the AES-encryption oracle.  The oracle encrypts one of
> >>them, then sends the ciphertext to the attacker.  After seeing the
> >>ciphertext, the attacker has some success probability of telling which
> >>plaintext  was encrypted and this success probability is in the column
> >>called "Attack Success Probability" in Table 1.  This attack does not
> >>break confidentiality.
> >>
> >>
> >>If the attack above breaks one of security goal(s) of your individual
> >>system, then making success probability of that attack at 2^(-32) max
> >>is enough. In that case, the Max number of records is around 2^38.
> >>
> >>
> >>
> >>
> >>Regards,
> >>Quynh.
> >>
> >>
> >>
> >>
> >>
> >>
> >>Date: Monday, July 11, 2016 at 3:08 PM
> >>To: "tls@ietf.org" 
> >>Subject: [TLS] New draft: draft-ietf-tls-tls13-14.txt
> >>
> >>
> >>
> >>Folks,
> >>
> >>
> >>I've just submitted draft-ietf-tls-tls13-14.txt and it should show up
> >>on the draft repository shortly. In the meantime you can find the
> >>editor's copy in the usual location at:
> >>
> >>
> >>  http://tlswg.github.io/tls13-spec/
> >>
> >>
> >>The major changes in this document are:
> >>
> >>
> >>* A big restructure to make it read better. I moved the Overview
> >>  to the beginning and then put the document in a more logical
> >>  order starting with the handshake and then the record and
> >>  alerts.
> >>
> >>
> >>* Totally rewrote the section which used to be called "Security
> >>  Analysis" and is now called "Overview of Security Properties".
> >>  This section is still kind of a hard hat area, so PRs welcome.
> >>  In particular, I know I need to beef up the citations for the
> >>  record layer section.
> >>
> >>
> >>* Removed the 0-RTT EncryptedExtensions and moved ticket_age
> >>  into the ClientHello. This quasi-reverts a change in -13 that
> >>  made implementation of 0-RTT kind of a pain.
> >>
> >>
> >>As usual, comments welcome.
>

Re: [TLS] removing 128-bit ciphers in TLS 1.3

2016-05-12 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Fedor Brunner
> Sent: Thursday, May 12, 2016 4:10 AM
> To: tls@ietf.org
> Subject: [TLS] removing 128-bit ciphers in TLS 1.3
> 
> Because of these attacks:
> 
> https://blog.cr.yp.to/20151120-batchattacks.html
> 
> could you please consider obsoleting 128-bit ciphers in TLS 1.3.

But that attack isn't effective against the GCM-based cipher suite in TLS 1.2.

GCM (as implemented in TLS 1.2) has both sides agree on a 32 bit salt as a part 
of the key agreement; a batch attack (such as Bernstein describes) doesn't work 
unless you happen to guess the 128 bit key *and* the 32 bit salt; hence if 
you've collected 2**N TLS sessions, then the attacker would need a work effort 
of about 2**(160-N) to be able to decrypt 1 random session.  If we estimate 
N=50 (literally, 1 quadrillion TLS sessions, which I suspect is in the ballpark 
for the number of TLS sessions world-wide), this would put the work effort at 
2**110.

I suspect that's a bit much, even for the NSA.
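The work-factor arithmetic above can be restated in a few lines (my own restatement of the argument, not something from Bernstein's post):

```python
# Batch-attack work factor against TLS 1.2 GCM: the attacker must guess
# the 128-bit key AND the 32-bit per-session salt, so collecting 2^N
# sessions only reduces the effort to roughly 2^(160-N) per recovered
# session.
key_bits = 128
salt_bits = 32
sessions_log2 = 50      # N: ~1 quadrillion recorded TLS sessions

work_log2 = key_bits + salt_bits - sessions_log2
print(f"work to decrypt one random session: 2^{work_log2}")   # 2^110
```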

> 
> 
> For example AES-128 encryption has been removed from Suite B
> 
> https://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml

One could argue that perhaps the reason the NSA removed it from Suite B is that 
they couldn't break it; hence that would be an excellent reason to keep it :-)

Attempts at humor aside, I believe their reason was that they considered AES-128 
insufficiently strong against a Quantum Computer.  Now, I rather think that we 
should be moving TLS to use Quantum Resistant cryptography; but as of right 
now, TLS is rather far from that goal, and the symmetric key size is a minor 
issue; how we perform key exchange and authentication are much harder, and much 
more immediately important.


Now, I wouldn't be against going only to AES-256 (the cost delta isn't that 
much); however if we do it, it should be for valid reasons...



Re: [TLS] RSA-PSS in TLS 1.3

2016-03-08 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: Hubert Kario [mailto:hka...@redhat.com]
> Sent: Monday, March 07, 2016 12:18 PM
> To: Scott Fluhrer (sfluhrer)
> Cc: tls@ietf.org; Nikos Mavrogiannopoulos; Hanno Böck; Blumenthal, Uri -
> 0553 - MITLL
> Subject: Re: [TLS] RSA-PSS in TLS 1.3
> 
> On Monday 07 March 2016 15:23:17 Scott Fluhrer wrote:
> > > -Original Message-
> > > From: Hubert Kario [mailto:hka...@redhat.com]
> > > Sent: Monday, March 07, 2016 6:43 AM
> > > To: tls@ietf.org
> > > Cc: Scott Fluhrer (sfluhrer); Nikos Mavrogiannopoulos; Hanno Böck;
> > > Blumenthal, Uri - 0553 - MITLL
> > > Subject: Re: [TLS] RSA-PSS in TLS 1.3
> > >
> > > On Friday 04 March 2016 13:49:11 Scott Fluhrer wrote:
> > >
> > > > I agree with Hanno; if we're interested in defending against a
> > > > Quantum Computer, post Quantum algorithms are the way to go
> > >
> > >
> > > except that using RSA keys nearly an order of magnitude larger than
> > > the biggest ECC curve that's widely supported (secp384) is the
> > > current recommended minimum by ENISA and long term minimum by
> NIST
> > > (3072).
> > > Using keys 5 times larger still is not impossible, so while it may
> > > not buy us extra 20 years after ECC is broken, 10 years is not
> > > impossible and 5 is almost certain (if Moore's law holds for
> > > quantum computers).
> > > It's not much, but it may be enough to make a difference.
> >
> >
> > If we believe that growth in Moore's law will be accurate for Quantum
> > Computers, then no one has to worry about Quantum Computers for the
> > next millennium.
> 
> > In 2001, a Quantum Computer factored a 4 bit number.  In 2014, the
> > factorization of a 16 bit number was announced (however, the
> > factorization used a special relationship between the factors, so I
> > don’t think it counts as a general factorization, but let's ignore
> > that for now).  That's not too far off from a Moore's law type
> > expansion.  If this rate continues, we'll see the first 1024 bit
> > factorization circa the year 3100 AD (aka CE).
> 
> GIGO, you're extrapolating from two data points when we have no idea how
> fast or how slow will be the progress in general

Actually, that sort of logic is what you're using.  You have no idea how fast 
or slow the progress will be in general, yet you assure us that it'll take 
significantly longer to create a Quantum Computer that can break large key RSA 
than one that can break ECC.

If you don't believe the oversimplified logic I showed above, you must assume 
that, at some point in the future, Quantum Computers will scale much more 
rapidly than a simple Moore's law prediction (based on simple extrapolation 
from what we have now).  However, you assume that this rapid expansion will 
stop at a point insufficient to break large key RSA.

> 
> and I meant Moore's 18-24 months per double, not the idea of exponential
> growth in general; in other words P-256 succumbing to quantum computers
> 4 to 8 years before 1024 bit RSA

As you are making assertions about the likely progress in building Quantum 
Computers, I have to ask: what expertise do you have in the design and 
construction of Quantum Computers?  How up to date are you on the theory?  Are 
you familiar with Toffoli gates or Clifford gates?  How about magic state 
factories (yes, that's the real name)?

I'm not an expert in this field either - however, I have talked to experts; the 
opinion I've heard is that a realistic computer that can break RSA is perhaps 
10-15 years off (estimates differ between experts); once it's been built, 
scaling it up isn't likely to be much of an issue (largely because we already 
know how to etch quite large constructions onto silicon).  In essence, the 
problem isn't the actual construction process, but knowing what to build.

Might they be wrong?  Might they be overoptimistic about their technology?  
Might there be a practical bump in the road that they don't foresee yet?  
Perhaps; however it wouldn't appear prudent to assume that.

And, I would argue that 10-15 years isn't that far off, since we need to worry 
about someone storing the encrypted data now, and decrypting them later.


The bottom line: am I saying that you shouldn't start supporting large key RSA 
as a short term solution, in the hopes that it might fend off a Quantum 
Computer for a bit?  No, as that's not likely to be harmful, so go ahead and 
knock yourself out.  However, I am saying that it would be foolish to pretend 
it is anything but a short-term patch at best; it might end up providing no 
additional protection.  If we're interested in a longer term solution, we would 
need to eventually go with real postquantum cryptography (and I would argue 
that 'eventually' isn't that far in the future).



Re: [TLS] (TLS1.3 - algorithm agility support is enough; no need to crystal ball gaze PQ right now, except as pass-time) Re: RSA-PSS in TLS 1.3

2016-03-07 Thread Scott Fluhrer (sfluhrer)
I'm not precisely sure what you are really saying.  Are you claiming that, even 
though currently known PQ key exchanges don't give all the flexibility that 
(EC)DH has, you're pretty sure that'll change in the future?  Or, are you 
saying that relying on the full flexibility of (EC)DH isn't a problem, as the 
WG will be willing to make changes to the TLS 1.3 0-RTT infrastructure in the 
future?

The issue really is the 0-RTT infrastructure, and its static key share.  Unlike 
(EC)DH, known postquantum key agreement protocols either don't securely support 
static key shares (largely because it is impossible to distinguish a valid 
client keyshare from an invalid one, and how the server reacts to an invalid 
client keyshare depends on the server's private keyshare value), or they are 
really public key encryption systems being used to do key agreement, and the 
client's key share in the 0-RTT exchange can't be used to negotiate a long term 
set of keys.

Outside of 0-RTT, there doesn't appear to be an issue.  In those cases, the 
client gives an ephemeral key share, the server responds with an ephemeral key 
share, they both derive a shared secret, and everyone (other than the NSA, who 
can't listen in :-) is happy.  However, 0-RTT is an important part of the 
protocol; we can't forbid it in a postquantum setting.

Of course, in the future we can revise the TLS 1.3 protocol to adapt to 
the functionality that postquantum algorithms will give us.  However (based on 
the protocols we know about) the changes required by the protocol would be 
nontrivial.  My hope is to make it so that algorithm agility is enough; that 
all the protocol needs to know is that there's this additional ciphersuite 
that, from a blackbox standpoint, does precisely the same job as the other 
ciphersuites, and that nothing outside of the crypto code needs to change.  
However, given that the postquantum algorithms we know about can't do 
everything that (EC)DH can do, it would appear to be prudent not to assume (in 
either our protocol or in our proofs) this additional functionality.


From: Rene Struik [mailto:rstruik....@gmail.com]
Sent: Monday, March 07, 2016 2:49 PM
To: Scott Fluhrer (sfluhrer); Tony Arcieri
Cc: tls@ietf.org
Subject: (TLS1.3 - algorithm agility support is enough; no need to crystal ball 
gaze PQ right now, except as pass-time) Re: [TLS] RSA-PSS in TLS 1.3

Hi Scott:

I think it is really premature to speculate on features PQ-secure algorithms 
can or cannot provide (*) and try and have this influence *current* TLS1.3 
protocol design.  Should one wish to include PQ algorithms in a future update 
of TLS1.3, one can simply specify which protocol ingredients those updates 
relate to. As long as the current TLS1.3 specification has some facility for 
implementing algorithm agility, one can shy away from crystal ball gazing for 
now.

Best regards, Rene

(*) I am not sure everyone would concur with your speculation, but crypto 
conferences may be a better venue to discuss this than a TLS mailing list.

On 3/7/2016 12:21 PM, Scott Fluhrer (sfluhrer) wrote:

From: Tony Arcieri [mailto:basc...@gmail.com]
Sent: Monday, March 07, 2016 11:40 AM
To: Scott Fluhrer (sfluhrer)
Cc: Nikos Mavrogiannopoulos; Hanno Böck; Blumenthal, Uri - 0553 - MITLL; 
tls@ietf.org
Subject: Re: [TLS] RSA-PSS in TLS 1.3

On Mon, Mar 7, 2016 at 8:34 AM, Scott Fluhrer (sfluhrer) <sfluh...@cisco.com> wrote:
Defenses against the first type of attack (passive eavesdropping by someone who 
will build a QC sometime in the future) are something that this WG should 
address; even if the PKI people don't have an answer, we would at least be 
secure from someone recording the traffic and decrypting it later

I think it would make sense to wait for the CFRG to weigh in on post-quantum 
crypto. Moving to a poorly studied post-quantum key exchange algorithm 
exclusively runs the risk that when it does receive wider scrutiny new attacks 
will be found. I think we need to define hybrid pre/post-quantum key exchange 
algorithms (e.g. ECC+Ring-LWE+HKDF), and that sounds like work better suited 
for the CFRG than the TLS WG.


I'm sorry that I wasn't clearer; I agree that *now* isn't the time to define a 
postquantum ciphersuite/named group; we're not ready yet (and this WG probably 
isn't the right group to define it).  However, I believe that we will need to 
do at some point; my guess is that it'll be sooner rather than later.  What 
(IMHO) this WG should be doing now is making sure that there isn't something in 
TLS 1.3 that'll make it harder to transition to postquantum crypto when we do 
have a concrete proposal.

One thing that proposed QR key exchanges have is that they don't have the full 
flexibility that (EC)DH have; either they aren't secure with static key shares, 
or we can't

Re: [TLS] RSA-PSS in TLS 1.3

2016-03-07 Thread Scott Fluhrer (sfluhrer)

From: Tony Arcieri [mailto:basc...@gmail.com]
Sent: Monday, March 07, 2016 11:40 AM
To: Scott Fluhrer (sfluhrer)
Cc: Nikos Mavrogiannopoulos; Hanno Böck; Blumenthal, Uri - 0553 - MITLL; 
tls@ietf.org
Subject: Re: [TLS] RSA-PSS in TLS 1.3

On Mon, Mar 7, 2016 at 8:34 AM, Scott Fluhrer (sfluhrer) 
mailto:sfluh...@cisco.com>> wrote:
Defenses against the first type of attack (passive eavesdropping by someone who 
will build a QC sometime in the future) are something that this WG should 
address; even if the PKI people don't have an answer, we would at least be 
secure from someone recording the traffic and decrypting it later

I think it would make sense to wait for the CFRG to weigh in on post-quantum 
crypto. Moving to a poorly studied post-quantum key exchange algorithm 
exclusively runs the risk that when it does receive wider scrutiny new attacks 
will be found. I think we need to define hybrid pre/post-quantum key exchange 
algorithms (e.g. ECC+Ring-LWE+HKDF), and that sounds like work better suited 
for the CFRG than the TLS WG.


I’m sorry that I wasn’t clearer; I agree that *now* isn’t the time to define a 
postquantum ciphersuite/named group; we’re not ready yet (and this WG probably 
isn’t the right group to define it).  However, I believe that we will need to 
do at some point; my guess is that it’ll be sooner rather than later.  What 
(IMHO) this WG should be doing now is making sure that there isn’t something in 
TLS 1.3 that’ll make it harder to transition to postquantum crypto when we do 
have a concrete proposal.

One thing that proposed QR key exchanges have is that they don’t have the full 
flexibility that (EC)DH have; either they aren’t secure with static key shares, 
or we can’t use the same key share as both an ‘initiator’ and a ‘responder’ key 
share.  This would indicate to me that we need to make sure that TLS 1.3 should 
be engineered to use (EC)DH as only a simple, ephemeral-only key exchange – 
yes, it has more flexibility than that, however taking advantage of such 
flexibility might cause us problems in the future
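The hybrid approach Tony mentions above (e.g. ECC+Ring-LWE+HKDF) amounts to feeding both shared secrets through a single KDF, so the session key remains secure as long as either component survives. A minimal illustrative sketch - the HKDF labels and secret values are made up, and this is not any specified TLS construction:

```python
# Hypothetical hybrid key-exchange combiner: concatenate the classical
# (ECDH) and post-quantum shared secrets, then run them through
# HKDF-SHA256 (RFC 5869 extract-then-expand) to derive one session key.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def hybrid_key(ecdh_secret: bytes, pq_secret: bytes) -> bytes:
    # An attacker must break BOTH inputs to predict the output.
    prk = hkdf_extract(b"\x00" * 32, ecdh_secret + pq_secret)
    return hkdf_expand(prk, b"hybrid key", 32)

key = hybrid_key(b"\x01" * 32, b"\x02" * 32)   # dummy secrets
print(len(key))  # 32
```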



Re: [TLS] RSA-PSS in TLS 1.3

2016-03-07 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: Nikos Mavrogiannopoulos [mailto:n...@redhat.com]
> Sent: Monday, March 07, 2016 8:42 AM
> To: Scott Fluhrer (sfluhrer); Hanno Böck; Blumenthal, Uri - 0553 - MITLL;
> tls@ietf.org
> Subject: Re: [TLS] RSA-PSS in TLS 1.3
> 
> On Fri, 2016-03-04 at 13:49 +, Scott Fluhrer (sfluhrer) wrote:
> > Given that there probably is no long term future for RSA anyway
> > > > (people want ECC and postquantum is ahead) I doubt anything else
> > > > than the primitives we already have in standards will ever be
> > > > viable.
> > > On the contrary. If we have a future with quantum computers
> > > available, the only thing that we have now and would work is RSA
> > > with larger keys, not ECC.
> > RSA isn't *that* much more secure against a Quantum Computer than
> ECC.
> > It would appear to take a larger Quantum Computer to break RSA than it
> > would to break ECC (for reasonable moduli/curve sizes), however not
> > that much more.  It is possible that, at one stage, we'll be able to
> > build a QC that's just large enough to break EC curves, but not larger
> > RSA keys - however, we would be likely to be able to scale up our QC
> > to be a bit larger; possibly in a few months, quite likely in a year
> > or two.  Hence, moving back to RSA would appear likely to buy us only
> > a short window...
> >
> > I agree with Hanno; if we're interested in defending against a Quantum
> > Computer, post Quantum algorithms are the way to go
> 
> Assuming that we have such algorithms which are practical to manage and
> deploy we would first need to enhance existing protocols with them,
> including TLS and PKI. Then it is only the (simple) task of 
> upgrading/replacing
> every single piece of infrastructure we have today from certificates to
> implementations with the new algorithms.

There are two threats that a Quantum Computer may bring:

- Someone (who might not have a QC now) recording the encrypted traffic, and 
then (when they do have a QC) decrypting the traffic
- Someone with a QC forging our authentication (certificates), and acting 
either as an imposter, or as a man-in-the-middle.

The second attack isn't feasible until someone actually has a QC at the time of 
the attack; for the first attack, that's a threat until the data being 
protected is no longer of any interest - depending on what that data is, that 
may be decades.

To defend against the first attack, we don't need to update the entire 
infrastructure.  Instead, all we need to do is update the client and the server 
to use a Quantum-Resistant ciphersuite (I'd argue that a QR named group would 
actually be preferable, however that's an argument for another time).  We know 
how to do a gradual rollout of this.

To defend against the second attack, yes, that would require changes to PKI, 
which this WG isn't in charge of.  However, these attacks become a threat later.


So, the points I want to make are:

- Defenses against the first type of attack (passive eavesdropping by someone 
who will build a QC sometime in the future) are something that this WG should 
address; even if the PKI people don't have an answer, we would at least be 
secure from someone recording the traffic and decrypting it later
- Large size RSA keys (actually, DH groups; TLS 1.3 doesn't use RSA for key 
transport) don't necessarily add any protection from a QC (depending on how 
fast practical QC's ramp up).

> 
> Unless you can use the quantum computer to complete the above transition
> overnight, the quickest way to defend against the presence of a quantum
> computer is by allowing larger RSA keys.

Actually, that brings up a point; if we are worried about some old, unupdatable 
servers, how are we ever going to upgrade our authentication infrastructure?  A 
lot of those are built on cryptolibraries that cannot handle 16K RSA keys any 
more than they can handle Hash Based or BLISS certificates.  However, that's an 
argument for another time (and probably not by this WG)


Re: [TLS] RSA-PSS in TLS 1.3

2016-03-07 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: Hubert Kario [mailto:hka...@redhat.com]
> Sent: Monday, March 07, 2016 6:43 AM
> To: tls@ietf.org
> Cc: Scott Fluhrer (sfluhrer); Nikos Mavrogiannopoulos; Hanno Böck;
> Blumenthal, Uri - 0553 - MITLL
> Subject: Re: [TLS] RSA-PSS in TLS 1.3
> 
> On Friday 04 March 2016 13:49:11 Scott Fluhrer wrote:
> > > -Original Message-
> > > From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Nikos
> > > Mavrogiannopoulos
> > > Sent: Friday, March 04, 2016 3:10 AM
> > > To: Hanno Böck; Blumenthal, Uri - 0553 - MITLL; tls@ietf.org
> > > Subject: Re: [TLS] RSA-PSS in TLS 1.3
> > >
> > > On Thu, 2016-03-03 at 17:11 +0100, Hanno Böck wrote:
> > >
> > > > It may be worth asking the authors what's their opinion of FDH vs
> > > >
> > > > > PSS
> > > > > in view of the state of the art *today*.
> > > >
> > > > You may do that, but I doubt that changes much.
> > > >
> > > >
> > > >
> > > > I think FDH really is not an option at all here. It may very well
> > > > be that there are better ways to do RSA-padding, but I don't think
> > > > that this is viable for TLS 1.3 (and I don't think FDH is better).
> > > > PSS has an RFC (3447) and has been thoroughly analyzed by
> > > > research. I
>  think there has been far less analyzing effort
> > > > towards FDH (or any other construction) and it is not in any way
> > > > specified in a standards document. If one would want to use FDH or
> > > > anything else one would imho first have to go through some
> > > > standardization process (which could be CFRG or NIST or someone
> > > > else) and call for a thorough analysis of it by the cryptographic
> > > > community. Which would take at least a couple of years.
> > > >
> > > >
> > > >
> > > > Given that there probably is no long term future for RSA anyway
> > > > (people want ECC and postquantum is ahead) I doubt anything else
> > > > than the primitives we already have in standards will ever be
> > > > viable.>
> > >
> > > On the contrary. If we have a future with quantum computers
> > > available, the only thing that we have now and would work is RSA
> > > with larger keys, not ECC.
> >
> > RSA isn't *that* much more secure against a Quantum Computer than ECC.
> >  It would appear to take a larger Quantum Computer to break RSA than
> > it would to break ECC (for reasonable moduli/curve sizes), however not
> > that much more.  It is possible that, at one stage, we'll be able to
> > build a QC that's just large enough to break EC curves, but not larger
> > RSA keys - however, we would be likely to be able to scale up our QC
> > to be a bit larger; possibly in a few months, quite likely in a year
> > or two.  Hence, moving back to RSA would appear likely to buy us only
> > a short window...
> 
> > I agree with Hanno; if we're interested in defending against a Quantum
> > Computer, post Quantum algorithms are the way to go
> 
> except that using RSA keys nearly an order of magnitude larger than the
> biggest ECC curve that's widely supported (secp384) is the current
> recommended minimum by ENISA and long term minimum by NIST (3072).
> 
> Using keys 5 times larger still is not impossible, so while it may not buy us
> extra 20 years after ECC is broken, 10 years is not impossible and 5 is almost
> certain (if Moore's law holds for quantum computers).
> It's not much, but it may be enough to make a difference.

If we believe that growth in Moore's law will be accurate for Quantum 
Computers, then no one has to worry about Quantum Computers for the next 
millennium.

In 2001, a Quantum Computer factored a 4 bit number.  In 2014, the 
factorization of a 16 bit number was announced (however, the factorization used 
a special relationship between the factors, so I don’t think it counts as a 
general factorization, but let's ignore that for now).  That's not too far off 
from a Moore's law type expansion.  If this rate continues, we'll see the first 
1024 bit factorization circa the year 3100 AD (aka CE).
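The extrapolation above can be reproduced: the two data points (a 4-bit factorization in 2001, a 16-bit one in 2014) extended linearly in key size land close to the "circa 3100" figure. This is my own arithmetic matching the text, under the stated linear-growth assumption:

```python
# Linear extrapolation of factored-number size from the two data points
# in the text: 4 bits in 2001, 16 bits in 2014 (~1 bit per year).
bits_per_year = (16 - 4) / (2014 - 2001)          # ~0.92 bits/year
year_1024 = 2014 + (1024 - 16) / bits_per_year

print(round(year_1024))   # 3106, i.e. circa 3100 AD
```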

However, right now one of the chief problems in building a Quantum Computer is 
error correction; catching when decoherence has occurred, and fixing it before 
it spoils the entire operation.  Once they have that solved, practical Quantum 
Computers may get much bigger very fast.

After the initial explosive growth (to 1,000 qubits or 1,000,000 qubits - I 
don't think anyone knows where that point would be), might it continue to grow 
in a Moore's law fashion?  It might; however, we're not sure.  Using RSA keys 
which are 5 times larger might not buy us any time at all...

> 
> --
> Regards,
> Hubert Kario
> Senior Quality Engineer, QE BaseOS Security team
> Web: www.cz.redhat.com
> Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic


Re: [TLS] RSA-PSS in TLS 1.3

2016-03-04 Thread Scott Fluhrer (sfluhrer)


> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Nikos
> Mavrogiannopoulos
> Sent: Friday, March 04, 2016 3:10 AM
> To: Hanno Böck; Blumenthal, Uri - 0553 - MITLL; tls@ietf.org
> Subject: Re: [TLS] RSA-PSS in TLS 1.3
> 
> On Thu, 2016-03-03 at 17:11 +0100, Hanno Böck wrote:
> > It may be worth asking the authors what's their opinion of FDH vs
> > > PSS
> > > in view of the state of the art *today*.
> > You may do that, but I doubt that changes much.
> >
> > I think FDH really is not an option at all here. It may very well be
> > that there are better ways to do RSA-padding, but I don't think that
> > this is viable for TLS 1.3 (and I don't think FDH is better).
> > PSS has an RFC (3447) and has been thoroughly analyzed by research. I
> > think there has been far less analyzing effort towards FDH (or any
> > other construction) and it is not in any way specified in a standards
> > document. If one would want to use FDH or anything else one would imho
> > first have to go through some standardization process (which could be
> > CFRG or NIST or someone else) and call for a thorough analysis of it
> > by the cryptographic community. Which would take at least a couple of
> > years.
> >
> > Given that there probably is no long term future for RSA anyway
> > (people want ECC and postquantum is ahead) I doubt anything else than
> > the primitives we already have in standards will ever be viable.
> 
> On the contrary. If we have a future with quantum computers available, the
> only thing that we have now and would work is RSA with larger keys, not ECC.

RSA isn't *that* much more secure against a Quantum Computer than ECC.  It 
would appear to take a larger Quantum Computer to break RSA than to break ECC 
(for reasonable moduli/curve sizes), but not that much larger.  It is possible 
that, at some stage, we'll be able to build a QC that's just large enough to 
break EC curves, but not larger RSA keys; however, we would likely be able to 
scale that QC up a bit further, possibly in a few months, quite likely in a 
year or two.  Hence, moving back to RSA would appear likely to buy us only a 
short window...

I agree with Hanno; if we're interested in defending against a Quantum 
Computer, post-quantum algorithms are the way to go.
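To put rough numbers on "not that much more": the figures below are back-of-envelope logical-qubit estimates drawn from published resource-estimation work (roughly 2n + 3 qubits for factoring an n-bit RSA modulus via Beauregard-style circuits, and on the order of 9n qubits for an n-bit EC discrete log per Roetteler et al.-style estimates). They ignore error-correction overhead entirely, which in practice dominates, so treat this as an illustration of the gap, not a prediction.

```python
def rsa_logical_qubits(modulus_bits: int) -> int:
    # Beauregard-style circuit: roughly 2n + 3 logical qubits for n-bit RSA
    return 2 * modulus_bits + 3

def ecc_logical_qubits(field_bits: int) -> int:
    # Roetteler et al.-style estimate: on the order of 9n logical qubits
    return 9 * field_bits

for name, qubits in [
    ("P-256/X25519-class ECC", ecc_logical_qubits(256)),
    ("RSA-2048", rsa_logical_qubits(2048)),
    ("RSA-3072", rsa_logical_qubits(3072)),
]:
    print(f"{name}: ~{qubits} logical qubits")
```

Even on these optimistic-for-RSA figures, RSA-2048 needs well under twice the logical qubits of a 256-bit curve, which is the "short window" point above.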


Re: [TLS] Do we actually need semi-static DHE-based 0-RTT?

2016-02-19 Thread Scott Fluhrer (sfluhrer)
I would also suggest keeping PSK 0RTT.

One of the things I'm looking at is postquantum cryptography (that is, 
cryptography that would still be secure even if someone manages to build a 
large quantum computer).  While this is not the most important issue that TLS 
1.3 needs to deal with (it's probably not even in the top 100), at some point 
TLS will need to deal with it, and it would be preferable if we could say 
"here's a minor tweak/new ciphersuite/named group we apply to TLS 1.3, and 
we're good to go", rather than "we need to start doing a TLS 1.4".

Here's one of the issues that I foresee: we have key exchange protocols that 
look postquantum; however, they can't do everything that (EC)DH can.  Either 
they must be run ephemerally (that is, no static keyshares; you have to 
generate a fresh one for each key exchange), or the response keyshare is a 
function of the initiator's (and can't be used in a second exchange).

While such a key exchange protocol would work fine in an initial-contact 
situation, it doesn't work out nearly as well in a DHE-0RTT protocol (which 
makes both assumptions).  Leaving a doorway open where we don't require our 
crypto primitives to have such capabilities may, in the long run, make things 
easier on us.
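The restriction can be seen in the shape of a KEM-style exchange. The sketch below is a deliberately insecure toy (the "crypto" is just hashing of random bytes; only the API shape matters): the initiator must mint a fresh keypair per exchange, and the responder's share (the ciphertext) is computed from the initiator's public key, so neither side can publish a reusable semi-static share the way (EC)DH allows.

```python
import os, hashlib

def kem_keygen():
    # Initiator generates a FRESH keypair for every exchange: there is
    # no long-lived share to publish, unlike a semi-static DH keyshare.
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return sk, pk

def kem_encaps(pk: bytes):
    # The responder's "keyshare" (the ciphertext) is computed FROM the
    # initiator's public key, so it is useless in any second exchange.
    coins = os.urandom(32)
    ct = coins  # toy: a real KEM hides the coins under pk
    shared = hashlib.sha256(pk + coins).digest()
    return ct, shared

def kem_decaps(sk: bytes, ct: bytes):
    pk = hashlib.sha256(sk).digest()
    return hashlib.sha256(pk + ct).digest()

sk, pk = kem_keygen()         # initiator -> responder: pk
ct, ss_resp = kem_encaps(pk)  # responder -> initiator: ct
ss_init = kem_decaps(sk, ct)
assert ss_init == ss_resp
```

A DHE-0RTT design assumes the server can publish one share ahead of time and answer many clients with it; in the KEM shape above there is simply no message that plays that role.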

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Dave Garrett
> Sent: Friday, February 19, 2016 6:15 PM
> To: Eric Rescorla; tls@ietf.org
> Subject: Re: [TLS] Do we actually need semi-static DHE-based 0-RTT?
> 
> On Friday, February 19, 2016 05:24:25 pm Eric Rescorla wrote:
> > My impression is exactly the opposite. All the infrastructure to
> > PSK-resumption and hence PSK-0RTT is already in place for TLS 1.2. And
> > of course PSK-resumption is also much faster.
> 
> That's good to hear. The perf advantage is why I'm not advocating dropping
> it; merely saying that I too would prefer a single method if it didn't lose
> capability. Dropping PSK resumption drops less than dropping DHE 0RTT, but
> keeping both seems like the best option at the moment.
> 
> > On Fri, Feb 19, 2016 at 3:08 PM, Dave Garrett 
> wrote:
> > > It would mean that TLS only has 0RTT resumption and not actually have
> any 0RTT sessions.
> >
> > Why do you think that this makes a material difference?
> 
> One of the fundamental complaints about TLS, performance-wise, is the
> added round trip time over plaintext. That's why the WG made a point to
> focus on adding 0RTT in the first place. Someone considering upgrading from
> HTTP to HTTPS generally has this concern (or any other protocol to a variant
> over TLS). With only PSK resumption, we can still always have a 1RTT hit on
> first connection, and revert to that after the session is considered expired.
> With DHE 0RTT we have a longer term config that could allow for more
> generic caching and distribution and thus not have to get that 1RTT hit in
> many scenarios.
> 
> The lack of a current ability to easily distribute a new config system should
> not be used as evidence against creating a new config system that we would
> want to create a way to easily distribute. Even dumping the top few dozen
> sites' 0RTT DHE config into a static file in a client update would be a 
> noticeable
> improvement over not doing so. Coming up with a better method can come
> next.
> 
> People should be using TLS or encryption in general as a matter of
> responsibility, but they don't. Softening *all* barriers to them upgrading is
> very necessary to get more to switch. 0RTT PSK only gets rid of a delay in
> continuing a session, which in some use-cases might be minimally noticeable.
> 0RTT DHE allows for a first-connect with comparable speed to plaintext (in
> scenarios where 0RTT data is safe, namely most HTTP). Within the context of
> HTTP, which is singled out as a focus in the TLS WG charter, 0RTT DHE will
> provide a more noticeable latency reduction in comparison to 0RTT PSK only.
> 
> Another issue is that of privacy with session resumption. If the only way to
> get 0RTT is to keep sessions alive, then clearing that cache on the client 
> side
> costs a future 1RTT. You can, however, cache 0RTT DHE configs safely for a
> longer time because they are not specific to the user agent. (we should
> probably narrow the spec to state that configs SHOULD NOT be per-client) In
> order to reliably get 0RTT without DHE configs, applications/services would
> need to cache PSK resumption sessions for as long as possible, which leaves a
> distinct per-user marker on both the client and server. Anyone trying to
> optimize away the 1RTT hit of first-connect will be required to maintain a
> system that keeps more user identifiable information than we should want.
> 
> 
> Dave
> 

Re: [TLS] Data volume limits

2015-12-15 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Henrick Hellström
> Sent: Tuesday, December 15, 2015 7:09 PM
> To: tls@ietf.org
> Subject: Re: [TLS] Data volume limits
> 
> On 2015-12-16 00:48, Eric Rescorla wrote:
> >
> >
> > On Tue, Dec 15, 2015 at 3:08 PM, Scott Fluhrer (sfluhrer)
> > mailto:sfluh...@cisco.com>> wrote:
> > The quadratic behavior in the security proofs are there for just
> > about any block cipher mode, and is the reason why you want to stay
> > well below the birthday bound.
> >
> >
> > The birthday bound here is 2^{64}, right?
> >
> > -Ekr
> >
> >However, that's as true for (say) CBC mode as it is for GCM
> 
> Actually, no.
> 
> Using the sequence number as part of the effective nonce, means that it
> won't collide. There is no relevant bound for collisions in the nonces or in 
> the
> CTR state, because they simply won't happen (unless there is an
> implementation flaw). There won't be any potentially exploitable collisions.
> 
> However, theoretically, the GHASH state might collide with a 2^{64} birthday
> bound. This possibility doesn't seem entirely relevant, though.

That is a good point, and deserves to be examined more.

With CBC mode, there's a probability that two different ciphertext blocks will 
happen to be identical; when that unlikely event happens, the attacker can 
determine the bitwise difference between the corresponding plaintext blocks 
(and thereby learn a small amount of plaintext).
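The CBC leak follows directly from C_i = E(K, P_i xor C_{i-1}): if C_i == C_j, then the cipher inputs were equal, so P_i xor P_j = C_{i-1} xor C_{j-1}, which the attacker reads straight off the ciphertext. The toy below constructs a collision to show the relation; the "block cipher" is a placeholder bijection, since only the permutation property is used in the argument.

```python
from os import urandom

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_permutation(block: bytes, key: bytes) -> bytes:
    # Stand-in for AES: any keyed bijection on 16-byte blocks suffices,
    # because the argument only uses that equal outputs mean equal inputs.
    return xor(bytes(reversed(block)), key)

key = urandom(16)
c_prev_i, c_prev_j = urandom(16), urandom(16)  # preceding ciphertext blocks
p_i = urandom(16)
# Choose p_j so the two CBC cipher inputs (P xor C_prev) -- and hence the
# ciphertext blocks -- collide:
p_j = xor(p_i, xor(c_prev_i, c_prev_j))
c_i = toy_permutation(xor(p_i, c_prev_i), key)
c_j = toy_permutation(xor(p_j, c_prev_j), key)
assert c_i == c_j
# Seeing the collision, an attacker who knows only ciphertext learns:
assert xor(p_i, p_j) == xor(c_prev_i, c_prev_j)
```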

This doesn't happen with GCM.  Instead, the distinguisher is of this form: the 
attacker with a potential plaintext can compute the internal CTR values for 
GCM; if he sees a duplicate value, he can deduce that that potential plaintext 
wasn't the real one (because the internal CTR values never repeat).

Assuming that they cannot distinguish AES with a random key from a random 
permutation, that's the only thing they can learn.

That is, when they prove that there is no distinguisher with better than 
2^{-64} advantage, what they are referring to (in practice) is that the 
attacker could eliminate a tiny fraction (1 out of 2^{64}) of the possible 
plaintexts; they gain no more information than that.
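As a sanity check on that order of magnitude, here is a heuristic reading of the switching-lemma-style bound Adv ≈ q²/2^129 for a 128-bit block cipher, with q the number of 16-byte blocks processed (constant factors differ between statements of the bound, so treat the exponent as approximate):

```python
import math

def distinguishing_advantage_log2(total_bytes: int) -> float:
    # log2 of q^2 / 2^129, where q = number of 16-byte AES blocks
    q = total_bytes / 16
    return 2 * math.log2(q) - 129

# The 2^36-byte per-key limit discussed elsewhere in this thread:
print(distinguishing_advantage_log2(2**36))  # -65.0
```

So at 2^36 bytes under one key, this heuristic puts the distinguishing advantage around 2^-65, consistent with the "1 out of 2^64"-scale figure above.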




Re: [TLS] Data volume limits

2015-12-15 Thread Scott Fluhrer (sfluhrer)


> -Original Message-
> From: Watson Ladd [mailto:watsonbl...@gmail.com]
> Sent: Tuesday, December 15, 2015 5:38 PM
> To: Scott Fluhrer (sfluhrer)
> Cc: Eric Rescorla; tls@ietf.org
> Subject: Re: [TLS] Data volume limits
> 
> On Tue, Dec 15, 2015 at 5:01 PM, Scott Fluhrer (sfluhrer)
>  wrote:
> > Might I enquire about the cryptographical reason behind such a limit?
> >
> >
> >
> > Is this the limit on the size of a single record?  GCM does have a
> > limit approximately there on the size of a single plaintext it can
> > encrypt.  For TLS, it encrypts a record as a single plaintext, and so
> > this would apply to extremely huge records.
> 
> The issue is the bounds in Iwata-Ohashai-Minematsu's paper, which show a
> quadratic confidentiality loss after a total volume sent. This is an 
> exploitable
> issue.

Actually, the main result of that paper was that GCM with nonces other than 96 
bits is less secure than previously thought (or, rather, that the previous 
proofs were wrong, and what they can prove is considerably worse; whether their 
proof is tight is an open question).  They address 96-bit nonces as well; 
however, the results they get are effectively unchanged from the original GCM 
paper.  I had thought that TLS used 96-bit nonces (constructed from a 32-bit 
salt and a 64-bit counter); were the security guarantees from the original 
paper too weak?  If not, what has changed?

The quadratic behavior in the security proofs are there for just about any 
block cipher mode, and is the reason why you want to stay well below the 
birthday bound.  However, that's as true for (say) CBC mode as it is for GCM



Re: [TLS] Data volume limits

2015-12-15 Thread Scott Fluhrer (sfluhrer)
Might I enquire about the cryptographical reason behind such a limit?

Is this the limit on the size of a single record?  GCM does have a limit 
approximately there on the size of a single plaintext it can encrypt.  For TLS, 
it encrypts a record as a single plaintext, and so this would apply to 
extremely huge records.
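That single-plaintext limit works out as follows (per SP 800-38D's 2^39 − 256 bit bound: GCM's keystream comes from a 32-bit block counter, and two of the 2^32 counter values are reserved rather than used for plaintext, leaving 2^32 − 2 blocks per AEAD call):

```python
# Maximum plaintext for a single GCM invocation: (2^32 - 2) 16-byte blocks
GCM_MAX_PLAINTEXT_BYTES = (2**32 - 2) * 16
print(GCM_MAX_PLAINTEXT_BYTES)  # 68719476704, i.e. just under 64 GiB
```

Since a TLS record is far smaller than 64 GiB, this per-invocation bound clearly isn't the motivation for a 2^36-byte connection limit, which is the distinction the question is drawing.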

Or is this a limit on the total amount of traffic that can go through a 
connection over multiple records?  If this is the issue, what is the security 
concern that you would have if that limit is exceeded?

Thank you.

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Eric Rescorla
Sent: Tuesday, December 15, 2015 4:15 PM
To: tls@ietf.org
Subject: [TLS] Data volume limits

Watson kindly prepared some text that described the limits on what's safe
for AES-GCM and restricting all algorithms with TLS 1.3 to that lower
limit (2^{36} bytes), even though ChaCha doesn't have the same
restriction.

I wanted to get people's opinions on whether that's actually what we want
or whether we should (as is my instinct) allow people to use ChaCha
for longer periods.

-Ekr



Re: [TLS] Obscure ciphers in TLS 1.3

2015-09-23 Thread Scott Fluhrer (sfluhrer)

> -Original Message-
> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Dave Garrett
> Sent: Wednesday, September 23, 2015 6:41 PM
> To: tls@ietf.org
> Subject: [TLS] Obscure ciphers in TLS 1.3
> 
> https://tlswg.github.io/tls13-spec/#cipher-suites
> https://www.iana.org/assignments/tls-parameters/tls-
> parameters.xhtml#tls-parameters-4
> 
> When I updated the lists in the TLS 1.3 draft, I just put everything in that 
> is
> currently in the registry and usable. I'd like to now start a discussion on 
> what
> should be allowed. Specifically, I have questions about ARIA and Camellia, as
> well as 8-bit authentication tag variants of AES-CCM or anything else.
> 
> How relevant is this ARIA attack?
> https://eprint.iacr.org/2010/168

That's not relevant to the use of ARIA -- against 256-bit ARIA, it breaks 8 of 
16 rounds; against 192-bit ARIA, it breaks 7 of 14 rounds.  That gives us a 
factor-of-2 safety margin for both key sizes, which is rather a lot.



[TLS] Comments on the TLS 1.3 draft

2015-08-06 Thread Scott Fluhrer (sfluhrer)
I recently reviewed the most recent TLS 1.3 draft, and I must say that I am 
impressed; the protocol appears to be a significant improvement.  In 
particular, you simplify the protocol significantly, and as we all know, 
complexity is the enemy of security.  You also drop many of the weak options, 
such as RC4 and the export ciphers; that sounds like an excellent idea.

That said, I do see a few things that puzzle me:


-  When dealing with ECDSA signatures, the default hash algorithm is 
SHA1, and any other hash function needs to be specified explicitly.  Might I 
ask why that is?  No one has demonstrated a SHA1 collision yet; however people 
are creeping closer.  If you ask me (which you didn't, but I'll give my opinion 
anyways), the default should be (at least) SHA-256; you should allow SHA-1 as a 
downgrade option only if someone makes a strong case that they can implement 
ECDSA and the rest of TLS 1.3, but implementing SHA-256 is just too hard.  
Otherwise, it should be discarded just like the other known weak 
cryptographical primitives (such as RC4).

-  You also allow for someone using MD5 as the hash algorithm.  Take the 
above comments about how SHA-1 is not a good idea, and multiply them by a 
factor of about 10.  I see no justification for allowing someone to use a 
known-broken hash algorithm.

-  Given the general theme of simplification, I was a bit puzzled by 
something; you appear to provide two different solutions to "how do I quickly 
reestablish a TLS tunnel"; you have both 0-RTT and session resumption.  While 
both appear to have minor advantages over the other, I don't immediately see 
any real justification for including the complexity of both.  Now, it might be 
that I'm catching a draft in the middle, and that you fully intend to combine 
them.  Alternatively, there might be strong reasons to have both (and it just 
doesn't occur to me). In either of those two cases, "never mind".
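On the first point above, "specifying the hash explicitly" is just a matter of what the client lists in its signature_algorithms extension. A minimal sketch of the wire encoding, using the TLS 1.2 codepoints (RFC 5246: hash sha1 = 2, sha256 = 4; signature ecdsa = 3; extension type 13) -- the helper name here is illustrative, not from any real library:

```python
import struct

def signature_algorithms_ext(pairs):
    # pairs: (hash, signature) codepoints, most-preferred first
    body = b"".join(struct.pack("!BB", h, s) for h, s in pairs)
    inner = struct.pack("!H", len(body)) + body
    # extension type 13 = signature_algorithms, then extension length
    return struct.pack("!HH", 13, len(inner)) + inner

# Prefer ecdsa+sha256, offer ecdsa+sha1 only as a fallback:
ext = signature_algorithms_ext([(4, 3), (2, 3)])
print(ext.hex())  # 000d0006000404030203
```

The ordering in that list is exactly the "SHA-256 by default, SHA-1 only as a downgrade" preference argued for above.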

However, despite these minor nits, I would end with saying that the working 
group has done good work on this draft; I look forward to the end product.

Thanks!