> The nuisance with just a flag is the client can't express [what I think
> are] reasonable preferences. It should be able to say things like:
Offhand, I don't see at all why the client should be able to say any of these "don't"s:

> * Don't do X25519 + P-256. This is just silly.
> * Don't do PQ1 on its own. I really want the PQ scheme paired with something more established.
> * Don't do PQ1 + PQ2. I said something more established, please.
> * Don't do PQ1 + X25519 + P-256. Why are you doing three of these?
> * Don't do PQ1 + PQ2 + PQ3 + PQ4 + X25519 + P-256 + P-384 + FFDHE2048 + FFDHE3072, oww my head.

This is the only thing that IMHO the client should be able to say:

> * PQ1 + X25519 is cool. I like that combo.

On Tue, Jul 30, 2019 at 2:48 PM Andrei Popov <andrei.po...@microsoft.com> wrote:

Given these options, I also prefer option 2, for some of the same reasons. For my understanding, though: why not have the client advertise support for hybrid key exchange (e.g. via a "flag" extension), and then KeyShareServerHello can contain two KeyShareEntries (essentially, using the same format as KeyShareClientHello)? This would solve the Cartesian-product issue.

Cheers,

Andrei

From: TLS <tls-boun...@ietf.org> On Behalf Of David Benjamin
Sent: Tuesday, July 30, 2019 11:24 AM
To: Watson Ladd <watsonbl...@gmail.com>
Cc: TLS List <tls@ietf.org>
Subject: Re: [TLS] Options for negotiating hybrid key exchanges for postquantum

I think this underestimates the complexity cost of option 1 to the protocol and implementations.

Option 1 means group negotiation includes entire codepoints whose meaning cannot be determined without a parallel extension. This compounds across everything that interacts with named groups, impacting everything from APIs to config file formats to even UI surfaces. Other uses of NamedGroups are affected too. For instance, option 2 fits into draft-ietf-tls-esni as-is, while option 1 requires injecting hybrid_extension into ESNI somehow. Analysis must further check that every use incorporates this parallel lookup table into transcript-like measures.

The lesson from TLS 1.2 code points is not combined codepoints vs. split ones. Rather, the lesson is to avoid interdependent decisions:

* Signature algorithms in TLS 1.2 were a mess because the ECDSA codepoints required cross-referencing against the supported curves list. The verifier could not express some preferences (signing SHA-512 with P-256 is silly, and mixing hash+curve pairs in ECDSA is slightly off in general). As an analogy to option 1's ESNI problem, we even forgot to allow the server to express curve preferences. TLS 1.3 combined signature algorithm considerations into a single codepoint to address all this.

* Cipher suites in TLS 1.2 were a mess because they were half-combined and half-split. TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 said to use some ECDHE key exchange, but you needed to check for a NamedGroup in common first. It said to use ECDSA, but you needed to check signature algorithms (which themselves cross-reference curves) first. Early drafts of TLS 1.3 had it even worse, where a TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 full handshake morphed into TLS_ECDHE_PSK_WITH_AES_128_GCM_SHA256 on resumption. Thus, TLS 1.3 cipher suites negotiate solely AEAD + PRF hash.

In fairness to TLS 1.2, some of this was a consequence of TLS 1.2 evolving over time as incremental extensions on top of SSL 3.0. And sometimes we do need to pay costs like these. But hybrid key exchanges fit into the NamedGroup "API" just fine, so option 2 is the clear answer. Code points are cheap. Protocol complexity is much more expensive.
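To make the NamedGroup "API" point concrete, here is a minimal Go sketch of the client side under option 2. The hybrid codepoint value (0x2F00) is made up for illustration, and the sketch shows only the configuration surface; a real library would still need the hybrid key exchange itself implemented.

package main

import (
	"crypto/tls"
	"fmt"
)

// Hypothetical NamedGroup codepoint for an X25519+SIKEp434 hybrid under
// option 2. The value 0x2F00 is made up; IANA would assign the real one.
const X25519SIKEp434 tls.CurveID = 0x2F00

func main() {
	// With a combined codepoint, a hybrid is just one more entry in the
	// existing preference list. Every API that already speaks CurveID
	// (configs, ESNI, UI surfaces) is untouched; no parallel extension
	// or second lookup table is needed.
	cfg := &tls.Config{
		CurvePreferences: []tls.CurveID{X25519SIKEp434, tls.X25519},
	}
	fmt.Println(cfg.CurvePreferences)
}

Nothing else in the stack changes: the hybrid rides through every interface that already carries a CurveID.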
It's true that standards are often underspecified. This means the IETF should finish the job, not pass all variations through. RSA-PSS is a clear example of what to avoid: it takes more bytes merely to utter "RSA-PSS with SHA-256 and the usual parameters" in X.509 than to encode an entire ECDSA signature! We should not define more than a handful of options, regardless of the encoding.

On Tue, Jul 30, 2019 at 12:18 PM Watson Ladd <watsonbl...@gmail.com> wrote:

On Tue, Jul 30, 2019, 8:21 AM Scott Fluhrer (sfluhrer) <sfluh...@cisco.com> wrote:

During the physical meeting in Montreal, we had a discussion about postquantum security, and in particular on how one might want to negotiate several different 'groups' simultaneously (because there might not be one group that is entirely trusted, and I put 'groups' in scare quotes because postquantum key exchanges are typically not formed from a Diffie-Hellman group). At the meeting, there were two options presented:

Option 1: as the supported group, we insert a 'hybrid marker' (and include an extension that lists which combination each hybrid marker stands for). For example, the client might list in his supported groups hybrid_marker_0 and hybrid_marker_1, and there would be a separate extension that lists hybrid_marker_0 = X25519 + SIKEp434 and hybrid_marker_1 = X25519 + NTRUPR653. The server would then look up the meanings of hybrid_marker_0 and 1 in the extension, and compare that against his security policy. In this option, we would ask IANA to allocate code points for the various individual postquantum key exchanges (in this example, SIKEp434 and NTRUPR653), as well as a range of code points for the various hybrid markers.

Option 2: we have code points for all the various combinations that we may want to support; hence IANA might allocate a code point for X25519_SIKEp434 and another code point for X25519_NTRUPR653. With this option, the client would list X25519_SIKEp434 and X25519_NTRUPR653 in their supported groups. In this option, we would ask IANA to allocate code points for all the various combinations that we want to allow to be negotiated.
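To make the difference between the two encodings concrete, here is a rough Go sketch. All codepoints except X25519's real value (0x001D) are invented, and the mapping-extension layout is a simplification for illustration, not the format from any draft.

package main

import (
	"encoding/binary"
	"fmt"
)

// All codepoints except x25519 (0x001D, its real NamedGroup value) are
// made up for illustration.
const (
	x25519    uint16 = 0x001D
	sikep434  uint16 = 0x01FE // hypothetical
	ntrupr653 uint16 = 0x01FF // hypothetical
	marker0   uint16 = 0x2F00 // hypothetical hybrid_marker_0
	marker1   uint16 = 0x2F01 // hypothetical hybrid_marker_1
)

// Option 1: supported_groups carries opaque markers, and a separate
// extension maps each marker to its component key exchanges. (Real TLS
// vectors carry explicit length prefixes; this encoding is simplified.)
type hybridDef struct {
	marker     uint16
	components []uint16
}

func encodeMappingExt(defs []hybridDef) []byte {
	var out []byte
	for _, d := range defs {
		out = binary.BigEndian.AppendUint16(out, d.marker)
		out = append(out, byte(len(d.components)*2)) // component list length in bytes
		for _, c := range d.components {
			out = binary.BigEndian.AppendUint16(out, c)
		}
	}
	return out
}

func main() {
	// Option 1: two parallel structures that must be kept in sync.
	groups1 := []uint16{marker0, marker1, x25519}
	ext := encodeMappingExt([]hybridDef{
		{marker0, []uint16{x25519, sikep434}},
		{marker1, []uint16{x25519, ntrupr653}},
	})
	fmt.Printf("option 1 groups: %04x  mapping ext: %x\n", groups1, ext)

	// Option 2: one flat list; each combination is its own codepoint.
	const x25519Sike, x25519Ntru uint16 = 0x2F10, 0x2F11 // hypothetical
	fmt.Printf("option 2 groups: %04x\n", []uint16{x25519Sike, x25519Ntru, x25519})
}

Note how option 1 carries two parallel structures that must stay consistent, while option 2 is a single flat list.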
I would like to make an argument in favor of option 1:

· It is likely that not everyone will be satisfied with "X25519 plus one of a handful of specific postquantum algorithms"; some may prefer another elliptic curve (for example, X448), or perhaps even a MODP group (I have talked to people who do not trust ECC). In addition, other people might not trust a single postquantum algorithm, and may want to rely on both (for example) SIKE and NewHope (which are based on very different hard problems). With option 2, we could try to anticipate all the common combinations (such as P384_SIKEp434_NEWHOPE512CCA); however, that could very well end up as a lot of combinations.

· There are likely to be several NIST-approved postquantum key exchanges, and each of those key exchanges is likely to have a number of supported parameter sets (if we take the specific postquantum key exchange as analogous to the ECDH protocol, the "parameter set" can be thought of as analogous to the specific elliptic curve; it modifies the key share size, the performance, and sometimes the security properties). In fact, one of the NIST submissions currently has 30 parameter sets defined. Hence, even if NIST doesn't approve all the parameter sets (or some of them do not make sense for TLS in any scenario), we might end up with 20 or more different key exchange/parameter set combinations that do make sense for some scenario that uses TLS (be it a traditional PC client/server, a wireless client, two cloud devices communicating, or an IoT device).

· In addition, we are likely to support additional primitives in the future; possibly national curves (e.g. Brainpool), or additional postquantum algorithms (or additional parameter sets for existing ones). Of course, once we add such a code point, we'll need to add the additional code points for all the combinations it makes sense in (very much like we had to add a number of ciphersuites whenever we added a new encryption algorithm to TLS 1.2).

[Watson Ladd, replying inline:]

Are people actually going to use hybrid encryption post-NIST? The actual deployments today, for experiments, have all fit option 2, and hybrids are unlikely in the future. My objection to option 1 is that it gets very messy. Do we use only the hybrids we both support? What if I throw a bunch of expensive things together? There is no reason we need a hybrid scheme!

[Scott Fluhrer's message continues:]

It seems reasonable to me that the combination of these two factors is likely to cause us (should we select option 2) to define a very large number of code points to cover all the various options that people need. Now, this is based on speculation (both about the NIST process and about additional primitives that will be added to the protocol), and one objection I've heard is "we don't know what's going to happen, so why would we make decisions based on this speculation?" I agree that we lack knowledge; however, it seems to me that a lack of knowledge is an argument in favor of selecting the more flexible option (which, in my opinion, is option 1, as it allows the negotiation of combinations of key exchanges that the WG has not anticipated).

My plea: let's not repeat the TLS 1.2 ciphersuite mess; let's add an extension that keeps the number of code points we need to a reasonable bound.

The costs of option 1?

· It does increase the complexity on the server a small amount (I'm not a TLS implementor; however, it would seem to me to be only a fairly small amount), as sketched below.

· It may increase the size of the client hello a small amount (on the other hand, because it allows us to avoid sending duplicate key shares, it can also reduce the size of the client hello, depending on what's actually negotiated).

IMHO, the small increase in complexity is worth the lack of complexity in the code point table, and the additional flexibility it gives.
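As a rough illustration of that server-side cost, here is a toy Go sketch of group selection under option 1. The codepoints and the mapping structure are hypothetical; the point is only the extra indirection through the client-supplied mapping.

package main

import "fmt"

// Toy server policy: the individual key exchanges this server trusts.
// Codepoints are hypothetical, matching the earlier sketches.
var trusted = map[uint16]bool{
	0x001D: true, // X25519
	0x01FE: true, // hypothetical SIKEp434
}

// Under option 1 the server cannot act on a marker by itself: it must
// resolve the marker through the client's mapping extension and check
// every component against local policy. This is the extra indirection
// that option 2 avoids.
func selectGroup(clientGroups []uint16, mapping map[uint16][]uint16) (uint16, bool) {
	for _, g := range clientGroups {
		components, isHybrid := mapping[g]
		if !isHybrid {
			components = []uint16{g} // plain, non-hybrid group
		}
		acceptable := true
		for _, c := range components {
			if !trusted[c] {
				acceptable = false
				break
			}
		}
		if acceptable {
			return g, true
		}
	}
	return 0, false
}

func main() {
	mapping := map[uint16][]uint16{
		0x2F00: {0x001D, 0x01FE}, // hybrid_marker_0 = X25519 + SIKEp434
		0x2F01: {0x001D, 0x01FF}, // hybrid_marker_1 = X25519 + NTRUPR653
	}
	g, ok := selectGroup([]uint16{0x2F01, 0x2F00, 0x001D}, mapping)
	fmt.Printf("selected %04x (ok=%v)\n", g, ok)
}

Running it selects hybrid_marker_0, because hybrid_marker_1 resolves to a component outside this server's policy.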