Inline: On Sun, Apr 9, 2023, 1:27 PM Manu Sporny <mspo...@digitalbazaar.com> wrote:
> On Sun, Mar 26, 2023 at 1:41 PM Ilari Liusvaara
> <ilariliusva...@welho.com> wrote:
> > The problem with lack of cryptographic agility is that if a component
> > is broken or proves inadequate, you are in a world of hurt.
>
> It looks like the definition of "cryptographic agility" is failing us. :)
>
> The article isn't arguing against having multiple algorithms to jump
> to when the ones we're using today inevitably fail. It's arguing
> about how we minimize the parameters that we expose to developers
> while enabling us to jump to a new mechanism before the old one fails
> us.
>
> What's not under debate is: "Have backup algorithms and parameters at
> the ready." (some people call this cryptographic agility)
>
> What the article calls out, as have a number of implementers cited in
> the article, is this notion of: "Enable the developers at the
> application layer to dynamically switch between all the backup
> algorithms and parameters." (some people call this cryptographic
> agility)
>
> That is, building all the backup algorithms into the application layer
> of software and giving buttons and levers to developers who are not
> trained in picking the 'right ones' is an anti-pattern. It leads to
> "kitchen sink" cryptographic libraries, larger attack and audit
> surfaces, downgrade attacks, and a variety of other things that have
> been biting us at the application layer for years.
>
> EdDSA largely got this right, and we need more of that, and less of
> continuing this trend where we expose developers to
> pick algorithms and parameters that they don't understand, which
> inevitably ends up generating CVEs.
>
> > Of all the three problems brought up, versions are worse than
> > algorithms:
> >
> > - Versions are much more expensive.
> > - Versions are much more likely to interact badly.
> > - Versions are much more vulnerable to downgrade attacks.
>
> Can you define "version"?
> For example, do you mean "protocol version"
> or "cryptographic suite version" or something else?
>
> > And with algorithms being expensive, sometimes it is perversely
> > lack of agility that makes things expensive. E.g., consider wanting
> > to use Edwards25519 curve for signatures in constrained environment...
>
> I don't think anyone is arguing against having cryptographic suite(s)
> that are suitable for use in embedded contexts and other ones that are
> suitable for use in non-constrained environments.
>
> The argument is for preventing developers who don't understand how
> to pick the right parameters from doing so at the application layer,
> by using language that more clearly conveys what they're picking
> (rather than strings like 'A128CBC', 'A128CTR', and 'A128GCM').
>
> > And the example of downgrade attack given is version downgrade
> > attack, not algorithm downgrade attack. As hard as algorithm negotiation
> > is, version negotiation is much harder.
> >
> > And in response to the statement "No one should have used those
> > suites after 1999!": Better suites were not registered until 2008.
>
> Christopher (cc'd) will have to speak to the point he was trying to
> get at with this...
>
> > And the article does not seem to bring up overloading as a solution:
> > Use the same identifiers with meanings that depend on the key. The
> > applications/libraries are then forced to consider the key type before
> > trying operations.
>
> Could you please elaborate more on this point? I think we might agree here.
>
> > RS256 and HS256 are very different things, and applications
> > absolutely require control over that sort of stuff.
>
> The point is that the developers who have implemented software that has
> led to CVEs didn't know that they were very different things, because
> the APIs and parameters that they were using made it easy to
> footgun themselves.
> IOW, "RS256" and "HS256" sound very similar to
> them, and the library APIs that they were using just did something like
> "sign(tokenPayload, 'HS256', serverRSAPublicKey)"... which uses a
> public key value to create an HMAC signature.
>
> https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/
>
> > And who cares about SHA-256 versus SHAKE-256 (until either gets broken,
> > but nobody knows which).
>
> The point is: Why are we expecting developers to pick these values at
> the application layer?
>
> Here's one of the problems w/ cryptographic system design today:
>
> 1. We (at the IETF) create a bunch of choices in the name of
> "algorithm agility".
> 2. Library implementers expose all of those choices to developers in
> the name of "algorithm agility" (which they've been told is a good
> thing).
> 3. Developers footgun themselves because they pick the wrong values.
>
> Now, the rebuttal to #3 tends to be: Well, those developers shouldn't
> be touching cryptographic code or security software! ... but the
> problem there is that it ignores the reality that this happens on a
> regular basis (and results in CVEs).
>
> So the question is: can we do more, as a security community, to
> prevent footguns? EdDSA (ed25519) is the sort of improvement I'm
> talking about (vs. what happened w/ RSA and ECDSA). I'll argue that
> the "simplicity in design" consideration is the same one that went
> into WireGuard and the W3C Data Integrity work as well.

It's clear by now we are talking about families of standards (JOSE,
COSE), families of algorithms (RSA, ECDSA, EdDSA), and
parameterization...

W3C Data Integrity adds to these Dataset Canonicalization algorithms
and Dataset Digest algorithms, and exposes all of them under a single
yearly versioned name. So, for example, ECDSA-2023 internalizes the
following parameters:

- URDNA2015 RDF Data Set Normalization
- SHA-256 Data Set Digest Algorithm
- ECDSA secp256r1 with SHA-256 signatures (ES256)

There is no "parameterization"; developers either support "ECDSA-2023"
or they don't... They can't support only half of it.

This is in contrast to things like HPKE, where you can decide to
implement part of each registry and end up having 2 implementations of
HPKE that are not able to talk to each other, because one only supports
Kyber and the other only supports DHKEM on P-256, and maybe they
support different KDFs and AEADs as well.

HPKE has lots of agility. COSE and JOSE have some agility. W3C Data
Integrity has the least agility (in terms of choices for developers).
But they are at different layers. HPKE is from CFRG; it does not
assume application details, and it is applied in TLS and COSE. COSE
applies CFRG work to CBOR; JOSE does the same for JSON. W3C Data
Integrity is (basically) doing the same thing as COSE, but for RDF,
and with extra algorithms related to RDF application types.

Conflating agility at the cryptographic layer with agility at the
envelope or application layer... is a problem.

When I look at HPKE and COSE, I see "pro agility" at both layers. When
I look at W3C Data Integrity, I see "anti agility", but only at the
envelope / application layer... There is actually much more potential
agility available; it is just not exposed to developers
(canonicalization and digest).

W3C Data Integrity assumes canonicalization: you can't change a
parameter and use a different canonicalization algorithm; if you
could, it would have higher agility. You can't change a parameter and
use a different signing algorithm; if you could, it would have higher
agility... This is maybe changing, but the set is still at most 3
elements... and there is no registry. You can't change a parameter and
use a different dataset digest algorithm; if you could, it would have
higher agility.

W3C Data Integrity relies on lower-level primitives, such as the work
of CFRG...
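To make the HPKE interop failure described above concrete, here is a minimal Python sketch. The suite names and the shape of the "supported sets" are illustrative assumptions, not actual IANA registry code points or any real library's API:

```python
# Hypothetical sketch: two conformant HPKE implementations that each
# support only a slice of the (KEM, KDF, AEAD) registries.
# All identifiers below are illustrative, not real IANA code points.

from itertools import product

# Implementation A chose a post-quantum KEM and one KDF/AEAD pair.
impl_a = {
    "kem": {"Kyber768"},
    "kdf": {"HKDF-SHA256"},
    "aead": {"AES-128-GCM"},
}

# Implementation B chose DHKEM on P-256, with a different KDF and AEAD.
impl_b = {
    "kem": {"DHKEM(P-256, HKDF-SHA256)"},
    "kdf": {"HKDF-SHA512"},
    "aead": {"ChaCha20-Poly1305"},
}

def common_suites(a, b):
    """Return every (kem, kdf, aead) triple that both peers support."""
    return [
        (kem, kdf, aead)
        for kem, kdf, aead in product(a["kem"], a["kdf"], a["aead"])
        if kem in b["kem"] and kdf in b["kdf"] and aead in b["aead"]
    ]

# Both implement "HPKE", yet they share no usable suite.
print(common_suites(impl_a, impl_b))  # []
```

Nothing here is cryptographic; the point is purely combinatorial: per-registry choice means conformance to the protocol does not imply interoperability between any two implementations.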
Data Integrity does not currently define new cryptographic primitives,
unless you count canonicalization algorithms as cryptographic
primitives. W3C Data Integrity could have been built on JOSE and COSE;
if it had been, it would have more internal agility, even if that
agility was not exposed to application developers externally.

> > Considering the multitude of security issues with JOSE, I don't think
> > those have much to do with poor algorithm choices:
>
> Well, we certainly agree there -- there are many reasons that JOSE has
> led to the number of security issues that have come about as
> developers have used the stack. Many of those reasons boil down to
> questionable design choices that expose the developer to algorithms and
> parameters they shouldn't have been exposed to.
>
> > - Libraries somehow managing to use RSA public key as HMAC key (don't
> > ask me how).
>
> Yep, exposing that selection in the way that JOSE libraries do is an
> anti-pattern, IMHO.
>
> > - Bad library API design leading to alg=none being used when it should
> > not.
>
> Yep, "Let the attacker choose the algorithm."... another bad anti-pattern.
>
> > - Trusting untrustworthy in-band keys.
>
> Yep, due to lack of language around how to resolve and use public key
> information.
>
> > - Picking wrong kinds of algorithms.
>
> Yep, because a non-trivial number of developers using the JOSE stack
> are not trained in parameter selection in that stack... multiple
> footguns.
>
> > - And numerous others where no algorithm is going to save you.
>
> Well, there's only so much we can do... but that list is not zero, and
> is what Christopher was getting at with his article.
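The "RSA public key as HMAC key" failure mode above can be sketched with the Python standard library alone. The token format and function names below are a deliberately simplified, hypothetical stand-in for a JWT library (no real library or key material is used):

```python
# Hypothetical sketch of the classic JOSE algorithm-confusion footgun:
# a naive verifier trusts the attacker-controlled "alg" header.
import base64
import hashlib
import hmac
import json

# The server's RSA public key is, by definition, public.
SERVER_RSA_PUBLIC_PEM = b"-----BEGIN PUBLIC KEY-----\n(not a real key)\n-----END PUBLIC KEY-----"

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def make_token(payload: dict, alg: str, key: bytes) -> bytes:
    """Mint a JWT-shaped token. Only the HS256 path is sketched."""
    header = b64url(json.dumps({"alg": alg}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(sig)

def naive_verify(token: bytes, key: bytes) -> bool:
    """BROKEN: picks the algorithm from attacker-controlled input."""
    header_b64, body_b64, sig_b64 = token.split(b".")
    header = json.loads(b64url_decode(header_b64))
    if header["alg"] == "HS256":  # the attacker chose this value
        expected = hmac.new(key, header_b64 + b"." + body_b64,
                            hashlib.sha256).digest()
        return hmac.compare_digest(b64url(expected), sig_b64)
    return False  # RS256 path elided

# The attacker needs only the *public* key bytes to forge an admin token,
# because the verifier reuses them as an HMAC secret on the HS256 path.
forged = make_token({"sub": "attacker", "admin": True}, "HS256",
                    SERVER_RSA_PUBLIC_PEM)
print(naive_verify(forged, SERVER_RSA_PUBLIC_PEM))  # True -- forgery accepted
```

The fix in real libraries is to bind the verification algorithm to the key type (or to an explicit allow list supplied by the caller), never to the token header.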
> We are taking
> these lessons learned into account in the W3C Data Integrity work and
> trying to do better there, with a focus on a reduction in parameter
> choices and eliminating the exposure of data structures that have no
> business being exposed at the application layer (such as every
> component of a public/private key broken out into a different
> variable).
>
> > And indeed, looking at JOSE algorithm registry, while there are some
> > bad algorithms there (e.g., RS1), I would not say any of those is easy
> > to pick apart if right kind of algorithms are chosen.
>
> I'll note that the "Required" JWS algorithms are: HS256, A128CBC-HS256,
> and A256CBC-HS512.
>
> A naive read of that table would suggest that those are the "safe
> bets" when it comes to signing things in 2023.
>
> Everything else is a variation of "Recommended" (12 options) or
> "Optional" (25 options). Are we certain most developers using that
> table (or the libraries, which tend to give little to no advice around
> the options) are able to make the right decision?
>
> > The COSE registry has considerably worse stuff. E.g., WalnutDSA and
> > SHA-256/64. Those might actually be easy to pick apart.
>
> No argument there. :)
>
> > One part of "improvement" seen with algorithms in newer stuff is that
> > newer protocols/versions tend to not have the most horrible stuff
> > anymore.
>
> Yes, that's good... but how about we provide some language that says
> that implementations that include "Prohibited" things lead to
> "non-conformant" status? Language like that can have weight in
> government conformance criteria (such as FIPS), which are then used by
> large organizations, which then pressure vendors/implementers to make
> the changes take effect.
>
> IOW, aggressive deprecation of things that shouldn't be used, and not
> putting options forward that have questionable value (ES384, RS384,
> A192GCM), should be a discussion we are having.
> (Perhaps that
> discussion has already happened and I missed it, which is certainly
> possible.)
>
> > The problem with ciphersuites is that it is easy to couple things that
> > absolutely should not be coupled (and if you don't, then the number of
> > ciphersuites explodes). And they slot well with very flawed arguments
> > about "cryptographic strength matching". The end result can easily end
> > up being a disaster.
>
> Could you elaborate more on the point above, please? I'd like to
> understand what you're saying at greater depth.
>
> > The worst stuff I have seen looks innocent (with some flawed "strength
> > matching" arguments), with the devil in the details. Not like the impressive
> > mess that is TLS 1.2- ciphersuites.
>
> We agree that the TLS 1.2 ciphersuites are the wrong way to go. :) In
> fact, that's exactly the sort of approach I'd like future work to
> avoid, the W3C Data Integrity work included.
>
> So, when I said "cryptosuite", I didn't mean "TLS 1.2 ciphersuites"...

Right. A W3C Data Integrity cryptosuite is basically a JOSE / COSE
algorithm + other algorithms, exposed under a single name, with no
option for developers to upgrade incrementally without changing the
top-level versioned name. This is the "less agility is good for
application developers" angle.

Compare to the HPKE COSE approach, which reuses the IANA registries
and lets developers choose many different parameter combinations for
the same algorithm. This is the "more agility is good for application
developers" angle.

Having to throw out your entire implementation when the dataset
canonicalization algorithm has a vulnerability is different from
switching to a new registry entry and using the same software library
that has already been vetted.

You don't need to take the whole HPKE registry to support more than
one KEM, KDF, and AEAD... Does this mean HPKE is a kitchen sink, but
only when you want it to be?
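The two deprecation paths being contrasted, removing one registry entry from an allow list versus cutting a whole new versioned suite, can be sketched in a few lines. All suite and algorithm names below are illustrative, not real registry entries:

```python
# Hypothetical sketch: retiring a broken KDF under two designs.
# All identifiers are illustrative.

# Design 1 (parameterized, HPKE-style): the envelope names its parts,
# and a deployment keeps a per-registry allow list.
allowed_kdfs = {"HKDF-SHA256", "HKDF-SHA512"}

def accept_parameterized(envelope: dict) -> bool:
    return envelope["kdf"] in allowed_kdfs

# Design 2 (versioned suites, Data Integrity-style): one opaque name
# internalizes canonicalization, digest, signature, and KDF choices.
allowed_suites = {"ecdsa-2023"}

def accept_suite(envelope: dict) -> bool:
    return envelope["cryptosuite"] in allowed_suites

# Suppose HKDF-SHA256 is broken. Under design 1, deployments drop one
# registry entry and keep the same vetted library:
allowed_kdfs.discard("HKDF-SHA256")

# Under design 2, implementers must first ship an entirely new suite
# (say "ecdsa-2024"), and only then can deployments swap names:
allowed_suites.discard("ecdsa-2023")
allowed_suites.add("ecdsa-2024")

print(accept_parameterized({"kdf": "HKDF-SHA256"}))  # False
print(accept_suite({"cryptosuite": "ecdsa-2023"}))   # False
```

From the deployment's point of view both end states look like an allow-list edit; the difference is that design 2 also requires a new implementation (and a new name) to exist before the edit is possible.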
Imagine if W3C Data Integrity used HPKE and didn't expose these
choices, and a problem was discovered and a new KDF needed to be used.
You would have to cut a new data integrity suite version, in addition
to telling developers to remove a parameter from an allow list.
Because of the anti-agility design choices made in Data Integrity, the
only agility you get is at the data integrity suite name layer.
Certain application developers might love this simple design.

When I first learned of the HPKE design choices, I came to this list
to ask why not: alg: hpke-kem-kdf-aead, aka hpke-safe-2023. Especially
in the context of the relationship to COSE Key and JWK. This question
has been answered several times by Ilari and Daisuke at this point. I
wonder if it is the consensus of the IETF or just the opinion of a few
working group members (whom I respect greatly), which is why I started
the thread.

Manu and Christopher, can you comment on the design choices of HPKE
and HPKE-COSE? Is the agility expressed good or bad? Do you think
versioned "suites" are better than named algorithms and exposed
parameters? Can you cite any drafts or RFCs you have worked on as they
might relate to this question?

OS

> I'm starting to think we agree on a variety of points, but the
> definitions for the words we're using are different in significant
> ways. :)
>
> > And Wireguard is not linearly scalable, so it can get away with stuff
> > other protocols that actually need linear scalability can not.
>
> Can you define what you mean by "linearly scalable"? Do you mean
> "the concept that as computing power increases, the hardness of
> brute-forcing the cryptography has to keep pace in a linear fashion"?
>
> Ilari, I'm a big fan of RFC8032 -- it got a lot of things right wrt.
> simplicity, reduction in parameterization, reduction in implementation
> errors, and is the sort of direction I hope future work at W3C and
> IETF can head in.
> I expect that we agree on much more than it seems
> initially, mostly due to the mismatch in definitions around
> "cryptographic agility". :)
>
> -- manu
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> News: Digital Bazaar Announces New Case Studies (2021)
> https://www.digitalbazaar.com/
>
> _______________________________________________
> COSE mailing list
> COSE@ietf.org
> https://www.ietf.org/mailman/listinfo/cose
_______________________________________________
COSE mailing list
COSE@ietf.org
https://www.ietf.org/mailman/listinfo/cose