> The article isn't arguing against having multiple algorithms to jump
to when the ones we're using today inevitably fail. It's arguing
about how we minimize the parameters that we expose to developers
while enabling us to jump to a new mechanism before the old one fails
us.

As soon as a cryptographic representation format has the stated goal of 
supporting more than one algorithm, it is cryptographically agile IMO. Where it 
lands on that spectrum, and how that agility is managed (e.g., by algorithm 
identifiers or by protocol versions), appears to be the core debate.

> Can you define "version"? For example, do you mean "protocol version"
or "cryptographic suite version" or something else?

At least from my perspective, a “protocol version” can be used to signal changes 
that are more extensive than just an algorithm change, which is why I agree that 
a protocol-versioning approach can be more expensive for implementers than 
simply changing the algorithm.

Put another way, a cryptographic representation format that says “here is where 
you set the algorithm in your token, and different algorithms will have 
different values” creates a way to manage tokens secured with different 
algorithms while promoting tighter consistency in other aspects of the design, 
e.g., the overall representation of the token: whether there is the concept of a 
header and payload, and how they are encoded (base64url).
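
To make that concrete, here is a minimal sketch in Python (standard library 
only, not any real JOSE implementation, and only handling HS256): the algorithm 
identifier is a single field in the protected header, while the surrounding 
structure (header.payload.signature, base64url encoding) stays the same 
whichever algorithm is named.

# Minimal sketch of a JOSE-style compact token, illustrative only.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as JOSE uses
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_token(payload: dict, key: bytes, alg: str = "HS256") -> str:
    # The algorithm identifier lives in the header; the envelope does not change.
    header = {"alg": alg, "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    if alg != "HS256":
        raise ValueError("this sketch only implements HS256")
    sig = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

print(make_token({"sub": "alice"}, b"example-shared-secret"))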

Whereas with a token (or representation format) that takes a versioning 
approach (depending on what the version represents), there can be no guarantee, 
when the version is revised, that other aspects of the design remain constant 
in the new token version: for example, the overall structure of the token, or 
even what encoding is used (e.g., base64). The more this version represents in 
terms of possible scope of change, the more inertia/expense it creates for the 
implementation community if that scope is exercised through revisions. And if 
you argue instead that the protocol version is only supposed to represent the 
cryptographic algorithm used, then your protocol version is in fact an 
algorithm identifier.
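
A schematic contrast, purely illustrative and not describing any real format: 
with an algorithm identifier only one field changes inside a stable envelope, 
whereas a version bump is free to redefine the envelope itself, so 
implementations end up branching on structure rather than on a single field.

# Illustrative shapes only (hypothetical field names).
token_alg_a = {"alg": "ES256", "payload": "...", "sig": "..."}   # algorithm identifier approach
token_alg_b = {"alg": "EdDSA", "payload": "...", "sig": "..."}   # new algorithm, same envelope

token_v1 = {"version": 1, "payload": "...", "sig": "..."}
token_v2 = {"version": 2, "body": {"claims": "...", "proofs": ["..."]}}  # a revision may change everything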

> What the article calls out, as have a number of implementers cited in
the article, is this notion of: "Enable the developers at the
application layer to dynamically switch between all the backup
algorithms and parameters." (some people call this cryptographic
agility)

Again, all I think this argument boils down to is that we can design a format 
with fewer cryptographic algorithms to choose from, but fundamentally our 
design is still cryptographically agile.

Thanks,

Tobias Looker
MATTR
+64 273 780 461
[email protected]<mailto:[email protected]>
[MATTR website]<https://mattr.global/>
[MATTR on LinkedIn]<https://www.linkedin.com/company/mattrglobal>
[MATTR on Twitter]<https://twitter.com/mattrglobal>
[MATTR on Github]<https://github.com/mattrglobal>

This communication, including any attachments, is confidential. If you are not 
the intended recipient, you should not read it – please contact me immediately, 
destroy it, and do not copy or use any part of this communication or disclose 
anything about it. Thank you. Please note that this communication does not 
designate an information system for the purposes of the Electronic Transactions 
Act 2002.

From: COSE <[email protected]> on behalf of Manu Sporny 
<[email protected]>
Date: Monday, 10 April 2023 at 6:27 AM
To: Ilari Liusvaara <[email protected]>
Cc: cose <[email protected]>, JOSE WG <[email protected]>, Christopher Allen 
<[email protected]>
Subject: Re: [COSE] [jose] Consensus on cryptographic agility in modern COSE & 
JOSE


On Sun, Mar 26, 2023 at 1:41 PM Ilari Liusvaara
<[email protected]> wrote:
> The problem with lack of cryptographic agility is that if a component
> is broken or proves inadequate, you are in a world of hurt.

It looks like the definition of "cryptographic agility" is failing us. :)

The article isn't arguing against having multiple algorithms to jump
to when the ones we're using today inevitably fail. It's arguing
about how we minimize the parameters that we expose to developers
while enabling us to jump to a new mechanism before the old one fails
us.

What's not under debate is: "Have backup algorithms and parameters at
the ready." (some people call this cryptographic agility)

What the article calls out, as have a number of implementers cited in
the article, is this notion of: "Enable the developers at the
application layer to dynamically switch between all the backup
algorithms and parameters." (some people call this cryptographic
agility)

That is, building all the backup algorithms into the application layer
of software and giving buttons and levers to developers that are not
trained in picking the 'right ones' is an anti-pattern. It leads to
"kitchen sink" cryptographic libraries, larger attack and audit
surfaces, downgrade attacks, and a variety of other things that have
been biting us at the application layer for years.

EdDSA largely got this right, and we need more of that, and less of
this trend of exposing developers to algorithm and parameter choices
that they don't understand, which inevitably ends up generating CVEs.

> Of all the three problems brought up, versions are worse than
> algorithms:
>
> - Versions are much more expensive.
> - Versions are much more likely to interact badly.
> - Versions are much more vulnerable to downgrade attacks.

Can you define "version"? For example, do you mean "protocol version"
or "cryptographic suite version" or something else?

> And with algorithms being expensive, sometimes it is perversely
> lack of agility that makes things expensive. E.g., consider wanting
> to use Edwards25519 curve for signatures in constrained environment...

I don't think anyone is arguing against having cryptographic suite(s)
that are suitable for use in embedded contexts and other ones that are
suitable for use in non-constrained environments.

The argument is against asking developers who don't understand how to
pick the right parameters to do so at the application layer, and for
using language that more clearly conveys what they're picking (rather
than strings like 'A128CBC', 'A128CTR', and 'A128GCM').
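
As a hedged sketch of what a narrower surface could look like
(encrypt_for_storage is a hypothetical wrapper, not an existing API; it
happens to use AES-128-GCM via the pyca/cryptography package): the library
picks one vetted authenticated-encryption construction and the application
never sees an algorithm knob.

# Hypothetical wrapper: one vetted construction, no algorithm strings exposed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_storage(key: bytes, plaintext: bytes, associated_data: bytes = b"") -> bytes:
    nonce = os.urandom(12)                      # 96-bit nonce, as AES-GCM expects
    ct = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    return nonce + ct                           # caller never chooses 'A128CBC' vs 'A128GCM'

key = AESGCM.generate_key(bit_length=128)
blob = encrypt_for_storage(key, b"hello")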

> And the example of downgrade attack given is version downgrade
> attack, not algorithm downgrade attack. As hard as algorithm negotiation
> is, version negotiation is much harder.
> And in response to the statement "No one should have used those
> suites after 1999!": Better suites were not registered until 2008.

Christopher (cc'd) will have to speak to the point he was trying to
get at with this...

> And the article does not seem to bring up overloading as a solution:
> Use the same identifiers with meanings that depend on the key. The
> applications/libraries are then forced to consider the key type before
> trying operations.

Could you please elaborate more on this point? I think we might agree here.

> RS256 and HS256 are very different things, and applications
> absolutely require control over that sort of stuff.

The point is that the developers who implemented software that led to
CVEs didn't know that they were very different things, because the APIs
and parameters they were using made it easy to footgun themselves. IOW,
"RS256" and "HS256" sound very similar to them, and the library APIs
they were using just did something like
"sign(tokenPayload, 'HS256', serverRSAPublicKey)"... which uses a
public key value to create an HMAC signature.

https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/
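
For concreteness, here is a hedged standard-library sketch of that class of
bug (naive_verify is hypothetical, not taken from any particular library):
the verifier trusts the attacker-controlled "alg" header, so if the caller
passes the server's RSA public key (PEM bytes) as the verification key,
anyone who also holds that public key can mint "valid" HS256 tokens.

# Illustrative sketch of the HS256/RS256 key-confusion footgun.
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def naive_verify(token: str, key: bytes) -> dict:
    # BUG: the algorithm is read from the token, i.e. chosen by the attacker.
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    if header["alg"] == "HS256":
        expected = hmac.new(key, (header_b64 + "." + payload_b64).encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
            raise ValueError("bad signature")
        return json.loads(b64url_decode(payload_b64))
    raise NotImplementedError("RS256 verification elided in this sketch")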

> And who cares about SHA-256 versus SHAKE-256 (until either gets broken,
> but nobody knows which).

The point is: Why are we expecting developers to pick these values at
the application layer?

Here's one of the problems w/ cryptographic system design today:

1. We (at the IETF) create a bunch of choices in the name of
"algorithm agility".
2. Library implementers expose all of those choices to developers in
the name of "algorithm agility" (which they've been told is a good
thing).
3. Developers footgun themselves because they pick the wrong values.

Now, the rebuttal to #3 tends to be: Well, those developers shouldn't
be touching cryptographic code or security software! ... but the
problem there is that it ignores the reality that this happens on a
regular basis (and results in CVEs).

So the question is, can we do more, as a security community, to
prevent footguns? EdDSA (ed25519) is the sort of improvement I'm
talking about (vs. what happened w/ RSA and ECDSA). I'll argue that
the "simplicity in design" consideration is the same one that went
into Wireguard and the W3C Data Integrity work as well.
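
As a small illustration of that reduction in parameterization (shown with the
pyca/cryptography package; other libraries look much the same): an Ed25519
signing API leaves the developer no curve, hash, padding, or nonce choices to
get wrong.

# Ed25519 with pyca/cryptography: no algorithm parameters exposed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()            # no curve or key-size choice
message = b"example message"
signature = private_key.sign(message)                 # no hash or padding choice
private_key.public_key().verify(signature, message)   # raises InvalidSignature on failure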

> Considering the multitude of security issues with JOSE, I don't think
> those have much to do with poor algorithm choices:

Well, we certainly agree there -- there are many reasons that JOSE has
led to the number of security issues that have come about as
developers have used the stack. Many of those reasons boil down to
questionable design choices to expose the developer to algorithms and
parameters they shouldn't have been exposed to.

> - Libraries somehow managing to use RSA public key as HMAC key (don't
>   ask me how).

Yep, exposing that selection in the way that JOSE libraries do is an
anti-pattern, IMHO.

> - Bad library API design leading to alg=none being used when it should
>   not.

Yep, "Let the attacker choose the algorithm." ... another bad anti-pattern.

> - Trusting untrustworthy in-band keys.

Yep, due to lack of language around how to resolve and use public key
information.

> - Picking wrong kinds of algorithms.

Yep, because a non-trivial number of developers using the JOSE stack
are not trained in parameter selection in that stack... multiple
footguns.

> - And numerous others where no algorithm is going to save you.

Well, there's only so much we can do... but that list is not zero, and
is what Christopher was getting at with his article. We are taking
these lessons learned into account in the W3C Data Integrity work and
trying to do better there, with a focus on a reduction in parameter
choices and eliminating the exposure of data structures that have no
business being exposed at the application layer (such as every
component of a public/private key broken out into a different
variable).

> And indeed, looking at JOSE algorithm registry, while there are some
> bad algorithms there (e.g., RS1), I would not say any of those is easy
> to pick apart if right kind of algorithms are chosen.

I'll note that the "Required" algorithms in the JWA registry are: HS256
(for JWS), plus A128CBC-HS256 and A256CBC-HS512 (for JWE content
encryption).

A naive read of that table would suggest that those are the "safe
bets" when it comes to securing tokens in 2023.

Everything else is some variation of "Recommended" (12 options) or
"Optional" (25 options). Are we certain most developers using that
table (or the libraries, which tend to give little to no advice around
the options) are able to make the right decision?

> The COSE registry has considerably worse stuff. E.g., WalnutDSA and
> SHA-256/64. Those might actually be easy to pick apart.

No argument there. :)

> One part of "improvement" seen with algorithms in newer stuff is that
> newer protocols/versions tends to not have the most horrible stuff
> anymore.

Yes, that's good... but how about we provide some language that says
that implementations that include "Prohibited" things lead to
"non-conformant" status. Language like that can have weight in
government conformance criteria (such as FIPS), which are then used by
large organizations, which then pressure vendors/implementers to make
the changes take effect.

IOW, aggressive deprecation of things that shouldn't be used and not
putting options forward that have questionable value (ES384, RS384,
A192GCM) should be a discussion we are having. (Perhaps that
discussion has already happened and I missed it, which is certainly
possible).

> The problem with ciphersuites is that it is easy to couple things that
> absolutely should not be coupled (and if you don't then number of
> ciphersuites explodes). And they slot well with very flawed arguments
> about "cryptographic strength matching". The end result can easily end
> up being a disaster.

Could you elaborate more on the point above, please? I'd like to
understand what you're saying in greater depth.

> The worst stuff I have seen looks innocent (with some flawed "strength
> matching" arguments), with devil in the details. Not like the impressive
> mess that is TLS 1.2- ciphersuites.

We agree that the TLS 1.2 ciphersuites are the wrong way to go. :) In
fact, that's exactly the sort of approach I'd like future work to
avoid, the W3C Data Integrity work included.

So, when I said "cryptosuite", I didn't mean "TLS 1.2 ciphersuites"...
I'm starting to think we agree on a variety of points, but the
definitions for the words we're using are different in significant
ways. :)

> And Wireguard is not linearly scalable, so it can get away with stuff
> other protocols that actually need linear scalability can not.

Can you define what you mean by "linearly scalable" -- do you mean
"the concept that as computing power increases, the hardness of
brute-forcing the cryptography has to keep pace in a linear fashion?"

Ilari, I'm a big fan of RFC8032 -- it got a lot of things right wrt.
simplicity, reduction in parameterization, reduction in implementation
errors, and is the sort of direction I hope future work at W3C and
IETF can head in. I expect that we agree on much more than it seems
initially, mostly due to the mismatch in definitions around
"cryptographic agility". :)

-- manu

--
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
News: Digital Bazaar Announces New Case Studies (2021)
https://www.digitalbazaar.com/

_______________________________________________
COSE mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/cose
_______________________________________________
jose mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/jose
