On Monday, 27 November 2017 20:31:53 CET Ryan Sleevi wrote:
> On Mon, Nov 27, 2017 at 12:54 PM, Hubert Kario <hka...@redhat.com> wrote:
> > > On the realm of CA policy, we're discussing two matters:
> > > 1) What should the certificates a CA issue be encoded as
> > > 2) How should the CA protect and use its private key.
> > > 
> > > While it may not be immediately obvious, both your proposal and 4055
> > > attempt to treat #2 by #1, but they're actually separate issues. This
> > > mistake is being made by treating PSS-params on CA certificates as an
> > > important signal for reducing cross-protocol attacks, but it doesn't.
> > > This is because the same public/private key pair can be associated with
> > > multiple certificates, with multiple params encodings (and potentially
> > > the same subject), and clients that enforced the silly 4055 restrictions
> > > would happily accept these.
> > 
> > the CA can also use sexy primes as the private key, making the private
> > key easy to derive from the modulus... We can't list every possible way
> > you can overturn the intention of the RFCs.
> > 
> > we need to assume well-meaning actors, at least to a certain degree
> 
> First, I absolutely disagree with your assumption - we need to assume
> hostility, and design our code and policies to be robust against that. I
> should hope that was uncontroversial, but it doesn't seem to be.

my point was that there are some actions of the other party we interact with 
that must be taken on faith - like that the server_random is actually random, 
that the server's private part of the key share stays secret, or a myriad of 
other things whose legitimacy or correctness we cannot verify. Without 
assuming that good faith, it's impossible to communicate. The same applies to 
generated keys provided to CAs.

> Second, the only reason this is an issue was your suggestion (derived from
> 4055, to be fair) about restricting the params<->signature interaction. The
> flexibility afforded by 4055 in expressing the parameters, and then
> subsequently constraining the validation rules, is not actually met by the
> threat model.

There was a threat model in RFC 4055?

Is it so hard to imagine that a CA may want to ensure that the signatures it 
makes will never use a weak hash? And that even if it does make them, they 
won't be validated by a conforming implementation?

> That is, if it's dangerous to mix the hash algorithms in PSS signatures
> (and I'm not aware of literature suggesting this is necessary, versus being
> speculative concern), then we should explicitly prohibit it via policy.
> Requiring the parameters in the certificates does not, in any way, mitigate
> this risk - and its presumptive inclusion in 4055 was to constrain how
> signature-creating-software behaved, rather than how
> signature-accepting-clients should behave.
> 
> Alternatively, if mixing the hash algorithms is not fundamentally unsafe in
> the case of RSA-PSS, then it's unnecessary and overly complicating things
> to include the params in the SPKI of the CA's certificate. The fact that
> 'rsaEncryption' needs to be accepted as valid for the issuance of RSA-PSS
> signatures already implies it's acceptable, and so the whole SHOULD
> construct is imposing on the ecosystem an unsupported policy.

it would be nice if the world were black and white, it would make any kind of 
nuanced work so much easier...

Those are all shades of grey: for some uses, allowing the possibility of 
cross-protocol attacks is of no concern; for others, it very much is.
 
> So no, we should not assume well-meaning actors, and we should be explicit
> about what the "intention" of the RFCs is, and whether they actually
> achieve that.

but we should achieve that by saying "do this", not "don't do this" - 
enumerating badness doesn't work; ask firewall people if you don't believe 
me.

Or did we add to policy that keys revoked because they may have been 
compromised (Heartbleed) can't be reused? Ever? Even by a different CA?
 
> > > So I think it's useful to instead work from a clean set of principles,
> > > and try to express them:
> > > 
> > > 1) The assumption, although the literature doesn't suggest it's
> > > necessary, and it's not presently enforced in the existing WebPKI, is
> > > that the hash algorithm for both PKCS#1 v1.5 and RSA-PSS should be
> > > limited to a single hash algorithm for the private key.
> > > 
> > >   a) One way to achieve this is via policy - to state that all
> > > signatures produced by a CA with a given private key must use the same
> > > set of parameters
> > >   b) Another way is to try and achieve this via encoding (as 4055
> > > attempts), but as I noted, this is entirely toothless (and somewhat
> > > incorrectly presumes X.500's DIT as the mechanism of enforcing policy a)
> > 
> > just because the mechanism can be abused doesn't make it useless for
> > people who want to use it correctly. It will still protect people who
> > use it correctly.
> 
> B is absolutely useless as a security mechanism against threats, and is
> instead a way of signature-producing software to bake in an API contract in
> to an RFC. We shouldn't encourage that, nor should the ecosystem have to
> bear that complexity.
> 
> If it's not a security mechanism, then it's unnecessary.

how is stating "I will never use SHA-1" not a security mechanism?
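That statement can be made machine-checkable. As an illustrative sketch (a 
hand-rolled DER helper, not NSS or mozilla::pkix code), this is roughly the 
RSASSA-PSS AlgorithmIdentifier a CA certificate would carry to pin SHA-256: 
a validator that honours RFC 4055 params would then never accept a SHA-1 
signature attributed to that key:

```python
# Illustrative sketch, not production code: build the DER AlgorithmIdentifier
# for RSASSA-PSS pinned to SHA-256 / MGF1-SHA-256 / saltLength 32.

def tlv(tag: int, body: bytes) -> bytes:
    """Minimal DER TLV encoder; short-form lengths (< 128 bytes) only."""
    assert len(body) < 128
    return bytes([tag, len(body)]) + body

OID_RSASSA_PSS = bytes.fromhex("06092a864886f70d01010a")  # 1.2.840.113549.1.1.10
OID_SHA256     = bytes.fromhex("0609608648016503040201")  # 2.16.840.1.101.3.4.2.1
OID_MGF1       = bytes.fromhex("06092a864886f70d010108")  # 1.2.840.113549.1.1.8

sha256_alg = tlv(0x30, OID_SHA256)  # AlgorithmIdentifier, params absent
pss_params = tlv(0x30,
    tlv(0xA0, sha256_alg)                            # [0] hashAlgorithm: SHA-256
    + tlv(0xA1, tlv(0x30, OID_MGF1 + sha256_alg))    # [1] maskGen: MGF1 with SHA-256
    + tlv(0xA2, tlv(0x02, bytes([32]))))             # [2] saltLength: 32, not 4055's 20
alg_id = tlv(0x30, OID_RSASSA_PSS + pss_params)      # full AlgorithmIdentifier
```

The 63-byte result is exactly the kind of "this and nothing else" value a 
policy document could reproduce byte for byte.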
 
> > > 2) We want to ensure there is a bounded, unambiguous set of accepted
> > > encodings for what a CA directly controls
> > > 
> > >   a) The "signature" fields of TBSCertificate (Certs) and TBSCertList
> > > (CRL). OCSP does not duplicate the signature algorithm in the
> > > ResponseData of a BasicOCSPResponse, so it's not necessary
> > 
> > that's already a MUST requirement, isn't it?
> 
> It's not what NSS has implemented, but shipping, as captured on the bug.
> 
> And this matters, because permissive bugs in client implementation
> absolutely leads to widespread ossification of server bugs, and why I
> specifically requested that the NSS developers unship RSA-PSS support until
> they can correctly and properly implement it.
> 
> We already saw this with RSA-PKCS#1v1.5 - it shouldn't be repeated again.

it turned out to be a problem because a) it was the 90's, b) everybody did 
it like that, and c) everybody assumed (without testing) that security 
software was written, well, securely...

I don't think any of those things remains true
 
> > >   b) The "subjectPublicKeyInfo" of a TBSCertificate
> > 
> > that's the biggest issue
> > 
> > > 3) We want to make sure to set expectations around what is supported
> > > in the signatureAlgorithm fields of a Certificate (certs),
> > > CertificateList (CRLs), and BasicOCSPResponse (OCSP).
> > > 
> > >   - Notably, these fields are mutable by attackers as they're part of
> > > the 'unsigned' portion of the certificate, so we must be careful here
> > > about the flexibility
> > 
> > true, but a) there's no chance that a valid PKCS#1 v1.5 signature will be
> > accepted as an RSA-PSS signature or vice versa; b) I'm proposing the
> > addition of only 3 valid encodings, modulo salt size
> 
> IMO, a) is not relevant to set of concerns, which I echo'd on the bug and
> again above
> 
> And I'm suggesting that while you're proposing prosaically three valid
> encodings, this community has ample demonstration that CAs have difficulty
> correctly implementing things - in part, due to clients such as NSS
> shipping Postel-liberal parsers - and so the policy should make it as
> unambiguous as possible. The best way to make this unambiguous is to
> provide the specific encodings - byte for byte.

or provide examples of specific encodings, with explanations of what can 
change and to what degree...

> Then a correct implementation can do a byte-for-byte evaluation of the
> algorithm, without needing to parse at all - a net win.

that horse left the barn a long time ago
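Even so, the byte-for-byte approach is easy to sketch. Assuming a 
hypothetical allow-list of exact encodings (only the SHA-256 variant is 
shown; the hex is the RSASSA-PSS AlgorithmIdentifier with SHA-256, 
MGF1-SHA-256 and saltLength 32), acceptance becomes set membership rather 
than parsing:

```python
# Hypothetical "compare, don't parse" sketch: the exact DER bytes of each
# allowed signatureAlgorithm are baked in; anything else - including a
# BER-ish re-encoding (long-form lengths, DEFAULTed fields made explicit,
# different salt) - simply fails to match. SHA-384/SHA-512 variants would
# be added the same way.

ALLOWED_SIG_ALGS = frozenset({
    bytes.fromhex(
        "303d06092a864886f70d01010a"      # SEQUENCE, OID rsassa-pss
        "3030"                            # RSASSA-PSS-params SEQUENCE
        "a00d300b0609608648016503040201"  # [0] hashAlgorithm: SHA-256
        "a11a301806092a864886f70d010108"  # [1] maskGen: MGF1 ...
        "300b0609608648016503040201"      #     ... with SHA-256
        "a203020120"                      # [2] saltLength: 32
    ),
})

def signature_algorithm_allowed(alg_der: bytes) -> bool:
    # No DER decoding at all - byte-for-byte evaluation of the algorithm.
    return alg_der in ALLOWED_SIG_ALGS
```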

> > > 4) We want to define what the behaviour will be for NSS (and Mozilla)
> > > clients if/when these constraints are violated
> > > 
> > >   - Notably, is the presence of something awry a sign of a bad
> > > certification path (which can be recovered by trying other paths) or
> > > is it a sign of bad CA action (in which case, it should be signalled
> > > as an error and non-functioning)
> > 
> > it's an invalid signature, needs to be treated as that
> 
> I think my point still stands that 'invalid signature' can be treated as
> either case I mentioned, and so your answer doesn't actually resolve the
> matter.

it means it should be treated the exact same way other invalid signatures are 
treated by NSS
 
> > > However, if we chose to avoid simplicity and pursue complexity, then I
> > > think we'd want to treat this as:
> > > 
> > > 1) A policy restriction that a CA MUST NOT use a private key that has
> > > been used for one algorithm to be used with another (no mixing PKCS#1
> > > v1.5 and RSA-PSS)
> > > 2) Optionally, a policy restriction that a CA MUST NOT use a private key
> > > with one set of RSA-PSS params to issue signatures with another set of
> > > RSA-PSS params
> > > 3) Optionally, a policy restriction that a CA MUST NOT use a private key
> > > with one RSA-PKCS#1v1.5 hash algorithm to issue signatures with another
> > > RSA-PKCS#1v1.5 hash algorithm
> > > 
> > > I say "optionally", because a substantial number of the CAs already do
> > > and have done #3, and it was critically necessary, for example, for the
> > > transition from SHA-1 to SHA-256 - which is why I think #2 is silly and
> > > unnecessary.
> > 
> > I don't consider allowing for encoding such restrictions hugely important
> > either, but I don't see a reason to forbid CAs from doing that to CA
> > certificates either, if they decide that they want to do that
> 
> Why one and not the other? Personal preference? There's a lack of tight
> proof either way as to the harm.

because while saying "I will only ever use $HASH_NAME" is not really useful, 
saying "I will never use $HASH_NAME" is; unfortunately, we can only describe 
the latter in terms of the former

> > > 5) A policy requirement that CAs MUST encode the signature field of
> > > TBSCertificate and TBSCertList in an unambiguous form (the policy would
> > > provide the exact bytes of the DER encoded structure).
> > > 
> > >   - This is necessary because despite PKCS#1v1.5 also having specified
> > > how the parameters were encoded, CAs still screwed this up
> > 
> > that was because NULL versus empty was ambiguous - that's not the case
> > for RSA-PSS - empty params means SHA-1 and SHA-1 is forbidden; missing
> > params is unbounded, so there's nothing to fail interop
> 
> I disagree with your assessment, again borne out by the experience here in
> the community.
> 
> I can easily see a CA mistaking "MGF is MGF1", leading to encoding the
> hashAlgorithm as SHA-1 and the MGF as id-mgf1 without realizing that params
> also needs to be specified.

that's also the part where I have no problem whatsoever with the 
specification spelling it out in excruciating detail

> Consider, for example, that RFC 4055's rsaSSA-PSS-SHA256-Params,
> SHA384-Params, and SHA512-Params all set saltLength as 20. The subtlety of
> the policy requiring 32/48/64 rather than 20/20/20 is absolutely a mistake
> a CA can make. For example, their software may say "PSS/SHA-256" and result
> in 4055's PSS-SHA256-Params rather than the proposed requirement.

then describe what is allowed, update the policy, inform CAs in a CA 
communication what is allowed, and implement those limitations in software.

we are talking about months of time for any ossification to happen; it took 
decades for the PKCS#1 v1.5 problems to pop up, and they still affected only 
single-digit percentages of certificates.
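For what it's worth, the salt-length rule itself is trivial to state in 
software. A minimal sketch of a hypothetical policy check, assuming the 
policy ties salt length to digest length (32/48/64) rather than RFC 4055's 
default of 20:

```python
# Hypothetical policy check: the proposed rule ties the PSS salt length to
# the digest length, whereas RFC 4055's named parameter sets
# (rsaSSA-PSS-SHA256-Params etc.) all use saltLength 20 - exactly the
# mismatch a CA's "PSS/SHA-256" software knob could silently produce.

DIGEST_LEN = {"sha256": 32, "sha384": 48, "sha512": 64}

def salt_length_conforms(hash_name: str, salt_length: int) -> bool:
    """True iff the salt length equals the digest length for a known hash."""
    return salt_length == DIGEST_LEN.get(hash_name)

# RFC 4055's ready-made parameter sets would all fail such a policy:
for h in DIGEST_LEN:
    assert not salt_length_conforms(h, 20)
```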

> > > 6) A policy requirement that CAs MUST encode the subjectPublicKeyInfo
> > > field of TBSCertificate in an unambiguous form (the policy would provide
> > > the exact bytes of the DER-encoded structure)
> > > 7) Changes to NSS to ensure it did NOT attempt to DER-decode the
> > > structure (especially given NSS's liberal acceptance of invalid DER-like
> > > BER), but instead did a byte-for-byte comparison - much like
> > > mozilla::pkix does for PKCS#1v1.5 (thus avoiding past CVEs in NSS)
> > 
> > that would require hardcoding salt lengths; given their meaning in
> > subjectPublicKeyInfo, I wouldn't be too happy about it
> > 
> > looking at OpenSSL behaviour, it would likely render all past signatures
> > invalid and make signing with already-released software unnecessarily
> > complex (OpenSSL defaults to as large a salt as possible)
> 
> That's OK, because none of these certificates are publicly trusted, and
> there's zero reason for a client to support all of the ill-considered
> flexibility of 4055.

that does not apply to software already released that can make those 
signatures with valid, publicly trusted keys

also, just because those CAs are not trusted now, doesn't mean that they can't 
become trusted in the future through cross-signing agreements

> > > If this is adopted, it still raises the question of whether 'past'
> > > RSA-PSS issuances are misissued - whether improperly DER-like BER
> > > encoded or mixed hash algorithms or mixed parameter encodings - but
> > > this is somewhat an intrinsic result of not carefully specifying the
> > > algorithms and not having implementations be appropriately strict.
> > 
> > for X.509 only DER is allowed; if the tags or values are not encoded
> > with the minimal number of bytes necessary, or with indefinite length,
> > it's not DER, it's BER, and that's strictly forbidden
> 
> I appreciate your contribution, but I think it's not borne out by the real
> world. If it was, then
> https://wiki.mozilla.org/SecurityEngineering/Removing_Compatibility_Workarounds_in_mozilla::pkix
> wouldn't have been necessary.
> 
> "strictly forbidden, but not enforced by clients" is just another way of
> saying "implicitly permitted and likely to ossify". I would like to avoid
> that, since many of these issues mentioned were in part caused by
> past-acceptance of DER-like BER by NSS.

then let's make sure that examples of such malformed certificates exist, so 
that any library can be tested to confirm it rejects them, and that there is 
documentation for CAs that spells out "don't do this in particular".
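One cheap way to generate such negative test vectors - an illustrative 
sketch, not an existing test suite - is to take a valid DER TLV and 
re-encode its length in the long form: the result is still well-formed BER 
but no longer DER, so a strict parser must reject it:

```python
# Sketch: derive a "valid BER, invalid DER" test vector from a correct DER
# value by re-encoding its short-form length in the long form. X.690 DER
# requires the minimal length encoding, so a strict parser must reject the
# mutated value even though a lenient BER parser would still accept it.

def to_long_form_length(der: bytes) -> bytes:
    tag, length = der[0], der[1]
    assert length < 0x80, "sketch only handles short-form inputs"
    # 0x81 = long form with one following length octet
    return bytes([tag, 0x81, length]) + der[2:]

# Example: a NULL value (as used for PKCS#1 v1.5 parameters)
valid_der = bytes.fromhex("0500")             # NULL, short-form length
ber_variant = to_long_form_length(valid_der)  # 05 81 00 - BER, not DER
# feed `ber_variant` to the parser under test and expect rejection
```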
-- 
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00  Brno, Czech Republic


_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
