On 01/12/2017 16:23, Hubert Kario wrote:
On Friday, 1 December 2017 15:33:30 CET Ryan Sleevi wrote:
On Fri, Dec 1, 2017 at 7:34 AM, Hubert Kario <hka...@redhat.com> wrote:
It does feel like again the argument is The CA/EE should say 'I won't do X'
so that a client won't accept a signature if the CA does X, except it
doesn't change the security properties at all if the CA/EE does actually do
X, and the only places it does affect the security properties are either
already addressed (e.g. digitalSignature EKU) or themselves not protected
by the proposed mechanism.

a). I think you're talking about Key Usage, not Extended Key Usage
b). digitalSignature is a Key Usage bit, not an Extended Key Usage bit
c). Extended Key Usage has only one flag for use in TLS - serverAuth - which
cannot express that a key may be used for the ServerKeyExchange (SKE)
signature but not for RSA key exchange
d). show me the clients that actually honour the Key Usage flags for TLS in
a way that prevents a certificate with an rsaEncryption SPKI from being used
for RSA key exchange

so, yes, I'm afraid that you "must be missing something"

So while we started off in disagreement, it sounds like we have cycled back
to the view that RSA-PSS-params, if present, should be memcmp()-able
(between SPKI and Signature, and between Signature and Policy).
So the only thing we're debating here is whether or not expressing
RSA-PSS in the SPKI (at all) is a good thing.
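
To make "memcmp()-able" concrete, here is a minimal sketch in C (the type
and function names are hypothetical, not NSS APIs): treat the DER-encoded
RSASSA-PSS-params as an opaque byte blob and require the copy in the SPKI
AlgorithmIdentifier and the copy in signatureAlgorithm to be bit-identical.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical holder for a DER-encoded RSASSA-PSS-params blob. */
    typedef struct {
        const unsigned char *data;
        size_t len;
    } der_blob;

    /* "memcmp()-able" matching: do not decode and compare field by
     * field; require the raw DER bytes to be identical.  Any BER
     * re-encoding, explicitly encoded DEFAULT value, or trailing byte
     * then mismatches and the certificate is rejected. */
    static int pss_params_match(const der_blob *spki, const der_blob *sig)
    {
        return spki->len == sig->len &&
               memcmp(spki->data, sig->data, spki->len) == 0;
    }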

The view in favor of this is:
- Because CAs have made a complete mess of the existing rsaEncryption + KU,
clients don't check KU for rsaEncryption (notably, they do check KU for
ECDSA because that's necessary to distinguish it from ECDH)
- If a certificate is encoded with rsaEncryption, it's possible for a
server to use it both with TLS 1.2 RSA PKCS#1 v1.5 ciphersuites and with
TLS 1.3 RSA-PSS signatures
- If used with TLS 1.2 RSA PKCS#1 v1.5 ciphersuites, it's possible that the
implementation may be buggy and subject to a Bleichenbacher attack
- And expressing the restriction (via the SPKI OID) is an 'effective' way
to prevent that downgrade, which is itself only a risk if you're using a
buggy implementation.

Is that accurate?

yes
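
Stated as code, the downgrade-protection argument might look like this
minimal, hypothetical check (the OID bytes are the standard DER encoding of
id-RSASSA-PSS; the function is illustrative, not any client's real API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* DER OID body of id-RSASSA-PSS = 1.2.840.113549.1.1.10
     * (rsaEncryption is the same prefix ending in 0x01). */
    static const unsigned char OID_RSASSA_PSS[] =
        { 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01, 0x01, 0x0a };

    typedef enum { KX_RSA_PKCS1V15, KX_SIGNED_PSS } key_exchange;

    /* The downgrade protection under discussion: a key published under
     * the id-RSASSA-PSS SPKI OID may only make PSS signatures and must
     * never be used for PKCS#1 v1.5 RSA key exchange (the
     * Bleichenbacher-prone path).  A legacy rsaEncryption key stays
     * usable for both, as today. */
    static bool key_use_allowed(const unsigned char *oid, size_t len,
                                key_exchange kx)
    {
        if (len == sizeof(OID_RSASSA_PSS) &&
            memcmp(oid, OID_RSASSA_PSS, len) == 0)
            return kx != KX_RSA_PKCS1V15;
        return true;
    }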

To offset that risk, the goal is to use the SPKI algorithm as the signal to
'do not downgrade algorithms' (in this case, from PSS to PKCS#1v1.5).
This, despite the fact that RSA-PSS SPKI parsing does not work correctly on
any platform

rejecting what you do not understand (iOS, Android) is completely valid and
expected behaviour - e.g. the NSS server still won't use RSA-PSS keys
imported from a PKCS#12 file at all...

   - Windows and NSS both apply DER-like BER parsers and do not strictly
reject (Postel's principle, despite Postel-was-wrong)

NSS did, until very recently, reject them; OpenSSL 1.0.2 still rejects them
(probably even 1.1.0). Are you certain that Windows doesn't reject
certificates whose SPKI carries the RSA-PSS OID? I mean, you _need_
additional code to know that the public key for the rsaEncryption OID and
for the rsassaPss OID is formatted in one and the same way... If you don't
have that code, it looks like a completely different key type (think EdDSA
or ECDSA to an RSA-only implementation)
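
To illustrate that point, a hypothetical decoder dispatch in C: both OIDs
carry the same RSAPublicKey structure in the subjectPublicKey BIT STRING,
but the decoder only "knows" that if someone writes the second branch.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static const unsigned char OID_RSA_ENCRYPTION[] =  /* 1.2.840.113549.1.1.1 */
        { 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01, 0x01, 0x01 };
    static const unsigned char OID_RSASSA_PSS[] =      /* 1.2.840.113549.1.1.10 */
        { 0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x01, 0x01, 0x0a };

    typedef enum { KEYTYPE_UNKNOWN, KEYTYPE_RSA } key_type;

    static bool oid_equal(const unsigned char *a, size_t alen,
                          const unsigned char *b, size_t blen)
    {
        return alen == blen && memcmp(a, b, alen) == 0;
    }

    /* The subjectPublicKey BIT STRING holds the identical RSAPublicKey
     * SEQUENCE under either OID, but a decoder dispatching on the
     * algorithm OID must be told so explicitly; drop the second branch
     * and an id-RSASSA-PSS SPKI is simply an unrecognized key type. */
    static key_type key_type_for_oid(const unsigned char *oid, size_t len)
    {
        if (oid_equal(oid, len, OID_RSA_ENCRYPTION, sizeof(OID_RSA_ENCRYPTION)))
            return KEYTYPE_RSA;
        if (oid_equal(oid, len, OID_RSASSA_PSS, sizeof(OID_RSASSA_PSS)))
            return KEYTYPE_RSA;
        return KEYTYPE_UNKNOWN;
    }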

   - macOS and iOS reject unrecognized SPKIs as weak keys
   - Android supports PSS signatures, but a provider for decoding such
public keys is not provided by default

Are there any other arguments in favor of the PSS-SPKI not captured here?

there is a remote chance that RSA-PSS with non-zero salts is strictly more
secure (unforgeable) than PKCS#1 v1.5, but for the sake of argument let's
say that what you said is the primary and only argument for the RSA-PSS OID
in the SPKI

so no, there aren't other arguments

I think that we agree on the substance of the PSS implementation - Must Be
Memcmp-able - which addresses many of the client complexity concerns. The
deployment complexity concerns are unavoidable - few clients support
RSA-PSS, in part because of the disaster that is RFC 4055 - but that's a
deployment concern, not an implementation concern.

As for what these changes mean for NSS:
- Strictly enforcing (memcmp'ing) the parameters that NSS accepts
   - That means NSS should NOT support arbitrary salt lengths, as doing so
adds flexibility at the cost of maintainability and security

Depending on the prevalence of non-public CAs (not listed in public
indexes) based on OpenSSL (this would be more of a smallish-company thing
than a big-enterprise thing), it might be useful to have *two* fixed
salt lengths for each combination of hash algorithm and RSA key length:

1. The salt length = hash length case previously suggested.

2. The salt length = largest permitted by the RSA key length and hash
length (the OpenSSL default).

Each of these could still be defined in a memcmp-able way.
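
For concreteness: with RSA-2048 and SHA-256, option 1 gives a 32-octet salt
and option 2 gives 256 - 32 - 2 = 222 octets. A small illustrative program
(formulas from RFC 8017, section 9.1.1; a sketch, not proposed NSS code):

    #include <stdio.h>

    /* Option 1: fixed salt length equal to the hash output length. */
    static unsigned salt_len_hash(unsigned hash_len)
    {
        return hash_len;
    }

    /* Option 2: the largest salt the key permits (OpenSSL's default):
     *   emLen = ceil((modBits - 1) / 8)
     *   max sLen = emLen - hashLen - 2                              */
    static unsigned salt_len_max(unsigned mod_bits, unsigned hash_len)
    {
        unsigned em_len = (mod_bits - 1 + 7) / 8;
        return em_len - hash_len - 2;
    }

    int main(void)
    {
        /* RSA-2048 with SHA-256 (32-byte hash). */
        printf("hash-length salt: %u\n", salt_len_hash(32));   /* 32  */
        printf("maximum salt:     %u\n", salt_len_max(2048, 32)); /* 222 */
        return 0;
    }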

   - This resolves the DER-like BER decoding problem
- Strictly enforcing the KU for RSA-PSS (NSS already enforces KUs on keys
improperly today, but hopefully RSA-PSS has not been ruined the same way;
a sketch of this check appears further down)

Is that correct?

yes, fine by me

and fine for NSS too, provided those changes don't have to be implemented in
the next month or two, but do have to be implemented before the NSS release
with the final TLS 1.3 version ships
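
On the KU point above, a minimal sketch of the strict check (bit positions
per RFC 5280, section 4.2.1.3; the flag constants are hypothetical, mapped
onto an MSB-first flags byte as in the DER BIT STRING):

    #include <stdbool.h>

    #define KU_DIGITAL_SIGNATURE 0x80u  /* bit 0 */
    #define KU_KEY_ENCIPHERMENT  0x20u  /* bit 2 */

    /* Strict enforcement for an RSA-PSS SPKI: the key is
     * signature-only, so digitalSignature must be asserted;
     * keyEncipherment (the static RSA key-exchange bit) confers
     * nothing and must not be accepted in its place. */
    static bool ku_allows_pss_use(unsigned ku_flags)
    {
        return (ku_flags & KU_DIGITAL_SIGNATURE) != 0;
    }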



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded