Does that work with any other serious X.509 validation toolkit?
To make this work (assuming the old root CA cert has not yet expired),
the validation code will need to actually verify the End Entity
certificate against both public keys, which effectively reduces the
algorithm security by allowing twice as many bit strings to be
accepted as valid.
Filtering based on criteria that do not involve actual signature
validation would be cryptographically safe, but when falling back to
validating against multiple signatures, extra care is needed for the
above reason.
As for trust anchor updates, I know of 3 different scenarios
that should be accepted by any good X.509 validation algorithm:
1. Changing expiry or other attributes while keeping the key.
Here the CA issues a new self-signed certificate with updated
attributes but unchanged key.
2. Changing the CA key when the old key has *not* been compromised.
Here the CA generates a new key and issues two certificates for it:
A. A self-signed new root with a serial number or other variation
in one of the subject name components.
B. A certificate for the new key and the same subject and (optional)
SubjectKeyIdentifier as A, but issued by the old root certificate
identity and key.
End Entities with certificates issued with the new key reference the
modified subject as authority and should be configured to supply the
transitional "B" certificate in chains sent to other end entities.
Such end entities should then validate successfully against either the
old root CA (via the supplied B certificate) or the new root CA
(via finding it as a locally configured trust anchor).
This is the scenario used by at least one major CA for its upgrade
to larger keys, and SSL servers that omit the B certificate from
the supplied chain get rejected by at least one common platform
version (Android 2.3), because it includes only the old trust
root in its store and uses an underdocumented BouncyCastle store
format, thus preventing efficient deployment of the new A cert.
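The scenario 2 procedure can be sketched with the openssl command line
tool. All file names, subjects, and serial numbers below are invented
for illustration, and the exact extension handling depends on the
OpenSSL version and its default config (recent versions mark
"req -x509" output as CA:true by default):

```shell
set -e
cd "$(mktemp -d)"

# Extensions that make the issued cross-certificate a valid CA cert
printf 'basicConstraints=critical,CA:true\nkeyUsage=critical,keyCertSign,cRLSign\n' > ca_ext.cnf

# Old root: key K1, self-signed
openssl req -x509 -newkey rsa:2048 -nodes -keyout old.key \
  -subj "/O=Example/CN=Example Root CA" -days 3650 -out oldroot.pem 2>/dev/null

# A: self-signed new root with key K2 and a serialNumber variation
#    in the subject name
openssl req -x509 -newkey rsa:2048 -nodes -keyout new.key \
  -subj "/O=Example/CN=Example Root CA/serialNumber=2" -days 3650 \
  -out newrootA.pem 2>/dev/null

# B: certificate for K2 with the same subject as A, but issued by
#    the old root identity and key K1
openssl req -new -key new.key \
  -subj "/O=Example/CN=Example Root CA/serialNumber=2" -out cross.csr 2>/dev/null
openssl x509 -req -in cross.csr -CA oldroot.pem -CAkey old.key \
  -set_serial 1000 -days 1825 -extfile ca_ext.cnf -out crossB.pem 2>/dev/null

# End entity certificate issued under the new key K2
openssl req -new -newkey rsa:2048 -nodes -keyout ee.key \
  -subj "/O=Example/CN=server.example.test" -out ee.csr 2>/dev/null
openssl x509 -req -in ee.csr -CA newrootA.pem -CAkey new.key \
  -set_serial 2000 -days 365 -out ee.pem 2>/dev/null

# Clients trusting only the old root validate via the transitional B cert
openssl verify -CAfile oldroot.pem -untrusted crossB.pem ee.pem
# Clients already trusting the new root A need no extra chain element
openssl verify -CAfile newrootA.pem ee.pem
```

Both verify invocations should report "ee.pem: OK", which is exactly
the property the transitional B certificate exists to provide.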
3. Setting up the CA to have keys for more than one algorithm (such
as RSA 1024 with SHA1 and RSA 4096 with SHA256). In this case, the
certificate validation code SHOULD (but might not) match issued end
entity certificates to the trust anchor by also comparing the
signatureAlgorithm in the issued certificate against the
subjectPublicKeyInfo.algorithm in the candidate issuer cert from the
store.
However, because not all clients are going to do that, it is still
recommended that the CA follow the scenario 2 procedures, except
when it is a test CA for verifying the handling of this scenario in
X.509 implementations.
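That matching step can be sketched with the openssl command line; the
dual-algorithm CA setup and all file names here are invented for
illustration:

```shell
set -e
cd "$(mktemp -d)"

# One CA identity with two keys: RSA and EC (invented example setup)
openssl req -x509 -newkey rsa:2048 -nodes -keyout rsa.key \
  -subj "/CN=Dual Algorithm Root" -days 365 -out rsaroot.pem 2>/dev/null
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout ec.key -subj "/CN=Dual Algorithm Root" -days 365 \
  -out ecroot.pem 2>/dev/null

# End entity certificate signed with the RSA key
openssl req -new -newkey rsa:2048 -nodes -keyout ee.key \
  -subj "/CN=ee" -out ee.csr 2>/dev/null
openssl x509 -req -in ee.csr -CA rsaroot.pem -CAkey rsa.key \
  -set_serial 1 -days 365 -out ee.pem 2>/dev/null

# Compare the EE signatureAlgorithm against each candidate root's
# subjectPublicKeyInfo algorithm before attempting signature checks
ee_sig=$(openssl x509 -in ee.pem -noout -text | awk '/Signature Algorithm/{print $3; exit}')
for root in rsaroot.pem ecroot.pem; do
  key_alg=$(openssl x509 -in "$root" -noout -text | awk '/Public Key Algorithm/{print $NF; exit}')
  case "$ee_sig:$key_alg" in
    *RSA*:rsaEncryption|*ecdsa*:id-ecPublicKey)
      echo "$root: candidate issuer (algorithms match)" ;;
    *)
      echo "$root: skipped (algorithm mismatch)" ;;
  esac
done
```

Only the RSA root survives the filter here, so the validator never
wastes a signature check (or worse, a false acceptance path) on the
EC root that shares the same subject name.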
On 9/24/2012 8:01 PM, Ashok C wrote:
Only the private and public keys are different; the rest of the fields
are the same. Basically I am simulating the trust anchor update
related scenarios.
And yes, Jakob, thanks for pointing that out; I'll make sure I don't
use such abbreviations from here on.
Ashok
On Sep 24, 2012 11:25 PM, "Jakob Bohm" <jb-open...@wisemo.com
<mailto:jb-open...@wisemo.com>> wrote:
Hi,
In your test case which fields actually differ between the
old root CA certificate and the new root CA certificate?
P.S.
Please do not use those 3-letter abbreviations of certificate
field names; very few people know those abbreviations.
For the benefit of other readers:
I think Ashok was referring to the AuthorityKeyIdentifier and
SubjectKeyIdentifier fields being absent from the root
CA certificates in his scenario.
On 9/24/2012 6:26 PM, Ashok C wrote:
Hi,
One more observation was made here in another test case.
_*Configuration:*_
One old root CA certificate, oldca.pem, with subject name, say, C=IN
One new root CA certificate, newca.pem, with the same subject name.
One EE certificate, ee.pem, issued by the new root CA.
_*Test case 1:*_
Using the -CAfile option of openssl verify on ee.pem:
If oldca.pem is placed on top and newca.pem below it, verification
fails.
i.e., cat oldca.pem > combined.pem; cat newca.pem >> combined.pem
_*Test case 2:*_
If newca.pem is placed on top and oldca.pem below it, verification
succeeds.
i.e., cat newca.pem > combined.pem; cat oldca.pem >> combined.pem
Test case 1 does not seem to be correct behaviour, as the trust store
contains newca.pem. Ideally the lookup should verify ee.pem against
newca.pem.
P.S. The CA and EE certificates are v3 but do not contain AKI or
SKI fields.
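The whole setup can be reproduced from scratch as below (subjects and
file names follow the description above). Note that newer OpenSSL
releases try every root whose subject matches, so on current versions
both orderings may verify successfully:

```shell
set -e
cd "$(mktemp -d)"

# Two roots with the identical subject /C=IN but different key pairs
openssl req -x509 -newkey rsa:2048 -nodes -keyout old.key \
  -subj "/C=IN" -days 365 -out oldca.pem 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout new.key \
  -subj "/C=IN" -days 365 -out newca.pem 2>/dev/null

# End entity certificate issued by the new root
openssl req -new -newkey rsa:2048 -nodes -keyout ee.key \
  -subj "/C=IN/CN=ee" -out ee.csr 2>/dev/null
openssl x509 -req -in ee.csr -CA newca.pem -CAkey new.key \
  -set_serial 1 -days 365 -out ee.pem 2>/dev/null

# Test case 1: old root first in the CAfile
cat oldca.pem newca.pem > old-first.pem
openssl verify -CAfile old-first.pem ee.pem || echo "old-first order rejected"

# Test case 2: new root first in the CAfile
cat newca.pem oldca.pem > new-first.pem
openssl verify -CAfile new-first.pem ee.pem
```

If test case 1 prints "old-first order rejected", the library picked
only the first subject match, which is the behaviour reported above.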
--
Ashok
On Mon, Sep 24, 2012 at 6:50 PM, Jakob Bohm
<jb-open...@wisemo.com <mailto:jb-open...@wisemo.com>> wrote:
On 9/13/2012 3:41 PM, Charles Mills wrote:
Would it make sense to delete the expired certificate from the
Windows store? Duplicate expired/non-expired CA certificates sound
to me like a problem waiting to happen.
/Charles/
Windows has built-in support for using and checking time stamping
countersignatures in PKCS7 signatures. If the countersignature is
signed with a currently valid certificate with the time stamping
extended usage enabled, then the outer PKCS7 signature and its
certificate are checked as if the current time was the one certified
by the time stamp (but still using up-to-date revocation
information, paying attention to revocation reasons and dates).
Thus the Windows certificate store has good reason to keep around
expired CA certificates going back to ca. 1996 (when Microsoft
released the first version supporting time stamped signatures).
P.S.
I see no technical reason why such time stamp countersignature
support could not be added to the OpenSSL core code.
The validation side implementation involves a few standard OIDs
(1.3.6.1.5.5.7.3.8 = timeStamping extended key usage,
1.2.840.113549.1.9.6 = counterSignature unauthenticated attribute,
and 1.2.840.113549.1.9.5 = signingTime authenticated attribute).
The countersignature specifies the signed data type to be
"1.2.840.113549.1.7.1 = data", but the signed data is really the
encryptedDigest/signatureValue from the enclosing SignerInfo being
countersigned (as per PKCS#9/RFC2985 section 5.3.6). The timeStamp
indicated by a counterSignature made by an entity with the
timeStamping extended key usage is simply the standard signingTime
authenticated attribute in the counterSignature itself.
On the signing side, the signer simply submits his
encryptedDigest/signatureValue to a time stamping service he has
a contract with, using a simplified historic protocol as follows:
POST url with "Accept: application/octet-string" and "Content-Type:
application/octet-string". The post data is Base64 of DER of
SEQUENCE {       -- Attribute
  type OID       -- must be 1.3.6.1.4.1.311.3.2.1 = timeStampRequest
  SEQUENCE {     -- This is a PKCS#7 ContentInfo, see RFC2315
    contentType OID    -- must be 1.2.840.113549.1.7.1 = data
    content [0] EXPLICIT OCTET STRING
                       -- must be the outer encryptedDigest
  }
}
Response if successful is a 200 HTTP status with "Content-Type:
application/octet-string" and a value which is Base64 of DER of
a PKCS#7/CMS SignedData where encapContentInfo is the ContentInfo
from the request. The time is indicated by the standard
signingTime authenticated attribute in the only SignerInfo in
signerInfos.
This historic time stamping protocol is necessary because the
RFC3161 protocol returns a signature which is not a valid RFC2985
counterSignature.
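For illustration only, a request body with the structure described
above can be produced with OpenSSL's ASN.1 generation config; the
DEADBEEF octet string is a hypothetical 4-byte stand-in for a real
encryptedDigest value:

```shell
set -e
cd "$(mktemp -d)"

# ASN.1 template following the SEQUENCE layout shown earlier
cat > tsreq.cnf <<'EOF'
asn1 = SEQUENCE:tsreq

[tsreq]
type = OID:1.3.6.1.4.1.311.3.2.1
value = SEQUENCE:contentinfo

[contentinfo]
contentType = OID:1.2.840.113549.1.7.1
content = EXP:0, FORMAT:HEX, OCT:DEADBEEF
EOF

openssl asn1parse -genconf tsreq.cnf -noout -out tsreq.der
base64 tsreq.der                              # the Base64 blob to POST
openssl asn1parse -inform DER -in tsreq.der   # sanity-check the structure
```

The final asn1parse call should show the two OIDs (the second printed
by its OpenSSL name, pkcs7-data) and the tagged OCTET STRING, matching
the listing above.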
Enjoy
Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
______________________________________________________________________
OpenSSL Project http://www.openssl.org
User Support Mailing List openssl-users@openssl.org
Automated List Manager majord...@openssl.org