Re: [FORGED] Re: Machine- and human-readable format for root store information?

2017-07-02 Thread David Adrian via dev-security-policy
To be clear: I don't care what format the certificates are released in; I
am primarily interested in a reliable URL to download for each root store.
I personally will be converting them to OpenSSL-style PEM-encoded DER to be
used with common X.509 libraries. I suspect others will also be interested
in this format, but I see no reason to bikeshed what PEM means.
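
For concreteness, here is a minimal sketch of the conversion I have in mind
(assuming Python 3): just wrap the raw DER certificate bytes in the usual
OpenSSL-style PEM envelope.

import base64
import textwrap

def der_to_pem(der: bytes) -> str:
    """Wrap raw DER certificate bytes in an OpenSSL-style PEM envelope."""
    body = "\n".join(textwrap.wrap(base64.b64encode(der).decode("ascii"), 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"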

On Sat, Jul 1, 2017 at 12:52 AM Peter Gutmann wrote:

> Peter Gutmann via dev-security-policy <dev-security-policy@lists.mozilla.org> writes:
>
> >You keep using that word... I do not think it means what you think it does.
>
> "... what you think it means".  Dammit.
>
> Peter.
>
-- 
David Adrian
https://dadrian.io


Re: Machine- and human-readable format for root store information?

2017-06-30 Thread David Adrian via dev-security-policy
I just want to drop in a couple of thoughts from the perspective of Censys,
purely with regard to _obtaining_ root stores.

Censys validates certificates against multiple root stores. At the end of
the day, what we want is a reliable and repeatable way to get an up-to-date
version of a root store in PEM format. Right now, obtaining root stores is
a combination of cloning the Android source and hoping they don't change
their standard for git tags, parsing an Apple webpage to get a list of
tarballs and hoping the format of the webpage doesn't change, fetching the
NSS source and running it through agl's utility, and using the method Ryan
Hurst linked above for fetching Microsoft's store. [1]
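
To give a sense of what the NSS step involves, here is a rough sketch
(assuming Python 3; the hg.mozilla.org path is the current location of
certdata.txt, so treat it as an assumption) that fetches certdata.txt and
decodes every embedded certificate to DER. Note that this naive version
ignores the accompanying trust objects, so it happily extracts distrusted
roots as well, which is exactly the kind of mistake a proper release format
should make hard to commit.

import urllib.request

CERTDATA_URL = (
    "https://hg.mozilla.org/mozilla-central/raw-file/tip/"
    "security/nss/lib/ckfw/builtins/certdata.txt"
)

def extract_der_certs(text: str) -> list:
    """Decode each CKA_VALUE MULTILINE_OCTAL block in certdata.txt to DER."""
    certs = []
    lines = iter(text.splitlines())
    for line in lines:
        if line.startswith("CKA_VALUE MULTILINE_OCTAL"):
            der = bytearray()
            for octal_line in lines:
                if octal_line == "END":
                    break
                # data lines look like \060\202\003\067...
                der.extend(int(o, 8) for o in octal_line.split("\\")[1:])
            certs.append(bytes(der))
    return certs

text = urllib.request.urlopen(CERTDATA_URL).read().decode("utf-8", "replace")
print(len(extract_der_certs(text)), "certificates (trusted or not!)")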

This is ridiculous. I don't have particularly strong opinions on how root
stores are released, and I understand wanting to avoid a direct PEM release
to prevent downstream users from consuming it incorrectly, but we _should
not_ have to run a webpage through BeautifulSoup to try to find a root
store. I'd like to see either a reliable URL to fetch that can be converted
to PEM (i.e. what Microsoft does), or some API you can hit to get the store
(e.g. what CT does).
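
The CT route is pleasantly simple by comparison. A sketch of what that looks
like (assuming Python 3 with the requests library; the log URL is just a
placeholder for whichever RFC 6962 log you query):

import base64
import textwrap

import requests

LOG_URL = "https://ct.example.com"  # placeholder for a real RFC 6962 log

def der_to_pem(der: bytes) -> str:
    body = "\n".join(textwrap.wrap(base64.b64encode(der).decode("ascii"), 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

resp = requests.get(f"{LOG_URL}/ct/v1/get-roots", timeout=30)
resp.raise_for_status()
with open("roots.pem", "w") as out:
    for b64_der in resp.json()["certificates"]:  # base64 DER, per RFC 6962
        out.write(der_to_pem(base64.b64decode(b64_der)))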

[1]: https://github.com/zmap/rootfetch

On Fri, Jun 30, 2017 at 12:39 PM Kai Engert via dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

> Hello Gerv,
>
> given that today we don't have a single place where all of Mozilla's
> certificate trust decisions can be found, introducing one would be helpful.
>
> I think the new format should be as complete as possible, including both
> trust and distrust information, as well as EV and a description of the rules
> for partial distrust.
>
> As of today, certdata.txt contains:
> - whitelisted root CAs (trusted for one or more purposes)
> - distrusted/blacklisted certificates (which can be either CAs, intermediate
>   CAs, or end entity certificates), based on varying identification criteria
>   (sometimes we distrust all matches based on issuer/serial, sometimes we are
>   more specific and only distrust if the certificate also matches a specific
>   hash exactly)
>
> But it doesn't list the additional decisions that Mozilla has implemented
> in code:
> - additional domain name constraints
> - additional validity constraints for issued certificates
> - additional required whitelist matching
>
> In the past, some consumers of the Mozilla CA list didn't even implement the
> few distrust decisions that are already listed in certdata.txt, and focused
> only on the positive trust. I don't know whether this was because consumers
> didn't worry, or because they didn't even notice, but it might also have been
> due to technical limitations.
>
> It would be good if the new format made it very clear that there are
> distrust entries, and that trust for some CAs is only partial. The latter
> could make it easier for list consumers to identify the partially restricted
> CAs. E.g. some might decide not to trust a restricted CA at all, if the
> consumer is technically unable to implement the restricting checks.
>
> We could define identifiers for each class of trust restrictions (CTR), e.g.:
> - permitted name constraints
> - excluded name constraints
> - restricted to serial/name whitelist
> - not valid for serial/name blacklist
> - restrict validity period of root CA
> - restrict allowed validity of issued EE or intermediates
> - require successful revocation checking
> - require successful Certificate Transparency lookup
> - ...
>
> This list could be expanded in the future, so a list consumer that has
> implemented all of the older CTRs could decide not to trust new CAs that
> have unknown CTRs defined.
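>
> For illustration, a small Python sketch (the identifiers and entry format
> here are purely hypothetical) of how such a consumer could refuse a CA that
> declares a CTR it does not implement:
>
> KNOWN_CTRS = {
>     "permitted-name-constraints",
>     "excluded-name-constraints",
>     "serial-name-whitelist",
>     "serial-name-blacklist",
>     "root-validity-period",
>     "issued-validity-limit",
>     "require-revocation-check",
>     "require-ct",
> }
>
> def is_usable(ca_entry: dict) -> bool:
>     """Trust a CA entry only if we implement every CTR it declares."""
>     return set(ca_entry.get("ctrs", [])) <= KNOWN_CTRS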
>
> There were several comments in this thread about the file format, and
> questions about what we use today.
>
> Let me mention the concept of implementing CTRs as "stapled certificate
> extensions": reuse the standard certificate format definitions, create the
> binary extension that implements a specific CTR, and embed it into the trust
> list file. This approach can allow software to load these extensions in
> memory and attach them to the certificates, with the effect that standard
> certificate validation code can see and use them, without requiring
> additional logic.
>
> We already use this stapling approach in Firefox and NSS for name
> constraints. Because this requires a very specific ASN.1 encoding, we
> manually used tools to create such an extension, and then copied the binary
> data. That might be a reasonable approach even for the near future, until it
> can be automated completely.
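>
> As a rough illustration of what such a blob is, one way to produce a
> DER-encoded name constraints extension is with the pyca/cryptography Python
> library (assuming a version recent enough to serialize extension values to
> DER); the example.com restriction here is made up:
>
> from cryptography import x509
>
> # Limit a root to example.com and its subdomains, then DER-encode the
> # NameConstraints value -- the binary blob a trust list would carry.
> nc = x509.NameConstraints(
>     permitted_subtrees=[x509.DNSName("example.com")],
>     excluded_subtrees=None,
> )
> print(nc.public_bytes().hex())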
>
> Currently, the encoding of these name constraints is copied into source
> code, but it could also live inside a future trust file, if we define the
> file format to represent such binary extensions, and if we enhance the code
> to load such extensions dynamically from the list.
>
> Regarding the question of how we create new entries for certdata.txt today,
> we currently use the NSS tool