Re: Submission to ct-logs of the final certificate when there is already a pre-certificate

2018-04-03 Thread Matt Palmer via dev-security-policy
On Tue, Apr 03, 2018 at 01:49:58AM +0200, Jakob Bohm via dev-security-policy 
wrote:
> On 02/04/2018 18:26, Tom Delmas wrote:
> > Following the discussion on
> > https://community.letsencrypt.org/t/non-logging-of-final-certificates/58394
> > 
> > What is the position of Mozilla about the submission to ct-logs of the
> > final certificate when there is already a pre-certificate?
> > 
> > As it helps discover bugs (
> > https://twitter.com/_quirins/status/979788044994834434 ), it helps
> > accountability of CAs and it's easily enforceable, I feel that it should
> > be mandatory.
> 
> If such a policy were to be enacted, an alternative to submitting the
> final certificate should be to revoke the certificate in both a
> published CRL and in OCSP.  It would be counter to security to require
> issuance in the few cases where misissuance is detected between CT
> Pre-cert logging and actual issuance.

Logging the precert is considered demonstration of intent to issue, and is
considered misissuance to the exact same degree as actually issuing the
cert.  So revoke or whatever, you still done goofed, and so you should be
checking for misissuance *before* you log the precert, not afterwards.
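For readers unfamiliar with the mechanics: an RFC 6962 precertificate is distinguished from the final certificate by a critical "poison" extension, which is what lets logs and issuance pipelines tell the two apart. A minimal sketch of that check, using the third-party Python `cryptography` package (an assumption, not something referenced in the thread; the demo certificate is self-signed and throwaway):

```python
# Sketch: telling an RFC 6962 precertificate apart from a final certificate
# by the critical CT "poison" extension (OID 1.3.6.1.4.1.11129.2.4.3).
# Assumes the third-party `cryptography` package is available.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def is_precertificate(cert: x509.Certificate) -> bool:
    """True if the certificate carries the CT precertificate poison extension."""
    try:
        cert.extensions.get_extension_for_class(x509.PrecertPoison)
        return True
    except x509.ExtensionNotFound:
        return False

# Build a throwaway self-signed "precertificate" to exercise the check.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
start = datetime.datetime(2018, 4, 3)
precert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(start)
    .not_valid_after(start + datetime.timedelta(days=90))
    .add_extension(x509.PrecertPoison(), critical=True)
    .sign(key, hashes.SHA256())
)
```

The poison extension guarantees a precertificate can never validate as a real certificate, which is why CT can treat logging one as a binding statement of intent to issue.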

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: 825 days success and future progress!

2018-04-03 Thread Matt Palmer via dev-security-policy
On Tue, Apr 03, 2018 at 03:16:53AM +0200, Jakob Bohm via dev-security-policy 
wrote:
> On 03/04/2018 02:35, Kurt Roeckx wrote:
> > On Tue, Apr 03, 2018 at 02:11:07AM +0200, Jakob Bohm via 
> > dev-security-policy wrote:
> > > seems
> > > to be mostly justified as a poor workaround for the browsers and
> > > certificate libraries not properly implementing reliable revocation
> > > checks.
> > 
> > The problem is not in the libraries, or even the applications
> > making use of them, it's that actually trying to check them is not
> > reliable. There are just too many cases where trying to check it
> > results in an error.
> > 
> > OCSP stapling should at least help with this. We should really
> > encourage people to use this, and have software enable this by
> > default. According to ssl-pulse 31% of the sites enable it.
> > 
> > There might also be library or application bugs. At least firefox
> > for me is annoying that if it for whatever reasons fails, it says
> > it's an internal server error (which as far as I know is never the
> > case), and then even doesn't seem to retry it and just give that
> > same error again.
> 
> Most of the remaining 69% of servers run software libraries that don't
> include a good enough OCSP stapling implementation.  This includes the
> omnipresent OpenSSL 1.0.2.
> 
> Automatically scheduled CRL downloads, though currently bandwidth
> inefficient, would be much more reliable, as they are done and retried
> in advance, with typically at least a day to recover from server
> glitches.  Also, CRL download servers are much more reliable than OCSP
> servers as they don't need to run special software with high CPU
> capacity and a secure private key, any basic redundant HTTP server can
> do the job.
> 
> Delta CRLs, with some systematic enhancements, could further speedup CRL
> downloads to a viable bandwidth level.

Bandwidth, whilst a big problem (not everyone has phat pipes, nor even
*always connected* pipes), isn't the only problem with CRLs.  You also need
to be able to store them all, and generate and store the indexes to make
searching them sufficiently fast.  Oh, and because CRL distribution points
aren't embedded in the root certificates, you're still going to stumble
across certs that you don't have the CRL for, which kills the "oh you'll
definitely have all the CRLs in advance" argument, bringing us back to the
same problem we've already got, that of "what do you do when you can't
access timely revocation information?" while *also* having all the other
problems of CRLs.
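Matt's point that CRL distribution points aren't embedded in the root certificates is visible in how clients discover CRLs at all: the pointer lives in the certificate being validated, so a client only learns a CRL URL after it has already encountered a cert from that issuer. A sketch under that assumption, using the third-party Python `cryptography` package (the demo certificate and URL below are made up):

```python
# Sketch: where a client finds CRL URLs -- in the CRLDistributionPoints
# extension of the certificate being validated, not in the root store.
# Assumes the third-party `cryptography` package; demo cert/URL are invented.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def crl_urls(cert: x509.Certificate) -> list[str]:
    """Return every URI listed in the cert's CRL distribution points."""
    try:
        ext = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    except x509.ExtensionNotFound:
        return []  # no pointer at all: the client cannot pre-fetch this CRL
    urls = []
    for dp in ext.value:
        for gn in dp.full_name or []:
            if isinstance(gn, x509.UniformResourceIdentifier):
                urls.append(gn.value)
    return urls

# Self-signed demo cert carrying one CRL distribution point.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
start = datetime.datetime(2018, 4, 3)
cert = (
    x509.CertificateBuilder()
    .subject_name(name).issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(start)
    .not_valid_after(start + datetime.timedelta(days=90))
    .add_extension(
        x509.CRLDistributionPoints([
            x509.DistributionPoint(
                full_name=[x509.UniformResourceIdentifier(
                    "http://crl.example.test/demo.crl")],
                relative_name=None, reasons=None, crl_issuer=None,
            )
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```

A cert with no such extension (or an unreachable URL) leaves the client in exactly the "no timely revocation information" position Matt describes.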

- Matt



Re: AC Camerfirma Chambers of Commerce and Global Chambersign 2016 Root Inclusion Request

2018-04-03 Thread Matt Palmer via dev-security-policy
On Tue, Apr 03, 2018 at 05:19:32AM -0700, ramirommunoz--- via 
dev-security-policy wrote:
> Completely agree with you that a new root by itself does not solve the 
> problem.

The phrase you're looking for is "necessary but not sufficient".  That is,
it is necessary to create new roots to restore trust, but not sufficient to
do so.  You also have to demonstrate a high degree of competence in running
a CA, and understanding and responding to legitimate concerns.

> The issue now is choosing the starting point.
> 
> 1.- New root 2018
> 
> 2.- 2016 root, after revoking all certificates (< 100 units) and passing a
> "Point in Time" BR-compliant audit.  (Camerfirma proposal)

The problem with the 2016 roots is that there is no way to rehabilitate them
to the point that they will be as worthy of trust as new, fresh roots, for
several reasons.

Firstly, revocation is "fail open".  Not every relying party checks
revocation information, and when the checks are made but they fail, it is
*usually* OK to continue, so user agents do so.

Revocation is the ejection seat of PKI.  It's the option of last resort when
everything else has gone to hell, and you can only hope it does the job. 
You don't build a crappy fighter aircraft whose wings fall off periodically,
and then justify it by saying "oh, it's OK, it's got an ejection seat".  No,
you build the best 'plane you can, and you add the ejection seat as the
"well, it's *slightly* better than nothing" final option.  Because they hurt
to use.  A lot.  And for various reasons they don't always work quite right. 
But they give you a better chance of survival than a crash.

However, back to the point at hand: even if revocation were 100% reliable,
there's still the possibility of incomplete enumeration of certificates.  I
cannot think of a way you could possibly prove that there is zero chance of
there being undisclosed certificates chaining to the old roots.  New roots,
on the other hand, have that zero chance, because they're new, and have been
under effective audited control since day zero.  So, again, new roots would
be more trustworthy than the old roots.

- Matt



Re: AC Camerfirma Chambers of Commerce and Global Chambersign 2016 Root Inclusion Request

2018-04-03 Thread okaphone.elektronika--- via dev-security-policy
On Tuesday, 3 April 2018 14:19:43 UTC+2, ramiro...@gmail.com  wrote:
> El martes, 3 de abril de 2018, 11:58:49 (UTC+2), okaphone.e...@gmail.com  
> escribió:
> > On Monday, 2 April 2018 19:22:02 UTC+2, ramiro...@gmail.com  wrote:
> > > El lunes, 2 de abril de 2018, 3:53:08 (UTC+2), Tom Prince  escribió:
> > > > On Sunday, April 1, 2018 at 4:16:47 AM UTC-6, ramiro...@gmail.com wrote:
> > > > > I fully understand the proposed solution about 2018 roots but as I 
> > > > > previously said some concerns arise, [...]
> > > > 
> > > > 
> > > > That is unfortunate for Camerfirma, but it is not Mozilla or this lists 
> > > > issue. While people have provided some suggestions on how you can 
> > > > create a root that *might* be acceptable, I don't think any of the 
> > > > participants care if Camerfirma has a root accepted; given the issues 
> > > > previously identified and the responses to those issues, I think a 
> > > > number of participants would be just as happy if Camerfirma doesn't get 
> > > > accepted.
> > > > 
> > > 
> > > Hi Tom
> > > I'm just trying to provide a different scenario than creating a new root. 
> > > Sure, many participants do not care about our particular 
> > > situation:-(, but this makes a big difference for us and also for our 
> > > customers. If the only way forward is to create a new root, we 
> > > will do it, but our obligation is to try to provide a more convenient 
> > > solution for Camerfirma without jeopardizing the trustworthiness of the 
> > > ecosystem.
> > 
> > Creating a new root by itself will not solve anything. The problem you have 
> > is with trust. It's up to you to offer a root that can be used as a trust 
> > anchor. Reasons why the 2016 root has become unsuitable for this have been 
> > discussed in detail.
> > 
> > The way out can be creating a new root, but that only makes sense if/when 
> > you are sure all problems have been solved and will not happen with the 
> > certificates that would be issued from this new root. If you are not 100% 
> > sure about this, the new root will most likely soon become as useless (for 
> > trust) as the old one. Please don't do it before you are ready.
> > 
> > CU Hans
> 
> Thank Hans for your comments.
> 
> Completely agree with you that a new root by itself does not solve the 
> problem.
> 
> We have been working on those aspects identified by Wayne at the beginning of 
> this thread: CPS issues, CAA issues, etc. So we think we are now ready to 
> continue with the root inclusion. Are we 100% sure? No one can assure that 
> of their own systems, but we have put controls in place to avoid the known 
> problems and to detect unknown ones.
> 
> The issue now is choosing the starting point.
> 
> 1.- New root 2018
> 
> 2.- 2016 root, after revoking all certificates (< 100 units) and passing a "Point 
> in Time" BR-compliant audit. (Camerfirma proposal)
> 
> 3.- We have sent two roots to the inclusion process. "Chambers of commerce 
> root 2016" is the root which has issued a few (4) misissued certificates, 
> https://crt.sh/?cablint=273=50473=2011-01-01, all of them 
> revoked. But we have sent "Chambersign Global Root 2016" as well, and this 
> root is free of issuance errors.
> 
> Our request to the community is to use option 2 or option 3 as the starting point.

You still don't seem to understand. This is not about hoops you need to jump 
through to get your root included. It is about trust and it is entirely up to 
you to do whatever is needed to (re)gain that.

You won't get any "requirements" other than the ones you already know all 
about. Some people here may offer you advice they think will help you move 
forward with this. But if that doesn't suit you for one reason or another then 
that is just your choice, no problem. And if you do choose to follow somebody's 
advice, that doesn't imply your root will be included. It's just what they 
think is your best option. But as I said, creating a new root won't help you 
one bit if the problems have not been solved in a way that makes sure they 
won't happen again. Or if further problems will surface.

The bottom line is nothing more and nothing less than making it sufficiently 
plausible, as a CA, that the root you would like to see included is (and will 
stay) a suitable trust anchor. How you want to do that is your decision. The 
community cannot and will not make that decision for you. All it has for you is 
feedback (see above).

(Actually I have no idea why I'm telling you all this. You should already 
understand it as a CA. Anyway, just trying to help... ;-) )

CU Hans


Re: FW: Complying with Mozilla policy on email validation

2018-04-03 Thread Wayne Thayer via dev-security-policy
On Tue, Apr 3, 2018 at 11:42 AM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, Apr 3, 2018 at 12:19 PM, Ryan Hurst via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> >
> >
> > For example, if we consider a CA supporting a large mail provider in
> > providing S/MIME certificates to all of its customers. In this model, the
> > mail provider is the authoritative namespace owner.
> >
>
> There may be interest in some of the other twists such a concept helps to
> illuminate.  MV (mail validated) versus IV (individual validated).  I'm
> aware that the BRs don't contemplate email certificates and standards
> pertaining to these, but one of the BR concepts is that Subject party data
> of an unvalidated nature not be included in the certificate.  For example,
> no O= component on certificates which were solely domain validated.
> Similarly, does or should Mozilla policy consider Subject parameters other
> than E= in the email program?
>
> A third party domain delegating via MX record email addresses of that
> domain does seem both point-in-time auditable and would generally prove
> that a party at the target MX server does or could choose to receive email
> to any address at that domain.  But even the domain serving the email
> address doesn't get us to CN=Real Name.  So, my question is whether the
> policy should incorporate standards similar to the BRs which require that
> those personal-level details be validated if included?
>
Our policy covers this in section 2.2(1):

*All information that is supplied by the certificate subscriber MUST be
verified by using an independent source of information or an alternative
communication channel before it is included in the certificate.*

>
> >
> > In the context of mail, you can imagine gmail.com or
> peculiarventures.com
> > as examples, both are gmail (as determined by MX records). It seems
> > reasonable to me (Speaking as Ryan and not Google here) to allow a CA to
> > leverage this internet reality (expressed via MX records) to work with a
> CA
> > to get S/MIME certificates for all of its customers without forcing them
> > through an email challenge.
> >
> > In this scenario, you could not rely on name constraints because the
> > onboarding of custom domains (like peculiarventures.com) happens real
> > time as part of account creation. The prior business controls text seemed
> > to allow this case but it seems the interpretation discussed here would
> > prohibit it.
> >
> >
> > Another case I think is interesting is that of a delegation of email
> > verification to a third-party. For example, when you do an OAuth
> > authentication to Facebook it will return the user’s email address if it
> > has been verified. The same is true for a number of related scenarios,
> for
> > example, you can tell via Live Authentication and Google Authentication
> if
> > the user's email was verified.
> >
>
> Among other questions that raises, how do you determine and audit the
> trustworthiness of the third party to speak as to the email validation?  It
> seems the trend in the BR world of the WebPKI is to reduce reliance on
> third party validations and assertions.
>
>
> >
> > The business controls text plausibly would have allowed this use case
> also.
> >
> > I think a policy that does not allow a CA to support these use cases would
> > severely limit the use cases in which S/MIME could be used, and I would
> > like to see them considered.
> >
> > Ryan Hurst
>
>


Re: FW: Complying with Mozilla policy on email validation

2018-04-03 Thread Wayne Thayer via dev-security-policy
On Tue, Apr 3, 2018 at 10:19 AM, Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Reading this thread, I think the current text, based on the
> interpretation discussed, does not accommodate a few cases that I think are
> useful.
>
> For example, if we consider a CA supporting a large mail provider in
> providing S/MIME certificates to all of its customers. In this model, the
> mail provider is the authoritative namespace owner.
>
> In the context of mail, you can imagine gmail.com or peculiarventures.com
> as examples, both are gmail (as determined by MX records). It seems
> reasonable to me (Speaking as Ryan and not Google here) to allow a CA to
> leverage this internet reality (expressed via MX records) to work with a CA
> to get S/MIME certificates for all of its customers without forcing them
> through an email challenge.
>
> In this scenario, you could not rely on name constraints because the
> onboarding of custom domains (like peculiarventures.com) happens real
> time as part of account creation. The prior business controls text seemed
> to allow this case but it seems the interpretation discussed here would
> prohibit it.
>
I agree that name constraints would be difficult to implement in this
scenario, but I'm less convinced that section 2.2(2) doesn't permit this.
It says:


*For a certificate capable of being used for digitally signing or
encrypting email messages, the CA takes reasonable measures to verify that
the entity submitting the request controls the email account associated
with the email address referenced in the certificate or has been authorized
by the email account holder to act on the account holder’s behalf.*

In this example, the entity submitting the request (Google) controls the
email account because it controls the server the MX record points to.

>
> Another case I think is interesting is that of a delegation of email
> verification to a third-party. For example, when you do an OAuth
> authentication to Facebook it will return the user’s email address if it
> has been verified. The same is true for a number of related scenarios, for
> example, you can tell via Live Authentication and Google Authentication if
> the user's email was verified.
>
> The business controls text plausibly would have allowed this use case also.
>
> I'm not a fan of expanding the scope of such a vague requirement as
"business controls", and I'd prefer to have the CA/Browser Forum define
more specific validation methods, but if section 2.2(2) of our current
policy is too limiting, we can consider changing it to accommodate this use
case.

> I think a policy that does not allow a CA to support these use cases would
> severely limit the use cases in which S/MIME could be used, and I would like
> to see them considered.
>
> Ryan Hurst
>
>


Re: FW: Complying with Mozilla policy on email validation

2018-04-03 Thread Matthew Hardeman via dev-security-policy
On Tue, Apr 3, 2018 at 12:19 PM, Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
>
> For example, if we consider a CA supporting a large mail provider in
> providing S/MIME certificates to all of its customers. In this model, the
> mail provider is the authoritative namespace owner.
>

There may be interest in some of the other twists such a concept helps to
illuminate.  MV (mail validated) versus IV (individual validated).  I'm
aware that the BRs don't contemplate email certificates and standards
pertaining to these, but one of the BR concepts is that Subject party data
of an unvalidated nature not be included in the certificate.  For example,
no O= component on certificates which were solely domain validated.
Similarly, does or should Mozilla policy consider Subject parameters other
than E= in the email program?

A third party domain delegating via MX record email addresses of that
domain does seem both point-in-time auditable and would generally prove
that a party at the target MX server does or could choose to receive email
to any address at that domain.  But even the domain serving the email
address doesn't get us to CN=Real Name.  So, my question is whether the
policy should incorporate standards similar to the BRs which require that
those personal-level details be validated if included?


>
> In the context of mail, you can imagine gmail.com or peculiarventures.com
> as examples, both are gmail (as determined by MX records). It seems
> reasonable to me (Speaking as Ryan and not Google here) to allow a CA to
> leverage this internet reality (expressed via MX records) to work with a CA
> to get S/MIME certificates for all of its customers without forcing them
> through an email challenge.
>
> In this scenario, you could not rely on name constraints because the
> onboarding of custom domains (like peculiarventures.com) happens real
> time as part of account creation. The prior business controls text seemed
> to allow this case but it seems the interpretation discussed here would
> prohibit it.
>
>
> Another case I think is interesting is that of a delegation of email
> verification to a third-party. For example, when you do an OAuth
> authentication to Facebook it will return the user’s email address if it
> has been verified. The same is true for a number of related scenarios, for
> example, you can tell via Live Authentication and Google Authentication if
> the user's email was verified.
>

Among other questions that raises, how do you determine and audit the
trustworthiness of the third party to speak as to the email validation?  It
seems the trend in the BR world of the WebPKI is to reduce reliance on
third party validations and assertions.


>
> The business controls text plausibly would have allowed this use case also.
>
> I think a policy that does not allow a CA to support these use cases would
> severely limit the use cases in which S/MIME could be used, and I would like
> to see them considered.
>
> Ryan Hurst


Re: FW: Complying with Mozilla policy on email validation

2018-04-03 Thread Ryan Hurst via dev-security-policy
On Monday, April 2, 2018 at 1:10:13 PM UTC-7, Wayne Thayer wrote:
> I'm forwarding this for Tim because the list rejected it as SPAM.
> 
> 
> 
> *From:* Tim Hollebeek
> *Sent:* Monday, April 2, 2018 2:22 PM
> *To:* 'mozilla-dev-security-policy'  lists.mozilla.org>
> *Subject:* Complying with Mozilla policy on email validation
> 
> 
> 
> 
> 
> Mozilla policy currently has the following to say about validation of email
> addresses in certificates:
> 
> 
> 
> “For a certificate capable of being used for digitally signing or
> encrypting email messages, the CA takes reasonable measures to verify that
> the entity submitting the request controls the email account associated
> with the email address referenced in the certificate or has been authorized
> by the email account holder to act on the account holder’s behalf.”
> 
> 
> 
> “If the certificate includes the id-kp-emailProtection extended key usage,
> then all end-entity certificates MUST only include e-mail addresses or
> mailboxes that the issuing CA has confirmed (via technical and/or business
> controls) that the subordinate CA is authorized to use.”
> 
> 
> 
> “Before being included and periodically thereafter, CAs MUST obtain certain
> audits for their root certificates and all of their intermediate
> certificates that are not technically constrained to prevent issuance of
> working server or email certificates.”
> 
> 
> 
> (Nit: Mozilla policy is inconsistent in its usage of email vs e-mail.  I’d
> fix the one hyphenated reference)
> 
> 
> 
> This is basically method 1 for email certificates, right?  Is it true that
> Mozilla policy today allows “business controls” to be used for validating
> email addresses, which can essentially be almost anything, as long as it is
> audited?
> 
> 
> 
> (I’m not talking about what the rules SHOULD be, just what they are.  What
> they should be is a discussion we should have in a newly created CA/* SMIME
> WG)
> 
> 
> 
> -Tim

Reading this thread, I think the current text, based on the interpretation 
discussed, does not accommodate a few cases that are useful.

For example, if we consider a CA supporting a large mail provider in providing 
S/MIME certificates to all of its customers. In this model, the mail provider 
is the authoritative namespace owner.  

In the context of mail, you can imagine gmail.com or peculiarventures.com as 
examples, both are gmail (as determined by MX records). It seems reasonable to 
me (Speaking as Ryan and not Google here) to allow a CA to leverage this 
internet reality (expressed via MX records) to work with a CA to get S/MIME 
certificates for all of its customers without forcing them through an email 
challenge. 

In this scenario, you could not rely on name constraints because the onboarding 
of custom domains (like peculiarventures.com) happens real time as part of 
account creation. The prior business controls text seemed to allow this case 
but it seems the interpretation discussed here would prohibit it.
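The "as determined by MX records" check Ryan describes boils down to resolving a domain's MX records and testing whether every MX host falls under the provider's namespace. A sketch of just the decision logic (the actual DNS resolution, e.g. via a resolver library, is left out so the logic stays self-contained; the provider suffix list is an illustrative assumption, not a vetted one):

```python
# Sketch: deciding whether a domain's mail is hosted by a given provider,
# based on the hostnames its MX records point at.  The MX lookup itself
# would come from a DNS resolver; here the resolved hostnames are passed
# in directly.  The suffix list below is an illustrative assumption.

PROVIDER_MX_SUFFIXES = {
    "gmail": (".google.com", ".googlemail.com"),
}

def hosted_by(provider: str, mx_hosts: list[str]) -> bool:
    """True if every MX host for the domain falls under the provider's suffixes."""
    suffixes = PROVIDER_MX_SUFFIXES[provider]
    # Normalize: strip the trailing root dot DNS answers carry, lowercase.
    hosts = [h.rstrip(".").lower() for h in mx_hosts]
    return bool(hosts) and all(h.endswith(suffixes) for h in hosts)
```

Requiring *all* MX hosts to match matters: a domain with one MX at the provider and another elsewhere could have mail delivered outside the provider's control, which would undercut the validation claim.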


Another case I think is interesting is that of a delegation of email 
verification to a third-party. For example, when you do an OAuth authentication 
to Facebook it will return the user’s email address if it has been verified. 
The same is true for a number of related scenarios, for example, you can tell 
via Live Authentication and Google Authentication if the user's email was 
verified.

The business controls text plausibly would have allowed this use case also.

I think a policy that does not allow a CA to support these use cases would 
severely limit the use cases in which S/MIME could be used, and I would like to 
see them considered.

Ryan Hurst


Re: Audits for new subCAs

2018-04-03 Thread Jakob Bohm via dev-security-policy

On 03/04/2018 14:57, Ryan Sleevi wrote:

On Mon, Apr 2, 2018 at 9:03 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 03/04/2018 02:15, Wayne Thayer wrote:


On Mon, Apr 2, 2018 at 4:36 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:



While Entrust happens to do this, as a relying party, I dislike frequent
updates to CP/CPS documents just for such formal changes.

This creates a huge loophole. The CP/CPS is the master set of policies the
TSP agrees to be bound by and audited against. If a TSP doesn't include a
new subCA certificate in the scope of their CP/CPS, then from an audit
perspective there is effectively no policy that applies to the subCA.
Similarly, if the TSP claims to implement a new policy but doesn't include
it in their CP/CPS, then the audit will not cover it (unless it's a BR
requirement that has made it into the BR audit criteria).



If the CP/CPS states, as an auditable requirement, that all SubCAs are
logged in the trusted hardware of the root CA HSM, listed in the
dedicated public document, and audited, there is no need for that list to
be included verbatim in the CP/CPS, any more than there is a need for
most other routine activities to change the CP/CPS.



This is not true. A CP/CPS is not bound to the organization, it's bound to
specific operations, and as such, the creation of a subCA by an
organization has no implicit binding to a CP/CPS.



A CP/CPS covers (by its very nature) all SubCA signing operations
by a covered CA private key and, as others have pointed out, each
compliant SubCA is required to contain an OID identifying the applicable
CP/CPS.
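The OID binding referred to here lives in the certificatePolicies extension of the SubCA certificate. A sketch of reading it, using the third-party Python `cryptography` package (an assumption; the policy OID and demo certificate below are hypothetical, not any real CA's identifiers):

```python
# Sketch: reading the certificatePolicies OIDs that tie a (Sub)CA certificate
# back to a CP/CPS.  Assumes the third-party `cryptography` package; the
# policy OID used below is a made-up example, not a real CA's identifier.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def policy_oids(cert: x509.Certificate) -> list[str]:
    """Dotted-string policy OIDs asserted by the certificate, if any."""
    try:
        ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
    except x509.ExtensionNotFound:
        return []
    return [pi.policy_identifier.dotted_string for pi in ext.value]

# Demo: a self-signed cert asserting one hypothetical CP/CPS policy OID.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo SubCA")])
start = datetime.datetime(2018, 4, 3)
cert = (
    x509.CertificateBuilder()
    .subject_name(name).issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(start)
    .not_valid_after(start + datetime.timedelta(days=365))
    .add_extension(
        x509.CertificatePolicies([
            x509.PolicyInformation(x509.ObjectIdentifier("1.3.6.1.4.1.99999.1.2"), None)
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```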

Thus a CP/CPS can state that it covers all CA keys and certs listed in
public document X and that all SubCAs referencing that CP/CPS and issued
by a covered CA key must be added to public document X, which shall be
published and archived in a specific way, such as in the CA
organizations public repository, sent to all enrolled root programs and
logged to CT.

The CP/CPS can also state that the CA HSM must store, in a secured
hardware log, all signed SubCA certs (or even all issued certs), this
log will be one of the key things checked by auditors (it already is in
the current WebTrust and ETSI auditing standards, as part of the
sampling of previously issued end entity certs).






This is because the CP/CPS is a lengthy legal document which relying
parties are "supposed to" read and understand.  Thus each such needless
change generates a huge wave of millions of relying parties being
supposed to obtain and read through that document, a complete waste of
our time as relying parties.

I think you're confusing the Relying Party Agreement with the CP/CPS.  Even
so, you've pointed out that it is absurd to use this as an argument against
regular CP/CPS updates.



Relying party agreements (and subscriber agreements too) often
incorporate the CP/CPS by reference.



Can you point out to where that's required by Mozilla policy or by the
Baseline Requirements?



It is not a BR, it is an observation of common practice.




TSPs are now required to maintain change logs in their CP/CPS.  Would not a
quick glance at that meet your needs?



Only to the extent such logs are sufficient to determine how much of the
document is literally (not just subjectively) unchanged.  And provided
any conflict between the change log and the actual document shall be
resolved to the benefit of all third parties.



What? No.



If the Change Log is only informative (from a legal standpoint), then it
does not relieve those who read the document of the burden of actually
checking that nothing else was changed.




It is much more reasonable, from a relying party perspective, for such
informational details to be in a parallel document and/or be postponed
until a scheduled annual or rarer document update (yes, I am aware of the
BR that such needless updates be done annually for no reason at all).

How would you distinguish between details and material changes? I would
argue that a new subCA certificate is more than an informational detail.



Details include such routine numerical changes as date of last review
(where that review does not result in any other change), changing the
list of CA certificates or changing the name brand of HSM hardware,
exact locations of technical facilities (as opposed to changing the
country and jurisdiction) etc.



These are all material and substantial changes.



Only if those things are hardcoded into the text of the CPS, rather than
being in different documents that are restricted (by the CPS) to only
provide those details.  For example, a CPS can state that the HSM must
be officially validated to FIPS 140-3 level 3 or higher, and that the
secured CA facility is located in Mountain View, while leaving it to
other documents to state if the HSM is a specific model from nCipher or
IBM, and which exact building contains the secure equipment 

Re: Policy 2.6 Proposal: Require audits back to first issuance

2018-04-03 Thread tom.prince--- via dev-security-policy
On Monday, April 2, 2018 at 7:12:19 PM UTC-6, Wayne Thayer wrote:
> In section 2.3 (Baseline Requirements Conformance), add a new bullet that
> states "Before being included, CAs MUST provide evidence that their root
> certificates have continually, from the time of creation, complied with the
> then current Mozilla Root Store Policy and CA/Browser Forum Baseline
> Requirements."

When I first read this, I parsed it as saying that the root only needs to 
comply with the policy at the time of creation (and not at later points in 
time). I don't have any suggestions on how to make it clear that the root needs 
to have complied, at each point in time, with the policy in force at that time.

-- Tom



Re: AC Camerfirma Chambers of Commerce and Global Chambersign 2016 Root Inclusion Request

2018-04-03 Thread Ryan Sleevi via dev-security-policy
On Tue, Apr 3, 2018 at 8:19 AM, ramirommunoz--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> El martes, 3 de abril de 2018, 11:58:49 (UTC+2), okaphone.e...@gmail.com
> escribió:
> > On Monday, 2 April 2018 19:22:02 UTC+2, ramiro...@gmail.com  wrote:
> > > El lunes, 2 de abril de 2018, 3:53:08 (UTC+2), Tom Prince  escribió:
> > > > On Sunday, April 1, 2018 at 4:16:47 AM UTC-6, ramiro...@gmail.com
> wrote:
> > > > > I fully understand the proposed solution about 2018 roots but as I
> previously said some concerns arise, [...]
> > > >
> > > >
> > > > That is unfortunate for Camerfirma, but it is not Mozilla or this
> lists issue. While people have provided some suggestions on how you can
> create a root that *might* be acceptable, I don't think any of the
> participants care if Camerfirma has a root accepted; given the issues
> previously identified and the responses to those issues, I think a number
> of participants would be just as happy if Camerfirma doesn't get accepted.
> > > >
> > >
> > > Hi Tom
> > > I'm just trying to provide a different scenario than create a new
> root. Sure that many participants do not care about our particular
> situation:-(, but this make a big difference for us and also for our
> customers. If the only way to going forward is to create a new root, we
> will do it, but our obligation is trying to provide a more convenient
> solution for Camerfirma without jeopardize the trustworthiness of the
> ecosystem.
> >
> > Creating a new root by itself will not solve anything. The problem you
> have is with trust. It's up to you to offer a root that can be used as a
> trust anchor. Reasons why the 2016 root has become unsuitable for this have
> been discussed in detail.
> >
> > The way out can be creating a new root, but that makes only sense
> if/when you are sure all problems have been solved and will not happen with
> the certificates that would be issued from this new root. If you are not
> 100% sure about this, the new root will most likely soon become as useless
> (for thrust) as the old one. Please don't do it before you are ready.
> >
> > CU Hans
>
> Thank Hans for your comments.
>
> Completely agree with you about that a new root by itself do not solve the
> problem.
>
> We have been working on those aspect detected by Wayne at the beginning of
> this thread. CPS issues, CAA issues..etc. So we think we are now ready to
> keep on with the root inclusion. Are we 100% sure ?, No one can assure that
> of their own systems, but we have placed controls to avoid the known
> problems, and detect the unknown ones.
>
> The issue now is choosing the starting point.
>
> 1.- New root 2018
>
> 2.- 2016 root, after revoke all certificates (< 100 units) and pass an
> "Point in Time" BR compliant audit. (Camerfirma proposal)
>
> 3.- We have send two root to the inclusion process. "Chambers of commerce
> root 2016", this is the root which has issue a few(4) missunsed
> certificates https://crt.sh/?cablint=273=50473=2011-
> 01-01, all of them revoked. But we have sent "Chambersign Global Root
> 2016" as well, and this root is free of issuing errors.
>
> Our claim to the community is to use as starting point option 2 or option
> 3.
>

Ramiro,

I don't think option 2 or option 3 meaningfully addresses the concerns being
raised here. It's unclear whether you don't understand those concerns or you
disagree with them. A root that has not operated according to expected policy
is fundamentally not a good starting point for trust - there will always be a
cloud of suspicion over it that cannot be ameliorated on a technical level. In
that regard, Option 1 is the only option that provides a meaningful starting
point for trust going forward.


Re: Audits for new subCAs

2018-04-03 Thread Ryan Sleevi via dev-security-policy
On Mon, Apr 2, 2018 at 9:03 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 03/04/2018 02:15, Wayne Thayer wrote:
>
>> On Mon, Apr 2, 2018 at 4:36 PM, Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>
>>> While Entrust happens to do this, as a relying party, I dislike frequent
>>> updates to CP/CPS documents just for such formal changes.
>>>
>>> This creates a huge loophole. The CP/CPS is the master set of policies
>>> the
>>>
>> TSP agrees to be bound by and audited against. If a TSP doesn't include a
>> new subCA certificate in the scope of their CP/CPS, then from an audit
>> perspective  there is effectively no policy that applies to the subCA.
>> Similarly, if the TSP claims to implement a new policy but doesn't include
>> it in their CP/CPS, then the audit will not cover it (unless it's a BR
>> requirement that has made it into the BR audit criteria).
>>
>>
> If the CP/CPS states as an auditable requirements that all SubCAs are
> logged in the trusted hardware of the root CA HSM, listed in the
> dedicated public document and audited, there is no need for that list to
> be included verbatim in the CP/CPS any more than there is a need for
> most other routine activities to change the CP/CPS.
>

This is not true. A CP/CPS is not bound to the organization; it is bound to
specific operations, and as such the creation of a subCA by an organization
has no implicit binding to a CP/CPS.


>
> This is because the CP/CPS is a lengthy legal document which relying
>>
>>> parties are "supposed to" read and understand.  Thus each such needless
>>> change generates a huge wave of millions of relying parties being
>>> supposed to obtain and read through that document, a complete waste of
>>> our time as relying parties.
>>>
>>> I think you're confusing the Relying Party Agreement with the CP/CPS.
>>> Even
>>>
>> so, you've pointed out that it is absurd to use this as an argument
>> against
>> regular CP/CPS updates.
>>
>>
> Relying party agreements (and subscriber agreements too) often
> incorporate the CP/CPS by reference.


Can you point out to where that's required by Mozilla policy or by the
Baseline Requirements?


> TSPs are now required to maintain change logs in their CP/CPS.  Would not a
>> quick glance at that meet your needs?
>>
>
> Only to the extent such logs are sufficient to determine how much of the
> document is literally (not just subjectively) unchanged.  And provided
> any conflict between the change log and the actual document shall be
> resolved to the benefit of all third parties.
>

What? No.


> It is much more reasonable, from a relying party perspective, for such
>>
>>> informational details to be in a parallel document and/or be postponed
>>> until a scheduled annual or rarer document update (Yes I am aware of the
>>> BR that such needless updates be done annually for no reason at all).
>>>
>>> How would you distinguish between details and material changes? I would
>>>
>> argue that a new subCA certificate is more than an informational detail.
>>
>>
> Details include such routine numerical changes as date of last review
> (where that review does not result in any other change), changing the
> list of CA certificates or changing the name brand of HSM hardware,
> exact locations of technical facilities (as opposed to changing the
> country and jurisdiction) etc.
>

These are all material and substantial changes.


> Substantial changes are things that could actually affect the
> willingness of parties to trust or request certificates, such as the
> used validation methods, the contracting entity, the jurisdiction
> capable of interfering with operations and issuance etc.
>
>
> The point of the BR requirement is to create some documentation indicating
>> that a TSP is regularly reviewing and updating their CP/CPS.
>>
>>
> However that could equally just be a management statement (copied in the
> audit report if necessary), that the CP/CPS was reviewed on /MM/DD
> and found to remain compliant to BRs version X.Y.Z. Mozilla policy
> version A.B, ETSI standard EEE EEE and that the actual practice remains
> within its limits.
>
> There could also be restricting legal addendums saying things like
> "Although our current CPS allows us to issue 5 year certificates based
> on a sworn statement from a notary public, we will not and have not done
> so since /MM/DD".  Such addendums would be to satisfy those who no
> longer accept that particular practice but would have no effect on the
> relationship with parties that accept the CPS including the option to do
> so.


These all sound like rather terrible ideas. While some CAs have adopted them
in the past, they have regularly proven problematic when trying to form a
meaningful opinion as to whether the CA itself is trustworthy, particularly
in the case of CAs that argue for 'hidden' addendums, as has happened in the
past.

Re: AC Camerfirma Chambers of Commerce and Global Chambersign 2016 Root Inclusion Request

2018-04-03 Thread ramirommunoz--- via dev-security-policy
El martes, 3 de abril de 2018, 11:58:49 (UTC+2), okaphone.e...@gmail.com  
escribió:
> On Monday, 2 April 2018 19:22:02 UTC+2, ramiro...@gmail.com  wrote:
> > El lunes, 2 de abril de 2018, 3:53:08 (UTC+2), Tom Prince  escribió:
> > > On Sunday, April 1, 2018 at 4:16:47 AM UTC-6, ramiro...@gmail.com wrote:
> > > > I fully understand the proposed solution about 2018 roots but as I 
> > > > previously said some concerns arise, [...]
> > > 
> > > 
> > > That is unfortunate for Camerfirma, but it is not Mozilla or this lists 
> > > issue. While people have provided some suggestions on how you can create 
> > > a root that *might* be acceptable, I don't think any of the participants 
> > > care if Camerfirma has a root accepted; given the issues previously 
> > > identified and the responses to those issues, I think a number of 
> > > participants would be just as happy if Camerfirma doesn't get accepted.
> > > 
> > 
> > Hi Tom
> > I'm just trying to provide a different scenario than create a new root. 
> > Sure that many participants do not care about our particular situation:-(, 
> > but this make a big difference for us and also for our customers. If the 
> > only way to going forward is to create a new root, we will do it, but our 
> > obligation is trying to provide a more convenient solution for Camerfirma 
> > without jeopardize the trustworthiness of the ecosystem.
> 
> Creating a new root by itself will not solve anything. The problem you have 
> is with trust. It's up to you to offer a root that can be used as a trust 
> anchor. Reasons why the 2016 root has become unsuitable for this have been 
> discussed in detail.
> 
> The way out can be creating a new root, but that makes only sense if/when you 
> are sure all problems have been solved and will not happen with the 
> certificates that would be issued from this new root. If you are not 100% 
> sure about this, the new root will most likely soon become as useless (for 
> thrust) as the old one. Please don't do it before you are ready.
> 
> CU Hans

Thanks, Hans, for your comments.

I completely agree with you that a new root by itself does not solve the 
problem.

We have been working on the aspects detected by Wayne at the beginning of this 
thread: CPS issues, CAA issues, etc. So we think we are now ready to continue 
with the root inclusion. Are we 100% sure? No one can assure that of their own 
systems, but we have put controls in place to avoid the known problems and to 
detect the unknown ones.

The issue now is choosing the starting point.

1.- New root 2018

2.- The 2016 root, after revoking all certificates (< 100 units) and passing a 
"Point in Time" BR-compliant audit. (Camerfirma proposal)

3.- We have submitted two roots to the inclusion process. "Chambers of commerce 
root 2016" is the root that has issued a few (4) misissued certificates, 
https://crt.sh/?cablint=273=50473=2011-01-01, all of them 
revoked. But we have also submitted "Chambersign Global Root 2016", and this 
root is free of issuance errors.

Our request to the community is to use option 2 or option 3 as the starting 
point.

Best Regards
Ramiro


Re: AC Camerfirma Chambers of Commerce and Global Chambersign 2016 Root Inclusion Request

2018-04-03 Thread okaphone.elektronika--- via dev-security-policy
On Monday, 2 April 2018 19:22:02 UTC+2, ramiro...@gmail.com  wrote:
> El lunes, 2 de abril de 2018, 3:53:08 (UTC+2), Tom Prince  escribió:
> > On Sunday, April 1, 2018 at 4:16:47 AM UTC-6, ramiro...@gmail.com wrote:
> > > I fully understand the proposed solution about 2018 roots but as I 
> > > previously said some concerns arise, [...]
> > 
> > 
> > That is unfortunate for Camerfirma, but it is not Mozilla or this lists 
> > issue. While people have provided some suggestions on how you can create a 
> > root that *might* be acceptable, I don't think any of the participants care 
> > if Camerfirma has a root accepted; given the issues previously identified 
> > and the responses to those issues, I think a number of participants would 
> > be just as happy if Camerfirma doesn't get accepted.
> > 
> 
> Hi Tom
> I'm just trying to provide a different scenario than create a new root. Sure 
> that many participants do not care about our particular situation:-(, but 
> this make a big difference for us and also for our customers. If the only way 
> to going forward is to create a new root, we will do it, but our obligation 
> is trying to provide a more convenient solution for Camerfirma without 
> jeopardize the trustworthiness of the ecosystem.

Creating a new root by itself will not solve anything. The problem you have is 
with trust. It's up to you to offer a root that can be used as a trust anchor. 
Reasons why the 2016 root has become unsuitable for this have been discussed in 
detail.

The way out can be creating a new root, but that only makes sense if/when you 
are sure all problems have been solved and will not happen with the 
certificates that would be issued from this new root. If you are not 100% sure 
about this, the new root will most likely soon become as useless (for trust) 
as the old one. Please don't do it before you are ready.

CU Hans