Re: localhost.megasyncloopback.mega.nz private key in client

2018-08-08 Thread Alex Cohn via dev-security-policy
On Wed, Aug 8, 2018 at 9:17 AM Hanno Böck  wrote:

>
> As of today this is still unrevoked:
> https://crt.sh/?id=630835231&opt=ocsp
>
> Given Comodo's abuse contact was CCed in this mail I assume they knew
> about this since Sunday. Thus we're way past the 24 hours within which they
> should have revoked it.
>
> --
> Hanno Böck
> https://hboeck.de/


As Hanno has no doubt learned, the ssl_ab...@comodoca.com address bounces.
I got that address off of Comodo CA's website at
https://www.comodoca.com/en-us/support/report-abuse/.

I later found the address "sslab...@comodo.com" in Comodo's latest CPS, and
forwarded my last message to it on 2018-08-05 at 20:32 CDT (UTC-5). I
received an automated confirmation immediately afterward, so I assume
Comodo has now known about this issue for ~70 hours.

crt.sh lists sslab...@comodoca.com as the "problem reporting" address for
the cert in question. I have not tried this address.

Comodo publishes at least three different problem reporting email
addresses, and at least one of them is nonfunctional. I think similar
issues have come up before - there's often not a clear way to identify how
to contact a CA. Should we revisit the topic?

Alex


Re: AC Camerfirma's organizationName too long incident report

2018-08-08 Thread Wayne Thayer via dev-security-policy
Thank you for the incident report, Juan. I created
https://bugzilla.mozilla.org/show_bug.cgi?id=1481862 to track this issue.
Please update the bug as action items are completed.


On Wed, Aug 8, 2018 at 8:41 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wed, Aug 8, 2018 at 8:13 AM, Juan Angel Martin via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Hello,
> >
> > We detected 5 certificates issued with ERROR: organizationName too long
> > (X.509 lint)
> >
> > 1. How your CA first became aware of the problem (e.g. via a problem
> > report submitted to your Problem Reporting Mechanism, a discussion in
> > mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
> > the time and date.
> >
> > We detected these certificates while checking the certificates issued by
> > this CA on crt.sh on August 3, 2018.
> >
> > 2. A timeline of the actions your CA took in response. A timeline is a
> > date-and-time-stamped sequence of all relevant events. This may include
> > events before the incident was reported, such as when a particular
> > requirement became applicable, or a document changed, or a bug was
> > introduced, or an audit was done.
> >
> > 2018-08-03 09:58 UTC --> We detected these 5 certificates and asked the
> > team that manages them to revoke them.
> > 2018-08-03 15:35 UTC --> All the certificates are revoked.
> >
> > 3. Whether your CA has stopped, or has not yet stopped, issuing
> > certificates with the problem. A statement that you have will be
> considered
> > a pledge to the community; a statement that you have not requires an
> > explanation.
> >
> > The issuance of certificates from this CA was suspended until the
> > operational control was deployed.
> >
> >
> > 4. A summary of the problematic certificates. For each problem: number of
> > certs, and the date the first and last certs with that problem were
> issued.
> >
> > https://crt.sh/?id=617995390
> > https://crt.sh/?id=606954201
> > https://crt.sh/?id=606953975
> > https://crt.sh/?id=606953727
> > https://crt.sh/?id=604874282
> >
> >
> > 5. The complete certificate data for the problematic certificates. The
> > recommended way to provide this is to ensure each certificate is logged
> to
> > CT and then list the fingerprints or crt.sh IDs, either in the report or
> as
> > an attached spreadsheet, with one list per distinct problem.
> >
> > https://crt.sh/?id=617995390
> > https://crt.sh/?id=606954201
> > https://crt.sh/?id=606953975
> > https://crt.sh/?id=606953727
> > https://crt.sh/?id=604874282
> >
> >
> > 6. Explanation about how and why the mistakes were made or bugs
> > introduced, and how they avoided detection until now.
> >
> > There was no effective control in Multicert's PKI platform on the DN's O
> > length, and this CA wasn't included in Camerfirma's quality controls until
> > 2018-08-03.
> >
> >
> > 7. List of steps your CA is taking to resolve the situation and ensure
> > such issuance will not be repeated in the future, accompanied with a
> > timeline of when your CA expects to accomplish these things.
> >
> > - Multicert's team has added an operational control and will deploy the
> > technical control on August 9.
> > - Multicert's team will check crt.sh for misissued certificates (from
> > today forward).
> > - Camerfirma will check crt.sh for certificates issued by new intermediate
> > CAs no more than 24 hours after the CA certificate issuance (from today
> > forward).
> >
> > Your comments and suggestions will be appreciated.
> >
>
> Hi Juan,
>
> Can you speak more to what the technical controls being deployed are?
>
> The DN's O length limit comes from X.520 and RFC 5280, so it's a bit baffling to
> understand how there wasn't an effective control there. It does call into
> question the potential for a lack of other effective controls, which could
> be concerning.
>
> With respect to Camerfirma checking post-issuance what Multicert is doing,
> that's certainly the minimum expected, as you've cross-certified them. Can
> you help explain why this wasn't already part of your process and controls
> when working with third-party CAs? Can you explain why Multicert's PKI
> platform is allowed independent issuance, rather than having Camerfirma
> manage and maintain that for them, and how the community can rely on
> Camerfirma appropriately supervising and mitigating that risk going
> forward?
>
> I think it is good that you detected this, but note that your incident
> report doesn't actually examine when Multicert first had this issue.
> Looking at https://crt.sh/?id=617995390 for example, I see it being July
> 24. Can you explain why it took so long to detect? Can you discuss how far
> back you've examined Multicert's issuance?

Re: AC Camerfirma's organizationName too long incident report

2018-08-08 Thread Ryan Sleevi via dev-security-policy
On Wed, Aug 8, 2018 at 8:13 AM, Juan Angel Martin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello,
>
> We detected 5 certificates issued with ERROR: organizationName too long
> (X.509 lint)
>
> 1. How your CA first became aware of the problem (e.g. via a problem
> report submitted to your Problem Reporting Mechanism, a discussion in
> mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
> the time and date.
>
> We detected these certificates while checking the certificates issued by this
> CA on crt.sh on August 3, 2018.
>
> 2. A timeline of the actions your CA took in response. A timeline is a
> date-and-time-stamped sequence of all relevant events. This may include
> events before the incident was reported, such as when a particular
> requirement became applicable, or a document changed, or a bug was
> introduced, or an audit was done.
>
> 2018-08-03 09:58 UTC --> We detected these 5 certificates and asked the
> team that manages them to revoke them.
> 2018-08-03 15:35 UTC --> All the certificates are revoked.
>
> 3. Whether your CA has stopped, or has not yet stopped, issuing
> certificates with the problem. A statement that you have will be considered
> a pledge to the community; a statement that you have not requires an
> explanation.
>
> The issuance of certificates from this CA was suspended until the
> operational control was deployed.
>
>
> 4. A summary of the problematic certificates. For each problem: number of
> certs, and the date the first and last certs with that problem were issued.
>
> https://crt.sh/?id=617995390
> https://crt.sh/?id=606954201
> https://crt.sh/?id=606953975
> https://crt.sh/?id=606953727
> https://crt.sh/?id=604874282
>
>
> 5. The complete certificate data for the problematic certificates. The
> recommended way to provide this is to ensure each certificate is logged to
> CT and then list the fingerprints or crt.sh IDs, either in the report or as
> an attached spreadsheet, with one list per distinct problem.
>
> https://crt.sh/?id=617995390
> https://crt.sh/?id=606954201
> https://crt.sh/?id=606953975
> https://crt.sh/?id=606953727
> https://crt.sh/?id=604874282
>
>
> 6. Explanation about how and why the mistakes were made or bugs
> introduced, and how they avoided detection until now.
>
> There was no effective control in Multicert's PKI platform on the DN's O
> length, and this CA wasn't included in Camerfirma's quality controls until
> 2018-08-03.
>
>
> 7. List of steps your CA is taking to resolve the situation and ensure
> such issuance will not be repeated in the future, accompanied with a
> timeline of when your CA expects to accomplish these things.
>
> - Multicert's team has added an operational control and will deploy the
> technical control on August 9.
> - Multicert's team will check crt.sh for misissued certificates (from
> today forward).
> - Camerfirma will check crt.sh for certificates issued by new intermediate
> CAs no more than 24 hours after the CA certificate issuance (from today
> forward).
>
> Your comments and suggestions will be appreciated.
>

Hi Juan,

Can you speak more to what the technical controls being deployed are?

The DN's O length limit comes from X.520 and RFC 5280, so it's a bit baffling to
understand how there wasn't an effective control there. It does call into
question the potential for a lack of other effective controls, which could
be concerning.
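
For illustration, a minimal sketch of the kind of pre-issuance length check in
question - assuming the Python "cryptography" package and a PEM-encoded CSR as
input; the 64-character bound is the ub-organization-name upper bound from
RFC 5280 / X.520, and the file and function names are illustrative rather than
any CA's actual tooling:

```python
# Sketch of a pre-issuance organizationName length check (illustrative only).
# Assumes the Python "cryptography" package; the CSR file name is a placeholder.
from cryptography import x509
from cryptography.x509.oid import NameOID

UB_ORGANIZATION_NAME = 64  # ub-organization-name upper bound, RFC 5280 / X.520


def check_organization_name(csr_pem: bytes) -> None:
    csr = x509.load_pem_x509_csr(csr_pem)
    for attr in csr.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME):
        if len(attr.value) > UB_ORGANIZATION_NAME:
            raise ValueError(
                f"organizationName is {len(attr.value)} characters; "
                f"the upper bound is {UB_ORGANIZATION_NAME}"
            )


if __name__ == "__main__":
    with open("request.pem", "rb") as f:
        check_organization_name(f.read())
```

Running an X.509 linter over issued certificates, as crt.sh does, catches the
same problem after the fact; the point of a check like the above is to refuse
the request before issuance.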

With respect to Camerfirma checking post-issuance what Multicert is doing,
that's certainly the minimum expected, as you've cross-certified them. Can
you help explain why this wasn't already part of your process and controls
when working with third-party CAs? Can you explain why Multicert's PKI
platform is allowed independent issuance, rather than having Camerfirma
manage and maintain that for them, and how the community can rely on
Camerfirma appropriately supervising and mitigating that risk going forward?

I think it is good that you detected this, but note that your incident
report doesn't actually examine when Multicert first had this issue.
Looking at https://crt.sh/?id=617995390 for example, I see it being July
24. Can you explain why it took so long to detect? Can you discuss how far
back you've examined Multicert's issuance?


Re: Telia CA - incorrect OID value

2018-08-08 Thread Ryan Sleevi via dev-security-policy
Thanks! I think this is more in line with the goal of these discussions -
trying to learn, share, and disseminate best practices.

Here, the best practice is that, prior to any configuration, the CA should
determine what the 'model' certificate should look like. This model
certificate is, in effect, the technical equivalent of their certificate
profile (e.g. 7.1 of a CA's CP/CPS) - indeed, it might even make sense for
CAs to include their 'model certificates' as appendices in their CP/CPS,
which helps ensure that the CP/CPS is updated whenever the profile is
updated, but also ensures there's a technically verifiable examination of
the profile.

Going further, it might make sense for CAs to share their model
certificates in advance, for community review and evaluation - although
we're not quite there yet, it could potentially help identify or mitigate
issues beforehand, as well as help the CA ensure it has considered
everything in the profile.

Similarly, examining these model certificates through linting is another
thing to consider, comparing their linted results against those of the test
certificates. One thing to keep in mind with model certificates is that every
configurable option you allow (e.g. key size) can create a different model
certificate, so as a testing procedure, you'll want to make sure you have
model certificates for every configurable option, as well as test
certificates for various permutations. For example, let's say you're
introducing a new subject attribute to a certificate - as part of
developing your model certificate and your test certificate, you'll likely
want to examine the various constraints on that field (e.g. length of
field, acceptable characters) and run tests to make sure they produce the
correct and expected results. Consider situations like "all whitespace" -
does it do the expected thing (which could be to omit the field and allow
issuance, to prevent issuance, etc.)?
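
To make the comparison step concrete, here is a rough sketch of checking an
issued test certificate against a model certificate, assuming the Python
"cryptography" package; the particular fields compared (policy OIDs, extension
set, RSA key size) are illustrative assumptions about what a profile would pin
down, not any CA's actual procedure:

```python
# Sketch: compare a test certificate against a "model" certificate.
# The compared fields are illustrative; both certificates are assumed to carry
# a certificatePolicies extension.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa


def compare_to_model(model: x509.Certificate, test: x509.Certificate) -> list:
    problems = []

    def policy_oids(cert):
        ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
        return {p.policy_identifier.dotted_string for p in ext.value}

    # The policy OIDs (e.g. DV vs OV) must match the model exactly.
    if policy_oids(test) != policy_oids(model):
        problems.append(f"policy OIDs differ: {sorted(policy_oids(test))} "
                        f"vs model {sorted(policy_oids(model))}")

    # The set of extensions present must match the model.
    if {e.oid.dotted_string for e in test.extensions} != \
            {e.oid.dotted_string for e in model.extensions}:
        problems.append("extension sets differ from the model certificate")

    # For RSA keys, the test certificate must not fall below the model's size.
    model_key, test_key = model.public_key(), test.public_key()
    if isinstance(model_key, rsa.RSAPublicKey) and isinstance(test_key, rsa.RSAPublicKey):
        if test_key.key_size < model_key.key_size:
            problems.append(f"RSA key size {test_key.key_size} is below the "
                            f"model's {model_key.key_size}")

    return problems
```

Linting both the model and the test certificates, and failing the change when
either the lint output or a comparison like this one reports a difference,
covers the two review steps described above.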

As far as training goes, it does sound like there is an opportunity for
routine training regarding changes to the BRs (and relevant RFCs), to make
sure the team constructing and reviewing profiles knows what is and is not
acceptable. While it's good to examine the policies for RAs, looking more
holistically, you want to make sure the team tasked with creating and
reviewing these models is given adequate support and training to know - and
critically evaluate - what is or isn't permitted.

On Wed, Aug 8, 2018 at 6:34 AM, pekka.lahtiharju--- via dev-security-policy
 wrote:

> Telia learned a serious lesson from this incident, which should not have
> happened. An important detail to note is that the certificates were not
> issued to the wrong entities, and issuing new certificates with the wrong
> OID field was prevented immediately.
>
> 1) Telia has a development process with multiple steps when making a change
> to the SSL process. Some steps of the process include creating test
> certificates in test and pre-production systems with a documented change
> plan and a review. Unfortunately, the test certificates used test OID
> values, so the problem couldn't be detected on the test side. Telia has
> analysed the reasons that caused this error. The main reason was
> inadequately implemented testing. The test process didn't correctly include
> comparison against a so-called model certificate. Telia has model
> certificates for each certificate type that are used for comparison when
> any certificate profile changes. This time there was no DV model
> certificate at all (except in the test system with a test OID) because DV
> was a completely new certificate type for Telia. The OV model certificate
> (which had the OV OID value) was used instead by the reviewers. Telia
> should have created a DV model certificate first. When a new model
> certificate is accepted, several pairs of eyes, including senior
> developers, are involved. As a resolution, Telia has now enhanced its
> processes so that it is mandatory to create a model certificate whenever a
> completely new certificate type is created.
> 2) We have concluded that the main reason for this problem was not a lack
> of training but the incomplete test process and documentation. CA audits
> have annually evaluated Telia's training. Recommendations about
> improvements have been documented in our internal audit reports where
> necessary. Recommendations (or issues) from CA auditors are always added to
> the Telia Security Plan to improve the Telia CA process continuously.
> Persons involved in the review have received many different types of
> training, ranging from general security to deeper CA-software-related
> courses - e.g. recently Feisty Duck's "The Best TLS Training in the World"
> and several courses from our CA vendor.
> a) CA software vendor training sessions have been held quarterly.
> b) The vendor keeps the materials up to date, and we update our own
> training materials annually or when needed.
> c) CA audits have annually evaluated Telia's internal training for
> Registration Officers.
>
> 3) When this problem was discovered, the issue was immediately escalated
> according to the Telia incident process. [...]

Re: localhost.megasyncloopback.mega.nz private key in client

2018-08-08 Thread Hanno Böck via dev-security-policy
On Sun, 5 Aug 2018 15:23:42 -0500
Alex Cohn via dev-security-policy
 wrote:

> The certificate [1] in the GitHub link you posted was issued by
> Comodo, not by GeoTrust. The two share a private key, though, so both
> the Comodo and GeoTrust certs should be considered compromised at
> this point. I've added the Comodo-issued cert to several CT logs for
> tracking, and I'm CCing ssl_ab...@comodoca.com for followup.

As of today this is still unrevoked:
https://crt.sh/?id=630835231&opt=ocsp

Given Comodo's abuse contact was CCed in this mail I assume they knew
about this since Sunday. Thus we're way past the 24 hours within which they
should have revoked it.
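
For anyone who wants to query the responder directly rather than rely on
crt.sh, a rough sketch - assuming the Python "cryptography" and "requests"
packages, with placeholder file names for the leaf and issuer certificates:

```python
# Sketch: query the issuing CA's OCSP responder for a certificate's status.
# Assumes the Python "cryptography" and "requests" packages; the PEM file
# names are placeholders.
import requests
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes, serialization


def ocsp_status(cert_pem: bytes, issuer_pem: bytes):
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    # The responder URL comes from the certificate's AIA extension.
    aia = cert.extensions.get_extension_for_class(
        x509.AuthorityInformationAccess).value
    url = next(d.access_location.value for d in aia
               if d.access_method == AuthorityInformationAccessOID.OCSP)

    req = ocsp.OCSPRequestBuilder().add_certificate(
        cert, issuer, hashes.SHA1()).build()
    resp = requests.post(url, data=req.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})
    return ocsp.load_der_ocsp_response(resp.content).certificate_status


if __name__ == "__main__":
    with open("leaf.pem", "rb") as c, open("issuer.pem", "rb") as i:
        # Prints e.g. OCSPCertStatus.GOOD or OCSPCertStatus.REVOKED
        print(ocsp_status(c.read(), i.read()))
```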

-- 
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42


AC Camerfirma's organizationName too long incident report

2018-08-08 Thread Juan Angel Martin via dev-security-policy
Hello,

We detected 5 certificates issued with ERROR: organizationName too long (X.509 
lint)

1. How your CA first became aware of the problem (e.g. via a problem report 
submitted to your Problem Reporting Mechanism, a discussion in 
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and the 
time and date.

We detected these certificates while checking the certificates issued by this
CA on crt.sh on August 3, 2018.

2. A timeline of the actions your CA took in response. A timeline is a 
date-and-time-stamped sequence of all relevant events. This may include events 
before the incident was reported, such as when a particular requirement became 
applicable, or a document changed, or a bug was introduced, or an audit was 
done.

2018-08-03 09:58 UTC --> We detected these 5 certificates and asked the team 
that manages them to revoke them.
2018-08-03 15:35 UTC --> All the certificates are revoked.

3. Whether your CA has stopped, or has not yet stopped, issuing certificates 
with the problem. A statement that you have will be considered a pledge to the 
community; a statement that you have not requires an explanation.

The issuance of certificates from this CA was suspended until the operational 
control was deployed.


4. A summary of the problematic certificates. For each problem: number of 
certs, and the date the first and last certs with that problem were issued.

https://crt.sh/?id=617995390 
https://crt.sh/?id=606954201 
https://crt.sh/?id=606953975 
https://crt.sh/?id=606953727 
https://crt.sh/?id=604874282 
 

5. The complete certificate data for the problematic certificates. The 
recommended way to provide this is to ensure each certificate is logged to CT 
and then list the fingerprints or crt.sh IDs, either in the report or as an 
attached spreadsheet, with one list per distinct problem.

https://crt.sh/?id=617995390 
https://crt.sh/?id=606954201 
https://crt.sh/?id=606953975 
https://crt.sh/?id=606953727 
https://crt.sh/?id=604874282 


6. Explanation about how and why the mistakes were made or bugs introduced, and 
how they avoided detection until now.

There was no effective control in Multicert's PKI platform on the DN's O length,
and this CA wasn't included in Camerfirma's quality controls until 2018-08-03.


7. List of steps your CA is taking to resolve the situation and ensure such 
issuance will not be repeated in the future, accompanied with a timeline of 
when your CA expects to accomplish these things.

- Multicert's team has added an operational control and will deploy the
technical control on August 9.
- Multicert's team will check crt.sh for misissued certificates (from today
forward).
- Camerfirma will check crt.sh for certificates issued by new intermediate CAs
no more than 24 hours after the CA certificate issuance (from today forward);
a sketch of what such a check might look like follows below.
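
As a purely illustrative sketch of such a monitoring check - assuming Python
with the "requests" package, and noting that the crt.sh query parameters and
JSON field names used here are assumptions about its informal JSON output
rather than a documented API, with placeholder identity and issuer strings:

```python
# Sketch: poll crt.sh for recently logged certificates and flag those issued
# by a given intermediate. Query parameters and JSON field names are
# assumptions about crt.sh's informal JSON interface; the identity and issuer
# strings below are placeholders.
import requests
from datetime import datetime, timedelta

CRT_SH = "https://crt.sh/"


def recent_certs_for(identity: str, issuer_substring: str, days: int = 1):
    resp = requests.get(CRT_SH, params={"q": identity, "output": "json"},
                        timeout=30)
    resp.raise_for_status()
    cutoff = datetime.utcnow() - timedelta(days=days)
    hits = []
    for entry in resp.json():
        if issuer_substring not in entry.get("issuer_name", ""):
            continue
        not_before = datetime.strptime(entry["not_before"], "%Y-%m-%dT%H:%M:%S")
        if not_before >= cutoff:
            hits.append((entry["id"], entry["not_before"],
                         entry.get("name_value")))
    return hits


if __name__ == "__main__":
    # Placeholder identity and issuer substring; real monitoring would iterate
    # over the domains or subscriber names relevant to the new intermediate.
    for cert_id, nb, names in recent_certs_for("%.example.com", "MULTICERT"):
        print(f"https://crt.sh/?id={cert_id}  notBefore={nb}  {names}")
```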

Your comments and suggestions will be appreciated.

Thanks in advance!
Juan Ángel


Re: Telia CA - incorrect OID value

2018-08-08 Thread pekka.lahtiharju--- via dev-security-policy
Telia learned a serious lesson from this incident, which should not have
happened. An important detail to note is that the certificates were not issued
to the wrong entities, and issuing new certificates with the wrong OID field
was prevented immediately.

1) Telia has a development process with multiple steps when making a change to
the SSL process. Some steps of the process include creating test certificates
in test and pre-production systems with a documented change plan and a review.
Unfortunately, the test certificates used test OID values, so the problem
couldn't be detected on the test side. Telia has analysed the reasons that
caused this error. The main reason was inadequately implemented testing. The
test process didn't correctly include comparison against a so-called model
certificate. Telia has model certificates for each certificate type that are
used for comparison when any certificate profile changes. This time there was
no DV model certificate at all (except in the test system with a test OID)
because DV was a completely new certificate type for Telia. The OV model
certificate (which had the OV OID value) was used instead by the reviewers.
Telia should have created a DV model certificate first. When a new model
certificate is accepted, several pairs of eyes, including senior developers,
are involved. As a resolution, Telia has now enhanced its processes so that it
is mandatory to create a model certificate whenever a completely new
certificate type is created.
2) We have concluded that the main reason for this problem was not a lack of
training but the incomplete test process and documentation. CA audits have
annually evaluated Telia's training. Recommendations about improvements have
been documented in our internal audit reports where necessary. Recommendations
(or issues) from CA auditors are always added to the Telia Security Plan to
improve the Telia CA process continuously. Persons involved in the review have
received many different types of training, ranging from general security to
deeper CA-software-related courses - e.g. recently Feisty Duck's "The Best TLS
Training in the World" and several courses from our CA vendor.
a) CA software vendor training sessions have been held quarterly.
b) The vendor keeps the materials up to date, and we update our own training
materials annually or when needed.
c) CA audits have annually evaluated Telia's internal training for Registration
Officers.

3) When this problem was discovered, the issue was immediately escalated
according to the Telia incident process. One of the main steps of the Telia
incident process is to evaluate the effect of the incident. This time the
evaluation result was that no immediate risk was caused, as the OID was a
correct OV OID and all certificates with the wrong OID field were issued to
known Telia-hosted service customers, even though the issue itself was
confirmed to be serious. Telia prevented the issue from recurring by fixing the
profile on the same day. As none of these certificates were issued to the wrong
entities, Telia decided to carry out the replacement/revocation in an organized
way, replacing these certificates together with the customers so that the
overall harm would be minimal.


Telia will revoke all incorrect certificates. Most of them have already been
revoked. Today ~100 remain to be revoked in co-operation with the customers.