RE: CA Problem Reporting Mechanisms

2017-08-08 Thread Tim Hollebeek via dev-security-policy
See BR 1.5.2.  CAs are already required to have contact information in their 
CPS.

-Original Message-
From: dev-security-policy 
[mailto:dev-security-policy-bounces+thollebeek=trustwave@lists.mozilla.org] 
On Behalf Of David E. Ross via dev-security-policy
Sent: Tuesday, August 8, 2017 10:37 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA Problem Reporting Mechanisms

On 8/7/2017 8:09 PM, Jonathan Rudenberg wrote:
> 
>> On May 17, 2017, at 07:24, Gervase Markham via dev-security-policy 
>>  wrote:
>>
>> On 16/05/17 02:26, userwithuid wrote:
>>> After skimming the responses and checking a few CAs, I'm starting to
>>> wonder: Wouldn't it be easier to just add another mandatory field to 
>>> the CCADB (e.g. "revocation contact"), requiring $URL or $EMAIL via 
>>> policy and just use that to provide a public list?
>>
>> Well, such contacts are normally per CA rather than per root. I guess 
>> we could add it on the CA's entry.
> 
> I've been reporting a fair amount of misissuance this week, and the responses 
> to the Problem Reporting question in the April CA communication leave a lot 
> to be desired. Several CAs do not have any contact details at all, and others 
> require filling forms with captchas.
> 
> I think it'd be very useful if CAs were required to maintain a problem reporting 
> email address and keep it current in the CCADB; this requirement could go in 
> the Mozilla Root Store policy or the CCADB policy. If they want to also 
> maintain other modes of contact, they can, but no matter what, an email address 
> should be required.
> 
> Jonathan
> 

I think that a public point of contact for a certification authority was a 
requirement under Mozilla's policy.  I cannot find such a requirement now 
unless the Baseline Requirements, which are included by reference in Mozilla's 
policy, require it.

--
David E. Ross


President Trump demands loyalty to himself from Republican members of Congress. 
 I always thought that members of Congress -- House and Senate -- were required 
to be loyal to the people of the United States.  In any case, they all swore an 
oath of office to be loyal to the Constitution.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Efficient test for weak RSA keys generated in Infineon TPMs / smartcards

2017-10-17 Thread Tim Hollebeek via dev-security-policy
I think this is right.  ROCA-detect appears to just be an implementation of the 
fingerprinting algorithm described in the 2016 paper 
(https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_svenda.pdf).
  There are already plenty of clues in the 2016 paper that something might be 
wrong with Infineon's prime selection algorithm.  It will be interesting to see 
what the actual attack is.

Fun quotes from the 2016 paper:

"It is possible to verify ... whether the primes generally do not exhibit same 
distribution as randomly generated numbers (Infineon JTOP 80K) by computing the 
distributions of the primes, modulo small primes."

On the factorization of p-1:

"The Infineon JTOP 80K card produces significantly more small factors than 
usual (compared with both random numbers and other
sources)."

On biases in the random number generator:

" The Infineon JTOP 80K failed the NIST STS Approximate Entropy test (85/100, 
expected entropy contained in the data) at a significant level and also failed 
the group of Serial tests from the Dieharder suite (39/100, frequency of 
overlapping n-bit patterns). Interestingly, the serial tests began to fail only 
for patterns with lengths of 9 bits and longer (lengths of up to 16 bits were 
tested), suggesting a correlation between two consecutive random bytes 
generated by the TRNG."

This is pure speculation on my part, but I'm wondering if they also used the 
classic smart card "optimization" of using 3 for the public exponent.  That 
would make it easier to exploit biases in selection of primes.

-Tim

-Original Message-
From: dev-security-policy 
[mailto:dev-security-policy-bounces+thollebeek=trustwave@lists.mozilla.org] 
On Behalf Of Nick Lamb via dev-security-policy
Sent: Tuesday, October 17, 2017 7:37 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Efficient test for weak RSA keys generated in Infineon TPMs / 
smartcards

On Monday, 16 October 2017 23:15:51 UTC+1, Jakob Bohm  wrote:
> They have also obfuscated their test by providing bitmasks as decimal 
> bigints instead of using hexadecimal or any other format that makes 
> the bitmasks human readable.

The essential fingerprinting trick comes down to this (I had to work all this 
out while I was discussing it with Let's Encrypt's @cpu yesterday):

Infineon RSA moduli have weird properties: when you divide them by some (but 
not all) small primes, the remainder isn't zero (which would be instantly fatal 
to security) but is heavily biased. For example, when divided by 11 the 
remainder is always 1 or 10.

The bitmasks are effectively lists of expected remainders for each small prime. 
If your modulus has an expected remainder for all of the 20+ small primes that 
distinguish Infineon, there's a very high chance it was generated using their 
hardware, although it isn't impossible that it was selected by other means. The 
authors could give firm numbers, but I have estimated the false positive rate as 
no more than 1 in 2 million. If any of the remainders are "wrong", then your 
keys weren't generated using this Infineon library; there is no "false 
negative" rate.

I believe the November paper will _not_ announce a new category of RSA weak 
keys, but instead will describe how to get better than chance rates of guessing 
RSA private key bits from the public modulus _if_ the key was generated using 
Infineon's library. Such knowledge can be leveraged into a cost effective 
attack using existing known techniques.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Anomalous Certificate Issuances based on historic CAA records

2017-11-30 Thread Tim Hollebeek via dev-security-policy
So it turns out DNSSEC solves CAA problems for almost nobody, because almost
nobody uses DNSSEC.  And given the serious flaws both in DNSSEC itself and
existing DNSSEC implementations, it is unlikely to be part of any solution to
the current problems CAA is facing.  The presence of DNSSEC in the BR policy
for handling DNS failures, in hindsight, was probably a mistake, and
certainly deserved a lot more scrutiny than it got (Gerv tossed it out as a
possible compromise during CABF F2F discussion, and everyone sort of
shrugged and put it in because it seemed reasonable at the time).  Right
now, the only thing it is really accomplishing is preventing certificate
issuance to customers whose DNS infrastructure is flaky, misconfigured, or
unreliable.  Longer term, DNS over HTTPS is probably a more useful path
forward than DNSSEC for CAA, but unfortunately that is still in its
infancy.

One of the things that has become very clear over the last year is that the
idea that there is a single, globally coherent state for what
DNS says at any particular time is more of a myth than a reality.  I'm sure
that most people familiar with DNS were already well aware of that, but it
has been entertaining seeing that almost every possible DNS failure mode
happens in practice with disturbing frequency.

The problem that DNSSEC checks for CAA were intended to solve is that a
well-resourced attacker can manipulate the DNS responses that the CA sees as
part of its CAA checks.  A better mitigation,
perhaps, is for multiple parties to publicly attest in a verifiable way as
to what the state of DNS was at/near the time of issuance with respect to
the relevant CAA records.  This leads to the idea that perhaps it's worth
exploring enhancing existing CT logging servers to do CAA checks and log
what they see.  That's probably easier than setting up an entirely separate
infrastructure for CAA transparency.  CT servers could even communicate back
to CAs the results they see to assist in detecting and identifying both
malicious and non-malicious discrepancies between the CA's own checks and
what the CT log is seeing.  "Thanks for the pre-cert.  We see a CAA record
at X.Y.Z.Z.Y that doesn't include you.  Do you really want to issue?"  There
are legitimate concerns that giving even more work to CT log servers might
put even more burden and expense onto those who are running CT log servers,
but that can probably be figured out.
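
To make the idea concrete, a log-side check could look roughly like the sketch
below (hypothetical, not an existing CT feature; the climb-to-parent behavior
is the general CAA model rather than RFC 6844's exact algorithm, and it uses
the dnspython package):

import dns.resolver

def caa_issuers(name):
    # Walk from the name toward the root and return the set of "issue" issuer
    # domains at the first node that has CAA records, or None if no CAA
    # records exist anywhere on the path (i.e., anyone may issue).
    # Note: timeouts and SERVFAIL are deliberately not swallowed here; what to
    # do with them is exactly the error-handling policy question raised below.
    labels = name.rstrip(".").split(".")
    for i in range(len(labels)):
        node = ".".join(labels[i:])
        try:
            answers = dns.resolver.resolve(node, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue
        return {r.value.decode() for r in answers if r.tag == b"issue"}
    return None

def precert_caa_warnings(dns_names, ca_issuer_domain):
    # Names whose CAA records do not list the submitting CA; the log could
    # report these back to the CA along with the SCT.
    return [name for name in dns_names
            if (issuers := caa_issuers(name)) is not None
            and ca_issuer_domain not in issuers]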

Of course, to avoid some of the extremely interesting experiences the
industry has had with CAA, any "improved" version of CAA needs to be much
more clear about the proper handling of error conditions, discrepancies in
DNS responses, handling of malformed CAA records, and so on.  DNS is a
complicated beast, and any specification that exclusively contains
statements of the form "Let CAA(X) be the CAA record at DNS node X" is
oversimplified to the point where implementing it in practice will cause
problems.

-Tim

-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+tim.hollebeek=digicert.com@lists.mozilla
.org] On Behalf Of Ben Laurie via dev-security-policy
Sent: Wednesday, November 29, 2017 3:37 PM
To: Paul Wouters 
Cc: douglas.beat...@gmail.com;
mozilla-dev-security-pol...@lists.mozilla.org; Jeremy Rowley

Subject: Re: Anomalous Certificate Issuances based on historic CAA records

On 29 November 2017 at 22:33, Paul Wouters  wrote:

>
>
> > On Nov 29, 2017, at 17:00, Ben Laurie via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > This whole conversation makes me wonder if CAA Transparency should 
> > be a thing.
>
> That is a very hard problem, especially for non-DNSSEC signed ones.
>

Presumably only for non-DNSSEC, actually? For DNSSEC, you have a clear chain
of responsibility for keys, and that is relatively easy to build on.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy




RE: Anomalous Certificate Issuances based on historic CAA records

2017-11-30 Thread Tim Hollebeek via dev-security-policy
Wayne, your last point is closest to my thinking, and I whole-heartedly agree 
there may be better solutions.  My suggestion was that if CAA transparency is a 
desired thing, and it is clear that at least a few people think it is worth 
considering, it’s probably better to do it with existing transparency 
mechanisms instead of making up new ones.

 

There’s a lot of CAA debugging going on in private right now, and there isn’t 
necessarily an a priori reason why it has to be private.

 

-Tim

 

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Thursday, November 30, 2017 2:07 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: r...@sleevi.com; douglas.beat...@gmail.com; 
mozilla-dev-security-pol...@lists.mozilla.org; Paul Wouters <p...@nohats.ca>; 
Ben Laurie <b...@google.com>; Jeremy Rowley <jeremy.row...@digicert.com>
Subject: Re: Anomalous Certificate Issuances based on historic CAA records

 

What problem(s) are you trying to solve?

 

- Subscribers already (or soon will) have CT logs and monitors available to 
detect mis-issued certs. They don't need CAA Transparency.

 

- This thread started as a discussion over possible mis-issuance that was 
determined to be false positives. As has been stated, without DNSSEC there is 
no such thing as a coherent view of DNS and Ryan described a legitimate example 
where a domain owner may consciously update CAA records briefly to permit 
issuance. It's unclear to me how CAA Transparency could solve this problem and 
thus provide a mechanism to confirm mis-issuance, if that is the goal.

 

- The goal of reducing the risk of mis-issuance from well-behaved CAs who have 
bad or manipulated CAA data seems most worthwhile to me. To Ryan's point (I 
think), there may be better ways of achieving this one, such as requiring CAs to 
"gossip" CAA records, or requiring CAA checks be performed from multiple 
network locations.

 

Wayne

 

On Thu, Nov 30, 2017 at 2:00 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

I think there’s value in publicly logging things even if that isn’t the basis 
for trust.  So I disagree that what I wrote boils down to what I didn’t write.



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Anomalous Certificate Issuances based on historic CAA records

2017-11-30 Thread Tim Hollebeek via dev-security-policy

Paul,

Improving CAA by moving it to a protocol other than DNS is certainly worth
considering, going forward.

With respect to people using proper DNS libraries and not inventing their
own CNAME / DNAME handling, the problem was that RFC 6844 accidentally
specified semantics for CNAME / DNAME that were not the standard semantics!
Even the erratum discussed extensively last spring still isn't fully
compliant with the relevant RFCs.

About half of the CAA problems encountered could have been avoided if RFC
6844 had simply said "When doing CAA lookups, CNAME MUST be handled as
specified in RFC 2181, and DNAME MUST be handled as specified in RFC 6672",
without trying to explicitly include them in the lookup algorithm. 

-Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-12 Thread Tim Hollebeek via dev-security-policy

> A policy allowing CAs to generate key pairs should also include provisions
> for:
> - The CA must generate the key in accordance with technical best practices
> - While in possession of the private key, the CA must store it securely

Don't forget appropriate protection for the key while it is in transit.  I'll 
look a bit closer at the use cases and see if I can come up with some 
reasonable suggestions.

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-14 Thread Tim Hollebeek via dev-security-policy
Within 24 hours?  Once the download completes?  It doesn’t seem significantly 
harder than the other questions we grapple with.  I’m sure there are plenty of 
reasonable solutions.

 

If you want to deliver the private key first, before issuance, that’d be fine 
too.  It just means two downloads instead of one and I tend to prefer avoiding 
unnecessary complexity.

 

-Tim

 

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Wednesday, December 13, 2017 5:40 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

On Wed, Dec 13, 2017 at 4:06 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a requirement
to delete the key after delivery.

 

How would you define a requirement to discard the private key "after delivery"? 
This seems like a very slippery slope.

 

  I also think server side generation along
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030
is about client certificates, but in a world with lots of tiny communicating
devices that interface with people via web browsers, there are lots of highly
resource constrained devices with poor access to randomness out there running
web servers.  And I think we are heading quickly towards that world.
Tightening up the requirements to allow specific, approved mechanisms is fine.
We don't want people doing random things that might not be secure.

Why is it unreasonable in this IoT scenario to require the private key to be 
delivered prior to issuance?



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-14 Thread Tim Hollebeek via dev-security-policy

Of course, the main reason Comodo gets sucked into this swamp is the 
price point.  That isn't necessarily their fault.

As I've pointed out elsewhere, Comodo has some great ideas about fixing
revocation that would go a long way towards solving the "misbehave only
after issuance" problem that you correctly pointed out.

-Tim

> -Original Message-
> From: Rob Stradling [mailto:rob.stradl...@comodo.com]
> Sent: Thursday, December 14, 2017 6:01 AM
> To: Tim Hollebeek <tim.holleb...@digicert.com>
> Cc: Peter Gutmann <pgut...@cs.auckland.ac.nz>; Gervase Markham
> <g...@mozilla.org>; mozilla-dev-security-pol...@lists.mozilla.org; Tim
> Shirley <tshir...@trustwave.com>
> Subject: Re: On the value of EV
> 
> On 14/12/17 00:25, Tim Hollebeek via dev-security-policy wrote:
> > If you look at where the HTTPS phishing certificates come from, they
> > come almost entirely from Let's Encrypt and Comodo.
> >
> > This is perhaps the best argument in favor of distinguishing between
> > CAs that care about phishing and those that don't.
> 
> Tim,
> 
> We reject certificate requests for sites that are already known to engage
in
> phishing, and we revoke (for all the good that does) certificates for
sites that
> are subsequently discovered to have engaged in phishing.
> 
> IIUC, you're saying that "CAs that care about phishing" are ~100%
successful
> at avoiding issuing certs to phishing sites.  If so, that's great!
Perhaps you
> could help us to become one of the "CAs that care about phishing" by
sharing
> your crystal ball technology with us, so that we too can avoid issuing
certs to
> sites that subsequently engage in phishing?
> 
> --
> Rob Stradling
> Senior Research & Development Scientist
> COMODO - Creating Trust Online



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-18 Thread Tim Hollebeek via dev-security-policy

> On 15/12/17 16:02, Ryan Hurst wrote:
> > So I have read this thread in its entirety now and I think it makes
sense for it
> to reset to first principles, specifically:
> >
> > What are the technological and business goals trying to be achieved,
> > What are the requirements derived from those goals, What are the
> > negative consequences of those goals.
> >
> > My feeling is there is simply an abstract desire to allow for the CA, on
behalf
> of the subject, to generate the keys but we have not sufficiently
articulated a
> business case for this.
> 
> I think I'm in exactly this position also; thank you for articulating it.
One might
> also add:
> 
> * What are the inevitable technical consequences of a scheme which meets
> these goals? (E.g. "use of PKCS#12 for key transport" might be one answer
to
> that question.)

I actually agree with Ryan, too.  I think it's more of an issue of what sort
of future we want, and we have time.  I'm actually far less interested in
the PKCS#12 use case, and more interested in things like RFC 7030, which
keep popping up in the IoT space.
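
For reference, the RFC 7030 flow in question is the section 4.4 /serverkeygen
exchange, roughly as sketched below (the host name and client-authentication
details are placeholders, and a real client would use an EST library rather
than raw HTTP):

import base64
import requests

def est_serverkeygen(csr_der, est_host="est.example.com"):
    # The client POSTs a base64-encoded PKCS#10 request to /serverkeygen; the
    # EST server generates the key pair on the client's behalf and returns a
    # multipart/mixed body containing the PKCS#8 private key and the PKCS#7
    # certificate chain (RFC 7030 sections 4.4.1 and 4.4.2).
    resp = requests.post(
        "https://" + est_host + "/.well-known/est/serverkeygen",
        data=base64.b64encode(csr_der),
        headers={"Content-Type": "application/pkcs10",
                 "Content-Transfer-Encoding": "base64"},
        # real deployments also authenticate the client (TLS client cert or HTTP auth)
    )
    resp.raise_for_status()
    return resp.content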

Also, in response to Ryan's other comments on PKCS#12, replacing it with
something more modern for the use cases where it is currently common (e.g.
client certificates, email certificates) would also be a huge improvement.

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
If you look at the phishing data feeds and correlate them with EV certificates,
you'll find out that Tim's "speculation" is right.

In my experience, it's generally a bad idea to disagree with Tim Shirley.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Tim
> Shirley via dev-security-policy
> Sent: Wednesday, December 13, 2017 3:35 PM
> To: r...@sleevi.com
> Cc: mozilla-dev-security-pol...@lists.mozilla.org; Gervase Markham
> 
> Subject: Re: On the value of EV
> 
> No, I’m not presuming that; that’s why I put the ? after never.  I’ve never 
> heard
> of any, so it’s possible it really is never.  But I’m pretty confident in at 
> least the
> “rare” part because I’m sure if you knew of any you’d be sharing examples.  ;)
> 
> 
> From: Ryan Sleevi 
> Reply-To: "r...@sleevi.com" 
> Date: Wednesday, December 13, 2017 at 5:03 PM
> To: Tim Shirley 
> Cc: Gervase Markham , "mozilla-dev-security-
> pol...@lists.mozilla.org" 
> Subject: Re: On the value of EV
> 
> "The very fact that EV certs are rarely (never?) used" is, of course,
> unsubstantiated with data. It's a logically flawed argument - you're presuming
> that non-existence is proof of non-existence.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
If you look at where the HTTPS phishing certificates come from, they come
almost entirely from Let's Encrypt and Comodo.

This is perhaps the best argument in favor of distinguishing between CAs that
care about phishing and those that don't.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Peter
> Gutmann via dev-security-policy
> Sent: Wednesday, December 13, 2017 4:23 PM
> To: Gervase Markham ; mozilla-dev-security-
> pol...@lists.mozilla.org; Tim Shirley 
> Subject: Re: On the value of EV
> 
> Tim Shirley via dev-security-policy

> writes:
> 
> >But regardless of which (or neither) is true, the very fact that EV
> >certs are rarely (never?) used on phishing sites
> 
> There's no need:
> 
> https://info.phishlabs.com/blog/quarter-phishing-attacks-hosted-https-
> domains
> 
> In particular, "the rate at which phishing sites are hosted on HTTPS pages
is
> rising significantly faster than overall HTTPS adoption".
> 
> It's like SPF and site security seals, adoption by spammers and crooks was
> ahead of adoption by legit users because the bad guys have more need of a
> signalling mechanism like that than anyone else.
> 
> Peter.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
I don't want to spend too much time digressing into a discussion of the same
origin policy as a basis for a reasonable security model for the web, but I
hope we could all agree on one thing that was abundantly obvious twenty
years ago, and has only become more obvious:

Anything originally introduced by Netscape is horribly broken and needs to
be replaced.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> Matthew Hardeman via dev-security-policy
> Sent: Wednesday, December 13, 2017 2:41 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: On the value of EV
> 
> On Tuesday, December 12, 2017 at 3:52:40 PM UTC-6, Ryan Sleevi wrote:
> 
> > Yes. This is the foundation and limit of Web Security.
> >
> >
> > https://en.wikipedia.org/wiki/Same-origin_policy
> >
> > This is what is programatically enforced. Anything else either
> > requires new technology to technically enforce it (such as a new
> > scheme), or is offloading the liability to the user.
> >
> 
> The notion that a sub-resource load of a non-EV sort should downgrade the
EV
> display status of the page is very questionable.
> 
> I'm not sure we need namespace separation for EV versus non-EV
> subresouces.
> 
> The cause for this is simple:
> 
> It is the main page resource at the root of the document which causes each
> sub-resource to be loaded.
> 
> There is a "curatorship", if you will, engaged by the site author.  If
there are
> sub-resources loaded in, whether they are EV or not, it is the root page
> author's place to "take responsibility" for the contents of the DV or EV
> validated sub-resources that they cause to be loaded.
> 
> Frankly, I reduce third party origin resources to zero on web applications
on
> systems I design where those systems have strong security implications.
> 
> Of course, that strategy is probably not likely to be popular at Google,
which
> is, in a quite high percentage of instances, the target origin of all
kinds of sub-
> resources loaded in pages across the web.
> 
> If anyone takes the following comment seriously, this probably spawns an
> entirely separate conversation: I regard an EV certificate as more of a
code-
> signing of a given webpage / website and of the sub-resources whether or
not
> same origin, as they descend from the root page load.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy

Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate 
at the same time should be allowed, and I have no problem with a requirement 
to delete the key after delivery.  I also think server side generation along 
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030 
is about client certificates, but in a world with lots of tiny communicating 
devices that interface with people via web browsers, there are lots of highly 
resource constrained devices with poor access to randomness out there running 
web servers.  And I think we are heading quickly towards that world. 
Tightening up the requirements to allow specific, approved mechanisms is fine. 
We don't want people doing random things that might not be secure.
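
As a concrete (if simplified) sketch of what that PKCS #12 delivery could look
like with the pyca/cryptography package, with the passphrase delivered out of
band and the names being placeholders:

from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
from cryptography.hazmat.primitives.serialization.pkcs12 import (
    serialize_key_and_certificates,
)

def bundle_for_delivery(private_key, certificate, passphrase):
    # The CA packages the subscriber's new private key and certificate into a
    # single encrypted PKCS #12 blob for delivery, then securely erases its own
    # copy of the key, per the proposed requirement.
    return serialize_key_and_certificates(
        name=b"tls-server-cert",
        key=private_key,
        cert=certificate,
        cas=None,  # intermediates could be included here
        encryption_algorithm=BestAvailableEncryption(passphrase),
    )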

As usual, non-TLS certificates have a completely different set of concerns. 
Demand for escrow of client/email certificates is much higher and the practice 
is much more common, for a variety of business reasons.

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy
As I’m sure you’re aware, RSA key generation is far, far more reliant on the 
quality of the random number generation and the prime selection algorithm than 
TLS is dependent on randomness.  In fact it’s the combination of poor 
randomness with attempts to reduce the cost of RSA key generation that has and 
will continue to cause problems.

 

While the number of bits in the key pair is an important security parameter, 
the number of potential primes and their distribution has historically not 
gotten as much attention as it should.  This is why there have been a number of 
high profile breaches due to poor RSA key generation, but as far as I know, no 
known attacks due to the use of randomness elsewhere in the TLS protocol.  This 
is because TLS, like most secure protocols, has enough of a gap between secure 
and insecure that small deviations from ideal behavior don’t break the entire 
protocol.  RSA has a well-earned reputation for finickiness and fragility.

 

It doesn’t help that RSA key generation has a sort of birthday paradoxy feel to 
it, given that if any two key pairs share a prime number, it’s just a matter of 
time before someone uses Euclid’s algorithm in order to find it.  There are 
PLENTY of possible primes of the appropriate size so that this should never 
happen, but it’s been seen to happen.  I would be shocked if we’ve seen the 
last major security breach based on poor RSA key generation by resource 
constrained devices.
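
For anyone who hasn't seen it, the shared-prime attack really is that simple;
the sketch below uses tiny numbers purely for illustration:

import math

def shared_factor(n1, n2):
    # If two RSA moduli share a prime, Euclid's algorithm recovers it from the
    # public keys alone, and both moduli are then fully factored. The
    # large-scale surveys do this pairwise (or with a batch-GCD product tree)
    # across millions of collected keys.
    g = math.gcd(n1, n2)
    if 1 < g < n1:
        return g, n1 // g, n2 // g
    return None

# Toy example: two moduli that happen to share the prime 101.
assert shared_factor(101 * 103, 101 * 107) == (101, 103, 107)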

 

Given that there exist IETF approved alternatives that could help with that 
problem, they’re worth considering.  I’ve been spending a lot of time recently 
looking at the state of the IoT world, and it’s not good.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

 

 

On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a requirement
to delete the key after delivery.  I also think server side generation along
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030
is about client certificates, but in a world with lots of tiny communicating
devices that interface with people via web browsers, there are lots of highly
resource constrained devices with poor access to randomness out there running
web servers.  And I think we are heading quickly towards that world.
Tightening up the requirements to allow specific, approved mechanisms is fine.
We don't want people doing random things that might not be secure.

 

Tim,

 

I'm afraid that the use case to justify this change seems to be inherently 
flawed and insecure. I'm hoping you can correct my misunderstanding, if I am 
doing so.

 

As I understand it, the motivation for this is to support devices with insecure 
random number generators that might be otherwise incapable of generating secure 
keys. The logic goes that by having the CAs generate these keys, we end up with 
better security - fewer keys leaking.

 

Yet I would challenge that assertion, and instead suggest that CAs generating 
keys for these devices inherently makes the system less secure. As you know, 
CAs are already on the hook to evaluate keys against known weak sets and reject 
them. There is absent a formal definition of this in the BRs, other than 
calling out illustrative examples such as Debian-generated keys (which share 
the flaw you mention), or, in more recent discussions, the ROCA-affected keys. 
Or, for the academic take, https://factorable.net/weakkeys12.extended.pdf , or 
the research at https://crocs.fi.muni.cz/public/papers/usenix2016 that itself 
appears to have lead to ROCA being detected.

 

Quite simply, the population you're targeting - "tiny communication devices ... 
with poor access to randomness" - are inherently insecure in a TLS world. TLS 
itself depends on entropy, especially for the ephemeral key exchange 
ciphersuites required for use in HTTP/2 or TLS 1.3, and so such devices do not 
somehow become 'more' secure by having the CA generate the key, but then 
negotiate poor TLS ciphersuites.

 

More importantly, the change you propose would have the incidental effect of 
making it more difficult to detect such devices and work with vendors to 
replace or repair them. This seems to overall make Mozilla users less secure, 
and the ecosystem less secure.

 

I realize that there is somewhat a conflict - we're today requiring that CDNs 
and vendors can generate these keys (thus masking off the poor entropy from 
detection), while not allowing the CA to participate - but I think that's 
consistent with a viewpoint that the CA sh

RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy
So ECDHE is an interesting point that I had not considered, but as Matt noted, 
the quality of randomness in the devices does generally improve with time.  It 
tends to be the initial bootstrapping where things go horribly wrong.

 

A couple years ago I was actually on the opposite side of this issue, so it’s 
very easy for me to see both sides.  I just don’t see it as useful to 
categorically rule out something that can provide a significant security 
benefit in some circumstances.

 

-Tim

 

As an unrelated but funny aside, I once heard about an expensive, high-assurance 
device with an embedded bi-stable circuit for producing high quality hardware 
random numbers.  As part of a rigorous validation and review process in order 
to guarantee product quality, the instability was noticed and corrected late in 
the development process, and final testing showed that the output of the key 
generator was completely free of any pesky one bits that might interfere with 
the purity of all zero keys.

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Wednesday, December 13, 2017 11:11 AM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

Tim,

 

I appreciate your reply, but that seems to be backwards looking rather than 
forwards looking. That is, it looks and assumes static-RSA ciphersuites are 
acceptable, and thus the entropy risk to TLS is mitigated by client-random to 
this terrible TLS-server devices, and the issue to mitigate is the poor entropy 
on the server.

 

However, I don't think that aligns with what I was mentioning - that is, the 
expectation going forward of the use of forward-secure cryptography and 
ephemeral key exchanges, which do become more relevant to the quality of 
entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key 
construction does not meaningfully improve the security of Mozilla users.

 

I'm curious whether any use case can be brought forward that isn't "So that we 
can aid and support the proliferation of insecure devices into users everyday 
lives" - as surely that doesn't seem like a good outcome, both for Mozilla 
users and for society at large. Nor do I think the propose changes meaningfully 
mitigate the harm caused by them, despite the well-meaning attempt to do so.

 

On Wed, Dec 13, 2017 at 12:40 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

As I’m sure you’re aware, RSA key generation is far, far more reliant on the 
quality of the random number generation and the prime selection algorithm than 
TLS is dependent on randomness.  In fact it’s the combination of poor 
randomness with attempts to reduce the cost of RSA key generation that has and 
will continue to cause problems.



While the number of bits in the key pair is an important security parameter, 
the number of potential primes and their distribution has historically not 
gotten as much attention as it should.  This is why there have been a number of 
high profile breaches due to poor RSA key generation, but as far as I know, no 
known attacks due to the use of randomness elsewhere in the TLS protocol.  This 
is because TLS, like most secure protocols, has enough of a gap between secure 
and insecure that small deviations from ideal behavior don’t break the entire 
protocol.  RSA has a well-earned reputation for finickiness and fragility.



It doesn’t help that RSA key generation has a sort of birthday paradoxy feel to 
it, given that if any two key pairs share a prime number, it’s just a matter of 
time before someone uses Euclid’s algorithm in order to find it.  There are 
PLENTY of possible primes of the appropriate size so that this should never 
happen, but it’s been seen to happen.  I would be shocked if we’ve seen the 
last major security breach based on poor RSA key generation by resource 
constrained devices.



Given that there exist IETF approved alternatives that could help with that 
problem, they’re worth considering.  I’ve been spending a lot of time recently 
looking at the state of the IoT world, and it’s not good.



-Tim



From: Ryan Sleevi [mailto:r...@sleevi.com <mailto:r...@sleevi.com> ]
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek <tim.holleb...@digicert.com 
<mailto:tim.holleb...@digicert.com> >
Cc: mozilla-dev-security-pol...@lists.mozilla.org 
<mailto:mozilla-dev-security-pol...@lists.mozilla.org> 
Subject: Re: CA generated keys








On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org>  
<mailto:dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > > wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the k

RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
There are also the really cool hash-based revocation ideas that actually do help
even against active attackers on the same network.  I really wish those ideas 
got
more serious attention.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Tim
> Shirley via dev-security-policy
> Sent: Wednesday, December 13, 2017 2:47 PM
> To: Gervase Markham ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: On the value of EV
> 
> As I understand it, Adam’s argument there was that to get value out of a
> revoked certificate, you need to be between the user and the web server so
> you can direct the traffic to your web server, so you’re already in position 
> to
> also block revocation checks.  I don’t think that maps here because a lot of 
> the
> scenarios EV assists with don’t involve an attacker being in that position.
> 
> I know the question has been raised before as to why most phishing sites use
> DV.  Some argue it’s because OV/EV are harder for people with bad intent to
> obtain.  Some argue it’s because DV is more ubiquitous across the web and
> thus more ubiquitous on phishing sites.  But regardless of which (or neither) 
> is
> true, the very fact that EV certs are rarely (never?) used on phishing sites 
> is in
> and of itself providing protection today to those of us who pay attention to 
> it.
> I’d argue that alone means the seat belt isn’t worthless, and we should focus
> on building better seat belts rather than cutting them out and relying on the
> air bag alone.
> 
> 
> 
> On 12/13/17, 3:46 PM, "Gervase Markham via dev-security-policy"  security-pol...@lists.mozilla.org> wrote:
> 
> On 13/12/17 11:58, Tim Shirley wrote:
> > So many of the arguments made here, such as this one, as well as the
> recent demonstrations that helped start this thread, focus on edge cases.  And
> while those are certainly valuable to consider, they obscure the fact that
> “Green Bar” adds value in the mainstream use cases.  If we were talking about
> how to improve EV, then by all means focus on the edge cases.  The thing I
> don’t see in all this is a compelling argument to take away something that’s
> useful most of the time.
> 
> My concern with this argument is that it's susceptible to the criticism
> that Adam Langley made of revocation checking:
> https://www.imperialviolet.org/2012/02/05/crlsets.html
> 
> "So [EV identity is] like a seat-belt that snaps when you crash. Even
> though it works 99% of the time, it's worthless because it only works
> when you don't need it."
> 
> Gerv
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> 
> https://lists.mozilla.org/listinfo/dev-security-policy
> 
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy

RE: CA generated keys

2017-12-11 Thread Tim Hollebeek via dev-security-policy

> The more I think about it, the more I see this is actually an interesting
question :-)

I had the same feeling.  It seems like an easy question to answer until you
start thinking about it.

> I suspect the first thing Mozilla allowing this would do would be to make
it much more common. (Let's assume 
> there are no other policy barriers.) I suspect there are several simpler
workflows for certificate issuance and
> installation that this could enable, and CAs would be keen to make their
customers lives easier and reduce 
> support costs.

This may or may not be true.  I think it probably isn't.  The standard
method via a CSR is actually simpler, so I think that will continue to be
the predominant way of doing things.  I think it's more likely to remain
limited to large enterprise customers with unique requirements, IoT use
cases, and so on.
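
(The "standard method" here being the subscriber generating the key pair
locally and sending the CA nothing but a CSR, roughly as in this sketch using
the pyca/cryptography package, with a placeholder host name:

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.example.com")]),
                   critical=False)
    .sign(key, hashes.SHA256())
)
# Only the CSR goes to the CA; the private key never leaves the subscriber.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)

That "the key never leaves the subscriber" property is exactly what CA-generated
key flows give up, which is why the question is when that trade-off is
acceptable.)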

> > First, third parties who are *not* CAs can run key generation and 
> > escrow services, and then the third party service can apply for a  
> > certificate for the key, and deliver the certificate and the key to a
customer.
>
> That is true. Do you know how common this is in SSL/TLS?

I know it happens.  I can try to find out how common it is, and what the use
cases are.

> > Second, although I strongly believe that in general, as a best 
> > practice, keys should be generated by the device/entity it belongs to 
> > whenever possible, we've seen increasing evidence that key generation 
> > is difficult and many devices cannot do it securely.  I doubt that 
> > forcing the owner of the device to generate a key on a commodity PC is 
> > any better (it's probably worse).
> 
> That's also a really interesting question. We've had dedicated device key
generation failures, but we've also had 
> commodity PC key generation failures (Debian weak keys, right?). Does that
mean it's a wash? What do the risk 
> profiles look like here? One CA uses a MegaRNG2000 to generate hundreds of
thousands of certs.. and then a
> flaw is found in it. Oops.
> Better or worse than a hundred thousand people independently using a
broken OpenSSL shipped by their 
> Linux vendor?

I'd argue that the second is worse, since the large number of independent
people are going to have a much harder time becoming aware of the issue,
applying the appropriate fixes, and performing whatever remediation is
necessary.

The general rule is that you're able to do more rigorous things at scale
than you can when you're generating a key or two a year.

> > With an increasing number of small devices running web servers, keys 
> > generated by audited, trusted third parties under whatever rules 
> > Mozilla chooses to enforce about secure key delivery may actually in 
> > many circumstances be superior than what would happen if the practice is
banned.
> 
> Is there a way to limit the use of this to those circumstances?

I don't know but it's worth talking about.  I think the discussion should be
"when should this be allowed, and how can it be done securely?"

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-11 Thread Tim Hollebeek via dev-security-policy
Nobody is disputing the fact that these certificates were legitimate given the 
rules that exist today.

However, I don't believe "technically correct, but intentionally misleading" 
information should be included in certificates.  The question is how best to 
accomplish that.

-Tim

-Original Message-
From: Jonathan Rudenberg [mailto:jonat...@titanous.com] 
Sent: Monday, December 11, 2017 12:34 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Ryan Sleevi <r...@sleevi.com>; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: On the value of EV


> On Dec 11, 2017, at 14:14, Tim Hollebeek via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
> 
> 
> It turns out that the CA/Browser Validation working group is currently 
> looking into how to address these issues, in order to tighten up 
> validation in these cases.

This isn’t a validation issue. Both certificates were properly validated and 
have correct (but very misleading information) in them. Business entity names 
are not unique, so it’s not clear how validation changes could address this.

I think it makes a lot of sense to get rid of the EV UI, as it can be trivially 
used to present misleading information to users in the most security-critical 
browser UI area. My understanding is that the research done to date shows that 
EV does not help users defend against phishing attacks, it does not influence 
decision making, and users don’t understand or are confused by EV.

Jonathan


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-11 Thread Tim Hollebeek via dev-security-policy

It turns out that the CA/Browser Validation working group is currently
looking into how to address these issues, in order to tighten up validation
in these cases.  We discussed it a bit last Thursday, and will be continuing
the discussion on the 21st.

If anyone has any good ideas, we'd be more than happy to hear them.

-Tim

-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+tim.hollebeek=digicert.com@lists.mozilla
.org] On Behalf Of Ryan Sleevi via dev-security-policy
Sent: Monday, December 11, 2017 12:01 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: On the value of EV

Recently, researchers have been looking into the value proposition of EV
certificates, and more importantly, how easy it is to obtain certificates
that may confuse or mislead users - a purpose that EV is supposedly intended
to avoid.

James Burton was able to obtain a certificate for "Identity Verified", as
described in
https://0.me.uk/ev-phishing/ , which is a fully valid and legal EV
certificate, but which can otherwise confuse users.

Today, Ian Carroll disclosed how easy he was able to get a certificate for
"Stripe, Inc", registered within the US, and being granted the full EV
treatment as the 'legitimate' stripe.com. He's written up the explanation at
https://stripe.ian.sh/

I suppose this is both a question for policy and for Mozilla - given the
ability to provide accurate-but-misleading information in EV certificates,
and the effect it has on the URL bar (the lone trusted space for security
information), has any consideration been given to removing or deprecating EV
certificates?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-11 Thread Tim Hollebeek via dev-security-policy
 

Happy to share the details.

 

We only had about 10 minutes on the agenda, so the discussion hasn’t been too 
detailed so far (there is still a lot of fallout from CAA that is dominating 
many validation discussions).  There was a general consensus that companies 
with intentionally misleading names, and companies that are recently created 
shell companies solely for the purpose of obtaining a certificate should not be 
able to get an EV certificate.

 

Exactly what additional validation or rules might help with that problem, while 
not unnecessarily burdening legitimate businesses, will require more time and 
discussion, which is why, if anyone has good ideas, I'd love to hear them.

 

-Tim

 

From: Alex Gaynor [mailto:agay...@mozilla.com] 
Sent: Monday, December 11, 2017 12:26 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Ryan Sleevi <r...@sleevi.com>; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: On the value of EV

 

Can you share what the working group has been brainstorming on?

 

Near as I can tell, this is a validly issued EV cert, for a valid KY company. 
If "Stripe, Inc of Kentucky" were in a distinct industry from this Stripe there 
wouldn't even be a trademark claim (I'm not a lawyer, etc.).

 

Lest anyone think "well, they should be able to tell if this was being used 
maliciously", there's no reason a clever attacker couldn't make a fake landing 
page for their fake Stripe, Inc, while sending phishing emails that point to 
various other URLs, which show unrelated phishing contents.

 

Alex

 

On Mon, Dec 11, 2017 at 2:14 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:


It turns out that the CA/Browser Validation working group is currently
looking into how to address these issues, in order to tighten up validation
in these cases.  We discussed it a bit last Thursday, and will be continuing
the discussion on the 21st.

If anyone has any good ideas, we'd be more than happy to hear them.

-Tim

-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+tim.hollebeek 
<mailto:dev-security-policy-bounces%2Btim.hollebeek> =digicert.com@lists.mozilla
.org] On Behalf Of Ryan Sleevi via dev-security-policy
Sent: Monday, December 11, 2017 12:01 PM
To: mozilla-dev-security-pol...@lists.mozilla.org 
<mailto:mozilla-dev-security-pol...@lists.mozilla.org> 
Subject: On the value of EV

Recently, researchers have been looking into the value proposition of EV
certificates, and more importantly, how easy it is to obtain certificates
that may confuse or mislead users - a purpose that EV is supposedly intended
to avoid.

James Burton was able to obtain a certificate for "Identity Verified", as
described in
https://0.me.uk/ev-phishing/ , which is a fully valid and legal EV
certificate, but which can otherwise confuse users.

Today, Ian Carroll disclosed how easy he was able to get a certificate for
"Stripe, Inc", registered within the US, and being granted the full EV
treatment as the 'legitimate' stripe.com. He's written up
the explanation at
https://clicktime.symantec.com/a/1/Fahzn1Xee7EnTLqF7kqdnVFVklYxzLF8hiDkGN7kU

RE: On the value of EV

2017-12-11 Thread Tim Hollebeek via dev-security-policy
 

Certainly, as you noted, one option is to improve EV beyond simply being an 
assertion of legal existence.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Monday, December 11, 2017 12:46 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Jonathan Rudenberg <jonat...@titanous.com>; Ryan Sleevi <r...@sleevi.com>; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: On the value of EV

 

 

 

On Mon, Dec 11, 2017 at 2:39 PM, Tim Hollebeek <tim.holleb...@digicert.com> wrote:

Nobody is disputing the fact that these certificates were legitimate given the 
rules that exist today.

However, I don't believe "technically correct, but intentionally misleading" 
information should be included in certificates.  The question is how best to 
accomplish that.

-Tim

 

Note: Jonathan did not mention "intentionally" misleading (instead "properly 
validated and have correct (but very misleading information) in them". 
Similarly, I noted that it was providing "accurate-but-misleading".

 

Unless the CA/Browser Forum has determined a way to discern intent (which would 
be a profound breakthrough in and of itself), we cannot and should not consider 
intent, and must merely evaluate based on result. As such, the only way to 
remedy this information is to deny one or more parties the ability to obtain 
certificates that correctly and accurately reflect their organizational 
information, which is nominally the value proposition of EV certificates. 
Unless we're willing to redefine EV certificates as being something other tied 
to the legal identifier, I don't believe it's fair or beneficial to suggest we 
can resolve this through validation means.

 

To that end, given the inherent confusion that results from legal identities - 
and, again, this is a fully valid legal identity being used - I raised the 
question as to whether or not it should be given the same UI treatment as the 
unambiguous, fully qualified URL.

 

One option, as noted, is to fully qualify the organization information, if 
users are to be expected to recognize the nuances of legal identities (and why 
so many sites seem to be in Delaware and Nevada). However, that seems 
exceptionally user-hostile and to ignore countless research studies, so another 
option would be to consider removing the (unqualified) legal identity from the 
address bar.

 


-Original Message-
From: Jonathan Rudenberg [mailto:jonat...@titanous.com]
Sent: Monday, December 11, 2017 12:34 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Ryan Sleevi <r...@sleevi.com>;
mozilla-dev-security-pol...@lists.mozilla.org

Subject: Re: On the value of EV


> On Dec 11, 2017, at 14:14, Tim Hollebeek via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
>
>
> It turns out that the CA/Browser Validation working group is currently
> looking into how to address these issues, in order to tighten up
> validation in these cases.

This isn’t a validation issue. Both certificates were properly validated and 
have correct (but very misleading information) in them. Business entity names 
are not unique, so it’s not clear how validation changes could address this.

I think it makes a lot of sense to get rid of the EV UI, as it can be trivially 
used to present misleading information to users in the most security-critical 
browser UI area. My understanding is that the research done to date shows that 
EV does not help users defend against phishing attacks, it does not influence 
decision making, and users don’t understand or are confused by EV.

Jonathan

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-11 Thread Tim Hollebeek via dev-security-policy
 

On the contrary, everything needs to be improved with time.  Just because it 
could be made better doesn’t make it useless or bad.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Monday, December 11, 2017 1:09 PM
To: Tim Hollebeek 
Cc: r...@sleevi.com; Jonathan Rudenberg ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: On the value of EV

 

 

 

On Mon, Dec 11, 2017 at 2:50 PM, Tim Hollebeek  > wrote:

 

Certainly, as you noted, one option is to improve EV beyond simply being an 
assertion of legal existence.

 

Does this mean we're in agreement that EV doesn't provide value to justify the 
UI then? ;-)

 

I say it loaded and facetiously, but I think we'd need to be honest and open 
that if we're saying something needs to be 'more' than EV, in order to be 
useful and meaningful to users - which is what justifies the UI surface, versus 
being useful to others, as Matt highlighted - then either EV meets the bar of 
UI utility or it doesn't. And if it doesn't, then orthogonal to and separate 
from efforts to add "Validation ++" (whether they be QWACS in eIDAS terms or 
something else), then there's no value in the UI surface today, and whether 
there's any value in UI surface in that Validation++ should be evaluated on the 
merits of Validation++'s proposals, and not by invoking EV or grandfathering it 
in.



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


CA generated keys

2017-12-09 Thread Tim Hollebeek via dev-security-policy
 

Apologies for the new thread.  It's difficult for me to reply to messages
that were sent before I joined Digicert.

 

With respect to CA generated SSL keys, there are a few points that I feel
should be considered.

 

First, third parties who are *not* CAs can run key generation and escrow
services, and then the third party service can apply for a certificate for
the key, and deliver the certificate and the key to a customer.  I'm not
sure how this could be prevented.  So if this actually did end up being a
Mozilla policy, the practical effect would be that SSL keys can be generated
by third parties and escrowed, *UNLESS* that party is trusted by Mozilla.
This seems ... backwards, at best.

 

Second, although I strongly believe that in general, as a best practice,
keys should be generated by the device/entity it belongs to whenever
possible, we've seen increasing evidence that key generation is difficult
and many devices cannot do it securely.  I doubt that forcing the owner of
the device to generate a key on a commodity PC is any better (it's probably
worse).  With an increasing number of small devices running web servers,
keys generated by audited, trusted third parties under whatever rules
Mozilla chooses to enforce about secure key delivery may actually in many
circumstances be superior to what would happen if the practice is banned.

 

-Tim

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-12 Thread Tim Hollebeek via dev-security-policy
This is useful feedback.  Thanks.

-Tim

-Original Message-
From: dev-security-policy 
[mailto:dev-security-policy-bounces+tim.hollebeek=digicert@lists.mozilla.org]
 On Behalf Of Jakob Bohm via dev-security-policy
Sent: Tuesday, December 12, 2017 6:36 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: On the value of EV

On 12/12/2017 01:08, Adam Caudill wrote:
>>>> Even if it is, someone filed the paperwork.  Court houses have
>>>> clerks, guards, video cameras, etc...  It still may present a real
>>>> physical point from which to bootstrap an investigation.
>>>
>>> Court houses also have online systems. I think if you read both Ian
>>> and James' work, you'll see the issues they're raising address this
>>> hypothetical.
>>
>> I shall certainly read their work closely on that matter.  In my
>> experience, these generally don't allow filings for new businesses
>> from those not previously known to the court/registrar in real life.
>
> I can say from my own experience, in some states in the US, it's a 
> trivial matter to create a company online, with no validation of 
> identity or other information. It takes about 10 minutes, and you'll 
> have all the paperwork the next day. When I did this (in a state I had 
> never done business in before), there was absolutely no identity 
> checks, no identity documents, nothing at all that would tie the 
> business to me if I had lied. Creating a business with no connection 
> to the people behind it is a very, very simple thing to do.
> 

A lot of people have posed suggestions for countermeasures so extreme they 
should not be taken seriously.  This includes discontinuing EV, requiring that 
companies cannot get EV certs during their first year of existence, or 
suggesting that only "famous" companies can get EV certificates.

Here is a more reasonable suggestion:

1. In the Fx UI, display the actual jurisdictionOfIncorporation instead
   of just the country, especially where those differ (For example
   Kentucky versus all-of-US).

2. Add a rule that if there is a big national or international company
   with a name, other companies cannot get certificates for the same
   name in related jurisdictions.  For example if there is a company
   listed on NYSE or NASDAQ, no similarly named US company can get an
   EV or OV certificate for that name.  Ditto for a reasonable list of
   national registries in each country.  CAs should be required to
   publicly state which "big-status" lists beat local
   company/organization registrations in each country, and similar for
   any special lists of major global organizations, such as Google or
   The Red Cross.

3. Minimum (not maximum) standards for such things need to be published
   by the CAB/F.

4. Note that stock exchanges should not be the only list of "nationally
   significant companies", as that would exclude a lot of companies with
   different ownership structures, such as Mozilla Inc. or the pre-IPO
   Google.  However the list criteria should be clear and not rely
   too heavily on the subjective experience of vetting agents etc.

5. It is worth noting that some countries do use a national company
   registry which ensures uniqueness directly.  Denmark is one such
   country (though the uniqueness checking is probably limited to exact
   matches).

6. It should still be possible for local branches and franchise holders
   to get EV certificates, if the bigger company approves as part of the
   vetting process.  For example Google Canada should be able to get a
   3rd party EV certificate if the international headquarters of Alphabet
   approves it.

Formulating this into formal rules, and selecting appropriate per-country and 
global lists of name-dominating organizations will both take some time and 
should be done in parallel.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  
https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10 This public 
discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org

RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Tim Hollebeek via dev-security-policy
It has generally been understood that a string still "contains at least 112
bits of output from a CSPRNG" if that string has been fed through an
encoding mechanism like Base64 or Base32.

Furthermore, explicit requirements about including mixed case or special
characters leads to some very, very bad and borderline toxic security
requirements.  NIST has recently recanted and admitted they were very, very
wrong in this area and we should not repeat their mistake.

Anything we can do to clarify that an awesome password is:

string password = Base32.Encode(CSPRNG.ReadBytes(n));

where n >= 112, we should do.

BTW United Airlines rates these sorts of passwords as "medium strength".
Password meters that only pay attention to password complexity and not
length are stupid.
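
For illustration, a minimal sketch of that construction in Python (assuming the
count is taken in bytes, i.e. 14 bytes of CSPRNG output = 112 bits; the function
name is mine, not anything from the BRs):

    import base64
    import secrets

    def generate_password(num_bytes: int = 14) -> str:
        # 14 bytes * 8 = 112 bits of output from a CSPRNG
        raw = secrets.token_bytes(num_bytes)
        # Base32 keeps the result typeable; stripping '=' padding loses no entropy
        return base64.b32encode(raw).decode("ascii").rstrip("=")

The result is roughly 23 characters drawn from A-Z and 2-7, which also sidesteps
the mixed-case and special-character requirements criticized above.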

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
Buschart,
> Rufus via dev-security-policy
> Sent: Thursday, May 3, 2018 6:01 PM
> To: Carl Mehner ; mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
> generation to policy)
> 
> Basically I like the new wording:
> 
> > PKCS#12 files [...] SHALL have a password containing at least 112 bits
> > of output from a CSPRNG, [...]
> 
> But I think there is a practical problem here: Directly using the output of any
> random number generator ("C" or not) to generate a password will lead to
> passwords which contain most probably characters that are either not
> printable or at least not type-able on a 'normal' western style keyboard.
> Therefore I think we need to reword the password strength section a little bit,
> maybe like the following:
> 
> > PKCS#12 files [...] SHALL have a 14 character long password consisting
> > of characters, digits and special characters based on output from a
> > CSPRNG, [...]
> 
> When I originally proposed my wording, I had the serial numbers in my mind
> (for which directly using the output of a CSPRNG works), but didn't think on the
> encoding problem.
> 
> 
> With best regards,
> Rufus Buschart
> 
> Siemens AG
> Information Technology
> Human Resources
> PKI / Trustcenter
> GS IT HR 7 4
> Hugo-Junkers-Str. 9
> 90411 Nuernberg, Germany
> Tel.: +49 1522 2894134
> mailto:rufus.busch...@siemens.com
> www.twitter.com/siemens
> 
> www.siemens.com/ingenuityforlife
> 
> Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim
Hagemann
> Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive
> Officer; Roland Busch, Lisa Davis, Klaus Helmrich, Janina Kugel, Cedrik
Neike,
> Michael Sen, Ralf P. Thomas; Registered offices: Berlin and Munich,
Germany;
> Commercial registries: Berlin Charlottenburg, HRB 12300, Munich, HRB 6684;
> WEEE-Reg.-No. DE 23691322
> 
> 
> > -Original Message-
> > From: dev-security-policy
> > [mailto:dev-security-policy-bounces+rufus.buschart=siemens.com@lists.mozilla.org]
> > On Behalf Of Carl Mehner via dev-security-policy
> > Sent: Wednesday, May 2, 2018 07:45
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: Policy 2.6 Proposal: Add prohibition on CA key generation
> > to policy
> >
> > On Tuesday, May 1, 2018 at 6:40:53 PM UTC-5, Wayne Thayer wrote:
> > > Ryan - thanks for raising these issues again. I still have concerns
> > > about getting this specific in the policy, but since we're now
> > > headed down that road...
> > >
> > > On Tue, May 1, 2018 at 7:13 PM, Ryan Hurst via dev-security-policy <
> > > dev-security-policy@lists.mozilla.org> wrote:
> > >
> > > > A few problems I see with the proposed text:
> > > >
> > > > - What is sufficient? I would go with a definition tied to the
> > > > effective strength of the keys it protects; in other words, you
> > > > should protect a 2048bit RSA key with something that offers
> > > > similar properties or that 2048bit key does not live up to its
> > > > 2048 bit properties. This is basically the same CSPRNG
> > > > https://www.keylength.com/
> > >
> > >
> > > The latest proposal replaces "sufficient" with "at least 64 bits of
> 

RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-14 Thread Tim Hollebeek via dev-security-policy
For the record, I posted someone else's strength testing algorithm, and pointed
out that it was bad.  I personally don't think building strength testing algorithms
is hopeless, and I think good ones are very useful.  I tend to agree with the current
NIST recommendation, which is to primarily only consider length, along with things
like history, dictionary words, and reuse.
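
As a sketch of what a length-first check could look like (illustrative only; the
threshold and the blocklist handling are my assumptions, not anything NIST or the
BRs prescribe):

    def acceptable_password(password: str,
                            blocklist: frozenset = frozenset(),
                            min_length: int = 16) -> bool:
        # Length and reuse of known-bad/breached values are the primary signals,
        # in line with the NIST guidance above; no composition rules
        # (mixed case, special characters) are enforced.
        return len(password) >= min_length and password.lower() not in blocklist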

But in this case, the public is at risk if the key is compromised, so I don't 
trust a 
password chosen by an end user, no matter what strength function it may or may 
not pass.

Some form of random password of sufficient length, with the randomness coming
from a CSPRNG, encoded into a more user friendly form, is the right answer here.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Ryan
> Hurst via dev-security-policy
> Sent: Friday, May 4, 2018 5:19 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
> generation to policy)
> 
> 
> > True, but CAs can put technical constraints on that to limit the acceptable
> passwords to a certain strength. (hopefully with a better strength-testing
> algorithm than the example Tim gave earlier)
> 
> Tim is the best of us -- this is hard to do well :)
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-14 Thread Tim Hollebeek via dev-security-policy
Yes, but as you correctly point out, this should be taken care of as part of 
the CAA-bis
effort.  The original RFC had enough errors with respect to web certificates; I 
think
it would be irresponsible to apply it to e-mail certificates right now without 
carefully
considering the consequences.

With CABF governance reform coming into effect on July 3rd, I'm cautiously 
optimistic
we can start writing requirements for e-mail certificates and phasing out bad 
practices
and phasing in good practices soon.  CAA for e-mail certificates is definitely 
worth
considering as part of that process.

Slightly higher priority is making sure authenticated encryption modes are used 
with
S/MIME, so people can't play silly games with CBC and harvested ciphertexts.
Everything really needs to start transitioning away from CBC ... but I digress.
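
As a purely illustrative sketch of the difference (Python, using the pyca/cryptography
library; the real work is a matter of S/MIME profile requirements, not application
code like this):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal(key: bytes, plaintext: bytes, associated_data: bytes = b"") -> bytes:
        # key must be 16, 24, or 32 bytes, e.g. AESGCM.generate_key(bit_length=256)
        # AES-GCM authenticates as it encrypts, so tampered ciphertext is rejected
        # at decryption time -- unlike unauthenticated CBC, where an attacker can
        # manipulate ciphertext and learn from how the recipient reacts.
        nonce = os.urandom(12)  # 96-bit nonce; never reuse with the same key
        return nonce + AESGCM(key).encrypt(nonce, plaintext, associated_data)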

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Ryan
> Sleevi via dev-security-policy
> Sent: Monday, May 14, 2018 11:39 AM
> To: Pedro Fuentes 
> Cc: mozilla-dev-security-policy 
> 
> Subject: Re: question about DNS CAA and S/MIME certificates
> 
> It seems perfectly reasonable and desirable to require that CAs, regardless of
> the type of certificate they are issuing, respect CAA.
> 
> If an email provider wishes to restrict some types of certificates (e.g.
> HTTPS) while allow others (e.g. S/MIME), this could be accomplished through
> additional expressions within the CAA syntax.
> 
> However, it would be a long-lasting, and tragic mistake if CAA was presumed to
> 'only' apply to HTTPS - because it would make the same mistake of
> nameConstraints - namely, everything that is not expressly listed as
> permitted/restricted is implicitly permitted - rather than doing what security
> practitioners have long known is the safe and secure base - forbid unless
> expressly permitted (default-deny).
> 
> In terms of order of concerns and constituents, the domain holders needs and
> security goals outweigh those of the notion of users 'owning' an email 
> address.
> 
> On Mon, May 14, 2018 at 3:45 AM, Pedro Fuentes via dev-security-policy < dev-
> security-pol...@lists.mozilla.org> wrote:
> 
> > Just to say that looking at this from Europe, I don't see this feasible.
> >
> > Citizens getting their personal eIDAS-compliant certificate go through
> > face-to-face validation and will give virtually any valid e-mail
> > address to appear in their certificate.
> >
> > El sábado, 12 de mayo de 2018, 2:30:58 (UTC+2), Wayne Thayer  escribió:
> > > I created a new issue suggesting that we add this requirement to
> > > Mozilla
> > > policy: https://github.com/mozilla/pkipolicy/issues/135
> > >
> > > On Wed, May 9, 2018 at 4:59 PM Ryan Sleevi via dev-security-policy <
> > > dev-security-policy@lists.mozilla.org> wrote:
> > >
> > > > On Wed, May 9, 2018 at 11:47 AM, Adrian R. via dev-security-policy
> > > > < dev-security-policy@lists.mozilla.org> wrote:
> > > >
> > > > > Hello,
> > > > > this question is somewhat outside the current Baseline
> > > > > Requirements,
> > > > but...
> > > > >
> > > > > wouldn't it be normal for the same CAA rules for server
> > > > > certificates
> > to
> > > > > also apply to client certificates when the email address is for
> > > > > a
> > domain
> > > > > that already has a valid CAA policy published in DNS?
> > > > >
> > > > >
> > > > > RFC 6844 doesn't seem to make any distinction between server and
> > S/MIME
> > > > > client certificates, it combines them together by referring to
> > > > certificates
> > > > > "for that domain" as a whole.
> > > > >
> > > > >
> > > > > i tested this last night - i obtained an email certificate from
> > > > > one
> > of
> > > > the
> > > > > CAs participating here (not for this exact address though) and
> > > > > it was happily issued even if CAA records authenticated by
> > > > > DNSSEC do not
> > allow
> > > > > their CA to issue for this domain.
> > > > >
> > > > > Now, this is technically not a mis-issuance because it was a
> > > > > proper email-validated address and their CPS says that CAA is
> > > > > only checked
> > for
> > > > > server-type certificates. It doesn't say anything about CAA
> > validation
> > > > for
> > > > > such client certificates.
> > > > >
> > > > > I got in touch with them and they seemed equally surprised by
> > > > > such intended use case for CAA, so my second question is: is
> > > > > anyone
> > actually
> > > > > checking CAA records for client certificates where an email
> > > > > address
> > is
> > > > > included in the certificate subject info and the EKU includes
> > > > > Secure
> > > > Email?
> > > > >
> > > > >
> > > > > Or is CAA usually checked only for server-type certificates,
> > > > > even if
> > RFC
> > > > > 6844 refers to certificates "for that domain" as a whole?
> > > > >
> > > >
> > > > CAs are 

RE: question about DNS CAA and S/MIME certificates

2018-05-14 Thread Tim Hollebeek via dev-security-policy
There’s an IETF component, but minimum necessary standards for email 
certificate issuance is a policy issue, not a technical one.

 

Somewhere, it needs to say “CAs issuing e-mail certificates MUST check CAA in 
accordance with CAA-bis.”

 

-Tim

 

With CABF governance reform coming into effect on July 3rd, I'm cautiously 
optimistic
we can start writing requirements for e-mail certificates and phasing out bad 
practices
and phasing in good practices soon.  CAA for e-mail certificates is definitely 
worth
considering as part of that process.

 

Isn't this an IETF issue? Shouldn't those who issue e-mail certificates begin 
looking at the level of authentication provided for domains today?



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-14 Thread Tim Hollebeek via dev-security-policy

> Today this is a "non-issue" because nothing is obligating CAs to respect 
> CAA,
> and thus they can (and are) doing the thing that helps them issue more
> certificates (and, presumably, make more money) - but that doesn't 
> necessarily
> mean its the right thing.

I can think of at least one CA that values "# of right things done" more
highly than "# of certificates issued".  Actually, I can think of two or 
three.
There are probably more.

> Yes, it means that introducing CAA restrictions for
> S/MIME necessarily means there will need to be a way to distinguish these
> cases, so that an organization could restrict e-mail vs HTTPS - so CAs that 
> wish
> to issue S/MIME should start working on these.

Right.  CAA-bis is a pre-requisite here.

As Neil correctly notes, it would be foolish to try to impose semantics and 
apply
policy from the web CAA records onto email certificate issuance without first
figuring out what the semantics, requirements and policies should be for email
certificate issuance.

-Tim


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-14 Thread Tim Hollebeek via dev-security-policy
Normally I’d agree that IETF cannot and should not be a blocker for action at 
Mozilla and/or CABF, but based on our experience with CAA for web certificates, 
I would encourage people to get in their time machines and go back two to three 
years, and listen to Tim standing up and saying “I like CAA for the Web PKI, 
but what have we not thought of?”

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Monday, May 14, 2018 8:24 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: r...@sleevi.com; Pedro Fuentes <pfuente...@gmail.com>; 
mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: question about DNS CAA and S/MIME certificates

 

I don't actually think there is any IETF component to this. There can be, but 
it's not required to be.

 

On Mon, May 14, 2018 at 6:20 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

There’s an IETF component, but minimum necessary standards for email 
certificate issuance is a policy issue, not a technical one.



Somewhere, it needs to say “CAs issuing e-mail certificates MUST check CAA in 
accordance with CAA-bis.”



-Tim




With CABF governance reform coming into effect on July 3rd, I'm cautiously 
optimistic
we can start writing requirements for e-mail certificates and phasing out bad 
practices
and phasing in good practices soon.  CAA for e-mail certificates is definitely 
worth
considering as part of that process.



Isn't this an IETF issue? Shouldn't those who issue e-mail certificates begin 
looking at the level of authentication provided for domains today?


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-04 Thread Tim Hollebeek via dev-security-policy

> Maybe you want n = 112 / 8 = 14 bytes.

Doh!  Yes.

-Tim



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-16 Thread Tim Hollebeek via dev-security-policy
When we debated it last, my predictions were hypothetical.



I wish they had remained hypothetical.



-Tim



From: Wayne Thayer [mailto:wtha...@mozilla.com]
Sent: Wednesday, May 16, 2018 12:33 AM
To: Tim Hollebeek ; mozilla-dev-security-policy 

Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key 
generation to policy)



On Tue, May 15, 2018 at 9:17 PM Tim Hollebeek  > wrote:

> My only objection is that this will cause key generation to shift to partners and
> affiliates, who will almost certainly do an even worse job.

This is already a Mozilla requirement [1] - we're just moving it into the
policy document.

> If you want to ban key generation by anyone but the end entity, ban key
> generation by anyone but the end entity.

We've already debated this [2] and didn't come to that conclusion.

> -Tim



[1] 
https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices#Distributing_Generated_Private_Keys_in_PKCS.2312_Files

[2] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/MRd8gDwGGA4/AC4xgZ9CBgAJ



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Tim Hollebeek via dev-security-policy
CAA is HTTPS only today.  That’s the reality.

 

I don’t have to want to argue in favor of reality.  Reality wins regardless of 
what I do.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Monday, May 14, 2018 11:55 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: r...@sleevi.com; Pedro Fuentes <pfuente...@gmail.com>; 
mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: question about DNS CAA and S/MIME certificates

 

I'm not sure how that's advancing the discussion forward or adding new 
information. The discussion of CAA and wanting to get feedback predates even 
the IETF finalization, as multiple browsers kept encouraging CAs to experiment 
with and attempt to deploy CAA so that we could make sure the kinks were ironed 
out.

 

Regardless of posturing and grandstanding for past statements, can we at least 
agree that a model that argues "fail open" as a solution is a fundamentally 
insecure one? If there are proponents of a 'fail open' model, especially 
amongst CAs, then does it behove them to specify as quickly as possible a 'fail 
closed' model, so that we don't have to try and divine intent and second guess 
site operators as to whether they meant to restrict HTTPS or everything?

 

Put differently, if you want to argue that CAA is HTTPS only, then you need to 
define a way to ensure it's not HTTPS-only, and ASAP. Otherwise, the solution 
is that when S/MIME BRs come around, we simply cannot and should not second 
guess site operators and try to argue CAA was only 'those' type of certs - and 
instead require anyone with a CAA record to explicitly opt-in to allowing 
(potentially unbounded) S/MIME. I don't see any other realistic or practical 
solution - you can't say "This protects you" and then propose 2 years down the 
road, with S/MIME BRs, that it didn't actually 'protect' the site operator - 
the same way you can't say "Restrict access to these five email addresses" and 
then introduce a dozen more 2 years down the road.

 

On Mon, May 14, 2018 at 11:07 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

Normally I’d agree that IETF cannot and should not be a blocker for action at 
Mozilla and/or CABF, but based on our experience with CAA for web certificates, 
I would encourage people to get in their time machines and go back two to three 
years, and listen to Tim standing up and saying “I like CAA for the Web PKI, 
but what have we not thought of?”



-Tim



From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Monday, May 14, 2018 8:24 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: r...@sleevi.com; Pedro Fuentes <pfuente...@gmail.com>;
mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: question about DNS CAA and S/MIME certificates



I don't actually think there is any IETF component to this. There can be, but 
it's not required to be.



On Mon, May 14, 2018 at 6:20 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

There’s an IETF component, but minimum necessary standards for email 
certificate issuance is a policy issue, not a technical one.



Somewhere, it needs to say “CAs issuing e-mail certificates MUST check CAA in 
accordance with CAA-bis.”



-Tim




With CABF governance reform coming into effect on July 3rd, I'm cautiously 
optimistic
we can start writing requirements for e-mail certificates and phasing out bad 
practices
and phasing in good practices soon.  CAA for e-mail certificates is definitely 
worth
considering as part of that process.



Isn't this an IETF issue? Shouldn't those who issue e-mail certificates begin 
looking at the level of authentication provided for domains today?


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy




___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-16 Thread Tim Hollebeek via dev-security-policy


> On Wednesday, May 16, 2018 at 2:16:14 AM UTC-4, Tim Hollebeek wrote:
> > This is the point I most strongly agree with.
> >
> > I do not think it's at odds with the LAMPS charter for 6844-bis,
> > because I do not think it's at odds with 6844.
> 
> Updating 6844 is easy. Just define the tag and specify scope for issue /
> issuewild / issueclient sensibly.

Yup.  I'm optimistic it's something we can get done quickly.

-Tim



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Tim Hollebeek via dev-security-policy
Blatantly false.  I actually suspect DigiCert might already support CAA for 
email.  I haven’t double-checked.

 

-Tim

 

The only reason that "CAA is HTTPS-only" today is because CAs are not 
interested in doing the 'right' thing.

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Tim Hollebeek via dev-security-policy
I think CAA is and should be HTTPS only until there are clear rules for how it 
should work for email, and how to keep web CAA from interfering with email CAA. 
 E-mail is currently the wild west and that needs to be fixed.

 

I’m strongly in favor of email CAA, once we get it ‘right’.  But there’s no 
document out there that specifies what ‘right’ is yet.  And there isn’t much 
value to CAA if only a few CAs do it.

 

That’s why I think we need 6844-bis first.  Or another RFC explaining CAA for 
email.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Tuesday, May 15, 2018 12:44 PM
To: Tim Hollebeek 
Cc: r...@sleevi.com; Pedro Fuentes ; 
mozilla-dev-security-policy 
Subject: Re: question about DNS CAA and S/MIME certificates

 

Tim,

 

Could you clarify then. Are you disagreeing that CAA is HTTPS only? As these 
were your words only 3 hours ago - 
https://groups.google.com/d/msg/mozilla.dev.security.policy/NIc2Nwa9Msg/0quxT0CpCQAJ

 

On Tue, May 15, 2018 at 12:28 PM, Tim Hollebeek  > wrote:

Blatantly false.  I actually suspect DigiCert might already support CAA for 
email.  I haven’t double-checked.

 

-Tim

 

The only reason that "CAA is HTTPS-only" today is because CAs are not 
interested in doing the 'right' thing.

 

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Tim Hollebeek via dev-security-policy
The LAMPS re-charter is still open for discussion.  I personally have no 
problem with CAA for email being in scope for 6844-bis.  I’m actually in favor 
of that if it really is currently out of scope (I haven’t checked).  Best to 
ask on the LAMPS charter thread.

 

-Tim

 

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Tuesday, May 15, 2018 12:41 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Ryan Sleevi <r...@sleevi.com>; Pedro Fuentes <pfuente...@gmail.com>; 
mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: question about DNS CAA and S/MIME certificates

 

I don't see how this debate is leading us to a solution. Can we just 
acknowledge that, prior to this discussion, the implications of CAA for the 
issuance of email certificates was not well understood by CAs or domain name 
registrants?

 

I share the desire to have a system that fails closed in the presence of any 
CAA record, but that is a challenge as long as ecosystem participants view CAA 
as applicable only to server certificates. The sooner we address this issue, 
the better.

 

Mozilla policy isn't a great place to define CAA syntax. The CA/Browser Forum 
currently has no jurisdiction over email, so at best could define syntax to 
limit CAA scope to server certificates. The scope of the LAMPS recharter for 
6844bis appears too narrow to include this. What is the best path forward?

 

- Wayne

 

On Tue, May 15, 2018 at 9:29 AM Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

Blatantly false.  I actually suspect DigiCert might already support CAA for 
email.  I haven’t double-checked.



-Tim



The only reason that "CAA is HTTPS-only" today is because CAs are not 
interested in doing the 'right' thing.



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-16 Thread Tim Hollebeek via dev-security-policy
This is the point I most strongly agree with.



-Tim



I do not think it's at odds with the LAMPS charter for 6844-bis, because I do 
not think it's at odds with 6844.



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-21 Thread Tim Hollebeek via dev-security-policy
Ok.  My biggest concern is not you guys, who are pretty security conscious,
but whether we need to improve the language to make it more clear that the
logging has to be sufficient so that in the event of a bug in the CAA logic,
it is possible to determine which issued certificates are affected and how.

It may be hard to come up with such language, but that was the intent of the
language that was currently there, and if it failed to adequately express
that and needs improvement, we should consider improvements.
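
For instance, a minimal sketch (Python with dnspython; everything here is
illustrative, including the issuer domain) of the kind of logging that makes that
determination possible -- record the CAA RRset exactly as returned, before any
normalization, alongside the decision:

    import logging
    import dns.resolver

    log = logging.getLogger("caa")

    def caa_permits_issuance(domain: str, issuer_domain: str = "ca.example") -> bool:
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            log.info("CAA %s: empty RRset, issuance not restricted", domain)
            return True
        # Log the records verbatim (case preserved) so a later bug can be audited.
        log.info("CAA %s: raw records %r", domain, [r.to_text() for r in answers])
        permitted = any(
            r.tag.decode().lower() == "issue" and r.value.decode() == issuer_domain
            for r in answers
        )
        log.info("CAA %s: issuance %s", domain, "permitted" if permitted else "denied")
        return permitted

This skips the tree climbing, issuewild, and critical-flag handling that RFC 6844
requires; the point is only that the raw inputs are preserved next to the outcome.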

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> jacob.hoffmanandrews--- via dev-security-policy
> Sent: Friday, May 18, 2018 2:05 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident
> 
> On Friday, May 18, 2018 at 10:52:25 AM UTC-7, Tim Hollebeek wrote:
> > > Our logging of the CAA records processed does not provide the case
> > > information we need to determine whether other issuances were
> > > affected by this bug.
> >
> > We put a requirement in the BRs specifically so this problem could not occur:
> >
> > "The CA SHALL log all actions taken, if any, consistent with its
> > processing practice."
> 
> To be clear, we do log every CAA lookup
> (https://github.com/letsencrypt/boulder/blob/master/va/caa.go#L47). However,
> we do it at too high a level of abstraction: It doesn't contain the
> unprocessed return values from DNS. We plan to improve
> that as part of our remediation.
> 
> Our ideal would be to log all DNS traffic associated with each issuance,
> including A, AAAA, TXT, and CAA lookups. We initially experimented with this
> by capturing the full verbose output from our recursive resolver, but concluded
> that it was not usable for investigations because it was not possible to associate
> specific query/response pairs with the validation request that caused them (for
> instance, consider NS referrals, CNAME indirection, and caching). I think this is
> definitely an area of improvement we could pursue in the DNS ecosystem that
> would be particularly beneficial for CAs.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-22 Thread Tim Hollebeek via dev-security-policy

> Given the TTLs and the key sizes in use on DNSSEC records, why do you believe
> this?

DigiCert is not sympathetic to disk space as a reason to not keep sufficient
information
in order to detect misissuance due to CAA failures.

In fact, inspired by this issue, we are taking a look internally at what we
log, and
considering the feasibility of logging even more information, including full
DNSSEC 
signed RRs.
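
As a rough sketch of what that could look like (Python with dnspython; the resolver
address is a placeholder), the query can be made with the DO bit set and the complete
wire-format response archived, RRSIG records included:

    import dns.message
    import dns.query
    import dns.rdatatype

    def fetch_signed_caa(domain: str, resolver_ip: str = "192.0.2.53") -> bytes:
        query = dns.message.make_query(domain, dns.rdatatype.CAA, want_dnssec=True)
        # TCP avoids UDP truncation of large DNSSEC responses.
        response = dns.query.tcp(query, resolver_ip, timeout=5)
        return response.to_wire()  # raw bytes, suitable for write-once archival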

-Tim



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-22 Thread Tim Hollebeek via dev-security-policy
What precisely was the antecedent of “this” in your message?  Re-reading it, 
I’m not clear which sentence you were referring to.

 

The only reasons I can think of for not keeping DNSSEC signed RRs are storage 
and/or performance, and we think those concerns should not be the driving force 
in logging requirements (within reason).

 

Are there other good reasons not to keep the DNSSEC signed RRs associated with 
DNSSEC CAA lookups?

 

-Tim



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-22 Thread Tim Hollebeek via dev-security-policy
What that wall of text completely misses is the point I and others have been 
trying to make.

 

The logs have to have enough information so you don’t end up in the situation 
Let’s Encrypt is currently, and unfortunately, in.  Yes, what they did is 
compliant, and that’s exactly what most concerns me.  It’s not about Let’s 
Encrypt, which just appears to have made a mistake, it happens.  It’s about 
whether the rules need to be improved to reduce the likelihood of another CA 
ending up in the same situation.

 

As a separate issue, we’re looking into making sure we never end up in that 
situation, and as you say, other CAs should be too.  We always reserve the 
right to do things that vastly exceed minimal compliance.

 

That should be something you should support, instead of producing increasingly 
long and condescending walls of text.  I know how DNSSEC works.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Tuesday, May 22, 2018 12:43 PM
To: Tim Hollebeek 
Cc: r...@sleevi.com; Nick Lamb ; mozilla-dev-security-policy 
; Jacob Hoffman-Andrews 

Subject: Re: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

 

 

 

On Tue, May 22, 2018 at 12:14 PM, Tim Hollebeek  > wrote:

What precisely was the antecedent of “this” in your message?  Re-reading it, 
I’m not clear which sentence you were referring to.

 

The only reasons I can think of for not keeping DNSSEC signed RRs are storage 
and/or performance, and we think those concerns should not be the driving force 
in logging requirements (within reason).

 

Are there other good reasons not to keep the DNSSEC signed RRs associated with 
DNSSEC CAA lookups?

 

I believe you are operating on a flawed understanding of the value of DNSSEC 
for forensic purposes, given the statement that "I absolutely would expect 
Let's Encrypt to produce DNSSEC signed RRs that match up to their story. The 
smoking gun for such scenarios exists, and CAs are, or should be, under no 
illusions that it's their job to produce it."

 

To me, this demonstrates a flawed, naive understanding of DNSSEC, and in 
particular, its value in forensic post-issuance claims, and also a flawed 
understanding about how DNS works, in a way that, as proposed, would be rather 
severely damaging to good operation and expected use of DNS. While it's easy to 
take shots on the basis of this, or to claim that the only reason not to store 
is because disk space, it would be better to take a step back before making 
those claims.

 

DNSSEC works as short-lived signatures, in which the proper operation of DNSSEC 
is accounted for through frequent key rotation. DNS works through relying on 
factors such as TTLs to serve as effective safeguards against overloading the 
DNS system, and its hierarchal distribution allows for effective scaling of 
that system.

 

A good primer to DNSSEC can be had at 
https://www.cloudflare.com/dns/dnssec/how-dnssec-works/ , although I'm sure 
many other introductory texts would suffice to highlight the problem.

 

Let us start with a naive claim that the CA should be able to produce the 
entire provenance chain for the DNSSEC-signed leaf record. This would be the 
chain of KSKs, ZSKs, the signed RRSets, as well as the DS records, disabling 
caching for all of these (or, presumably, duplicating it such that the .com KSK 
and ZSK are recorded for millions of certs).

 

However, what does this buy us? Considering that the ZSKs are intentionally 
designed to be frequently rotated (24 - 72 hours), thus permitting weaker key 
sizes (RSA-512), a provenance chain ultimately merely serves to establish, in 
practice, one of a series of 512-bit RSA signatures. Are we to believe that 
these 512-bit signatures, on whose keys have explicitly expired, are somehow a 
smoking gun? Surely not, that'd be laughably ludicrous - and yet that is 
explicitly what you propose in the quoted text.

 

So, again I ask, what is it you're trying to achieve? Are you trying to provide 
an audit trail? If so, what LE did is fully conformant with that, and any CA 
that wishes to disagree should look inward, and see whether their audit trail 
records actual phone calls (versus records of such phone calls), whether their 
filing systems store the actual records (versus scanned copies of those 
records), whether all mail is delivered certified delivery, and how they recall 
the results of that certified delivery.

 

However, let us not pretend that recording the bytes-on-the-wire DNS responses, 
including for DNSSEC, necessarily helps us achieve some goal about repudiation. 
Rather, it helps us identify issues such as what LE highlighted - a need for 
quick and efficient information scanning to discover possible impact - which is 
hugely valuable in its own right, and is an area where I am 

RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-18 Thread Tim Hollebeek via dev-security-policy
> Our logging of the CAA records processed does not provide the case
> information we need to determine whether other issuances were affected by
> this bug.

We put a requirement in the BRs specifically so this problem could not occur:

"The CA SHALL log all actions taken, if any, consistent with its processing 
practice."

-Tim


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-15 Thread Tim Hollebeek via dev-security-policy
My only objection is that this will cause key generation to shift to partners 
and
affiliates, who will almost certainly do an even worse job.

If you want to ban key generation by anyone but the end entity, ban key 
generation by anyone but the end entity.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Tuesday, May 15, 2018 4:10 PM
> To: Dimitris Zacharopoulos 
> Cc: mozilla-dev-security-policy 
> 
> Subject: Re: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key
> generation to policy)
> 
> I'm coming to the conclusion that this discussion is about "security 
> theater"[1].
> As long as we allow CAs to generate S/MIME key pairs, there are gaping holes
> in the PKCS#12 requirements, the most obvious being that a CA can just
> transfer the private key to the user in pem format! Are there any objections 
> to
> dropping the PKCS#12 requirements altogether and just forbidding key
> generation for TLS certificates as follows?
> 
> CAs MUST NOT generate the key pairs for end-entity certificates that have an
> EKU extension containing the KeyPurposeIds id-kp-serverAuth or
> anyExtendedKeyUsage.
> 
> - Wayne
> 
> [1] https://en.wikipedia.org/wiki/Security_theater
> 
> On Tue, May 15, 2018 at 10:23 AM Dimitris Zacharopoulos 
> wrote:
> 
> >
> >
> > On 15/5/2018 6:51 μμ, Wayne Thayer via dev-security-policy wrote:
> >
> > Did you consider any changes based on Jakob’s comments?  If the
> > PKCS#12 is distributed via secure channels, how strong does the password
> > need to be?
> >
> > I think this depends on our threat model, which to be fair is not
> > something we've defined. If we're only concerned with protecting the
> > delivery of the
> > PKCS#12 file to the user, then this makes sense. If we're also
> > concerned with protection of the file while in possession of the user,
> > then a strong password makes sense regardless of the delivery mechanism.
> >
> >
> > I think once the key material is securely delivered to the user, it is
> > no longer under the CA's control and we shouldn't assume that it is.
> > The user might change the passphrase of the PKCS#12 file to whatever,
> > or store the private key without any encryption.
> >
> >
> > Dimitris.
> >
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Tim Hollebeek via dev-security-policy
I agree with Phillip; if we want email CAA to be a thing, we need to define
and
specify that thing.  And I think it should be a thing.

New RFCs are not that hard and need not even be that long.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Phillip
> Hallam-Baker via dev-security-policy
> Sent: Tuesday, May 15, 2018 9:22 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: question about DNS CAA and S/MIME certificates
> 
> When I wrote CAA, my intention was for it to apply to SSL/TLS certs only.
I did
> not consider S/MIME certs to be relevant precisely because of the
> al...@gmail.com problem.
> 
> I now realize that was entirely wrong and that there is in fact great
utility in
> allowing domain owners to control their domains (or not).
> 
> If gmail want to limit the issue of Certs to one CA, fine. That is a
business choice
> they have made. If you want to have control of your online identity, you
need
> to have your own personal domain. That is why I have hallambaker.com. All
my
> mail is forwarded to gmail.com but I control my identity and can change
mail
> provider any time I want.
> 
> One use case that I see as definitive is to allow paypal to S/MIME sign
their
> emails. That alone could take a bite out of phishing.
> 
> But even with gmail, the only circumstance I could see where a mail
service
> provider like that would want to restrict cert issue to one CA would be if
they
> were to roll out S/MIME with their own CA.


RE: Bit encoding (AW: Policy 2.6 Proposal: Add prohibition on CA key generation to policy)

2018-05-15 Thread Tim Hollebeek via dev-security-policy

> This going to require 19 randomly generated Base64 characters and that does
> not include removing common confused characters which will drive up the
> length a bit more, but if this is what the Mozilla risk assessment came up 
> with,
> then we’ll all have to comply.  I hope there is a sufficiently long time for 
> CAs to
> change their processes and APIs and to roll out updated training and
> documentation to their customers (for this unplanned change).

A reasonable transition period is reasonable.

> 2) Trying to compute the entropy of a user-generated password is nearly
> impossible.  According to NIST Special Publication 800-63, a good 20 character
> password will have just 48 bits of entropy, and characters after that only
> add 1 bit of entropy each.  Users stink at generating entropy (right Tim?)

Yes, users struggle to generate a single bit of entropy per character.  This is 
why
users should not generate keys or passwords.

An encoded CSPRNG can hit 5-6 bits of entropy per character, so 20 is a pretty 
good number for password lengths.  Copy/paste solves most of the usability 
issues.

There are some subtleties that require some care, but the general gist is right.

-Tim
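
For a concrete sense of the numbers (a sketch, not normative text): at roughly 6 bits per character of Base64-encoded CSPRNG output, 19-20 characters clears 112 bits, and Python's secrets module produces that directly:

    import secrets

    # 16 CSPRNG bytes = 128 bits of entropy, comfortably above the 112-bit
    # strength of RSA-2048; token_urlsafe() Base64-encodes them (~22 characters).
    password = secrets.token_urlsafe(16)
    print(len(password), password)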





RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-23 Thread Tim Hollebeek via dev-security-policy
Right, this is a fair and excellent summary, and there are things I would 
improve about my responses if I had access to a time machine.  Constraints on 
my time are pretty brutal right now, and that does not always allow me to 
express myself as well as I would like.

 

I perceived, possibly incorrectly, a hesitation that adding at least some 
information about DNSSEC lookups would blow up the size of log files and would 
be difficult at scale.  Our discussion internally reached the conclusion that 
we’re supportive of requiring even more extensive CAA logging, even if it is 
expensive.  At Let’s Encrypt’s scale and our scale, that’s an important 
concern, and we think it should be publicly discussed (Comodo’s perspective 
would be interesting too).  So that’s what I was thinking and ended up saying 
really, really badly.

 

Your discussion here is excellent and worthy of a longer term discussion.  I 
was thinking more along the lines of “are there any appropriate quick fixes we 
might want to consider?”  The answer may be no.  But I do find it dangerous 
that minimal compliance with the current requirement can lead to situations 
like this.  That alone makes me want to improve the requirement.

 

And while I’m on the subject, since it’s related: Jeremy and I do have a new 
policy of trying to err on the side of publicly oversharing internal 
information and deliberations, whenever we can.  We think it’s the right thing 
to do.

 

-Tim

 

I definitely think we've gone off the rails here, so I want to try to right the 
cart here. You jumped in on a thread talking about DNSSEC providing smoking 
guns [1] - which is a grandstanding bad idea. It wasn't yours, but it's one 
that you jumped into the middle of the discussion, and began offering other 
interpretations (such as it being about disk space [2]), when the concern was 
precisely about trying to find a full cryptographic proof that can be stable 
over the lifetime of the certificate - which for Let's Encrypt is 90 days, but 
for some CAs, is up to 825-days [3].

 

As a systemic improvement, I think we're in violent agreement about the goal - 
which is to make sure that when things go wrong, there are reliable ways to 
identify where and why they went wrong - and perhaps simply in disagreement on 
the means and ways to effect that. You posited that the original motivation was 
that this specifically could not occur - but I don't think that was actually 
shared or expressed, precisely because there were going to be inherent limits 
to that information. I provided examples of where and how, under the existing 
BRs, that the steps taken are both consistent with and, arguably, above and 
beyond, what is required elsewhere - which is not to say we should not strive 
for more, but is to put down the notion from (other) contributors that somehow 
there's been less here.

 

I encouraged you to share more of your thinking, precisely because this is what 
allows us to collectively evaluate the fitness for purpose [4] - and the 
potential risks that well-intentioned changes can pose [5]. I don't think it 
makes sense to anchor on the CAA aspect as the basis to improve [6], when the 
real risk is the validation methods themselves. If our intent is to provide 
full data for diagnostic purposes, then how far does that rabbit hole go - do 
HTTP file-based validations need to record their DNS lookup chains? Their IP 
flows? Their BGP peer broadcasts? The question of this extreme rests on what is 
it we're trying to achieve - and the same issue here (namely, CAA being 
misparsed) could just as equally apply to HTTP streams, to WHOIS dataflows, or 
to BGP peers.

 

That's why I say it's systemic, and why I say that we should figure out what it 
is we're trying to achieve - and misguided framing [1] does not help further 
that.

 

[1] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/7L2_zfgfCwAJ

[2] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/gUT3t7B1CwAJ

[3] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/O7QTGmInCwAJ

[4] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/juHBkWV4CwAJ

[5] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/O5rwCV96CwAJ

[6] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/7AcHi_MgKWE/lpU2dpl8CwAJ

 

 

On Wed, May 23, 2018 at 11:29 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

You’re free to misattribute whatever motives you want to me.  They’re not true. 
 In fact, I would like to call on you yet again to cease speculating and 
imputing malicious motives onto well-intentioned posts.



The CAA logging requirements failed in this instance.  How do we make them 
better?  I’ll repeat that this isn’t a criticism of Let’s Encrypt, other than 
they had a bug like many of us have.  M

RE: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident

2018-05-23 Thread Tim Hollebeek via dev-security-policy
You’re free to misattribute whatever motives you want to me.  They’re not true. 
 In fact, I would like to call on you yet again to cease speculating and 
imputing malicious motives onto well-intentioned posts.

 

The CAA logging requirements failed in this instance.  How do we make them 
better?  I’ll repeat that this isn’t a criticism of Let’s Encrypt, other than 
they had a bug like many of us have.  Mozilla wants this to be a place where we 
can reflect on incidents and improve requirements.

 

I’m not looking for something that is full cryptographic proof, that’s can’t be 
made to work.  What are the minimum logging requirements so that CAA logs can 
be used to reliably identify affected certificates when CAA bugs happen?  
That’s the discussion going on internally here.  Love to hear other thoughts on 
this issue.

 

Also, we’re trying to be increasingly transparent about what goes on at 
DigiCert.  I believe we’re the only CA that publishes what we will deliver 
*next* sprint.  I would actually like to share much MORE information than we 
currently do, and have authorization to do so, but the current climate is not 
conducive to that.

 

The fact that I tend to get attacked in response to my sharing of internal 
thinking and incomplete ideas is not helpful or productive.  It will 
unfortunately just cause us to have to stop being as transparent.

 

-Tim

 

I am opposed to unnecessary grand-standing and hand-wringing, when demonstrably 
worse things are practiced.

 





RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-01 Thread Tim Hollebeek via dev-security-policy
I get that, but any CA that can securely erase and forget the user's 
contribution to the password can certainly do the same thing to the entire 
password, so I'm not seeing the value of the extra complexity and interaction.

 

-Tim

 

From: Ryan Hurst [mailto:ryan.hu...@gmail.com] 
Sent: Tuesday, May 1, 2018 3:49 PM
To: Tim Hollebeek 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

 

> I'm not sure I agree with this as a recommendation; if you want both parties
> to provide inputs to the generation of the password, use a well-established
> and vetted key agreement scheme instead of ad hoc mixing.

> Of course, at that point you have a shared transport key, and you should 
> probably
> just use a stronger, more modern authenticated key block than PKCS#12,
> but that's a conversation for another day.

 

I say this because it is desirable that the CA plausibly not be able to decrypt 
the key even if it holds the encrypted key blob.

 

 

 

On Tue, May 1, 2018 at 12:40 PM, Tim Hollebeek  > wrote:


> - What is sufficient? I would go with a definition tied to the effective 
> strength of
> the keys it protects; in other words, you should protect a 2048bit RSA key 
> with
> something that offers similar properties or that 2048bit key does not live 
> up to
> its 2048 bit properties.

Yup, this is the typical position of standards bodies for crypto stuff.  I 
noticed that
the 32 got fixed to 64, but it really should be 112.

> - The language should recommend that the "password" be a value that is a mix
> of a user-supplied value and the CSPRNG output and that the CA can not store
> the user-supplied value for longer than necessary to create the PKCS#12.

I'm not sure I agree with this as a recommendation; if you want both parties
to provide inputs to the generation of the password, use a well-established
and vetted key agreement scheme instead of ad hoc mixing.

Of course, at that point you have a shared transport key, and you should 
probably
just use a stronger, more modern authenticated key block than PKCS#12,
but that's a conversation for another day.

> - The language requires the use of a password when using PKCS#12s, but
> PKCS#12 also supports both symmetric and asymmetric key-based protection.
> While these are not broadly supported, the text should not prohibit the use of
> stronger mechanisms than 3DES and a password.

Strongly agree.

-Tim

 





RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-05-02 Thread Tim Hollebeek via dev-security-policy

> I'd recommend making a requirement that it be "protected" by at least as
many
> bits of strength as the key it protects. Not doing so could cause
compliance
> issues: things like PCI [1] and the NIST [2] recommendations require this
type of
> protection.

You don't have compliance problems because my proposal is weaker than PCI
and NIST (ANSI and ISO also have the same requirement).  It focuses on
RSA-2048
keys because those are what are prevalent in the industry.

If your key is larger than 2048 bits, you can and should use more entropy in
your password, and you have to if you need to comply with PCI/ANSI/ISO [1].
But that's ok because the requirement is >= 112, not exactly 112.

> However, like Wayne said, this still leaves room for interpretation, if
> mentioning bits is necessary, can we just bump it up to 256 rather than
112?

256 is overkill.  People do have to type these passwords sometimes.
112 is the NIST-blessed strength of RSA-2048 [2].  That's why I think it's
the
right number.

-Tim

[1] I left out NIST because it isn't actually a standards body, it just
provides guidance.

[2] Yes, comparing symmetric and asymmetric strengths gets all applesy and
orangey
sometimes, but it's in the right ballpark, and it's a useful number since
it's widely used
and you can point to something to justify it.
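
For reference, the SP 800-57 comparable-strength table this leans on, expressed as a lookup (my illustration; pick the floor from the largest key the password protects):

    # NIST SP 800-57 Part 1: RSA modulus size -> comparable symmetric strength (bits).
    RSA_STRENGTH = {2048: 112, 3072: 128, 7680: 192, 15360: 256}

    def min_password_entropy_bits(rsa_modulus_bits):
        """Entropy floor for a password protecting an RSA key of this size."""
        return max((s for k, s in RSA_STRENGTH.items() if rsa_modulus_bits >= k),
                   default=112)

    assert min_password_entropy_bits(2048) == 112
    assert min_password_entropy_bits(4096) == 128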




RE: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-30 Thread Tim Hollebeek via dev-security-policy

> I don't think this opinion is in conflict with the suggestion that we 
> required
> DNSSEC validation on CAA records when (however rarely) it is deployed. I
> added this as https://github.com/mozilla/pkipolicy/issues/133

One of the things that could help quite a bit is to only require DNSSEC 
validation
when DNSSEC is deployed CORRECTLY, as opposed to some partial or broken
deployment.  It's generally broken or incomplete DNSSEC deployments that
cause all the problems.

Getting the rules for this right might be complicated, though.

-Tim
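
One concrete way to observe the distinction is to ask a validating resolver and check the AD bit on the CAA answer; a rough dnspython sketch, assuming the configured resolver actually validates (e.g. a local unbound):

    import dns.flags
    import dns.resolver

    def caa_answer_is_authenticated(name):
        """True if the resolver flagged the CAA answer as DNSSEC-validated (AD bit set)."""
        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC-aware answers
        try:
            answer = resolver.resolve(name, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return False
        return bool(answer.response.flags & dns.flags.AD)

This only tells you whether that particular answer validated; deciding what to do about zones that are only partially or incorrectly signed is the harder policy question.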




RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-30 Thread Tim Hollebeek via dev-security-policy
32 bits is rather ... low.

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Buschart,
> Rufus via dev-security-policy
> Sent: Monday, April 30, 2018 2:25 AM
> To: mozilla-dev-security-policy 
> 
> Cc: Wichmann, Markus Peter ; Enrico
> Entschew ; Grotz, Florian
> ; Heusler, Juergen
> ; Wayne Thayer 
> Subject: AW: Policy 2.6 Proposal: Add prohibition on CA key generation to
> policy
> 
> ---=== Intern ===---
> Hello!
> 
> I would like to suggest to rephrase the central sentence a little bit:
> 
> Original:
> 
> CAs MUST NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. The PKCS#12 file must have a  sufficiently 
> secure
> password, and the password must not be transferred  together with the file.
> 
> Proposal:
> 
> CAs SHOULD NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. If the CA chooses to do so, the PKCS#12 file 
> SHALL
> have a  password containing at least 32 bit of output from a CSPRNG, and the
> password SHALL be transferred using a different channel as the PKCS#12 file.
> 
> 
> My proposal would allow a CA to centrally generate a P12 file, send it to the
> Subject by unencrypted email and send the P12 pin as a SMS or Threema
> message. This is an important use case if you want to have email encryption on
> a mobile device that is not managed by a mobile device management system.
> Additionally I made the wording a little bit more rfc2119-ish and made clear,
> what defines a 'sufficiently secure password' as the original wording lets a 
> lot
> of room for 'interpretation'.
> 
> What do you think?
> 
> /Rufus
> 
> 
> Siemens AG
> Information Technology
> Human Resources
> PKI / Trustcenter
> GS IT HR 7 4
> Hugo-Junkers-Str. 9
> 90411 Nuernberg, Germany
> Tel.: +49 1522 2894134
> mailto:rufus.busch...@siemens.com
> www.twitter.com/siemens
> 
> www.siemens.com/ingenuityforlife
> 
> Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann
> Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive
> Officer; Roland Busch, Lisa Davis, Klaus Helmrich, Janina Kugel, Cedrik Neike,
> Michael Sen, Ralf P. Thomas; Registered offices: Berlin and Munich, Germany;
> Commercial registries: Berlin Charlottenburg, HRB 12300, Munich, HRB 6684;
> WEEE-Reg.-No. DE 23691322
> 
> > -Ursprüngliche Nachricht-
> > Von: dev-security-policy
> > [mailto:dev-security-policy-bounces+rufus.buschart=siemens.com@lists.m
> > ozilla.org] Im Auftrag von Wayne Thayer via dev-security-policy
> > Gesendet: Freitag, 27. April 2018 19:30
> > An: Enrico Entschew
> > Cc: mozilla-dev-security-policy
> > Betreff: Re: Policy 2.6 Proposal: Add prohibition on CA key generation
> > to policy
> >
> > On Fri, Apr 27, 2018 at 6:40 AM, Enrico Entschew via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > > I suggest to make the requirement „* The PKCS#12 file must have a
> > > sufficiently secure password, and the password must be transferred
> > > via a separate channel than the PKCS#12 file.” binding for both
> > > transfer methods and not be limited to physical data storage.
> > > Otherwise I agree with this proposal.
> > >
> > > Enrico
> > >
> > > That seems like a good and reasonable change, resulting in the
> > > following
> > policy:
> >
> > CAs MUST NOT generate the key pairs for end-entity certificates that
> > have EKU extension containing the KeyPurposeIds id-kp- serverAuth or
> anyExtendedKeyUsage.
> >
> > CAs MUST NOT distribute or transfer certificates in PKCS#12 form
> > through insecure electronic channels. The PKCS#12 file must have a
> > sufficiently secure password, and the password must not be transferred
> > together with the file. If a PKCS#12 file is distributed via a
> > physical data storage device, then the storage must be packaged in a
> > way that the opening of the package causes irrecoverable physical
> > damage. (e.g. a security seal)
> >
> > Unless other comments are made, I'll consider this to be the conclusion of
> discussion on this topic.
> >
> > Wayne

RE: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-30 Thread Tim Hollebeek via dev-security-policy
What about the cases we discussed where there is DNSSEC, but only for a
subtree?
Or do you consider that "not DNSSEC" ?

-Tim

> -Original Message-
> From: Paul Wouters [mailto:p...@nohats.ca]
> Sent: Monday, April 30, 2018 11:07 AM
> To: Tim Hollebeek <tim.holleb...@digicert.com>
> Cc: mozilla-dev-security-policy
<mozilla-dev-security-pol...@lists.mozilla.org>
> Subject: RE: "multiple perspective validations" - AW: Regional BGP hijack
of
> Amazon DNS infrastructure
> 
> On Mon, 30 Apr 2018, Tim Hollebeek via dev-security-policy wrote:
> 
> >> I don't think this opinion is in conflict with the suggestion that we
> >> required DNSSEC validation on CAA records when (however rarely) it is
> >> deployed. I added this as
> >> https://github.com/mozilla/pkipolicy/issues/133
> >
> > One of the things that could help quite a bit is to only require
> > DNSSEC validation when DNSSEC is deployed CORRECTLY, as opposed to
> > some partial or broken deployment.  It's generally broken or
> > incomplete DNSSEC deployments that cause all the problems.
> >
> > Getting the rules for this right might be complicated, though.
> 
> It's also wrong. You can't soft-fail on that and you don't want to be in
the
> business of trying to figure out what is a sysadmin failure and what is an
actual
> attack.
> 
> The only somehwat valid soft-fail could come from recently expired RRSIGs,
but
> validating DNS resolvers like unbound already build in a margin of a few
hours,
> and I think you should not to anything special during CAA verification
other
> then using a validating resolver.
> 
> Paul




RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-30 Thread Tim Hollebeek via dev-security-policy
Once again, CSPRNGs are not overkill.  They are widely available in virtually 
every
programming language in existence these days.  I have never understood why
there is so much pushback against something that often appears near the top of 
many top ten lists about basic principles for secure coding.

Also, while I'm responding, and since it got copied into your proposal, 32 bits 
is 
still way too small.

"irrecoverable physical damage" ?  You want to go beyond tamper evident,
and even tamper responsive, and require self-destruction on tamper??  
I personally think we probably want to get out of the area of writing 
requirements about physical distribution.  They're VERY hard to get right.

-Tim
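
To show how little code the CSPRNG position actually asks for, here is a sketch with pyca/cryptography (placeholder key/cert objects and a bytes friendly name; not a description of any CA's pipeline):

    import secrets
    from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
    from cryptography.hazmat.primitives.serialization.pkcs12 import (
        serialize_key_and_certificates,
    )

    def build_p12(friendly_name, key, cert, chain):
        """Serialize a PKCS#12 blob protected by a freshly generated CSPRNG password."""
        password = secrets.token_urlsafe(16)  # ~128 bits, far more than 32
        blob = serialize_key_and_certificates(
            friendly_name, key, cert, chain, BestAvailableEncryption(password.encode())
        )
        return blob, password  # deliver the blob and the password via separate channels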

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Doug
> Beattie via dev-security-policy
> Sent: Monday, April 30, 2018 1:06 PM
> To: Buschart, Rufus ; mozilla-dev-security-
> policy 
> Cc: Wichmann, Markus Peter ; Enrico
> Entschew ; Grotz, Florian
> ; Heusler, Juergen
> ; Wayne Thayer 
> Subject: RE: Policy 2.6 Proposal: Add prohibition on CA key generation to 
> policy
> 
> 
> I agree we need to tighten up Wayne's initial proposal a little.
> 
> -
> Initial proposal (Wayne):
> 
> CAs MUST NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. The PKCS#12 file must have a sufficiently secure
> password, and the password must not be transferred together with the file. If 
> a
> PKCS#12 file is distributed via a physical data storage device, then the 
> storage
> must be packaged in a way that the opening of the package causes
> irrecoverable physical damage. (e.g. a security seal)
> 
> -
> Proposal #1 (Rufus):
> 
> CAs SHOULD NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. If the CA chooses to do so, the PKCS#12 file 
> SHALL
> have a  password containing at least 32 bit of output from a CSPRNG, and the
> password SHALL be transferred using a different channel as the PKCS#12 file.
> 
> 
> Proposal #2 (Doug)
> 
> If the PKCS#12 is distributed through an insecure electronic channel, then the
> PKCS#12 file SHALL have a password containing at least 32 bits of entropy, and
> the PKCS#12 file and the password SHALL be transferred using different
> channels.
> 
> If the PKCS#12 is distributed through a secure electronic channel, then...  <if
> secure channels are used, are there any requirements on the strength of the
> password or the use of multiple distribution channels?  Can you send both the
> P12 and password in a secure S/MIME email, or can a user view/download both
> in the same session from a website?  We should be clear.>
> 
> If a PKCS#12 file is distributed via a non-secure physical data storage device
> , then
> a) the storage must be packaged in a way that the opening of the package
> causes irrecoverable physical damage. (e.g. a security seal), or
> b) the PKCS#12 must have a password of at least 32 bits of entropy and the
> password must be sent via separate channel.
> 
> 
> Comments:
> 
> 1) The discussions to date have not addressed the use of secure channels on
> the quality of the password, nor on the use of multiple channels.  What is the
> intent?  We should specify that so it's clear.
> 
> 2) I think the use of CSPRNG is overkill for this application.  Can we leave 
> this at
> a certain entropy level?
> 
> 3) The tamper requirement would only seem applicable if the P12 wasn't
> protected well (via strong P12 password on USB storage or via "good" PIN on a
> suitably secure crypto token).
> 
> 
> 
> > -Original Message-
> >
> > I would like to suggest to rephrase the central sentence a little bit:
> >
> > Original:
> >
> > CAs MUST NOT distribute or transfer certificates in PKCS#12 form
> > through insecure electronic channels. The PKCS#12 file must have a
> > sufficiently secure password, and the password must not be transferred
> together with the file.
> >
> > Proposal:
> >
> > CAs SHOULD NOT distribute or transfer certificates in PKCS#12 form
> > through insecure electronic channels. If the CA chooses to do so, the
> > PKCS#12 file SHALL have a  password containing at least 32 bit of
> > output from a CSPRNG, and the password SHALL be transferred using a
> > different channel as the
> > PKCS#12 file.
> >
> > My proposal would allow a CA to centrally generate a P12 file, send it
> > to the Subject by unencrypted email and send the P12 pin as a SMS or
> > Threema message. This is an important use case if you want to have
> > email encryption on a mobile device that is not managed by a mobile device
> management system.
> > Additionally I made the wording a little bit more 

RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

2018-04-30 Thread Tim Hollebeek via dev-security-policy
OOB passwords are generally tough to integrate into automation, and if the 
channel really is “secure” then they might not be buying you anything, 
depending where the “secure” channel starts and ends and how it is 
authenticated.

 

That might not be a GOOD reason to allow it, but it is the one reason that 
comes to mind.  Taking the other side, I’d argue that it’s unlikely that the 
“secure” channel stretches unbroken from the site of key generation to the key 
loading/usage site.  And it’s possible that “secure” is being used incorrectly, 
and the channel is encrypted but not authenticated.  In that case, having a 
strong password does help for at least a portion of the transmission.

 

-Tim

 

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Monday, April 30, 2018 2:25 PM
To: Tim Hollebeek 
Cc: Doug Beattie ; Buschart, Rufus 
; mozilla-dev-security-policy 
; Wichmann, Markus Peter 
; Enrico Entschew ; Grotz, 
Florian ; Heusler, Juergen 

Subject: Re: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

 

The current policy seems inconsistent on the trust placed in passwords to 
protect PKCS#12 files. On one hand, it forbids transmission via insecure 
electronic channels regardless of password protection. But it goes on to permit 
transmission of PKCS#12 files on a storage device as long as a "sufficiently 
strong" password is delivered via a different means. If we trust PKCS#12 
encryption with a strong password (it's not clear that we should [1]), then the 
policy could be:

 

PKCS#12 files SHALL have a password containing at least 64 bits of output from 
a CSPRNG, and the password SHALL be transferred using a different channel than 
the PKCS#12 file.

 

This eliminates the need for separate rules pertaining to physical storage 
devices.

 

Is there a good reason to allow transmission of PKCS#12 files with weak/no 
passwords over "secure" channels?

 

[1] http://unmitigatedrisk.com/?p=543

 

On Mon, Apr 30, 2018 at 10:46 AM, Tim Hollebeek  > wrote:

Once again, CSPRNGs are not overkill.  They are widely available in virtually 
every
programming language in existence these days.  I have never understood why
there is so much pushback against something that often appears near the top of 
many top ten lists about basic principles for secure coding.

Also, while I'm responding, and since it got copied into your proposal, 32 bits 
is 
still way too small.

"irrecoverable physical damage" ?  You want to go beyond tamper evident,
and even tamper responsive, and require self-destruction on tamper??  
I personally think we probably want to get out of the area of writing 
requirements about physical distribution.  They're VERY hard to get right.

That is copied from the current policy, and while it's confusing I believe it 
just means 'tamper evident'.

 

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Doug Beattie via dev-security-policy
> Sent: Monday, April 30, 2018 1:06 PM
> To: Buschart, Rufus; mozilla-dev-security-policy
> Cc: Wichmann, Markus Peter; Enrico Entschew; Grotz, Florian; Heusler, Juergen; Wayne Thayer
> Subject: RE: Policy 2.6 Proposal: Add prohibition on CA key generation to policy

> 
> 
> I agree we need to tighten up Wayne's initial proposal a little.
> 
> -
> Initial proposal (Wayne):
> 
> CAs MUST NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. The PKCS#12 file must have a sufficiently secure
> password, and the password must not be transferred together with the file. If 
> a
> PKCS#12 file is distributed via a physical data storage device, then the 
> storage
> must be packaged in a way that the opening of the package causes
> irrecoverable physical damage. (e.g. a security seal)
> 
> -
> Proposal #1 (Rufus):
> 
> CAs SHOULD NOT distribute or transfer certificates in PKCS#12 form through
> insecure electronic channels. If the CA chooses to do so, the PKCS#12 file 
> SHALL
> have a  password containing at least 32 bit of 

RE: DRAFT November 2017 CA Communication

2017-10-26 Thread Tim Hollebeek via dev-security-policy
I don't like erratum 5097.  It just deletes the mention of DNAME, which can 
easily be misinterpreted as not permitting DNAME following for CAA (or even 
worse, allows DNAME to be handled however you want).  Erratum 5097 also has not 
been approved by IETF (and shouldn't be, for this reason).

The "natural" interpretation of DNAME, which has been discussed on various 
CA/Browser forum calls and at the Taiwan face to face meeting, is that DNAME 
must be handled in compliance with RFC 6672, which explains how synthesized 
CNAMEs work.

My own personal preferred fix for RFC 6844 is to replace "CNAME or DNAME alias 
record specified at the label X" with "CNAME alias record specified at the 
label X, or a DNAME alias record *in effect at* the label X (see RFC 6672)"

But anyway, I think everyone agrees what we want: DNAMEs work the way they do 
everywhere else.  There's nothing special about them for CAA.

-Tim
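
For anyone trying to follow the discussion without RFC 6844 open: the climbing algorithm (with erratum 5065 applied) checks CAA at the name and then at each parent, and a stub resolver already applies CNAME and synthesized-CNAME (DNAME, RFC 6672) processing for you. A simplified sketch, illustration only, not a conforming implementation:

    import dns.name
    import dns.resolver

    def find_caa(fqdn):
        """Walk from the FQDN toward the root and return the first non-empty CAA RRset."""
        name = dns.name.from_text(fqdn)
        while name != dns.name.root:
            try:
                # The resolver follows CNAME chains (including CNAMEs synthesized
                # from DNAME records) before returning an answer.
                return list(dns.resolver.resolve(name, "CAA"))
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                name = name.parent()
        return []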

-Original Message-
From: dev-security-policy 
[mailto:dev-security-policy-bounces+thollebeek=trustwave@lists.mozilla.org] 
On Behalf Of Andrew Ayer via dev-security-policy
Sent: Wednesday, October 25, 2017 5:05 PM
To: Kathleen Wilson 
Cc: Kathleen Wilson via dev-security-policy 
; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: DRAFT November 2017 CA Communication

Hi Kathleen,

I suggest being explicit about which CAA errata Mozilla allows.

For CNAME, it's erratum 5065.

For DNAME, it's erratum 5097.

Link to errata: 
https://www.rfc-editor.org/errata_search.php?rfc=6844

We don't want CAs to think they can follow any errata they like, or to come up 
with their own interpretation of what "natural" means :-)

Regards,
Andrew

On Wed, 25 Oct 2017 12:46:40 -0700 (PDT) Kathleen Wilson via 
dev-security-policy  wrote:

> All,
> 
> I will greatly appreciate your thoughtful and constructive feedback on 
> the DRAFT of Mozilla's next CA Communication, which I am hoping to 
> send in early November.
> 
> https://wiki.mozilla.org/CA/Communications#November_2017_CA_Communication
> 
> Direct link to the survey:
> https://ccadb-public.secure.force.com/mozillacommunications/CACommunicationSurveySample?CACommunicationId=a051J3mogw7
> 
> Thanks,
> Kathleen
> 


RE: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Tim Hollebeek via dev-security-policy

> For comparison of "What could be worse", you could imagine a CA using the
> .10 method to assert the Random Value (which, unlike .7, is not bounded in
its
> validity) is expressed via the serial number. In this case, a CA could
validate a
> request and issue a certificate. Then, every 3 years (or 2 years starting
later this
> year), connect to the host, see that it's serving their previously issued
> certificate, assert that the "Serial Number" constitutes the Random Value,
and
> perform no other authorization checks beyond that. In a sense, fully
removing
> any reasonable assertion that the domain holder has authorized (by proof
of
> acceptance) the issuance.

My "Freshness Value" ballot should fix this, by requiring that Freshness
Values actually be fresh.

-Tim





Incident report: Failure to verify authenticity for some partner requests

2018-01-10 Thread Tim Hollebeek via dev-security-policy
 

Hi everyone, 
 
There was a bug in our OEM integration that led to a lapse in the
verification of authenticity of some OV certificate requests coming in
through the reseller/partner system.
 
As you know, BR 3.2.5 requires CAs to verify the authenticity of a request
for an OV certificate through a Reliable Method of Communication (RMOC).
Email can be a RMOC, but in these cases, the email address was a constructed
email address as in BR 3.2.2.4.4.  Despite the fact that these addresses are
standardized in RFC 2142 or elsewhere, we do not believe this meets the
standard of "verified using a source other than the Applicant
Representative."
 
The issue was discovered by TBS Internet on Dec 30, 2017. Apologies for the
delay in reporting this. Because of the holidays, it took longer than we
wanted to collect the data we needed.  We patched the system to prevent
continued use of constructed emails for authenticity verification early, but
getting the number of impacted orgs took a bit more time. We are using the
lessons learned to implement changes that will benefit overall user security
as we migrate the legacy Symantec practices and systems to DigiCert.   
 
Here's the incident report:
 
1.How your CA first became aware of the problem (e.g. via a problem
report submitted to your Problem Reporting Mechanism, via a discussion in
mozilla.dev.security.policy, or via a Bugzilla bug), and the date. 
 
Email from JP at TBS about the issue on Dec 30, 2017.  
 
2.A timeline of the actions your CA took in response. 
 
A. Dec 30, 2017 - Received report that indirect accounts did not require a
third-party source for authenticity checks. Constructed emails bled from the
domain verification approval list to the authenticity approval list. 
B. Dec 30, 2017 - Investigation began. Shut off email verification of
authenticity.
C. Jan 3, 2018 - Call with JP to investigate what he was seeing and
confirmed that all indirect accounts were potentially impacted.
D. Jan 3, 2018 - Fixed issue where constructed emails were showing as a
permitted address for authenticity verification.
E. Jan 5, 2018 - Invalidated all indirect orders' authenticity checks.
Started calling on verified numbers to confirm authenticity for impacted
accounts. 
F. Jan 6, 2018 - Narrowed scope to only identify customers impacted (where
the validation staff used a constructed email rather than a verified
number).
G. Jan 10, 2018 - This disclosure.
 
Ongoing: 
H. Reverification of all impacted accounts
I. Training of verification staff on permitted authenticity verification
 
3.Confirmation that your CA has stopped issuing TLS/SSL certificates
with the problem. 
 
Confirmed. Email verification of authenticity remains disabled until we can
ensure additional safeguards.
 
4.A summary of the problematic certificates. For each problem: number of
certs, and the date the first and last certs with that problem were issued. 
 
There are 3,437 orgs impacted, with a total of 5,067 certificates.  The
certificates were issued between December 1st and December 30th.
 
5.The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem. 
 
Will add to CT once we grab it all.  I will provide a list of affected
certificates in a separate email (it's big, so it was getting this post
moderated).
 
6.Explanation about how and why the mistakes were made or bugs
introduced, and how they avoided detection until now. 
 
In truth, it comes down to a short timeframe to implement the
Symantec-DigiCert system integration and properly train everyone we hired.
We are implementing lessons learned to correct this and improve security
overall as we migrate legacy Symantec practices and systems to DigiCert. In
this case, there are mitigating controls.  For example, these are mostly
existing Symantec certs that are migrating to the DigiCert backend. The
verification by Symantec previously means that the number of potentially
problematic certs is pretty low. There's also a mitigating factor that we
did not use method 1 to confirm domain control. In each case, someone from
the approved constructed emails had to sign off on the certificate before
issuance.  This is limited to OV certificates, meaning EV certificates were
not impacted. Despite the mitigating factors, we believe this is a
compliance issue, even though we believe the security risk is minimal.
 
7.List of steps your CA is taking to resolve the situation and ensure
such issuance will not be repeated in the future, accompanied with a
timeline of when your CA expects to accomplish these things. 
 
A. We clarified in the system what is required for an authenticity check. 
B. We removed email verification for authenticity checks until appropriate
new safeguards can be added.
C. We are re-validating 

RE: Updating Root Inclusion Criteria

2018-01-17 Thread Tim Hollebeek via dev-security-policy
Wayne,

I support "encouraging" those who are currently using the public web PKI for 
internal uses to move to their own private PKIs.  The current situation is an 
artifact of the old notion that there should be a global "One CA List to Rule 
Them All" owned by the operating system, and everyone should use that.
That notion is a bit antiquated, in my mind.  Applications and components
that need a trust list really need to carefully select (or create!) an 
appropriate 
one instead of just grabbing the most convenient one.

I'm familiar with a few efforts in the financial space to transition away from
browser trust lists for non-browser TLS, but as you can imagine, that's not a 
trivial effort and will take some time.  My only request would be that if the
rules are going to change, that large companies and entire industries that
may be affected be given enough notice to be able to come up with
reasonable transition plans.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Tuesday, January 16, 2018 4:46 PM
> To: mozilla-dev-security-policy 
> 
> Subject: Updating Root Inclusion Criteria
> 
> I would like to open a discussion about the criteria by which Mozilla decides
> which CAs we should allow to apply for inclusion in our root store.
> 
> Section 2.1 of Mozilla’s current Root Store Policy states:
> 
> CAs whose certificates are included in Mozilla's root program MUST:
> > 1.provide some service relevant to typical users of our software
> > products;
> >
> 
> Further non-normative guidance for which organizations may apply to the CA
> program is documented in the ‘Who May Apply’ section of the application
> process at https://wiki.mozilla.org/CA/Application_Process . The original
> intent of this provision in the policy and the guidance was to discourage a 
> large
> number of organizations from applying to the program solely for the purpose
> of avoiding the difficulties of distributing private roots for their own 
> internal
> use.
> 
> Recently, we’ve encountered a number of examples that cause us to question
> the usefulness of the currently-vague statement(s) we have that define which
> CAs to accept, along a number of different axes:
> 
> * Visa is a current program member that has an open request to add another
> root. They only issue a relatively small number of certificates per year to
> partners and for internal use. They do not offer certificates to the general
> public or to anyone with whom they do not have an existing business
> relationship.
> 
> * Google is also a current program member, admitted via the acquisition of an
> existing root, but does not currently, to the best of our knowledge, meet the
> existing inclusion criteria, even though it is conceivable that they would 
> issue
> certificates to the public in the future.
> 
> * There are potential applicants for CA status who deploy a large number of
> certificates, but only on their own infrastructure and for their own domains,
> albeit that this infrastructure is public-facing rather than company-internal.
> 
> * We have numerous government CAs in the program or in the inclusion
> process that only intend to issue certificates to their own institutions.
> 
> * We have at least one CA applying for the program that (at least, it has been
> reported in the press) is controlled by an entity which may wish to use it for
> MITM.
> 
> There are many potential options for resolving this issue. Ideally, we would 
> like
> to establish some objective criteria that can be measured and applied fairly. 
> It’s
> possible that this could require us to define different categories of CAs, 
> each
> with different inclusion criteria. Or it could be that we should remove the
> existing ‘relevance’ requirement and inclusion guidelines and accept any
> applicant who can meet all of our other requirements.
> 
> With this background, I would like to encourage everyone to provide
> constructive input on this topic.
> 
> Thanks,
> 
> Wayne


RE: Updating Root Inclusion Criteria

2018-01-18 Thread Tim Hollebeek via dev-security-policy
> I think this is a vote for the status quo, in which we have been accepting 
> CAs that don't meet the guidance provided under 'who may apply'

 

Perhaps slightly less strong than that.  I think Mozilla should be willing to 
consider accepting them if there is a compelling reason to do so.  “Why aren’t 
you running/participating in a private PKI?” should always be the first 
question, with the recognition that there are valid answers to that question.

 

-Tim

 





RE: TLS-SNI-01 and compliance with BRs

2018-01-19 Thread Tim Hollebeek via dev-security-policy
My recollection is that there were a number of CA/B forum participants 
(including me) who asked repeatedly if method #10 could be expanded 
beyond a single sentence.

I don't remember anyone speaking up in opposition, just silence.

I continue to support making sure that all of the validation methods have
enough detail so that their security properties can be fully analyzed.
Hopefully that would help avoid incidents like this in the future.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of J.C.
Jones
> via dev-security-policy
> Sent: Thursday, January 18, 2018 3:34 PM
> To: Matthew Hardeman 
> Cc: Doug Beattie ; mozilla-dev-security-
> pol...@lists.mozilla.org; Alex Gaynor 
> Subject: Re: TLS-SNI-01 and compliance with BRs
> 
> That would be the right place. At the time there was not universal desire
for
> these validation mechanisms to be what I'd call 'fully specified'; the
point of
> having them written this way was to leave room for individuality in
meeting
> the requirements.
> 
> Of course, having a few carefully-specified-and-validated mechanisms
instead
> of individuality has worked rather well for other security-critical
operations,
> like the very transport security this whole infrastructure exists to
support.
> Perhaps that argument could be re-opened.
> 
> J.C.
> 
> 
> On Thu, Jan 18, 2018 at 3:25 PM, Matthew Hardeman
> 
> wrote:
> 
> >
> >
> > On Thu, Jan 18, 2018 at 4:14 PM, J.C. Jones via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> As one of the authors of 3.2.2.4.10, Alex's logic is exactly how we
> >> walked through it in the Validation Working Group. The ADN lookup is
> >> DNS, and what you find when you connect there via TLS, within the
> >> certificate, should be the random value (somewhere). 3.2.2.4.10 was
> >> written to permit ACME's
> >> TLS-SNI-01 while being generic enough to permit CAs to accomplish the
> >> same general validation structure without following the
> >> ACME-specified algorithm.
> >>
> >> J.C.
> >
> >
> > I would presume that the CABforum would be the place to explore
> > further details, but it seems that the specifications for the #10
> > method should be reexamined as to what assurances they actually
> > provide with a view to revising those specifications.  At least 1 CA
> > so far has found that the real world experience of a (presumably)
> > compliant application of method #10 as it exists today was deficient
> > in mitigating the provision of certificates to incorrect/unauthorized
parties.
> >


RE: GlobalSign certificate with far-future notBefore

2018-01-24 Thread Tim Hollebeek via dev-security-policy
That's a good point, thank you.  I think I would lean towards making this
an end-entity only requirement until we've thought through the details
for other certificates.

We've been burned by this before (requirements for OCSP related certificates
were under-specified during the SHA1->SHA2 transition).

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Jakob
> Bohm via dev-security-policy
> Sent: Wednesday, January 24, 2018 12:11 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: GlobalSign certificate with far-future notBefore
> 
> Please also consider the practice of having an off-line CA (typically a
> root) pre-issue CRLs, OCSP responses, intermediary CAs and OCSP responder
> certificates for the period until the next root-key-usage ceremony.
> 
> This practice will naturally involve forward-dating of all of these items.
> 
> On 24/01/2018 19:03, Tim Hollebeek wrote:
> > With respect to the action item, I'll add it to next week's VWG agenda.
> >
> > -Tim
> >
> >> -Original Message-
> >> From: Doug Beattie [mailto:doug.beat...@globalsign.com]
> >> Sent: Wednesday, January 24, 2018 11:02 AM
> >> To: Tim Hollebeek <tim.holleb...@digicert.com>; Rob Stradling
> >> <rob.stradl...@comodo.com>; Jonathan Rudenberg
> >> <jonat...@titanous.com>; mozilla-dev-security-policy
> >  >> pol...@lists.mozilla.org>
> >> Subject: RE: GlobalSign certificate with far-future notBefore
> >>
> >> Can we consider this case closed with the action that the VWG will
> >> propose
> > a
> >> ballot that addresses pre and postdating certificates?
> >>
> >> Doug
> >>
> >>> -Original Message-
> >>> From: dev-security-policy [mailto:dev-security-policy-
> >>> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> >>> bounces+Tim
> >>> Hollebeek via dev-security-policy
> >>> Sent: Wednesday, January 24, 2018 11:49 AM
> >>> To: Rob Stradling <rob.stradl...@comodo.com>; Jonathan Rudenberg
> >>> <jonat...@titanous.com>; mozilla-dev-security-policy
> >>> 
> >>> Subject: RE: GlobalSign certificate with far-future notBefore
> >>>
> >>>
> >>>>> This incident makes me think that two changes should be made:
> >>>>>
> >>>>> 1) The Root Store Policy should explicitly ban forward and
> >>>>> back-dating
> >>> the
> >>>> notBefore date.
> >>>>
> >>>> I think it would be reasonable and sensible to permit back-dating
> >>>> insofar
> >>> as it is
> >>>> deemed necessary to accommodate client-side clock-skew.
> >>>
> >>> Indeed.  This was discussed at a previous Face to Face meeting, and
> >>> it was generally agreed that a requirement that the notBefore date
> >>> be within +-1 week of issuance would not be unreasonable.
> >>>
> >>> The most common practice is backdating by a few days for the reason
> >>> Rob mentioned.
> >>>
> >>> -Tim
> >
> 
> 
> Enjoy
> 
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.
> https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10 This
public
> discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded


RE: DRAFT January 2018 CA Communication

2018-01-24 Thread Tim Hollebeek via dev-security-policy
Wayne,

You might want to highlight that method 1 sub-method 3 would survive even if
ballot 218 passes, as a new method 12 with some changes and improvements
that CAs who use sub-method 3 should pay close attention to.

With regards to TLS-SNI-01, I believe TLS-SNI-02 is also affected by the same
issue and should be mentioned as well.

-Tim





RE: GlobalSign certificate with far-future notBefore

2018-01-24 Thread Tim Hollebeek via dev-security-policy
With respect to the action item, I'll add it to next week's VWG agenda.

-Tim

> -Original Message-
> From: Doug Beattie [mailto:doug.beat...@globalsign.com]
> Sent: Wednesday, January 24, 2018 11:02 AM
> To: Tim Hollebeek <tim.holleb...@digicert.com>; Rob Stradling
> <rob.stradl...@comodo.com>; Jonathan Rudenberg
> <jonat...@titanous.com>; mozilla-dev-security-policy
 pol...@lists.mozilla.org>
> Subject: RE: GlobalSign certificate with far-future notBefore
> 
> Can we consider this case closed with the action that the VWG will propose
a
> ballot that addresses pre and postdating certificates?
> 
> Doug
> 
> > -Original Message-
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> > bounces+Tim
> > Hollebeek via dev-security-policy
> > Sent: Wednesday, January 24, 2018 11:49 AM
> > To: Rob Stradling <rob.stradl...@comodo.com>; Jonathan Rudenberg
> > <jonat...@titanous.com>; mozilla-dev-security-policy
> > 
> > Subject: RE: GlobalSign certificate with far-future notBefore
> >
> >
> > > > This incident makes me think that two changes should be made:
> > > >
> > > > 1) The Root Store Policy should explicitly ban forward and
> > > > back-dating
> > the
> > > notBefore date.
> > >
> > > I think it would be reasonable and sensible to permit back-dating
> > > insofar
> > as it is
> > > deemed necessary to accommodate client-side clock-skew.
> >
> > Indeed.  This was discussed at a previous Face to Face meeting, and it
> > was generally agreed that a requirement that the notBefore date be
> > within +-1 week of issuance would not be unreasonable.
> >
> > The most common practice is backdating by a few days for the reason
> > Rob mentioned.
> >
> > -Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: GlobalSign certificate with far-future notBefore

2018-01-24 Thread Tim Hollebeek via dev-security-policy

> > This incident makes me think that two changes should be made:
> >
> > 1) The Root Store Policy should explicitly ban forward and back-dating
the
> notBefore date.
> 
> I think it would be reasonable and sensible to permit back-dating insofar
as it is
> deemed necessary to accommodate client-side clock-skew.

Indeed.  This was discussed at a previous Face to Face meeting, and it was
generally agreed that a requirement that the notBefore date be within +-1 week
of issuance would not be unreasonable.

The most common practice is backdating by a few days for the reason Rob
mentioned.
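
For illustration, a pre-issuance sanity check along those lines might look
something like this (a sketch only; the +-1 week window and the use of Python's
"cryptography" library are assumptions of mine, not anything the BRs currently
require):

    # Sketch: flag a to-be-issued certificate whose notBefore is more than a
    # week away from the time of issuance/inspection.
    from datetime import datetime, timedelta, timezone
    from cryptography import x509

    MAX_SKEW = timedelta(days=7)  # hypothetical +-1 week tolerance

    def not_before_within_skew(pem_bytes, now=None):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        now = now or datetime.now(timezone.utc)
        not_before = cert.not_valid_before.replace(tzinfo=timezone.utc)
        return abs(now - not_before) <= MAX_SKEW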

-Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: IP Validation using method 3.2.2.5 (4) "any other method"

2018-01-30 Thread Tim Hollebeek via dev-security-policy
Good point.  If you want your method preserved, please send it to one of the 
CA/Browser forum lists.



-Tim



From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Tuesday, January 30, 2018 8:46 AM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: mozilla-dev-security-policy 
<mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: IP Validation using method 3.2.2.5 (4) "any other method"







On Tue, Jan 30, 2018 at 10:37 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:



I'm sending this to this list because CAs are required to monitor this list,
and I need to get feedback from smaller and more obscure CAs.



The validation working group is thinking about proposing removal of 3.2.2.5
(4) in the near future.  If you are currently using that method to validate
IP certificates, please reply with the details of what you are doing so the
procedure can be examined and potentially added to the Baseline Requirements
as a valid method for validating IP certificates.  FAILURE TO DO SO MAY
RESULT IN YOUR METHOD BECOMING NON-COMPLIANT WITH LITTLE OR NO NOTICE.



Just a note: Replying with those details to *this* list won't offer the 
CA/Browser Forum's IP protections.



I would instead suggest that CAs that do not participate in the CA/Browser 
Forum, but use this method, join the CA/Browser Forum and contribute such 
methods. The failure to disclose in a way that is agreed upon by the IP policy 
of the CA/Browser Forum is a reasonably high enough risk that it should be 
prevented from adding it to the CA/Browser Forum documents.





___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


IP Validation using method 3.2.2.5 (4) "any other method"

2018-01-30 Thread Tim Hollebeek via dev-security-policy
 

I'm sending this to this list because CAs are required to monitor this list,
and I need to get feedback from smaller and more obscure CAs.

 

The validation working group is thinking about proposing removal of 3.2.2.5
(4) in the near future.  If you are currently using that method to validate
IP certificates, please reply with the details of what you are doing so the
procedure can be examined and potentially added to the Baseline Requirements
as a valid method for validating IP certificates.  FAILURE TO DO SO MAY
RESULT IN YOUR METHOD BECOMING NON-COMPLIANT WITH LITTLE OR NO NOTICE.

 

Thank you,

 

-Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Misissuance/non-compliance remediation timelines

2018-02-07 Thread Tim Hollebeek via dev-security-policy
That’s pretty much exactly not what I said.

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Tuesday, February 6, 2018 10:38 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Paul Kehrer <paul.l.keh...@gmail.com>; 
mozilla-dev-security-pol...@lists.mozilla.org; r...@sleevi.com
Subject: Re: Misissuance/non-compliance remediation timelines

 

So your view is the “carrot” is getting to use Mozilla’s brand as an 
endorsement, and the “stick” being that if you don’t get that endorsement for a 
while, you get kicked out?

 

The assumption is that the branding of “best” is valuable - presumably, through 
the indirect benefit of being able to appeal to customers as “the highest rated 
(by Mozilla) CA”.

 

In practice, much like the CA/Browser Forum indirectly gave birth to the CA 
“Security” Council, or the existence of firms like Netcraft or NSS Labs, the 
more common outcome seems to be that if you don’t like the rules of the game 
you’re playing, you make up your own/redefine them and try to claim equivalency 
(much lol “alternative facts”). That is, I’m skeptical of approaches that 
attempt to say “most good,” because those seem to encourage all sorts of games 
of coming up with their own schemes, while “least bad” is more actionable - as 
“most bad” is more likely to receive sanctions.

 

On Tue, Feb 6, 2018 at 10:03 PM Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

Absolutely not.  I view the competition as being based on the “most best”.



You cannot get an “A” (or even A- or B+) without significantly exceeding the 
minimum requirements, or demonstrating behaviors and practices that, while not 
required, are behaviors Mozilla wants to encourage.



Sticks are good.  Carrots are tasty.



-Tim



Do you see the competition based on being the 'least bad' (i.e. more likely to 
get an A because of no issues than a B because of some?)

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> 
https://lists.mozilla.org/listinfo/dev-security-policy



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Misissuance/non-compliance remediation timelines

2018-02-07 Thread Tim Hollebeek via dev-security-policy
Alex,

 

Most CAs probably wouldn’t aim for an A.  I don’t think doing this would be a 
game changer.

 

However there are some CAs that would.  And I think that would be a positive 
thing, and lead to more innovation in best practices that could become 
mandatory for everyone over time.

 

And I don’t disagree with you that action is needed on those who are currently 
getting Ds.  I’m very disturbed by the behavior of about half of the CAs in the 
industry.

 

-Tim

 

From: Alex Gaynor [mailto:agay...@mozilla.com] 
Sent: Wednesday, February 7, 2018 8:15 AM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Paul Kehrer <paul.l.keh...@gmail.com>; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Misissuance/non-compliance remediation timelines

 

Hey Tim,

 

A piece I think I'm missing is what you see as the incentive for CAs to aim for 
an "A" rather than being happy to have a "B". It reminds me of the old joke: 
What do you call the Dr^W CA who graduated with a C average? Dr.^W trusted to 
assert www-wide identity :-)

 

That said, given the issues Paul highlighted in his original mail (which I 
wholeheartedly concur with), it seems the place to focus is the folks who are 
getting Ds right now. Therefore I think the essential part of your email is 
your agreement that CAs which are persistently low performing need to be 
recognized and potentially penalized for the sum total of their behaviors.

 

Alex

 

On Tue, Feb 6, 2018 at 8:30 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

Paul,

I understand your frustration.  I've read some of the recent threads about
"how long does it take to update a CPS?" and clearly there needs to be
some stronger compliance language in either the BRs or Mozilla policy
("CAs MUST be able to update their CPS within 90 days").  And as you note
such policies need to have teeth otherwise there will be some who will
just ignore them.

However, negative penalties are not the only thing that should be considered.
Mozilla should also have some way of recognizing CAs that are performing
above and beyond the minimum requirements.  I would love to see Mozilla
encourage CAs to compete to be the best CA in Mozilla's program.

To satisfy both goals, I'd like to suggest an idea I've had for a while: at some
point in time (annually?), Mozilla should assess their opinion of how well
each CA in the program is performing, and give them a letter grade.  This
could include policy improvements like "Two consecutive failing grades,
or three consecutive C or lower grades and you're out of the Mozilla
program."

This would not preclude other actions as Mozilla deems necessary.  But it
would provide a regular checkpoint for CAs to understand either "Hey,
you're great, keep up the good work!" or "Meh, we think you're ok." or
"Your performance to date is unacceptable.  Get your sh*t together or
you're gone."

-Tim


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy- 
> <mailto:dev-security-policy-> 
> bounces+tim.hollebeek=digicert@lists.mozilla.org 
> <mailto:digicert@lists.mozilla.org> ] On Behalf Of Paul
> Kehrer via dev-security-policy
> Sent: Tuesday, February 6, 2018 6:03 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org 
> <mailto:mozilla-dev-security-pol...@lists.mozilla.org> 
> Subject: Misissuance/non-compliance remediation timelines
>
> A bit over 5 months ago I reported a series of OCSP responders that were
> violating BRs (section 4.9.10) by returning GOOD on unknown serial
numbers.
> Since that time the majority of those responder endpoints have been fixed,
but
> several are still non-compliant; either with little communication or
continuing
> assurances that it will be fixed "soon", where soon is a date that
continuously
> slides into the future.
>
> At the moment Mozilla possesses very few options when it comes to punitive
> action and the lesson some CAs appear to be learning is that as long as
you
> don't rise to PROCERT levels of malfeasance/incompetence then the maximum
> penalty is censure on bugzilla and email threads. Clearly this is not a
deterrent.
>
> So, how long is too long? At what point should a CA incur consequences
(and
> what form can those consequences take) for failure to remediate despite
being
> given such immense latitude?
>
> As a straw man: what if we developed a set of technical constraints
related to
> minimizing risk associated with CAs that are deemed to be acting poorly?
> For example, CAs deemed a risk would have their certificate maximum
lifetime
> constrained to some amount less than the BRs currently allow.
> Additional penalties could include removal of EV trust indicat

RE: Misissuance/non-compliance remediation timelines

2018-02-06 Thread Tim Hollebeek via dev-security-policy
Absolutely not.  I view the competition as being based on the “most best”.

 

You cannot get an “A” (or even A- or B+) without significantly exceeding the 
minimum requirements, or demonstrating behaviors and practices that, while not 
required, are behaviors Mozilla wants to encourage.

 

Sticks are good.  Carrots are tasty.

 

-Tim

 

Do you see the competition based on being the 'least bad' (i.e. more likely to 
get an A because of no issues than a B because of some?)



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Mozilla’s Plan for Symantec Roots

2018-02-13 Thread Tim Hollebeek via dev-security-policy

> OK. I'm researching what approach should be used for the Fedora Linux
> distribution, where a single CA trust list (based on Mozilla's CA trust
> list) is used for the whole system, including Firefox, and other 
> applications that
> use other certificate validation logic, like the ones built into the GnuTLS, 
> NSS
> and OpenSSL libraries.

FWIW, I realize we are where we are, but it's high time people started migrating
away from the concept of a single operating system trust list that is consumed
by all applications on the platform.  It just doesn't work very well since each
application type has its own unique security considerations, risks, and
challenges.  And threat model, risk tolerance, value of data being protected,
necessary assurance level, etc etc etc.

It's ok to rely heavily on other trust stores to assist with bootstrapping or
maintaining a trust store, and this can even be codified directly into the new
trust store's policy.  For example, this is the approach taken by Cisco whose
trust store policy is basically the union of what's trusted by other major trust
stores.  It's a good baby step towards establishing an independent and well
maintained trust store.
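
As a rough sketch of what that union-of-trust-stores bootstrapping looks like in
practice (the bundle file names are hypothetical, and the use of a recent
version of Python's "cryptography" library is my own choice for illustration,
not anything Cisco or anyone else actually ships):

    # Sketch: build a candidate trust store as the union of several existing
    # PEM bundles, de-duplicated by SHA-256 fingerprint of the DER encoding.
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding

    def union_of_stores(*pem_paths):
        seen = {}
        for path in pem_paths:
            with open(path, "rb") as f:
                for cert in x509.load_pem_x509_certificates(f.read()):
                    der = cert.public_bytes(Encoding.DER)
                    seen[hashlib.sha256(der).hexdigest()] = cert
        return list(seen.values())

    # e.g. roots = union_of_stores("mozilla.pem", "microsoft.pem", "apple.pem")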

Major trust stores have taken various actions nudging certificate authorities to
use a combination of technical constraints and/or EKUs and/or different
intermediate CAs in order to better segregate certificates by use case, and I'd
encourage them to continue with those efforts.  The current situation is a bit
of a mess, and it will take us years to get it all untangled.

-Tim




___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Serial number length

2018-01-02 Thread Tim Hollebeek via dev-security-policy


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Paul
> Kehrer via dev-security-policy
> Sent: Friday, December 29, 2017 12:46 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Serial number length
> 
> On December 29, 2017 at 12:27:35 PM, David E. Ross via dev-security-policy
(
> dev-security-policy@lists.mozilla.org) wrote:
> 
> On 12/28/2017 10:33 PM, Peter Bowen wrote:
> > On Thu, Dec 28, 2017 at 10:24 PM, Jakob Bohm via dev-security-policy
> >  wrote:
> >> After looking at some real certificates both in the browser and on
> crt.sh, I
> >> have some followup questions on certificate serial numbers:
> >>
> >> 4. If the answers are yes, no, yes, why doesn't cablint flag
> >> certificates with serial numbers of less than or equal to 64 bits as
> >> non-compliant?
> >
> > I can answer #4 -- your trusty cablint maintainer has fallen behind
> > and hasn't added lints for recent ballots.
> >
> 
> I know this would require changing not only software but also the format
of
> certificates. However, why not use UUID version 1? UUIDs (Universally
Unique
> IDentifiers) require no central registry. UUIDs are specified in RFC 4122.
> 
> Modern X509 uses serial number as both a source of randomness and a unique
> identifier. Unfortunately, trying to solve for uniqueness doesn't absolve
you
> from needing quality randomness. The reason for the "at least 64-bits of
> random" requirement is to add entropy to the tbsCertificate structure to
make
> hash collision attacks more difficult. UUIDv1 is (almost) entirely
predictable
> and thus not suitable for this. And if you have a good random source you
might
> as well just generate a long random serial which has a vanishingly small
> probability of collision.

The baseline requirements don't just require 64 bits of good randomness.  They
specifically require the use of a CSPRNG ("A random number generator intended
for use in cryptographic system", the grammar error is in the BRs and the
original ballot 164).

So things like UUIDs and MACs are clearly not compliant on their own, and count
for zero bits, regardless of how unpredictable they may or may not be.

In fact, I noticed last month that there's no requirement that random numbers
used for domain control validation come from a CSPRNG.  I intend to fix that
this month ... maybe I'll fix the grammar error while I'm at it.
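
For what it's worth, a minimal sketch of serial number generation that meets the
64-bits-from-a-CSPRNG reading is nearly a one-liner (Python's secrets module as
the CSPRNG is my choice for illustration; the BRs don't mandate any particular
implementation):

    # Sketch: serial numbers with 128 bits of CSPRNG output -- positive, and
    # comfortably under the 20-octet cap of RFC 5280 / BR 7.1.
    import secrets

    def new_serial_number():
        # secrets.randbits() draws from the OS CSPRNG, unlike uuid.uuid1()
        # or random.random(), which don't count toward the requirement.
        serial = secrets.randbits(128)
        return serial if serial != 0 else 1  # serials must be positive

With 128 random bits, even a CA issuing a billion certificates has a collision
probability on the order of n^2 / 2^129, i.e. somewhere around 10^-21, so
uniqueness comes along for free.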

-Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: On the value of EV

2017-12-20 Thread Tim Hollebeek via dev-security-policy
Wayne,

Thanks for updating us on Mozilla's thinking on this issue.  On behalf of the 
CA/Browser forum Validation Working Group, I would like to thank everyone
for their time and contributions.  We will be going over everyone's points
and take them all into consideration as we look into what potential ways
EV validation can be improved.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Monday, December 18, 2017 2:09 PM
> To: Ryan Sleevi 
> Cc: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
> Subject: Re: On the value of EV
> 
> Thank you Ryan for raising this question, and to everyone who has been
> contributing in a constructive manner to the discussion. A number of excellent
> points have been raised on the effectiveness of EV in general and on the
> practicality of solving the problems that exist with EV.
> 
> While we have concerns about the value of EV as well as the potential for EV
> to actually harm users, Mozilla currently has no definite plans to remove the
> EV UI from Firefox. At the very least, we want to see Certificate Transparency
> required for all certificates before making any change that is likely to 
> reduce
> the use of EV certificates.
> 
> Is Google planning to remove the EV UI from desktop Chrome? If so, how does
> that relate to the plan to mark HTTP sites as ‘Not secure’ [1]? Does this 
> imply
> the complete removal of HTTPS UI?
> 
> While we agree that improvements to EV validation won’t remove many of
> the underlying issues that have been raised here, we hope that CAs will move
> quickly to make the EV Subject information displayed in the address bar more
> reliable and less confusing.
> 
> - Wayne
> 
> [1]
> https://security.googleblog.com/2016/09/moving-towards-more-secure-
> web.html
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Possible violation of CAA by nazwa.pl

2018-07-27 Thread Tim Hollebeek via dev-security-policy
I agree.

I've actually thought about adding definitions of categories of misissuance 
to the BRs before.  Some of the requirements like revocation are really hard
to write and understand if you don't first categorize all the misissuance use
cases, many of which are very, very different.  And just when I think I have
a reasonable ontology of them in my head ... someone usually goes and 
invents a new one.

Despite how much people like to talk about it, misissuance isn't a defined 
term anywhere, AFAIK.  It can lead to a lot of confusing discussions, even 
among experts at the CA/Browser Forum.

-Tim

> -Original Message-
> From: dev-security-policy  bounces+tim.hollebeek=digicert@lists.mozilla.org> On Behalf Of Jakob
> Bohm via dev-security-policy
> Sent: Friday, July 27, 2018 2:46 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Possible violation of CAA by nazwa.pl
> 
> On 26/07/2018 23:04, Matthew Hardeman wrote:
> > On Thu, Jul 26, 2018 at 2:23 PM, Tom Delmas via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >>
> >>> The party actually running the authoritative DNS servers is in
> >>> control
> >> of the domain.
> >>
> >> I'm not sure I agree. They can control the domain, but they are
> >> supposed to be subordinate of the domain owner. If they did something
> >> without the owner consent/approval, it really looks like a domain 
> >> hijacking.
> >
> >
> > But the agreement under which they're supposed to be subordinate to
> > the domain owner is a private matter between the domain owner and the
> > party managing the authoritative DNS.  Even if this were domain
> > hijacking, a certificate issued that relied upon a proper domain
> > validation method is still proper issuance, technically.  Once this
> > comes to light, there may be grounds for the proper owner to get the
> > certificate revoked, but the initial issuance was proper as long as
> > the validation was properly performed.
> >
> >
> >>
> >>
> >>> I'm not suggesting that the CA did anything untoward in issuing this
> >>> certificate.  I am not suggesting that at all.
> >>
> >> My opinion is that if the CA was aware that the owner didn't
> >> ask/consent to that issuance, If it's not a misissuance according to
> >> the BRs, it should be.
> >
> >
> > Others can weigh in, but I'm fairly certain that it is not misissuance
> > according to the BRs.  Furthermore, with respect to issuance via
> > domain validation, there's an intentional focus on demonstrated
> > control rather than ownership, as ownership is a concept which can't
> > really be securely validated in an automated fashion.  As such, I
> > suspect it's unlikely that the industry or browsers would accept such a
> change.
> >
> >
> 
> I see this as a clear case of the profound confusion caused by the community
> sometimes conflating "formal rule violation" with "misissuance".
> 
> It would be much more useful to keep these concepts separate but
> overlapping:
> 
>   - A BR/MozPolicy/CPS/CP violation is when a certificate didn't follow the
> official rules in some way and must therefore be revoked as a matter of
> compliance.
> 
>   - An actual misissuance is when a certificate was issued for a private key 
> held
> by a party other than the party identified in the certificate (in Subject 
> Name,
> SAN etc.), or to a party specifically not authorized to hold such a 
> certificate
> regardless of the identity (typically applies to SubCA, CRL-signing, OCSP-
> signing, timestamping or other certificate types where relying party trust
> doesn't check the actual name in the certificate).
> 
>  From these concepts, revocation requirements could then be reasonably
> classified according to the combinations (in addition to any specifics of a
> situation):
> 
>   - Rule violation plus actual misissuance.  This is bad, the 24 hours or 
> faster
> revocation rule should definitely be invoked.
> 
>   - Rule compliant misissuance.  This will inevitably happen some times, for
> example if an attacker successfully spoofs all the things checked by a CA or
> exploits a loophole in the compliant procedures.  This is the reason why there
> must be an efficient revocation process for these cases.
> 
>   - Rule violation, but otherwise correct issuance.  This covers any kind of
> formal violation where the ground truth of the certified matter can still be
> proven.  Ranging from formatting errors (like having "-" in a field that 
> should
> just be omitted, putting the real name with spaces in the common name as
> originally envisioned in X.509, encoding CA:False
> etc.) over potentially dangerous errors (like having a 24 byte serial number,
> which prevents some clients from checking revocation should it ever become
> necessary) to directly dangerous errors (like having an unverified DNS-syntax
> name in CN, or not including enough randomness in the serial number of an
> SHA-1 certificate).
> 
>   - Situation-changed no-longer valid issuance.  

RE: localhost.megasyncloopback.mega.nz private key in client

2018-08-09 Thread Tim Hollebeek via dev-security-policy
IIRC we recently passed a CABF ballot that the CPS must contain instructions
for submitting problem reports in a specific section of its CPS, in an attempt
to solve problems like this.  This winter or early spring, if my memory is 
correct.

-Tim

> -Original Message-
> From: dev-security-policy  On
> Behalf Of Alex Cohn via dev-security-policy
> Sent: Wednesday, August 8, 2018 4:01 PM
> To: ha...@hboeck.de
> Cc: mozilla-dev-security-pol...@lists.mozilla.org; ssl_ab...@comodoca.com;
> summern1...@gmail.com
> Subject: Re: localhost.megasyncloopback.mega.nz private key in client
> 
> On Wed, Aug 8, 2018 at 9:17 AM Hanno Böck  wrote:
> 
> >
> > As of today this is still unrevoked:
> > https://crt.sh/?id=630835231=ocsp
> >
> > Given Comodo's abuse contact was CCed in this mail I assume they knew
> > about this since Sunday. Thus we're way past the 24 hour in which they
> > should revoke it.
> >
> > --
> > Hanno Böck
> > https://hboeck.de/
> 
> 
> As Hanno has no doubt learned, the ssl_ab...@comodoca.com address
> bounces.
> I got that address off of Comodo CA's website at
> https://www.comodoca.com/en-us/support/report-abuse/.
> 
> I later found the address "sslab...@comodo.com" in Comodo's latest CPS,
> and forwarded my last message to it on 2018-08-05 at 20:32 CDT (UTC-5). I
> received an automated confirmation immediately afterward, so I assume
> Comodo has now known about this issue for ~70 hours now.
> 
> crt.sh lists sslab...@comodoca.com as the "problem reporting" address for
> the cert in question. I have not tried this address.
> 
> Comodo publishes at least three different problem reporting email addresses,
> and at least one of them is nonfunctional. I think similar issues have come up
> before - there's often not a clear way to identify how to contact a CA. Should
> we revisit the topic?
> 
> Alex
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: localhost.megasyncloopback.mega.nz private key in client

2018-08-09 Thread Tim Hollebeek via dev-security-policy
Yup, it was Mozilla policy that I was thinking of.  Thanks.

 

I’m sad it didn’t make it into official Mozilla policy, as I thought it was a 
pretty reasonable and non-controversial requirement.  I’d support putting it in 
the BRs.

 

-Tim

 

From: Ryan Sleevi  
Sent: Thursday, August 9, 2018 7:15 AM
To: Tim Hollebeek 
Cc: Alex Cohn ; ha...@hboeck.de; 
mozilla-dev-security-pol...@lists.mozilla.org; ssl_ab...@comodoca.com; 
summern1...@gmail.com
Subject: Re: localhost.megasyncloopback.mega.nz private key in client

 

Unfortunately, that's not correct. The CA/Browser Forum has passed no such 
resolution, as can be seen at https://cabforum.org/ballots/ .

 

I believe you're confusing this with the discussion from 
https://github.com/mozilla/pkipolicy/issues/98, which highlighted that the BRs 
4.9.3 requires clear instructions for reporting key compromise. That language 
has existed since the BRs 1.3.0 (the conversion to 3647 format).

 

Alternatively, you may be confusing this discussion with 
https://wiki.mozilla.org/CA/Communications#November_2017_CA_Communication , 
which required CAs to provide a tested email address for a Problem Reporting 
Mechanism. However, as captured in Issue 98, this did not result in a normative 
change to the CP/CPS.

 

On Wed, Aug 8, 2018 at 10:22 PM, Tim Hollebeek via dev-security-policy 
mailto:dev-security-policy@lists.mozilla.org> > wrote:

IIRC we recently passed a CABF ballot that the CPS must contain instructions
for submitting problem reports in a specific section of its CPS, in an attempt
to solve problems like this.  This winter or early spring, if my memory is 
correct.

-Tim


> -Original Message-
> From: dev-security-policy  <mailto:dev-security-policy-boun...@lists.mozilla.org> > On
> Behalf Of Alex Cohn via dev-security-policy
> Sent: Wednesday, August 8, 2018 4:01 PM
> To: ha...@hboeck.de <mailto:ha...@hboeck.de> 
> Cc: mozilla-dev-security-pol...@lists.mozilla.org 
> <mailto:mozilla-dev-security-pol...@lists.mozilla.org> ; 
> ssl_ab...@comodoca.com <mailto:ssl_ab...@comodoca.com> ;
> summern1...@gmail.com <mailto:summern1...@gmail.com> 
> Subject: Re: localhost.megasyncloopback.mega.nz 
> <http://localhost.megasyncloopback.mega.nz>  private key in client
> 
> On Wed, Aug 8, 2018 at 9:17 AM Hanno Böck  <mailto:ha...@hboeck.de> > wrote:
> 
> >
> > As of today this is still unrevoked:
> > https://crt.sh/?id=630835231 <https://crt.sh/?id=630835231=ocsp> 
> > =ocsp
> >
> > Given Comodo's abuse contact was CCed in this mail I assume they knew
> > about this since Sunday. Thus we're way past the 24 hour in which they
> > should revoke it.
> >
> > --
> > Hanno Böck
> > https://hboeck.de/
> 
> 
> As Hanno has no doubt learned, the ssl_ab...@comodoca.com 
> <mailto:ssl_ab...@comodoca.com>  address
> bounces.
> I got that address off of Comodo CA's website at
> https://www.comodoca.com/en-us/support/report-abuse/.
> 
> I later found the address "sslab...@comodo.com <mailto:sslab...@comodo.com> " 
> in Comodo's latest CPS,
> and forwarded my last message to it on 2018-08-05 at 20:32 CDT (UTC-5). I
> received an automated confirmation immediately afterward, so I assume
> Comodo has now known about this issue for ~70 hours now.
> 
> crt.sh lists sslab...@comodoca.com <mailto:sslab...@comodoca.com>  as the 
> "problem reporting" address for
> the cert in question. I have not tried this address.
> 
> Comodo publishes at least three different problem reporting email addresses,
> and at least one of them is nonfunctional. I think similar issues have come up
> before - there's often not a clear way to identify how to contact a CA. Should
> we revisit the topic?
> 
> Alex
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org 
> <mailto:dev-security-policy@lists.mozilla.org> 
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> 
https://lists.mozilla.org/listinfo/dev-security-policy

 



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: localhost.megasyncloopback.mega.nz private key in client

2018-08-09 Thread Tim Hollebeek via dev-security-policy
Also, I'd like to encourage other CAs to comply with Issue 98 pro-actively, 
even if it is not required.  We're already in compliance.

-Tim

> -Original Message-
> From: dev-security-policy  On
> Behalf Of Tim Hollebeek via dev-security-policy
> Sent: Thursday, August 9, 2018 10:26 AM
> To: r...@sleevi.com
> Cc: Alex Cohn ; mozilla-dev-security-
> pol...@lists.mozilla.org; ha...@hboeck.de; ssl_ab...@comodoca.com;
> summern1...@gmail.com
> Subject: RE: localhost.megasyncloopback.mega.nz private key in client
> 
> Yup, it was Mozilla policy that I was thinking of.  Thanks.
> 
> 
> 
> I’m sad it didn’t make it into official Mozilla policy, as I thought it was a 
> pretty
> reasonable and non-controversial requirement.  I’d support putting it in the
> BRs.
> 
> 
> 
> -Tim
> 
> 
> 
> From: Ryan Sleevi 
> Sent: Thursday, August 9, 2018 7:15 AM
> To: Tim Hollebeek 
> Cc: Alex Cohn ; ha...@hboeck.de; mozilla-dev-
> security-pol...@lists.mozilla.org; ssl_ab...@comodoca.com;
> summern1...@gmail.com
> Subject: Re: localhost.megasyncloopback.mega.nz private key in client
> 
> 
> 
> Unfortunately, that's not correct. The CA/Browser Forum has passed no such
> resolution, as can be seen at https://cabforum.org/ballots/ .
> 
> 
> 
> I believe you're confusing this with the discussion from
> https://github.com/mozilla/pkipolicy/issues/98, which highlighted that the
> BRs 4.9.3 requires clear instructions for reporting key compromise. That
> language has existed since the BRs 1.3.0 (the conversion to 3647 format).
> 
> 
> 
> Alternatively, you may be confusing this discussion with
> https://wiki.mozilla.org/CA/Communications#November_2017_CA_Communi
> cation , which required CAs to provide a tested email address for a Problem
> Reporting Mechanism. However, as captured in Issue 98, this did not result in
> a normative change to the CP/CPS.
> 
> 
> 
> On Wed, Aug 8, 2018 at 10:22 PM, Tim Hollebeek via dev-security-policy
> mailto:dev-security-
> pol...@lists.mozilla.org> > wrote:
> 
> IIRC we recently passed a CABF ballot that the CPS must contain instructions
> for submitting problem reports in a specific section of its CPS, in an 
> attempt to
> solve problems like this.  This winter or early spring, if my memory is 
> correct.
> 
> -Tim
> 
> 
> > -Original Message-
> > From: dev-security-policy
> >  > <mailto:dev-security-policy-boun...@lists.mozilla.org> > On Behalf Of
> > Alex Cohn via dev-security-policy
> > Sent: Wednesday, August 8, 2018 4:01 PM
> > To: ha...@hboeck.de <mailto:ha...@hboeck.de>
> > Cc: mozilla-dev-security-pol...@lists.mozilla.org
> > <mailto:mozilla-dev-security-pol...@lists.mozilla.org> ;
> > ssl_ab...@comodoca.com <mailto:ssl_ab...@comodoca.com> ;
> > summern1...@gmail.com <mailto:summern1...@gmail.com>
> > Subject: Re: localhost.megasyncloopback.mega.nz
> > <http://localhost.megasyncloopback.mega.nz>  private key in client
> >
> > On Wed, Aug 8, 2018 at 9:17 AM Hanno Böck  <mailto:ha...@hboeck.de> > wrote:
> >
> > >
> > > As of today this is still unrevoked:
> > > https://crt.sh/?id=630835231 <https://crt.sh/?id=630835231=ocsp>
> > > =ocsp
> > >
> > > Given Comodo's abuse contact was CCed in this mail I assume they
> > > knew about this since Sunday. Thus we're way past the 24 hour in
> > > which they should revoke it.
> > >
> > > --
> > > Hanno Böck
> > > https://hboeck.de/
> >
> >
> > As Hanno has no doubt learned, the ssl_ab...@comodoca.com
> > <mailto:ssl_ab...@comodoca.com>  address bounces.
> > I got that address off of Comodo CA's website at
> > https://www.comodoca.com/en-us/support/report-abuse/.
> >
> > I later found the address "sslab...@comodo.com
> > <mailto:sslab...@comodo.com> " in Comodo's latest CPS, and forwarded
> > my last message to it on 2018-08-05 at 20:32 CDT (UTC-5). I received
> > an automated confirmation immediately afterward, so I assume Comodo
> has now known about this issue for ~70 hours now.
> >
> > crt.sh lists sslab...@comodoca.com <mailto:sslab...@comodoca.com>
> as
> > the "problem reporting" address for the cert in question. I have not tried 
> > this
> address.
> >
> > Comodo publishes at least three different problem reporting email
> > addresses, and at least one of them is nonfunctional. I think similar
> > issues have come up before - there's often not a clear way to identify
> > how to contact a CA. Should we re

RE: Telia CA - problem in E validation

2018-08-21 Thread Tim Hollebeek via dev-security-policy
The BRs indeed do not have many requirements about the validation of email
addresses, but Mozilla policy is much more proscriptive here.  See, for
example, the first two items of section 2.2.

These make it pretty clear that unverified addresses are prohibited by
Mozilla policy, and validation of email addresses is not just a "best
practice"; it's required.

-Tim

> -Original Message-
> From: dev-security-policy 
On
> Behalf Of pekka.lahtiharju--- via dev-security-policy
> Sent: Tuesday, August 21, 2018 6:18 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Telia CA - problem in E validation
> 
> I agree that this culminates to what does it mean when requirement is
"verified
> by CA". When that is not specified anywhere and specifically not in E
validation
> chapter of BR I have interpreted that also weak E verification methods are
> acceptable. I understand that it would be "nice" to use stronger methods
but
> the point is that is it "illegal" to use weak method when such method is
not
> prohibited.
> 
> In our old process we have accepted personal addresses because in some
cases
> a single person is really the "support point" of a server. In practise
personal
> address has only been accepted if the same person is also the technical or
> administrative contact of the application. If anybody would complain or we
> notice in our visual check that the name or address can't be correct we
revoke
> or don't accept. In practice there hasn't been any complaints ever related
to
> our approved E values (except now in the this discussion). Note that all
used E
> values have originated from authenticated customers' CSR.
> 
> Note! Because we want to follow "best practices" we have already stopped
> using these weak methods based on these discussions.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Telia CA - problem in E validation

2018-08-21 Thread Tim Hollebeek via dev-security-policy
There are lots of useful ways to publish unverified and potentially
inaccurate information.

Putting that information into a certificate signed by a public Certificate
Authority is not one of them.

By the way, OUs need to be accurate as well, not just "partially verified",
so you might want to look into that part of your processes as well.

BR 7.1.4.2: "By issuing the Certificate, the CA represents that it followed
the procedure set forth in its Certificate Policy and/or Certification Practice
Statement to verify that, as of the Certificate's issuance date, all of the
Subject Information was accurate."

-Tim

> -Original Message-
> From: dev-security-policy 
On
> Behalf Of pekka.lahtiharju--- via dev-security-policy
> Sent: Tuesday, August 21, 2018 10:45 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Telia CA - problem in E validation
> 
> I believe it has been useful to our users even though it was only
partially
> verified like OU. Now when it no more exists it certainly won't provide
any help
> to anybody.
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: A vision of an entirely different WebPKI of the future...

2018-08-20 Thread Tim Hollebeek via dev-security-policy

The only thing I'm going to say in this thread is that ICANN, registrars,
and registries had two years to figure out how to handle GDPR and email
addresses in WHOIS, and we all know how that turned out.

Maybe we should let them figure out how to handle their existing
responsibilities before we start giving them new ones.

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Telia CA - problem in E validation

2018-08-21 Thread Tim Hollebeek via dev-security-policy
Previous discussions on this list, which all CAs are required to follow, have
made it clear that either challenge-response or domain validation is sufficient
to meet Mozilla's policy for e-mail addresses.

Yes, the context was SMIME validation, but I am very troubled to hear that
instead of using the same rules for E validation, a CA would argue that it's
appropriate or allowed to do virtually no validation at all.  It's not.
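
For context, the challenge-response approach really is not much work.  A sketch
of the token handling (the token length and expiry below are illustrative
assumptions of mine, not values taken from any policy):

    # Sketch: challenge-response validation of an email address.  A random
    # token from the OS CSPRNG is mailed to the address, and the address is
    # treated as verified only if the same token is presented back in time.
    import hmac
    import secrets
    import time

    PENDING = {}           # token -> (email, expiry); a real CA would persist this
    TOKEN_TTL = 24 * 3600  # illustrative 24-hour expiry

    def start_challenge(email):
        token = secrets.token_urlsafe(32)
        PENDING[token] = (email, time.time() + TOKEN_TTL)
        # send_mail(email, "Your validation code: " + token)  # delivery not shown
        return token

    def confirm_challenge(email, presented_token):
        for token, (addr, expiry) in PENDING.items():
            if (addr == email and time.time() < expiry
                    and hmac.compare_digest(token, presented_token)):
                del PENDING[token]
                return True
        return False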

-Tim

> -Original Message-
> From: dev-security-policy 
On
> Behalf Of pekka.lahtiharju--- via dev-security-policy
> Sent: Tuesday, August 21, 2018 9:41 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Telia CA - problem in E validation
> 
> The first item in Mozilla policy is impossible for all CAs related to E
verification
> because there aren't any valid independent sources to check support email
> addresses. You potentially could validate only domain part of the email
address
> which doesn't cover the requirement that ALL information must be verified
> from such source. Most persons in this discussion have recommended using
> challenge-response method in E verification but I'm afraid it is also
against
> Mozilla requirement 2.1step1 because no independent source or similar is
> involved.
> 
> The second item in Mozilla policy is not valid because these SSL
certificates are
> not capable in email messaging. It is clear for SMIME certificates and
with them
> we follow it.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Telia CA - problem in E validation

2018-08-21 Thread Tim Hollebeek via dev-security-policy
Yeah, but unvalidated "information" is not "informative" in any useful way.

-Tim

> -Original Message-
> From: dev-security-policy 
On
> Behalf Of pekka.lahtiharju--- via dev-security-policy
> Sent: Tuesday, August 21, 2018 9:59 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Telia CA - problem in E validation
> 
> The purpose of this E value and SAN-rfc822 value is completely different.
The
> former is typically an information to server users where is its support.
The
> latter for email messaging. Thus it is natural that the verification
requirements
> of those two fields are also different (like they are).
> 
> I completely agree that verification of SAN-rfc822 has to be
challenge-response
> or domain based but the same doesn't apply to this E which is only
informative
> field like OU.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Do We Now Require Separate Cross-certificates for SSL and S/MIME?

2018-07-13 Thread Tim Hollebeek via dev-security-policy
Doesn't the "created after January 1, 2019" mean that there is no problem with 
old crosses?  It would just be a new policy for new crosses as of next year?

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Bruce via
> dev-security-policy
> Sent: Thursday, July 12, 2018 10:28 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Do We Now Require Separate Cross-certificates for SSL and
> S/MIME?
> 
> Note the BRs define Cross Certificate as "a certificate that is used to 
> establish a
> trust relationship between two Root CAs."
> 
> I think the intent was to technically restrict subordinate CAs or rather CAs
> which are online and issue end entity certificates. My assumption is that we
> want to 1) not allow a subordinate CA to issue a certificate which it was no
> intended to issue and 2) mitigate the risk if an online subordinate CA has 
> been
> compromised.
> 
> Typically, if an old root cross-certifies a new root, the purpose is to give 
> the
> new root browser/OS ubiquity. The new root may be used to support end
> entity certificates of many types (e.g., Server Auth, S/MIME, Client Auth, 
> Code
> Signing, Document Signing, Time-stamping ...). If we restrict the cross-
> certificate, then this will limit the use of a new root. Also note that the 
> new
> root is 1) not an issuing CA and 2) is offline, so the mitigation of technical
> restriction may already be satisfied.
> 
> Thanks, Bruce.
> 
> On Tuesday, July 10, 2018 at 7:21:26 PM UTC-4, Wayne Thayer wrote:
> > During a 2.6 policy discussion [1], we agreed to add the following
> > language to section 5.3 "Intermediate Certificates":
> >
> > > Intermediate certificates created after January 1, 2019:
> > >
> > >
> > > * MUST contain an EKU extension; and,
> > > * MUST NOT include the anyExtendedKeyUsage KeyPurposeId; and,
> > > * MUST NOT include both the id-kp-serverAuth and
> > > id-kp-emailProtection KeyPurposeIds in the same certificate.
> > >
> >
> > It has been pointed out to me that the very next paragraph of section
> > 5.3
> > states:
> >
> > These requirements include all cross-certified certificates which
> > chain to
> > > a certificate that is included in Mozilla’s CA Certificate Program.
> > >
> >
> > The term "cross-certified certificates" could refer to the actual
> > cross-certificate, or it could refer to intermediate certificates that
> > chain up to the cross certificate. In the case of a root that is being
> > cross-certified, the former interpretation effectively means that
> > distinct cross-certificates would be required for serverAuth and
> > emailProtection, as
> > follows:
> >
> > 1 - Root <-- Cross-certificate (EKU=emailProtection) <-- Intermediate
> > certificate (EKU=emailProtection) <-- leaf certificate (S/MIME)
> > 2 - Root <-- Cross-certificate (EKU=serverAuth) <-- Intermediate
> > certificate (EKU=serverAuth) <-- leaf certificate (SSL/TLS)
> >
> > Should our policy require cross-certificates to be constrained to
> > either serverAuth or emailProtection via EKU, or should this
> > requirement only apply to [all other] intermediate certificates?
> >
> > What is the correct interpretation of section 5.3 of the policy as
> > currently written?
> >
> > I would appreciate everyone's input on these questions.
> >
> > - Wayne
> >
> > [1]
> >
> > https://groups.google.com/d/msg/mozilla.dev.security.policy/QIweY3cHRyA/vbtnfJ4zCAAJ
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy

RE: Do We Now Require Separate Cross-certificates for SSL and S/MIME?

2018-07-13 Thread Tim Hollebeek via dev-security-policy
Yeah, I agree; I don’t think it was intended.  But now that I am aware of the
issue, I think the workaround of issuing separate cross-certificates per EKU is
actually a good thing for people to be doing.  Unless someone can point out why
it's bad ...

Might want to give people a little more time to plan and adapt to that change,
though, since I doubt anyone thought of it, and people need planning runway to
change their procedures if it is going to be interpreted this way.
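
To make that concrete, here is a rough sketch of the check a lint or issuance
pipeline could run against a newly created intermediate or cross-certificate,
under my reading of the policy language quoted below (Python's "cryptography"
library is just my choice for illustration):

    # Sketch: check an intermediate/cross-certificate against the Mozilla 5.3
    # rules discussed here: an EKU extension must be present, must not contain
    # anyExtendedKeyUsage, and must not contain both serverAuth and
    # emailProtection.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    def check_intermediate_eku(cert):
        try:
            eku = cert.extensions.get_extension_for_oid(
                ExtensionOID.EXTENDED_KEY_USAGE).value
        except x509.ExtensionNotFound:
            return ["missing EKU extension"]
        problems = []
        if ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE in eku:
            problems.append("contains anyExtendedKeyUsage")
        if (ExtendedKeyUsageOID.SERVER_AUTH in eku
                and ExtendedKeyUsageOID.EMAIL_PROTECTION in eku):
            problems.append("contains both serverAuth and emailProtection")
        return problems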

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Bruce via
> dev-security-policy
> Sent: Friday, July 13, 2018 10:17 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Do We Now Require Separate Cross-certificates for SSL and
> S/MIME?
> 
> Agreed that old cross-certificates will not be impacted, but this does impact
> new cross-certificates. I assume the work around would be to just issue more
> than one cross-certificate with different EKUs, but I don't assume that was
> intended by this policy change.
> 
> Bruce.
> 
> On Friday, July 13, 2018 at 8:02:00 AM UTC-4, Tim Hollebeek wrote:
> > Doesn't the "created after January 1, 2019" mean that there is no problem
> with old crosses?  It would just be a new policy for new crosses as of next 
> year?
> >
> > -Tim
> >
> > > -Original Message-
> > > From: dev-security-policy [mailto:dev-security-policy-
> > > bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> > > bounces+Bruce via
> > > dev-security-policy
> > > Sent: Thursday, July 12, 2018 10:28 AM
> > > To: mozilla-dev-security-pol...@lists.mozilla.org
> > > Subject: Re: Do We Now Require Separate Cross-certificates for SSL
> > > and S/MIME?
> > >
> > > Note the BRs define Cross Certificate as "a certificate that is used
> > > to establish a trust relationship between two Root CAs."
> > >
> > > I think the intent was to technically restrict subordinate CAs or
> > > rather CAs which are online and issue end entity certificates. My
> > > assumption is that we want to 1) not allow a subordinate CA to issue
> > > a certificate which it was not intended to issue and 2) mitigate the
> > > risk if an online subordinate CA has been compromised.
> > >
> > > Typically, if an old root cross-certifies a new root, the purpose is
> > > to give the new root browser/OS ubiquity. The new root may be used
> > > to support end entity certificates of many types (e.g., Server Auth,
> > > S/MIME, Client Auth, Code Signing, Document Signing, Time-stamping
> > > ...). If we restrict the cross- certificate, then this will limit
> > > the use of a new root. Also note that the new root is 1) not an
> > > issuing CA and 2) is offline, so the mitigation of technical restriction 
> > > may
> already be satisfied.
> > >
> > > Thanks, Bruce.
> > >
> > > On Tuesday, July 10, 2018 at 7:21:26 PM UTC-4, Wayne Thayer wrote:
> > > > During a 2.6 policy discussion [1], we agreed to add the following
> > > > language to section 5.3 "Intermediate Certificates":
> > > >
> > > > > Intermediate certificates created after January 1, 2019:
> > > > >
> > > > >
> > > > > * MUST contain an EKU extension; and,
> > > > > * MUST NOT include the anyExtendedKeyUsage KeyPurposeId; and,
> > > > > * MUST NOT include both the id-kp-serverAuth and
> > > > > id-kp-emailProtection KeyPurposeIds in the same certificate.
> > > > >
> > > >
> > > > It has been pointed out to me that the very next paragraph of
> > > > section
> > > > 5.3
> > > > states:
> > > >
> > > > These requirements include all cross-certified certificates which
> > > > chain to
> > > > > a certificate that is included in Mozilla’s CA Certificate Program.
> > > > >
> > > >
> > > > The term "cross-certified certificates" could refer to the actual
> > > > cross-certificate, or it could refer to intermediate certificates
> > > > that chain up to the cross certificate. In the case of a root that
> > > > is being cross-certified, the former interpretation effectively
> > > > means that distinct cross-certificates would be required for
> > > > serverAuth and emailProtection, as
> > > > follows:
> > > >
> > > > 1 - Root <-- Cross-certificate (EKU=emailProtection) <--
> > > > Intermediate certificate (EKU=emailProtection) <-- leaf
> > > > certificate (S/MIME)
> > > > 2 - Root <-- Cross-certificate (EKU=serverAuth) <-- Intermediate
> > > > certificate (EKU=serverAuth) <-- leaf certificate (SSL/TLS)
> > > >
> > > > Should our policy require cross-certificates to be constrained to
> > > > either serverAuth or emailProtection via EKU, or should this
> > > > requirement only apply to [all other] intermediate certificates?
> > > >
> > > > What is the correct interpretation of section 5.3 of the policy as
> > > > currently written?
> > > >
> > > > I would appreciate everyone's input on these questions.
> > > >
> > > > - Wayne
> > > >
> > > > [1]
> > > >
> > >
> 

RE: Allowing WebExtensions to Override Certificate Trust Decisions

2018-02-27 Thread Tim Hollebeek via dev-security-policy
Wow, this is a tough one.  I've wanted to write such an extension myself for
quite some time.  In fact, I probably would write one or more extensions, if 
Mozilla were to allow this, for a variety of use cases.
 
That said, such extensions are extremely dangerous, and users are just going
to accept any warning that might exist about using such an extension.  But I
don't think designing for the ignorant and clueless is wise.  You'll just find
better idiots.

I personally find persuasive the argument that an extension already has the
ability to do equivalently bad things.  The research group I used to work with
many years ago did lots of work with application extensions of all kinds, and
web extensions in particular were obscenely powerful because of the very
rich structure of the Document Object Model.

I'm sure (I hope!) things have been tightened up at least a little bit since 
then,
but I think in the presence of a malicious extension, the question of whether
it can affect the connection UI is rather moot.  Naïve users are going to lose
to a malicious extension every time, no matter what, and I seriously doubt
that even sophisticated users will have much of a chance in such scenarios,
whether the connection UI can be changed by the extension, or not.

It's probably useful to discuss this in conjunction with what controls Mozilla
has available in its ecosystem to combat malicious extensions in general,
as opposed to this particular case, which doesn't seem to be very special.
That more general question might lead to good principles that can be
applied in this specific situation.

Basically, I think it's a question of what the security model/policy for
extensions should be, how to balance the risks vs benefits of various pieces of
exposed functionality.  The tension between powerful, open APIs and
limited, but safer APIs has existed forever, and there isn't one point on
the spectrum that is optimal.

We recently had a case internally where some Office automation was not
possible due to some ad hoc restrictions imposed during the ILOVEYOU
era.  Addressing security risks piecemeal instead of holistically generally
results in a random set of arbitrary restrictions instead of a coherent
security model.  Figure out what the security policy and security model
is, and it will tell you whether allowing extensions to modify the connection
UI is ok.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Tuesday, February 27, 2018 9:21 AM
> To: mozilla-dev-security-policy 
> 
> Subject: Allowing WebExtensions to Override Certificate Trust Decisions
> 
> I am seeking input on this proposal:
> 
> Work is underway to allow Firefox add-ons to read certificate information via
> WebExtensions APIs [1]. It has also been proposed [2] that the WebExtensions
> APIs in Firefox be enhanced to allow a 3rd party add-on to change or ignore 
> the
> normal results of certificate validation.
> 
> This capability existed in the legacy Firefox extension system that was
> deprecated last year. It was used to implement stricter security mechanisms
> (e.g. CertPatrol) and to experiment with new mechanisms such as Certificate
> Transparency and DANE.
> 
> When used to override a certificate validation failure, this is a dangerous
> capability, and it’s not clear that requiring a user to grant permission to 
> the
> add-on is adequate protection. One solution that has been proposed [4] is to
> allow an add-on to affect the connection but not the certificate UI.
> In other words, when a validation failure is overridden, the page will load 
> but
> the nav bar will still display it as a failure.
> 
> I would appreciate your constructive feedback on this decision. Should this
> capability be added to the Firefox WebExtensions APIs?
> 
> - Wayne
> 
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1322748
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1435951
> [3] https://mail.mozilla.org/pipermail/dev-addons/2018-
> February/003629.html
> [4] https://mail.mozilla.org/pipermail/dev-addons/2018-
> February/003641.html
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: TunRootCA2 root inclusion request

2018-03-12 Thread Tim Hollebeek via dev-security-policy
My reaction was primarily based on the following suggestion:

"Generally speaking I would insist on the fact that for country CAs, some
kind 
of fast tracks should be established because the impact of time losing at
country level is highly expensive."

The answer is, and must be, no.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> taher.mestiri--- via dev-security-policy
> Sent: Monday, March 12, 2018 10:54 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: TunRootCA2 root inclusion request
> 
> Dear Tim,
> 
> Not sure your penguin-related example would make the picture sharper or
> ideas clearer.
> 
> I asked about fast tracks because it's taking long time to get things
processed
> related to the fact that all this is running by a community and I think it
can be
> great to brainstorm ways to handle maybe work overloads even through paid
> assessments for example.
> 
> I don't think it's worth to answer either your comments about special
> treatment, as no one has asked for it apart of speeding the process which
is not
> special treatment but respect for users and community, or about how
special
> we feel we are, etc.
> 
> I am not a member of the government, I consider myself member of an open
> global IT community, including but not limited to mozilla, that shares
same
> values of respect and mutual help. I find your answer a bit aggressive
but,
> anyway, maybe I was wrong about something that made you answer the way
> you did... That was not my intention.
> 
> I hope that you guys can give us a list of major corrections or
verifications to do
> within a certain limited time to give us the opportunity to get our CA
approved
> without restarting the whole process.
> I hope this is not considered as special treatment as maybe I don't know
what
> kind of support you provide in such cases.
> 
> At the end, I would reiterate that I shared personal opinions and I am not
> member of the government as this is a public open discussion and I don't
want
> that my opinion impacts negatively the decision taking.
> 
> Best,
> 
> Taher.
> 
> 
> 
> On Tuesday, 13 March 2018 03:06:40 UTC+1, Tim Hollebeek  wrote:
> > Nobody is blocking any country from advancing.  There are no Mozilla
> > rules that prevent any country from having the best CA on the planet.
> > If a bunch of penguins at McMurdo station run an awesome CA, I'll ask
> > some hard questions about how they meet the OCSP requirements with
> > their limited bandwidth, but if they have good answers, I'm fine with
> > internet security now being penguins all the way down.
> >
> > If you want your certificates to be accepted everywhere on the planet,
> > you need to follow the same rules as everyone else on the planet.  No
> > fast tracks or special rules for anyone, no matter how special they
> > feel they are.
> >
> > The same rules for everyone is the only sane route forward.
> > Governments often believe they deserve special treatment, and they may
> > have the ability to force that to be true within their own country,
> > but that doesn't make it a good idea for Mozilla.
> >
> > -Tim
> >
> > > -Original Message-
> > > From: dev-security-policy [mailto:dev-security-policy-
> > > bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> > > taher.mestiri--- via dev-security-policy
> > > Sent: Monday, March 12, 2018 7:31 PM
> > > To: mozilla-dev-security-pol...@lists.mozilla.org
> > > Subject: Re: TunRootCA2 root inclusion request
> > >
> > > Dear All,
> > >
> > > Thank you for your detailed description of your concerns with the
> > > Tunisian
> > CA.
> > >
> > > I have been one of those guys that developed IT communities for
> > > more than
> > 7
> > > years in Tunisia, starting by Tunandroid (Tunisian Android
> > > Community),
> > Google
> > > Developers Groups, organized the best Software Freedom Day in 2012,
> > > supported local Mozilla Community 2013-2014, GDG Country Champion in
> > > Tunisia 2012-2014 and represented the IT community in law projects
> > > to help developing the local ecosystem since 2013 and still.
> > >
> > > The reason why I am telling you this is to assure you that I
> > > perfectly
> > understand
> > > what a community is about: helping each others, making things better
> > > and sharing knowledge. Things have always been inclusive.
> > >
> > > The Tunisian national digital certification agency has been under
> > > pressure
> > for
> > > more than 3 years to have its CA certificates recognized by Mozilla
> > > and
> > they did
> > > all which is possible to do to have the best security standards when
> > > they
> > got
> > > audited and criticized and they have always been very reactive.
> > >
> > > I would highlight that we are speaking here about a national CA
> > > which is completely different from any other type of agencies. We
> > > are speaking
> > about
> > > blocking a 

RE: TunRootCA2 root inclusion request

2018-03-12 Thread Tim Hollebeek via dev-security-policy
Nobody is blocking any country from advancing.  There are no Mozilla rules 
that prevent any country from having the best CA on the planet.  If a bunch
of penguins at McMurdo station run an awesome CA, I'll ask some hard
questions about how they meet the OCSP requirements with their limited
bandwidth, but if they have good answers, I'm fine with internet security 
now being penguins all the way down.

If you want your certificates to be accepted everywhere on the planet, you
need to follow the same rules as everyone else on the planet.  No fast
tracks
or special rules for anyone, no matter how special they feel they are.

The same rules for everyone is the only sane route forward.  Governments
often believe they deserve special treatment, and they may have the ability
to force that to be true within their own country, but that doesn't make it
a good idea for Mozilla.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> taher.mestiri--- via dev-security-policy
> Sent: Monday, March 12, 2018 7:31 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: TunRootCA2 root inclusion request
> 
> Dear All,
> 
> Thank you for your detailed description of your concerns with the Tunisian
CA.
> 
> I have been one of those guys that developed IT communities for more than
7
> years in Tunisia, starting by Tunandroid (Tunisian Android Community),
Google
> Developers Groups, organized the best Software Freedom Day in 2012,
> supported local Mozilla Community 2013-2014, GDG Country Champion in
> Tunisia 2012-2014 and represented the IT community in law projects to help
> developing the local ecosystem since 2013 and still.
> 
> The reason why I am telling you this is to assure you that I perfectly
understand
> what a community is about: helping each others, making things better and
> sharing knowledge. Things have always been inclusive.
> 
> The Tunisian national digital certification agency has been under pressure
for
> more than 3 years to have its CA certificates recognized by Mozilla and
they did
> all which is possible to do to have the best security standards when they
got
> audited and criticized and they have always been very reactive.
> 
> I would highlight that we are speaking here about a national CA which is
> completely different from any other type of agencies. We are speaking
about
> blocking a whole country from advancing.
> 
> It's already unacceptable to have such long process for country CA, if we
have
> to fail and restart we have to fail quickly because time is very valuable.
We
> can't afford restarting the process if the Tunisian CA gets rejected but
instead I
> think anything can be corrected and updated this is how I.T. works.
> 
> Generally speaking I would insist on the fact that for country CAs, some
kind of
> fast tracks should be established because the impact of time losing at
country
> level is highly expensive.
> 
> I have no doubt about your support and hope you can help my country move
> forward and I am sure that the team in our national digital certification
agency
> will do its best to assure you about how seriously we are working to make
> users globally trusting our CA protected.
> 
> Best regards,
> 
> Taher Mestiri
> 
> 
> 
> On Monday, 12 March 2018 15:59:55 UTC+1, Ryan Sleevi  wrote:
> > These responses demonstrate why the request is troubling. They attempt
> > to paint it as "other people do it"
> >
> > The risk of removing an included CA must balance the ecosystem
> > disruption to those non-erroneous certs, while the risk to ecosystem
> > inclusion needs to balance both the aggregate harm to the ecosystem
> > (through lowered
> > standards) and the risk to the ecosystem of rejecting the request (of
> > which, until inclusion is accepted, is low)
> >
> > The pattern of issues - particularly for a new CA - is equally
problematic.
> > A CA, especially in light of the public discussions, should not be
> > having these issues in 2018, and yet, here we are.
> >
> > We are in agreement on the objective facts - namely, that there is a
> > prolonged pattern of issues - and the criteria - namely, that CAs
> > should adhere to the policy in requesting inclusion. A strict
> > adherence to those objectives would be to fully deny the request. It
> > sounds like where we disagree, then, is not in the objective facts and
> > criteria, but rather, where the evaluation of that leaves relative to
risk.
> >
> > The position I am advocating is that, even if these individual matters
> > might be seen as less risky, especially, as has been mentioned, this
> > CA is "only" intended for .tn for the most case, the existence of such
> > a pattern (and the means of acknowledging-but-not-resolving-completely
> > these issues) is indicative that there will continue to be serious
> > issues, and that the risk is not simply limited to .tn, but threatens
> > global Internet stability 

RE: Policy 2.6 Proposal: Require English Language Audit Reports

2018-04-04 Thread Tim Hollebeek via dev-security-policy
Call me crazy, but for this particular requirement, I think simple sentences
might
be better.

"The audit information MUST be publicly available.  An English version MUST
be provided.  The English version MUST be authoritative."

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Ryan
> Hurst via dev-security-policy
> Sent: Wednesday, April 4, 2018 7:19 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Policy 2.6 Proposal: Require English Language Audit Reports
> 
> 
> > An authoritative English language version of the publicly-available
> > audit information MUST be supplied by the Auditor.
> >
> > it would be helpful for auditors that issue report in languages other
> > than English to confirm that this won't create any issues.
> 
> That would address my concern.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Sigh. stripe.ian.sh back with EV certificate for Stripe, Inc of Kentucky....

2018-04-12 Thread Tim Hollebeek via dev-security-policy

> Independent of EV, the BRs require that a CA maintain a High Risk
Certificate
> Request policy such that certificate requests are scrubbed against an
internal
> database or other resources of the CAs discretion.

Unless you're Let's Encrypt, in which case you can opt out of this
requirement via a blog post.
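
Snark aside, the mechanics of the scrub itself are simple; a rough sketch in
Python of the kind of check the BRs describe (the term list and the matching
rule are assumptions, since the BRs leave both to the CA's discretion):

HIGH_RISK_TERMS = {"stripe", "paypal", "bank", "login"}

def is_high_risk(dns_name):
    # A flagged name gets routed to additional verification, not automatic refusal.
    labels = dns_name.lower().split(".")
    return any(term in label for label in labels for term in HIGH_RISK_TERMS)

assert is_high_risk("stripe.ian.sh")
assert not is_high_risk("example.com")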

-Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Policy 2.6 Proposal: Require disclosure of S/MIME validation practices

2018-03-26 Thread Tim Hollebeek via dev-security-policy
I like this one.

It will be very useful as a starting point if we finally get a CABF S/MIME
working
group, which is likely to happen.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Wayne
> Thayer via dev-security-policy
> Sent: Monday, March 26, 2018 2:50 PM
> To: mozilla-dev-security-policy

> Subject: Policy 2.6 Proposal: Require disclosure of S/MIME validation
practices
> 
> Mozilla policy section 2.2(2) requires validation of email addresses for
S/MIME
> certificates, but doesn't require disclosure of these practices as it does
for TLS
> certificates.
> 
> I propose adding the following language from 2.2 (3) (TLS) to 2.2(2)
> (S/MIME):
> 
> The CA's CP/CPS must clearly specify the procedure(s) that the CA employs
to
> perform this verification.
> 
> This is: https://github.com/mozilla/pkipolicy/issues/114
> 
> ---
> 
> This is a proposed update to Mozilla's root store policy for version 2.6.
Please
> keep discussion in this group rather than on GitHub. Silence is consent.
> 
> Policy 2.5 (current version):
> https://github.com/mozilla/pkipolicy/blob/2.5/rootstore/policy.md
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 825 days success and future progress!

2018-04-02 Thread Tim Hollebeek via dev-security-policy
18 months is not significantly different from 825 days.   So there's really
no benefit.

People have to stop wanting to constantly change the max validity period.
It's difficult enough to communicate these changes to consumers and
customers, and it really drives them nuts.  I can only imagine what a 
non-integral number of years will do to various companies' planning 
and budgeting processes.

I would propose, instead, a minimum one year moratorium on proposals
to change the max validity period after the previous change to the max
validity period goes into effect.  That would make much more sense.
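
For what it's worth, the 825-day ceiling itself is at least easy to check
mechanically; a quick sketch using the Python cryptography package (the
825-day figure is the BR limit, the parsing and rounding details are
illustrative):

import datetime

from cryptography import x509

MAX_VALIDITY = datetime.timedelta(days=825)

def violates_validity_cap(pem_bytes):
    # True if the certificate's validity period exceeds the 825-day ceiling.
    # The BR definition counts both endpoints; the exact rounding convention
    # is glossed over in this sketch.
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.not_valid_after - cert.not_valid_before > MAX_VALIDITY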

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Alex
> Gaynor via dev-security-policy
> Sent: Monday, April 2, 2018 1:07 PM
> To: MozPol 
> Subject: 825 days success and future progress!
> 
> Afternoon all!
> 
> A month ago a new BR rule went into effect, putting a maximum validity
period
> of 825 days on newly issued certificates.
> 
> Truthfully, I was expecting tons of CAs to screw up, forget to implement
it, or
> have no technical controls, and there to be tons of misissuance.
> To my delight, the results have been pretty good:
> https://crt.sh/?zlint=1081=2018-03-01 the majority of
> violations have been from the US Government (whose PKI isn't remotely BR
> compliant, nor trusted by Mozilla).
> 
> In light of this incredible success, I think it's time to begin a
discussion on what
> the next in this chain is. While obviously actually encoding this in the
BRs will
> be a function of the CABF, as mdsp is the premier public discussion forum
for
> the PKI, I wanted to start here.
> 
> I propose that our next target should be a max validity period of 18
months
> (~550 days), starting in ~6 months from now.
> 
> The value of shorter-lived certificates has been discussed many times, but
to
> rehash: They afford the ecosystem significantly more agility, by allowing
us to
> remove mistakes in shorter periods of time without breaking valid
certificates.
> It also encourages subscribers to adopt more automation, which further
helps
> with agility.
> 
> Alex
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 825 days success and future progress!

2018-04-02 Thread Tim Hollebeek via dev-security-policy
Yes, if we wanted to go to 13 months quickly, we would have gone directly 
there.  Getting to 13 months quickly is not a goal.  That’s not having it both 
ways, that's having an understanding of how the ecosystem actually works.

 

The majority of the CAB forum, and not just CAs, but also many browsers, 
believe this sort of change so quickly after the most recent change, which came 
very quickly after the change before that, would be unnecessarily disruptive 
and unhelpful to the ecosystem.

 

-Tim

 

From: Alex Gaynor [mailto:agay...@mozilla.com] 
Sent: Monday, April 2, 2018 2:51 PM
To: Tim Hollebeek 
Cc: MozPol 
Subject: Re: 825 days success and future progress!

 

Hi Tim,

 

I'd have suggested an even shorter period, say 13 months, except I anticipated 
CAs would object that it was too great a change too suddenly, precisely as they 
did when this subject was last discussed!

 

While I appreciate that changing BRs can be difficult for customer 
communications, the fact that we are doing this in multiple steps instead of in 
one fell swoop is a result of CAs saying such a big leap was too disruptive. 
Frankly, you can't have it both ways.

 

Alex

 

On Mon, Apr 2, 2018 at 2:28 PM, Tim Hollebeek  > wrote:

18 months is not significantly different from 825 days.   So there's really
no benefit.

People have to stop wanting to constantly change the max validity period.
It's difficult enough to communicate these changes to consumers and
customers, and it really drives them nuts.  I can only imagine what a
non-integral number of years will do to various companies' planning
and budgeting processes.

I would propose, instead, a minimum one year moratorium on proposals
to change the max validity period after the previous change to the max
validity period goes into effect.  That would make much more sense.

-Tim


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy- 
>  
> bounces+tim.hollebeek=digicert@lists.mozilla.org 
>  ] On Behalf Of Alex
> Gaynor via dev-security-policy
> Sent: Monday, April 2, 2018 1:07 PM
> To: MozPol   >
> Subject: 825 days success and future progress!
>
> Afternoon all!
>
> A month ago a new BR rule went into effect, putting a maximum validity
period
> of 825 days on newly issued certificates.
>
> Truthfully, I was expecting tons of CAs to screw up, forget to implement
it, or
> have no technical controls, and there to be tons of misissuance.
> To my delight, the results have been pretty good:
> https://crt.sh/?zlint=1081 
>  =2018-03-01 
> the majority of
> violations have been from the US Government (whose PKI isn't remotely BR
> compliant, nor trusted by Mozilla).
>
> In light of this incredible success, I think it's time to begin a
discussion on what
> the next in this chain is. While obviously actually encoding this in the
BRs will
> be a function of the CABF, as mdsp is the premier public discussion forum
for
> the PKI, I wanted to start here.
>
> I propose that our next target should be a max validity period of 18
months
> (~550 days), starting in ~6 months from now.
>
> The value of shorter-lived certificates has been discussed many times, but
to
> rehash: They afford the ecosystem significantly more agility, by allowing
us to
> remove mistakes in shorter periods of time without breaking valid
certificates.
> It also encourages subscribers to adopt more automation, which further
helps
> with agility.
>
> Alex

> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org 
>  
> https://lists.mozilla.org/listinfo/dev-security-policy

 



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 825 days success and future progress!

2018-04-02 Thread Tim Hollebeek via dev-security-policy
Ryan, I’ve warned you several times, do not put words in my mouth.  I support 
the status quo, for now.  We can talk about future changes in the future.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Monday, April 2, 2018 2:58 PM
To: Tim Hollebeek <tim.holleb...@digicert.com>
Cc: Alex Gaynor <agay...@mozilla.com>; MozPol 
<mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: 825 days success and future progress!

 

 

 

On Mon, Apr 2, 2018 at 2:28 PM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

18 months is not significantly different from 825 days.   So there's really
no benefit.

 

So it sounds like you're supportive of 13 months, then, so that we arrive at an 
effective and meaningful maximum.

 

People have to stop wanting to constantly change the max validity period.

 

This is an entirely unproductive line of reasoning. The only reason that we're 
at a point of discussing incremental approaches seems to be because CAs 
resisted making meaningful steps all at once, and instead preferred a phase-in, 
like SHA-1. Proposals were put forward to make it a significant and meaningful 
difference, and there appeared to be wide browser support in spirit - and the 
only question being about the timing of the phase in. Thus, it seems reasonable 
to begin discussing how to approach that - and it doesn't seem productive to 
suggest the community should not discuss this.

 

It's difficult enough to communicate these changes to consumers and
customers, and it really drives them nuts.  I can only imagine what a
non-integral number of years will do to various companies' planning
and budgeting processes.

 

So this argues in favor of 13 months, rather than 18 months. The communication 
difficulties are not expanded upon here, but it seems that if CAs spent more 
time investing in interoperable automation, these communication issues would 
evaporate, because they'd no longer be an issue.

 

I would propose, instead, a minimum one year moratorium on proposals
to change the max validity period after the previous change to the max
validity period goes into effect.  That would make much more sense.

 

I'm sure to a CA it makes sense, especially if the argument is that change is 
hard for them to do. Yet, at the same time, attempts to propose moratoriums on 
misissuance by CAs have consistently failed. A moratorium on discussions on how 
to reduce risk only seems valuable if it also imposed a moratorium on trust 
for those CAs that have issues. Since I'm sure that's not desirable for CAs, I 
hope we can agree that discussions of how to reduce the risk of such issues is 
highly relevant and necessary to resolve.



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Policy 2.6 Proposal: Update domain validation requirements

2018-03-20 Thread Tim Hollebeek via dev-security-policy
The language you quoted from me is a bit imprecise, my apologies.

I was trying to get CAs to disclose previously undisclosed uses of (4).  There are
disclosed uses of (4), including from DigiCert, that haven’t made it into the BR
methods yet, because in the past year, we have failed to pass Jeremy’s IP validation
ballot (https://cabforum.org/pipermail/validation/2017-February/000477.html).
I was aware of those at the time I wrote what you quoted, but thought they’d be
in the BRs by now … there was a time when it looked like that ballot was close
to being finalized.

The point is that there are IP methods that are not in the BRs that currently fall
under (4) that we’ve been trying to get into the BRs for a while now.  DigiCert
would like to be able to continue using the IP validation methods we disclosed
last year, unless there is a reason why they are worse than other disclosed or
undisclosed methods.
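
To make this concrete, one approach along the lines of what the draft ballot
contemplates is the IP-address analogue of an agreed-upon change to a website;
a rough Python sketch (the path, token format, and plain-HTTP transport are
illustrative assumptions, not DigiCert's disclosed procedure):

import secrets

import requests

def confirm_ip_control(ip, expected_token):
    # The applicant places a CA-supplied random value at a well-known path
    # reachable over the IP address itself; the CA then fetches it back.
    url = f"http://{ip}/.well-known/pki-validation/token.txt"
    resp = requests.get(url, timeout=10)
    return resp.ok and expected_token in resp.text

token = secrets.token_hex(16)  # delivered to the applicant out of band
# confirm_ip_control("192.0.2.10", token)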

 

-Tim

 

From: Wayne Thayer [mailto:wtha...@mozilla.com] 
Sent: Tuesday, March 20, 2018 5:08 PM
To: Tim Hollebeek 
Cc: mozilla-dev-security-policy 
Subject: Re: Policy 2.6 Proposal: Update domain validation requirements

 

Tim,

 

On Tue, Mar 20, 2018 at 9:57 AM, Tim Hollebeek  > wrote:


> * Add a new bullet on IP Address validation that forbids the use of BR
> 3.2.2.5(4) (“any other method”) and requires disclosure of IP Address
> validation processes in the CA’s CP/CPS.

This is a bit premature.  Most CAs' IP validation procedures still fall under
any other method, and the draft ballot that we've been trying to pass
for a year or so is not done yet (I was writing it when the Validation
Summit started taking over my life...)  There's a good chance we will
get a ballot passed on this issue this summer, but there's also a good
chance that work on improving the non-IP validation methods will be
prioritized above it.

This seems to contradict your comment in issue 116 [1]:

 

I think the solution to Ryan's issue is to remove 3.2.2.5 (4). The VWG is 
currently discussing changes to 3.2.2.5 (in order to remove 3.2.2.5 (4)), and 
we haven't heard of any CA that is using it, though we should check the smaller 
ones.

It's possible 3.2.2.5 (4) could be removed with an aggressive timeline if it's 
really true no one is using it.

It would be great to hear from CAs on the impact they would feel from Mozilla 
banning 3.2.2.5(4) prior to passage of the VWG ballot you mentioned.

 

- Wayne

 

[1] https://github.com/mozilla/pkipolicy/issues/116



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Google OCSP service down

2018-02-25 Thread Tim Hollebeek via dev-security-policy
Ryan,

Wayne and I have been discussing making various improvements to 1.5.2
mandatory for all CAs.  I've made a few improvements to DigiCert's CPSs in
this area, but things probably still could be better.  There will probably be
a CA/B ballot in this area soon.

DigiCert's 1.5.2 has our support email address, and our Certificate Problem 
Report email (which I recently added).  That doesn't really cover everything 
(yet).

It looks like GTS 1.5.2 splits things into security (including CPRs) and
non-security requests.

I didn't chase down any other 1.5.2's yet, but it'd be interesting to hear what
other CAs have here.  I suspect most only have one address for everything.

Something to keep in mind once the CA/B thread shows up.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Ryan
> Hurst via dev-security-policy
> Sent: Wednesday, February 21, 2018 9:53 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Google OCSP service down
> 
> I wanted to follow up with our findings and a summary of this issue for the
> community.
> 
> Bellow you will see a detail on what happened and how we resolved the issue,
> hopefully this will help explain what hapened and potentially others not
> encounter a similar issue.
> 
> Summary
> ---
> January 19th, at 08:40 UTC, a code push to improve OCSP generation for a
> subset of the Google operated Certificate Authorities was initiated. The 
> change
> was related to the packaging of generated OCSP responses. The first time this
> change was invoked in production was January 19th at 16:40 UTC.
> 
> NOTE: The publication of new revocation information to all geographies can
> take up to 6 hours to propagate. Additionally, clients and middle-boxes
> commonly implement caching behavior. This results in a large window where
> clients may have begun to observe the outage.
> 
> NOTE: Most modern web browsers “soft-fail” in response to OCSP server
> availability issues, masking outages. Firefox, however, supports an advanced
> option that allows users to opt-in to “hard-fail” behavior for revocation
> checking. An unknown percentage of Firefox users enable this setting. We
> believe most users who were impacted by the outage were these Firefox users.
> 
> About 9 hours after the deployment of the change began (2018-01-20 01:36
> UTC) a user on Twitter mentions that they were having problems with their
> hard-fail OCSP checking configuration in Firefox when visiting Google
> properties. This tweet and the few that followed during the outage period were
> not noticed by any Google employees until after the incident’s post-mortem
> investigation had begun.
> 
> About 1 day and 22 hours after the push was initiated (2018-01-21 15:07 UTC),
> a user posted a message to the mozilla.dev.security.policy mailing list where
> they mention they too are having problems with their hard-fail configuration 
> in
> Firefox when visiting Google properties.
> 
> About two days after the push was initiated, a Google employee discovered the
> post and opened a ticket (2018-01-21 16:10 UTC). This triggered the
> remediation procedures, which began in under an hour.
> 
> The issue was resolved about 2 days and 6 hours from the time it was
> introduced (2018-01-21 22:56 UTC). Once Google became aware of the issue, it
> took 1 hour and 55 minutes to resolve the issue, and an additional 4 hours and
> 51 minutes for the fix to be completely deployed.
> 
> No customer reports regarding this issue were sent to the notification
> addresses listed in Google's CPSs or on the repository websites for the 
> duration
> of the outage. This extended the duration of the outage.
> 
> Background
> --
> Google's OCSP Infrastructure works by generating OCSP responses in batches,
> with each batch being made up of the certificates issued by an individual CA.
> 
> In the case of GIAG2, this batch is produced in chunks of certificates issued 
> in
> the last 370 days. For each chunk, the GIAG2 CA is asked to produce the
> corresponding OCSP responses, the results of which are placed into a separate
> .tar file.
> 
> The issuer of GIAG2 has chosen to issue new certificates to GIAG2 
> periodically,
> as a result GIAG2 has multiple certificates. Two of these certificates no 
> longer
> have unexpired certificates associated with them. As a result, and as 
> expected,
> the CA does not produce responses for the corresponding periods.
> 
> All .tar files produced during this process are then concatenated with the -
> concatenate command in GNU tar. This produces a single .tar file containing 
> all
> of the OCSP responses for the given Certificate Authority, then this .tar 
> file is
> distributed to our global CDN infrastructure for serving.
> 
> A change was made in how we batch these responses, specifically instead of
> outputting many .tar files within a batch, a concatenation was of all tar 
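
The batching flow described above reduces to something like the following
Python sketch (paths, helper names, and the closing comment are illustrative
assumptions, not Google's actual tooling):

import subprocess
from pathlib import Path

def concatenate_batches(chunk_tars, output):
    # Merge the per-chunk OCSP response archives into one archive per issuing
    # CA, mirroring the `tar --concatenate` step described in the post-mortem.
    if not chunk_tars:
        raise ValueError("no OCSP response batches were produced for this CA")
    output.write_bytes(Path(chunk_tars[0]).read_bytes())
    for tar_path in chunk_tars[1:]:
        subprocess.run(
            ["tar", "--concatenate", f"--file={output}", str(tar_path)],
            check=True,
        )
    return output

# Verifying that the merged archive holds a response for every unexpired
# certificate before pushing it to the CDN is the kind of safeguard this
# incident argues for.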

RE: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure

2018-04-26 Thread Tim Hollebeek via dev-security-policy

> > which is why in the near future we can hopefully use RDAP over TLS
> > (RFC
> > 7481) instead of WHOIS, and of course since the near past, DNSSEC :)
> 
> I agree moving away from WHOIS to RDAP over TLS is a good low hanging fruit
> mitigator once it is viable.

My opinion is it is viable now, and the time to transition to optionally 
authenticated RDAP over TLS is now.  It solves pretty much all the problems we 
are currently having in a straightforward, standards-based way.  
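
For anyone who has not looked at it yet, RDAP is just JSON over HTTPS; a
minimal lookup in Python looks roughly like this (the redirector URL is an
illustrative assumption; in practice you would resolve the registry's base
URL from the IANA bootstrap file):

import requests

def rdap_domain(name):
    # RFC 7480-7484: structured registration data, retrieved over TLS.
    resp = requests.get(
        f"https://rdap.org/domain/{name}",
        headers={"Accept": "application/rdap+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

record = rdap_domain("example.com")
# Registrant contacts and status come back as structured JSON rather than
# free-form WHOIS text, which is the point of the transition.
print(record.get("status"), [e.get("roles") for e in record.get("entities", [])])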

The only opposition I've seen comes from people who seem to want to promote 
alternative models that destroy the WHOIS ecosystem, leading to proprietary 
distribution and monetization of WHOIS data.

I can see why that is attractive to some people, but I don’t think it's best 
for everyone.

I also agree that DNSSEC is a lost cause, though I understand why Paul doesn't 
want to give up.  I've wanted to see it succeed for basically my entire 
career, but it seems to be making about as much progress as fusion energy.

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SHA-1 exception history

2018-09-27 Thread Tim Hollebeek via dev-security-policy
Speaking for myself ...

My personal impression is that by the time they are brought up here, far too
many issues have easily predicted and pre-determined outcomes.

I know most of the security and key management people for the payment
industry very well [1], and they're good people.  The discussions are
generally one or two orders of magnitude more sophisticated (and far more
polite) than what happens in the web ecosystem.  Yes, there's a lot of
silliness in payments, but that's what happens when you try to run and
manage a low cost/high volume payment system with complex interconnected
audit requirements from multiple SDOs, implemented by hundreds of companies
with their own unique perspectives at global scale.

They did not deserve the treatment they received.  Perhaps things would have
gone better if Symantec wasn't involved, but I was shocked at how the
situation was handled.

I attempted to speak up a few times in various fora but it was pretty clear
that anything that wasn't security posturing wasn't going to be listened to,
and finding a practical solution was not on the agenda.  It was pretty clear
sitting in the room that certain persons had already made up their minds
before they even understood what a payment terminal was, how they are
managed, and what the costs and risks were for each potential alternative.

-Tim

[1] whenever you swipe a payment card, the card number is likely encrypted
with keys from an algorithm that I was first to implement: 

https://x9.org/x9news/asc-x9-releases-standard-ensuring-security-symmetric-key-management-retail-financial-transactions-aes-dukpt-algorithm/

https://x9.org/wp-content/uploads/2018/03/X9.24-3-2017-Python-Source-20180129-1.pdf

> -Original Message-
> From: dev-security-policy 
On
> Behalf Of Nick Lamb via dev-security-policy
> Sent: Thursday, September 27, 2018 5:34 AM
> To: dev-security-policy@lists.mozilla.org
> Cc: Nick Lamb 
> Subject: Re: Google Trust Services Root Inclusion Request
> 
> On Wed, 26 Sep 2018 23:02:45 +0100
> Nick Lamb via dev-security-policy
>  wrote:
> > Thinking back to, for example, TSYS, my impression was that my post on
> > the Moral Hazard from granting this exception had at least as much
> > impact as you could expect for any participant. Mozilla declined to
> > authorise the (inevitable, to such an extent I pointed out that it
> > would happen months before it did) request for yet another exception
> > when TSYS asked again.
> 
> Correction: The incident I'm thinking of is First Data, not TSYS, a
different SHA-
> 1 exception.
> 
> Nick.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Concerns with Dun & Bradstreet as a QIIS

2018-09-27 Thread Tim Hollebeek via dev-security-policy

> The question and concern about QIIS is extremely reasonable. As discussed in
> past CA/Browser Forum activities, some CAs have extended the definition to
> treat Google Maps as a QIIS (it is not), as well as third-party WHOIS services
> (they’re not; that’s using a DTP).

It's worth noting that the BRs currently say "WHOIS: Information retrieved 
directly from the Domain Name Registrar or registry operator ..." so I'm not 
sure using a DTP is actually permitted.  Though I don't think we've discussed 
that point since the language was added.

> In the discussions, I proposed a comprehensive set of reforms that would
> wholly remedy this issue. Given that the objective of OV and EV certificates 
> is
> nominally to establish a legal identity, and the legal identity is derived 
> from
> State power of recognition, I proposed that only QGIS be recognized for such
> information. This wholly resolves differences in interpretation on suitable 
> QIIS.

We agree with this.

-Tim


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: SHA-1 exception history

2018-09-27 Thread Tim Hollebeek via dev-security-policy

> On Thu, 27 Sep 2018 14:52:27 +
> Tim Hollebeek via dev-security-policy
>  wrote:
> 
> > My personal impression is that by the time they are brought up here,
> > far too many issues have easily predicted and pre-determined outcomes.
> 
> It is probably true that many issues have predictable outcomes but I think
> predictability is on the whole desirable. Are there in fact CA
representatives
> who'd rather they had no idea how Mozilla would react when there's an
issue?

Asserting that the only alternative to pre-determined outcomes is  "no idea"
is a straw man.

If there is a lack of predictability because I can't predict the results of
an open and honest deliberation among a diverse community, then yes, I want
that.  I would actually love to see more widespread participation by
community members.  Different perspectives are useful.

> > I know most of the security and key management people for the payment
> > industry very well [1], and they're good people.
> 
> I mean this not sarcastically at all, but almost everybody is "good
people".
> That's just not enough. I would like to think that I'm "good people" and
yet it
> certainly would not be a good idea for the Mozilla CA root trust programme
to
> trust some CA root I have on this PC.

Almost everyone is certainly not "good people" in the sense that I meant.
Security is a difficult subject, and people who understand it well are rare.
It unfortunately also tends to attract the personality type that is keen on
finding faults and inherently suspicious of the motivations of others.  I
have a great deal of respect for many of the people I've met who have both a
profound understanding of technical issues and the ability to make sound
decisions.

If you read the entire long historical list of SHA-1 exchanges, you'll find
a profound lack of respect for the opinions of others in many places.  That
tends to cause people to not participate, in much the same way as it caused
me to slowly back away from the conversation at the time.

> > I attempted to speak up a few times in various fora but it was pretty
> > clear that anything that wasn't security posturing wasn't going to be
> > listened to, and finding a practical solution was not on the agenda.
> > It was pretty clear sitting in the room that certain persons had
> > already made up their minds before they even understood what a payment
> > terminal was, how they are managed, and what the costs and risks were
> > for each potential alternative.
> 
> If we're being frank, my impression is that First Data lied in their
submission to
> us and if it came solely to my discretion that would be enough to have
justified
> telling them "No" on its own the first time.

Honestly, First Data is not my favorite company.  I tend to disagree with
their representatives more often than not.  And I'm not asserting they or
others should have gotten what they wanted, only that the level of discourse
was not where it should have been.  This is perhaps less obvious to those
who only followed the discussions on the list, and did not participate on
the calls and in person.

I actually think the tone on m.d.s-p has improved quite a bit in the last
year or two.  It's one of the reasons I participate here from time to time,
where previously I rarely if ever did.  I would like to see it continue
moving in the right direction.

> As to understanding what a payment terminal is, how about "The cheapest
> possible device that passes the bare minimum of tests to scrape through" ?
Is
> that a good characterisation?

It is not.  Such extreme cynicism is generally a symptom of a lack of
objectivity.

-Tim



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: What does "No Stipulation" mean, and when is it OK to use it in CP/CPS

2018-10-11 Thread Tim Hollebeek via dev-security-policy
I think "Not applicable" would be superior to "No stipulation", when 
appropriate.

"3.2.2.5. No IP address certificates are issued under this CPS." is even 
clearer.

I haven't looked into the implications of this, but perhaps it would be worth 
considering not allowing "No stipulation" in CPSs for sections that are not
marked "No stipulation" in the Baseline Requirements.

-Tim

> -Original Message-
> From: dev-security-policy 
> On Behalf Of Jakob Bohm via dev-security-policy
> Sent: Wednesday, October 10, 2018 6:09 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Cc: Jakob Bohm 
> Subject: Re: What does "No Stipulation" mean, and when is it OK to use it in
> CP/CPS
> 
> On 09/10/2018 23:15, Wayne Thayer wrote:
> > On Tue, Oct 9, 2018 at 12:48 PM Kathleen Wilson via
> > dev-security-policy < dev-security-policy@lists.mozilla.org> wrote:
> >
> >> Oh, so rather than trying to define what "No Stipulation" means and
> >> when it can be used, we could take a different approach -- list the
> >> sections that cannot contain "No Stipulation" in the CPS.
> >>
> > This approach implies that we are adopting the RFC 3647 definition of
> > "no stipulation" meaning "we can do whatever we want", not the meaning
> > of "we don't do this" that I believe is intended in the examples your
> > provided. If we take this approach, we should specify those section
> > that **must be
> > present** and cannot contain "no stipulation" (or similar permissive
> > language). Omitting a section defined in RFC 3647 is equivalent to "no
> > stipulation".
> >
> 
> In formulating Mozilla Policy one should also consider the case that a section
> is rendered inapplicable by the contents of another section.
> 
> For example if another CP/CPS section clearly states that the certificates 
> will
> not contain IP addresses as names (alternative or otherwise), then it would
> be OK to not state how IP addresses are validated.  (Such a section might for
> example state that certificates only contain DNS names).
> 
> As second example, if another CP/CPS section enumerates the validation
> methods used, then it would be OK to omit sections about methods not in
> that enumeration.
> 
> As a third example, the parent section of the section listing BR methods
> could state (in various ways) that only the methods explicitly listed will be
> used.  This particular notation could avoid a CP/CPS change to a "This
> method is not used" section when the corresponding section in the BR is
> changed, added or removed.
> 
> >
> >> On 10/9/18 12:31 PM, Brown, Wendy (10421) wrote:
> >>> Tim  -
> >>>
> >>> I think that statement leaves out the next paragraph of RFC3647:
> >>> In a CP, it is possible to leave certain components, subcomponents,
> >> and/or elements unspecified, and to stipulate that the required
> >> information will be indicated in a policy qualifier, or the document
> >> to which a policy qualifier points. Such CPs can be considered
> >> parameterized definitions. The set of provisions should reference or
> >> define the required policy qualifier types and should specify any 
> >> applicable
> default values.
> >>>
> >>> I think normally the policy qualifier points to a CPS, but it might
> >>> be
> >> some other document.
> >>> But in any case if both CP & CPS say "No stipulation" in regards to
> >> something that Mozilla cares about like what validation methods are
> >> supported for TLS certificates, then it is very hard to evaluate that
> >> set of "disclosed business practices" to determine if the CA operates
> >> in accord with the BRs or Mozilla's policy.
> >>> I think there may be some sections of a CP/CPS that are less
> >>> critical,
> >> but in terms of any section that is critical to the evaluation for
> >> inclusion in a particular trust store, I would expect one of the 2
> >> documents to clearly state the operational practices of the CA rather
> >> than just stating "No Stipulation" in both CP & CPS, unless the
> >> Policy Qualifier in issued certificates points to some other document
> >> that contains that information.
> >>>
> >>> Again - just my personal opinion.
> >>>
> 
> 
> 
> Enjoy
> 
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.
> https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10 This
> public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for 
