On Friday, October 14, 2016 at 3:01:16 AM UTC-7, Gervase Markham wrote:
> There are indeed more of these than I remember or knew about. Perhaps it
> would have been sensible to start a StartCom issues list earlier. In my
> defence, investigating one CA takes up a lot of time on its own, let
> alone two :-)

10 minutes with "site:bugzilla.mozilla.org StartCom"

Which was only possible thanks to the many Mozilla contributors who, when they saw 
something improper, filed bugs. I just want to thank all of the contributors who 
have done so, and who I hope will continue to do so.

> On the other hand, this happened 8 years ago. I'd be interested in your
> comments, Ryan, on whether you think it's appropriate for us to have
> some sort of informal "statute of limitations". That is to say, in
> earlier messages you were worried about favouring incumbents. But if
> there is no such statute, doesn't that disadvantage incumbents? No code
> is bug-free, and so a large CA with many products is going to have
> occasional troubles over the years. If they then have a larger issue, is
> it reasonable to go trawling back 10 years through the archives and pull
> out every problem there's ever been? This is a genuine question, not a
> rhetorical one.

Right, I had the same question when investigating. We know Eddy's position on 
it ('It was in the past, get over it' - if I may so aggressively strawman). I 
suppose a core question is: What is the goal of the root program? Should there 
be a higher bar for removing CAs than adding them? Does trust increase or 
decrease over time?

That is, I can totally see the argument that frequently adding new CAs is bad, 
because new CAs may not have the organizational or operational experience to 
meet the high bar expected of CAs. We frequently see this with addition 
requests - CAs well below what the community standard might be. In this model, 
the more time passes, the more institutional knowledge and expertise the 
organization develops, and so users are better protected by keeping CAs in 
longer (and allowing them to remediate).

Another view is that we want a consistent bar, and that means CAs that fall 
short of it should be culled, regardless of the CA's age. This model suggests 
that a longitudinal analysis of a CA's operation is necessary - that we must 
never forget past mistakes when evaluating current mistakes or predicting 
future ones.

We can hopefully assume that CA incidents are rare, at least for an individual 
CA, and so we may never get sufficient samples to accurately predict future 
behaviour. Analyzing a longer period of time gives us data to establish a 
pattern and trend, but also disadvantages those who go through 'growing pains' 
on their way to becoming a mature CA.

I don't have a good answer to your question in the general case, for reasons 
hopefully explained above. But towards the question of predicting 
responsiveness to incidents, and how they'll be treated, I think the longer 
analysis of StartCom is useful for the discussion. My own gut is that the 
ecosystem is better served if we look at the whole of a CA's operation. My view 
is that the theory that experience develops over time is not borne out in 
practice - what we see instead is the same players from existing CAs shifting 
around to different organizations.

> All the WoSign issues I documented were from the past two years. Many of the
> StartCom issues you list are 2.5 - 3.5 years old. That may not be long
> enough, but how long is?

Well, for the past year it's been run by WoSign, so 2.5 - 3.5 years really 
reflects the past 1.5 to 2.5 years of independent operation, right?