Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-25 Thread Phillip Hallam-Baker via dev-security-policy
On Fri, Oct 25, 2019 at 4:21 AM James Burton  wrote:

> Extended validation was introduced at a time when almost everyone browsed
> the internet using low/medium-resolution, large-screen devices that provided
> the room for an extended validation style visual security indicator.
> Everything has moved on and purchases are made on small-screen devices that
> have no room to support an extended validation style visual security
> indicator. Apple supported an extended validation style visual security
> indicator in the iOS browser and it failed [1] [2].
>
> It's right that we are removing the extended validation style visual
> security indicator from browsers because of a) the above statement b)
> normal users don't understand extended validation style visual security
> indicators c) the inconsistencies of extended validation style visual
> security indicator between browsers d) users can't tell who is real or not
> based on extended validation style visual security indicators as company
> names sometimes don't match the actual site name.
>
> [1]  https://www.typewritten.net/writer/ev-phishing
> [2]  https://stripe.ian.sh
>

The original proposal that led to EV was actually to validate company
logos and present them as logotypes.

There was a ballot proposed here to bar any attempt to even experiment with
logotype. This was withdrawn after I pointed out to Mozilla staff that
there was an obvious antitrust concern in using the threat of withdrawing
roots from a browser with 5% market share to suppress deployment of any
feature.

Now, for the record, that is what a threat looks like: we will destroy your
company if you do not comply with our demands. Asking someone to contact
the Mozilla or Google lawyers because they really need to know what one of
their employees is doing is not.

Again, the brief here is to provide security signals that allow users to
protect themselves.


-- 
Website: http://hallambaker.com/


Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-24 Thread Phillip Hallam-Baker via dev-security-policy
On Thu, Oct 24, 2019 at 9:54 PM Peter Gutmann via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Paul Walsh via dev-security-policy 
> writes:
>
> >we conducted the same research with 85,000 active users over a period of
> >12 months
>
> As I've already pointed out weeks ago when you first raised this, your
> marketing department conducted a survey of EV marketing effectiveness.  If
> you have a refereed, peer-reviewed study published at a conference or in
> an academic journal, please reference it, not a marketing survey
> masquerading as a "study".


There are certainly problems with doing usability research. But right now
there is very little funding for academic studies that are worth reading.

You didn't criticize the paper with 27 subjects split into three groups
from 2007. Nor did you criticize the fact that the conclusions were totally
misrepresented.

So it doesn't appear to be spurious research, or the misrepresentation of
results, that you have a problem with. What you seem to have a problem with
is the conclusions.

At least with 85,000 subjects there is some chance that Paul himself has
found out something of interest. That doesn't mean that we have to accept
his conclusions as correct or incontrovertible, but I think it does mean
that he deserves to be treated with respect.

I am not at all happy with the way this discussion has gone. It seems that,
contrary to the claims of openness, Mozilla has a groupthink problem. For
some reason it is entirely acceptable to attack CAs for any reason and with
the flimsiest of evidence.


Re: Firefox removes UI for site identity

2019-10-24 Thread Phillip Hallam-Baker via dev-security-policy
On Thu, Oct 24, 2019 at 5:31 PM Paul Walsh  wrote:

> So, the next time a person says “EV is broken” or “website identity can’t
> work” please think about what I just said and imagine actual browser
> designers and developers who were/are responsible for that work. They were
> never given a chance to get it right.
>

The point I wanted to bring to people's attention here is that the world
has moved on since then. At the present moment we are engaged in political
crises on both sides of the Atlantic. Those are the particular issues on
which I have been focused, and those are the issues that I expect will be
my primary concern for a few months longer.

But one way or another, those issues will eventually be resolved. And as
soon as that happens, the blamestorming will begin. And once they have run
out of the guilty, they will be going after the innocent (as, of course,
will the people who were also guilty, hoping to deflect attention from
their own culpability). And who else is going to be left to blame within
reach apart from 'Big Tech'?

The security usability approach of the 1990s doesn't work any more. We
don't need people to tell us what doesn't work, we need people who are
committed to making it work.

The brief here is to provide people with a way to be safe on the Internet
that they can actually use. That includes providing them with a means of
telling a fake site from a real one. It also includes the entirely separate
problem of how to prevent phishing-type attacks.


And one of the things we need to start doing is being honest about what the
research actually shows. From the paper cited by Julien:

" The participants who were asked to read the Internet Explorer help file
were more likely to classify both real and fake sites as legitimate
whenever the phishing warning did not appear."

This is actually the exact opposite of the misleading impression he gave of
the research.

The green bar is not enough; I never expected it to be. To be successful,
the green bar required the browser providers to provide a consistent UI
that users could rely on, and to explain what it means. It seems that every
day I am turning on a device or starting an app only to be told it has
updated and that the vendor wants to tell me about some new feature. Why is
it that only the features the providers want to promote get that treatment?
Why not also use it to tell people how to be safe?


Re: Firefox removes UI for site identity

2019-10-24 Thread Phillip Hallam-Baker via dev-security-policy
Eric,

I am not going to be gaslighted here.

Just what was your email supposed to do other than "suppressing dialogue
within this community"?

I was making no threat, but if I were still working for a CA, I would
certainly get the impression that you were threatening me.

The bullying and unprofessional behavior of a certain individual is one of
the reasons I have stopped engaging in CABForum, an organization I
co-founded. My contributions to this industry began in 1992 when I began
working on the Web with Tim Berners-Lee at CERN.


The fact that employees who work on the largest browser also participate in
the technical and policy discussions of the third largest browser, which is
also the only multi-party competitor, should be a serious concern to Google
and Mozilla. It is a clear antitrust liability to both concerns. People
here might think that convenient, but it is not the sort of arrangement I,
for one, would like to have to defend in Congressional hearings.

As I said, I do not make threats. My concern here is that we have lost
public confidence. We are no longer the heroes we once were and politicians
in your own party are now running against 'Big Tech'. We already had DoH
raised in the House this week and there is more to come. We have six months
at most to put our house in order.



On Thu, Oct 24, 2019 at 12:29 PM Eric Mill  wrote:

> Phillip, that was an unprofessional contribution to this list, that could
> be read as a legal threat, and could contribute to suppressing dialogue
> within this community. And, given that the employee to which it is clear
> you are referring is not only a respected community member, but literally a
> peer of the Mozilla Root Program, it is particularly unhelpful to Mozilla's
> basic operations.
>
> On Wed, Oct 23, 2019 at 10:33 AM Phillip Hallam-Baker via
> dev-security-policy  wrote:
>
>> On Tue, Oct 22, 2019 at 7:49 PM Matt Palmer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>> > On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via
>> > dev-security-policy wrote:
>> > > I also have a question for Mozilla on the removal of the EV UI.
>> >
>> > This is a mischaracterisation.  The EV UI has not been removed, it has
>> been
>> > moved to a new location.
>> >
>> > > So my question to Mozilla is, why did Mozilla post this as a subject
>> on
>> > > the mozilla.dev.security.policy list if it didn't plan to interact
>> with
>> > > members of the community who took the time to post responses?
>> >
>> > What leads you to believe that Mozilla didn't plan to interact with
>> members
>> > of the community?  It is entirely plausible that if any useful responses
>> > that warranted interaction were made, interaction would have occurred.
>> >
>> > I don't believe that Mozilla is obliged to respond to people who have
>> > nothing useful to contribute, and who don't accurately describe the
>> change
>> > being made.
>> >
>> > > This issue started with a posting by Mozilla on August 12, but despite
>> > 237
>> > > subsequent postings from many members of the Mozilla community, I
>> don't
>> > > think Mozilla staff ever responded to anything or anyone - not to
>> explain
>> > > or justify the decision, not to argue.  Just silence.
>> >
>> > I think the decision was explained and justified in the initial
>> > announcement.  No information that contradicted the provided
>> justification
>> > was presented, so I don't see what argument was required.
>> >
>> > > In the future, if Mozilla has already made up its mind and is not
>> > > interested in hearing back from the community, it might be better NOT
>> to
>> > > start a discussion on the list soliciting feedback.
>> >
>> > Soliciting feedback and hearing back from the community does not require
>> > response from Mozilla, merely reading.  Do you have any evidence that
>> > Mozilla staff did not, in fact, read the feedback that was given?
>> >
>>
>> If you are representing yourselves as having an open process, the lack of
>> response on the list does undermine that claim. The lack of interaction on
>> that particular topic actually speaks volumes.
>>
>> Both parties in Congress have already signalled that they intend to go
>> after 'big tech'. Security is an obvious issue to focus on. While it is
>> unlikely Mozilla will be a target of those discussions, Google certainly
>> is
>> and one employee in particular.
>>
>> This is the point at which the smart people are going to lawyer up.
>
>
> --
> Eric Mill
> 617-314-0966 | konklone.com | @konklone <https://twitter.com/konklone>
>


Re: Firefox removes UI for site identity

2019-10-23 Thread Phillip Hallam-Baker via dev-security-policy
On Tue, Oct 22, 2019 at 7:49 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via
> dev-security-policy wrote:
> > I also have a question for Mozilla on the removal of the EV UI.
>
> This is a mischaracterisation.  The EV UI has not been removed, it has been
> moved to a new location.
>
> > So my question to Mozilla is, why did Mozilla post this as a subject on
> > the mozilla.dev.security.policy list if it didn't plan to interact with
> > members of the community who took the time to post responses?
>
> What leads you to believe that Mozilla didn't plan to interact with members
> of the community?  It is entirely plausible that if any useful responses
> that warranted interaction were made, interaction would have occurred.
>
> I don't believe that Mozilla is obliged to respond to people who have
> nothing useful to contribute, and who don't accurately describe the change
> being made.
>
> > This issue started with a posting by Mozilla on August 12, but despite
> 237
> > subsequent postings from many members of the Mozilla community, I don't
> > think Mozilla staff ever responded to anything or anyone - not to explain
> > or justify the decision, not to argue.  Just silence.
>
> I think the decision was explained and justified in the initial
> announcement.  No information that contradicted the provided justification
> was presented, so I don't see what argument was required.
>
> > In the future, if Mozilla has already made up its mind and is not
> > interested in hearing back from the community, it might be better NOT to
> > start a discussion on the list soliciting feedback.
>
> Soliciting feedback and hearing back from the community does not require
> response from Mozilla, merely reading.  Do you have any evidence that
> Mozilla staff did not, in fact, read the feedback that was given?
>

If you are representing yourselves as having an open process, the lack of
response on the list does undermine that claim. The lack of interaction on
that particular topic actually speaks volumes.

Both parties in Congress have already signalled that they intend to go
after 'big tech'. Security is an obvious issue to focus on. While it is
unlikely Mozilla will be a target of those discussions, Google certainly is
and one employee in particular.

This is the point at which the smart people are going to lawyer up.


Re: Logotype extensions

2019-07-19 Thread Phillip Hallam-Baker via dev-security-policy
Like I said, expect to defend this in House and Senate hearings.

This is a restraint of trade. You are using your market power to impede
development of the market.

Mozilla Corp. made no complaint when VeriSign deployed issuer logotypes.


On Tue, Jul 16, 2019 at 8:17 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> It seems to me that this discussion has veered away from the original
> question, which was seeking consent to "experiment" with logotypes in
> publicly-trusted certificates. I don't think there is much doubt that RFC
> 3709 has been and can be implemented, and as pointed out, it can be tested
> in private hierarchies. I fail to understand the point of this type of
> "experiment", especially when it leaves all of the difficult questions -
> such as global trademark validation and the potential to mislead users -
> unanswered. The risks of permitting such "experimentation" appear to far
> outweigh the benefits.
>
> The discussion has morphed into a question of a CA's right to encode
> additional information into a publicly-trusted certificate, beyond the
> common profile defined in the BRs, for use in a subset of Browsers or other
> client software. The argument here seems to be that BR 7.1.2.4(b)
> ("semantics that, if included, will mislead a Relying Party about the
> certificate information") can't be triggered if the user agent doesn't
> understand the data, or that there needs to be proof that the data is
> misleading (versus could be misleading) to trigger that clause. This seems
> like a much more difficult problem to solve, and one that doesn't need to
> be addressed in the context of the original question.
>
> Given this, and the fact that I believe it is in everyone's best interest
> to resolve the current ambiguity over Mozilla's policy on logotypes, I
> again propose to add logotype extensions to our Forbidden Practices[1], as
> follows:
>
> ** Logotype Extension **
> Due to the risk of misleading Relying Parties and the lack of defined
> validation standards for information contained in this field, as discussed
> here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> Subscriber certificates.
>
> I will greatly appreciate additional feedback on my analysis and proposal.
>
> - Wayne
>
> [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> [2]
>
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> On Fri, Jul 12, 2019 at 2:26 PM Ryan Sleevi  wrote:
>
> > And they will mislead relying parties. Which is why you cannot use *this*
> > particular extension. Sorry, that ship sailed in 2005.
> >
> > A CA that would be remotely be considering exercising this clause would
> > strongly benefit from checking with the Root stores they’re in, no matter
> > the extension proposed.
> >
> > It’s also Subject Identifying Information.
> >
> > On Fri, Jul 12, 2019 at 5:11 PM Jeremy Rowley <
> jeremy.row...@digicert.com>
> > wrote:
> >
> >> The language of the BRs is pretty permissive.  Assuming Mozilla didn't
> >> update its policy, then issuance would be permitted if the CA can show
> that
> >> the following was false:
> >>
> >> b. semantics that, if included, will mislead a Relying Party about the
> >> certificate information verified by
> >> the CA (such as including extendedKeyUsage value for a smart card, where
> >> the CA is not able to verify
> >> that the corresponding Private Key is confined to such hardware due to
>> remote issuance).
> >>
>> I think this is the section you are citing as prohibiting issuance,
>> correct? So as long as the CA can show that this is not true, then
>> issuance is permitted under the current policy.
> >>
> >>
> >>
> >> -Original Message-
> >> From: dev-security-policy <
> dev-security-policy-boun...@lists.mozilla.org>
> >> On Behalf Of Ryan Sleevi via dev-security-policy
> >> Sent: Friday, July 12, 2019 3:01 PM
> >> To: Doug Beattie 
> >> Cc: mozilla-dev-security-policy <
> >> mozilla-dev-security-pol...@lists.mozilla.org>; Wayne Thayer <
> >> wtha...@mozilla.com>
> >> Subject: Re: Logotype extensions
> >>
> >> Alternatively:
> >>
> >> There is zero reason these should be included in publicly trusted certs
> >> used for TLS, and ample harm. It is not necessary nor essential to
> securing
> >> TLS, and that should remain the utmost priority.
> >>
> >> CAs that wish to issue such certificates can do so from alternate
> >> hierarchies. There is zero reason to assume the world of PKI is limited
> to
> >> TLS, and tremendous harm has been done to the ecosystem, as clearly and
> >> obviously demonstrated by the failures of CAs to correctly implement and
> >> validate the information in a certificate, or timely revoke them. The
> fact
>> that there were multiple CAs who issued certificates like “Some-State” is a
> >> damning indictment not just on those CAs, but in an industry that does
> not
> >> see such certificates as an existential threat to the C

Re: Logotype extensions

2019-07-11 Thread Phillip Hallam-Baker via dev-security-policy
On Thu, Jul 11, 2019 at 12:19 PM Wayne Thayer  wrote:

> On Wed, Jul 10, 2019 at 7:26 PM Phillip Hallam-Baker <
> ph...@hallambaker.com> wrote:
>
>> Because then the Mozilla ban will be used to prevent any work on
>> logotypes in CABForum and the lack of CABForum rules will be used as
>> pretext for not removing the ban.
>>
>> I have been doing standards for 30 years. You know this is exactly how
>> that game always plays out.
>>
>
> Citation please? The last two examples I can recall of a Browser
> clarifying or overriding CAB Forum policy are:
> 1. banning organizationIdentifier - resulting in ballot SC17 [1] , which
> properly defines the requirements for using this Subject attribute.
> 2. banning domain validation method #10 - resulting in the ACME TLS ALPN
> challenge [2], which is nearly through the standards process.
>
> In both examples, it appears that Browser policy encouraged the
> development of standards.
>

It is what happened when I proposed logotypes ten years ago.



> If you don't want to use the extension, that is fine. But if you attempt
>> to prohibit anything, run it by your lawyers first and ask them how it is
>> not a restriction on trade.
>>
>> It is one thing for CABForum to make that requirement, quite another for
>> Mozilla to use its considerable market power to prevent other browser
>> providers making use of LogoTypes.
>>
>
> If this proposal applied to any certificate issued by a CA, I might agree,
> but it doesn't. CAs are free to do whatever they want with hierarchies that
> aren't trusted by Mozilla. It's not clear to me how a CA would get a
> profile including a Logotype through a BR audit, but that's beside the
> point.
>

Since Mozilla uses the same hierarchies that are used by all the other
browsers, and those are the only hierarchies available, I see a clear
restraint of trade issue.

It is one thing for Mozilla to decide not to use certain data in the
certificate, quite another to prohibit CAs from providing that data to
other parties.

The domain validation case is entirely different because the question there
is how data Mozilla intends to rely on is validated.


A better way to state the requirement is that CAs should only issue
>>>> logotypes after CABForum has agreed validation criteria. But I think that
>>>> would be a mistake at this point because we probably want to have
>>>> experience of running the issue process before we actually try to
>>>> standardize it.
>>>>
>>>>
>>> I would be amenable to adding language that permits the use of the
>>> Logotype extension after the CAB Forum has adopted rules governing its use.
>>> I don't see that as a material change to my proposal because, either way,
>>> we have the option to change Mozilla's position based on our assessment of
>>> the rules established by the CAB Forum, as documented in policy section 2.3
>>> "Baseline Requirements Conformance".
>>>
>>> I do not believe that changing the "MUST NOT" to "SHOULD NOT" reflects
>>> the consensus reached in this thread.
>>>
>>> I also do not believe that publicly-trusted certificates are the safe
>>> and prudent vehicle for "running the issue process before we actually try
>>> to standardize it".
>>>
>>
>> You are free to ignore any information in a certificate. But if you
>> attempt to limit information in the certificate you are not intending to
>> use in your product, you are arguably crossing the line.
>>
>>
> It's quite clear from the discussions I've been involved in that at least
> one goal for Logotypes is that Browsers process them.  You implied so
> yourself above by stating that this proposal would "prevent other browser
> providers making use of LogoTypes." So you are now suggesting that Browsers
> ignore this information while others are suggesting precisely the opposite.
>

Mozilla is free to make the choice to ignore it. If you want to go ahead
and use your significant market power to prevent logotypes being added for
other browsers to use, and are confident that doing so complies with US
antitrust law, EU competition law (and that of the 27 member states), plus
that of any other state you may have picked a fight with recently, well, go
ahead.

In case you hadn't noticed, there is a storm brewing over 'big tech' on
Capitol Hill. It is not yet clear which issues are going to be picked up or
by whom. It is not certain that the WebPKI will be the focus of that, but I
would not count on avoiding it. It would be prudent for every party with
significant market powe


Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 6:11 PM Wayne Thayer  wrote:

> On Wed, Jul 10, 2019 at 2:31 PM Phillip Hallam-Baker <
> ph...@hallambaker.com> wrote:
>
>> On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> Russ,
>>>
>>> >
>>> Perhaps one of us is confused because I think we're saying the same
>>> thing -
>>> that  rules around inclusion of Logotype extensions in publicly-trusted
>>> certs should be in place before CAs begin to use this extension.
>>>
>>
>> I don't see how your proposed ban on logotypes is consistent. What that
>> would do is set up a situation in which it was impossible for CABForum to
>> develop rules for logotypes because one of the browsers had already banned
>> their use.
>>
>>
> How exactly does a Browser banning the use of an extension prevent the CAB
> Forum from developing rules to govern the use of said extension? If
> anything, it would seem to encourage the CAB Forum to take on that work.
> Also, as has been discussed, it is quite reasonable to argue that the
> inclusion of this extension is already forbidden in a BR-compliant
> certificate.
>

Because then the Mozilla ban will be used to prevent any work on logotypes
in CABForum and the lack of CABForum rules will be used as pretext for not
removing the ban.

I have been doing standards for 30 years. You know this is exactly how that
game always plays out.

If you don't want to use the extension, that is fine. But if you attempt to
prohibit anything, run it by your lawyers first and ask them how it is not
a restriction on trade.

It is one thing for CABForum to make that requirement, quite another for
Mozilla to use its considerable market power to prevent other browser
providers from making use of logotypes.




> A better way to state the requirement is that CAs should only issue
>> logotypes after CABForum has agreed validation criteria. But I think that
>> would be a mistake at this point because we probably want to have
>> experience of running the issue process before we actually try to
>> standardize it.
>>
>>
> I would be amenable to adding language that permits the use of the
> Logotype extension after the CAB Forum has adopted rules governing its use.
> I don't see that as a material change to my proposal because, either way,
> we have the option to change Mozilla's position based on our assessment of
> the rules established by the CAB Forum, as documented in policy section 2.3
> "Baseline Requirements Conformance".
>
> I do not believe that changing the "MUST NOT" to "SHOULD NOT" reflects the
> consensus reached in this thread.
>
> I also do not believe that publicly-trusted certificates are the safe and
> prudent vehicle for "running the issue process before we actually try to
> standardize it".
>

You are free to ignore any information in a certificate. But if you attempt
to limit information in the certificate that you do not intend to use in
your product, you are arguably crossing the line.




> I can't see Web browsing being the first place people are going to use
>> logotypes. I think they are going to be most useful in other applications.
>> And we actually have rather a lot of those appearing right now. But they
>> are Applets consisting of a thin layer on top of a browser and the logotype
>> stuff is relevant to the thin layer rather than the substrate
>>
>
> If the use case isn't server auth or email protection, then publicly
> trusted certificates shouldn't be used. Full stop. How many times do we
> need to learn that lesson?
>

That appears to be an even more problematic statement. There have always
been more stakeholders than just the browser providers on the relying
applications side.

Those applets are competing with your product. Again, talk to your legal
people. If you use your market power to limit the functionalities that your
competitors can offer, you are going to have real problems.


Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Russ,
>
> >
> Perhaps one of us is confused because I think we're saying the same thing -
> that  rules around inclusion of Logotype extensions in publicly-trusted
> certs should be in place before CAs begin to use this extension.
>

I don't see how your proposed ban on logotypes is consistent. What that
would do is set up a situation in which it was impossible for CABForum to
develop rules for logotypes because one of the browsers had already banned
their use.

A better way to state the requirement is that CAs should only issue
logotypes after CABForum has agreed validation criteria. But I think that
would be a mistake at this point because we probably want to have
experience of running the issue process before we actually try to
standardize it.

I can't see Web browsing being the first place people are going to use
logotypes. I think they are going to be most useful in other applications.
And we actually have rather a lot of those appearing right now. But they
are applets consisting of a thin layer on top of a browser, and the
logotype stuff is relevant to the thin layer rather than the substrate.


For example, I have lots of gadgets in my house. Right now, every different
vendor who does an IoT device has to write their own app and run their own
service. And the managers are really happy with that at the moment because
they see it as all upside.

I think they will soon discover that most devices that are being made to
connect to the Internet aren't actually very useful if the only thing they
connect to is a manufacturer site, and those sites start to cost money to
run. So I think we will end up with an open interconnect approach to IoT in
the end, regardless of what a bunch of marketing VPs think should happen.
Razor-and-blades models are really profitable, but they are also
vanishingly rare, because the number 2 and 3 companies have an easy way to
enter the market by opening up.

Authenticating those devices to the users who bought them, authenticating
the code updates. Those are areas where the logotypes can be really useful.


Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 2:41 PM housley--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 5, 2019 at 7:53:45 PM UTC-4, Wayne Thayer wrote:
> > Based on this discussion, I propose adding the following statement to the
> > Mozilla Forbidden Practices wiki page [1]:
> >
> > ** Logotype Extension **
> > Due to the risk of misleading Relying Parties and the lack of defined
> > validation standards for information contained in this field, as
> discussed
> > here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> > Subscriber certificates.
> >
> > Please respond if you have concerns with this change. As suggested in
> this
> > thread, we can discuss removing this restriction if/when a robust
> > validation process emerges.
> >
> > - Wayne
> >
> > [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> > [2]
> >
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> People find logos very helpful.  That is why many browsers display a tiny
> logo in the toolbar.
>
> I would suggest that a better way forward is to start the hard work on the
> validation process.  It will not be difficult for that to become more
> robust and accessible than the logos in the toolbar.
>

[I am not currently employed by a CA. Venture Cryptography does not operate
one or plan to.]

I agree with Russ.

The Logotype extension has technical controls to protect the integrity of
the referenced image by means of a digest value.
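
To make that concrete, here is a minimal sketch in Python of the
relying-party check. It assumes the URI and hash have already been pulled
out of the RFC 3709 extension by some decoder; those two parameters are all
it needs:

    import hashlib
    import urllib.request

    def logotype_image_ok(logo_uri: str, expected_sha256_hex: str) -> bool:
        """Fetch the image the certificate references and confirm it
        matches the digest carried in the logotype extension."""
        with urllib.request.urlopen(logo_uri) as reply:
            image = reply.read()
        return hashlib.sha256(image).hexdigest() == expected_sha256_hex

A client that cannot fetch the image, or that finds a mismatch, simply does
not display the logo; the certificate itself is unaffected.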

I do find the discussion of the usability factors rather odd when I am
looking at my browser tabs decorated with completely unauthenticated
favicons. Why is it that browser providers have no problem putting that
information in front of users?

If Mozilla or Chrome or the like don't see the value of using the logotype
information, don't use it. But if you were to attempt to prevent others
making use of this capability, that looks a lot like an antitrust issue to
me.

The validation scheme I proposed when we discussed this some years back was
to build on the Madrid Treaty for registration of trademarks. International
business is already having to deal with the issue of logos being used in
multiple jurisdictions. It is a complex, difficult problem, but one that
the international system is very much aware of and working to address. It
will take time, but we can leave the hard issues to them.

I see multiple separate security levels here:

1) Anyone can create a Web page that looks like Ethel's Bank.

2) Ethel's Bank Carolina and Ethel's Bank Spain both have trademarks in
their home countries and can register credentials showing they are Ethel's
Bank.

3) When someone goes to Ethel's Bank online they are assured that it is the
canonical Ethel's Bank and no other.

There are obvious practical problems that make (3) unreachable. Not least
the fact that one of the chief reasons trademarks are often fractured
geographically is that they were once part of a single business that split.
Cadbury's chocolate sold in the US is made by a different company from that
sold in the UK, which is why some people import the genuine article at
significant expense.

But the security value lies in moving from level 1 to level 2. Stopping a
few million Internet thieves from easily setting up fake web sites that
look like Ethel's Bank is the important task. The issue of which Ethel's
Bank is the real one is something they can sort out (expensively) between
themselves: 20 paces with loaded lawyers.


For the record, I am not sure that we can graft logotypes onto the current
Web browser model as it stands. I agree with many of Ryan's criticisms, but
not his conclusions. Our job is to make the Internet safe for the users. I
am looking at using logotypes but in a very different interaction model.
The Mesh does have a role for CAs but it is a very different role.

I will be explaining that model elsewhere. But the basic idea here is that
the proper role of the CA is primarily as an introducer. One of the reasons
the Web model is fragile today is that every transaction is essentially
independent as far as the client is concerned. The server has cookies that
link the communications together but the client starts from scratch each
time.

So imagine that I have a Bookmarks catalog that I keep my financial service
providers in, and this acts as a local name provider for all of my Internet
technology. When I add Ethel's Bank to my Bookmarks catalog, I see the
Ethel's Bank logo as part of that interaction. A nice big fat logo, not a
small one. And I give it my pet name 'Ethel'. And when I tell Siri, or
Alexa, or Cortana, 'call Ethel', it calls Ethel's Bank for me. Or if I type
'Ethel' into a toolbar, that is the priority.
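
A sketch of that catalog in Python (names and fields are mine, invented for
illustration; this is not code from any shipping Mesh implementation):

    from dataclasses import dataclass

    @dataclass
    class Bookmark:
        petname: str       # the name I chose: 'Ethel'
        domain: str        # the DNS name verified when I added it
        logo_sha256: str   # digest of the logotype shown at that time

    class BookmarkCatalog:
        """A local name provider: pet names resolve only to endpoints
        that were verified once, at introduction time."""

        def __init__(self) -> None:
            self._entries: dict[str, Bookmark] = {}

        def add(self, entry: Bookmark) -> None:
            self._entries[entry.petname.lower()] = entry

        def resolve(self, petname: str) -> Bookmark | None:
            return self._entries.get(petname.lower())

    catalog = BookmarkCatalog()
    catalog.add(Bookmark("Ethel", "ethelsbank.example", "9f2c..."))
    # 'call ethel' consults the catalog first, not the global DNS.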

Given where we have come from, the CA will have to continue to do the trust
management part of the WebPKI indefinitely. And I probably want the CA to
also have the role of warning me when a party I previously trusted has
defaulted in some way.

Re: question about DNS CAA and S/MIME certificates

2018-05-16 Thread Phillip Hallam-Baker via dev-security-policy
On Wednesday, May 16, 2018 at 2:16:14 AM UTC-4, Tim Hollebeek wrote:
> This is the point I most strongly agree with.
> 
> I do not think it's at odds with the LAMPS charter for 6844-bis, because I do 
> not think it's at odds with 6844.

Updating 6844 is easy. Just define the tag and specify scope for issue / 
issuewild / issueclient sensibly. 
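
For illustration, the issuer-side check might look like this (a Python
sketch using the third-party dnspython package; 'issueclient' is the
proposed tag, not one RFC 6844 currently defines, and real CAA processing
has more cases than this):

    import dns.resolver

    def caa_permits(domain: str, ca_id: str, tag: str = "issueclient") -> bool:
        """Climb the DNS tree as RFC 6844 describes and apply the
        closest CAA record set carrying the given property tag."""
        labels = domain.split(".")
        for i in range(len(labels) - 1):
            name = ".".join(labels[i:])
            try:
                records = dns.resolver.resolve(name, "CAA")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                continue
            tagged = [r for r in records if r.tag.decode() == tag]
            if not tagged:
                return True   # CAA present, but no records for this tag
            return any(r.value.decode() == ca_id for r in tagged)
        return True           # no CAA records anywhere: not restricted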

But that is only half the job, really. If we want to get S/MIME widely
used, we have to do ACME for client certs and integrate it into the MUAs.
Not difficult, but it is something that needs to be done.

More difficult is working out what an S/MIME CA does, where organizational 
validation etc. adds value and how this relates to the OpenPGP way of doing 
things. 


It occurred to me last night that the difference between S/MIME and OpenPGP
trust is that one is by reference and the other is by value. S/MIME is
certainly the solution for PayPal-like situations because the trust
relationship is (usually) with PayPal, not the individual I am talking to.
Key fingerprints have the advantage of binding to the person, which may be
an advantage for non-organizational situations.
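
In code terms, the two trust models reduce to something like this (a toy
sketch, nothing more):

    from dataclasses import dataclass

    @dataclass
    class ByReference:
        """S/MIME style: trust the key because an issuer I already
        trust vouches for the binding."""
        issuer: str          # the CA named in the certificate chain
        subject_email: str

    @dataclass
    class ByValue:
        """OpenPGP style: trust the key because I hold its fingerprint
        directly; it binds to the person, not to an organization."""
        fingerprint: str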

These are not disjoint sets of course and there is no reason to switch mail 
encryption technologies depending on the context in which we are communicating. 
I would rather add certificate capabilities to OpenPGP-as-deployed and/or 
S/MIME-as-deployed.


Re: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Phillip Hallam-Baker via dev-security-policy
When I wrote CAA, my intention was for it to apply to SSL/TLS certs only. I did 
not consider S/MIME certs to be relevant precisely because of the 
al...@gmail.com problem.

I now realize that was entirely wrong and that there is in fact great utility 
in allowing domain owners to control their domains (or not).

If Gmail wants to limit the issuance of certs to one CA, fine. That is a
business choice they have made. If you want to have control of your online
identity, you need to have your own personal domain. That is why I have
hallambaker.com. All my mail is forwarded to gmail.com, but I control my
identity and can change mail provider any time I want.

One use case that I see as definitive is to allow PayPal to S/MIME-sign
their emails. That alone could take a bite out of phishing.

But even with Gmail, the only circumstance I could see where a mail service
provider like that would want to restrict cert issuance to one CA would be
if they were to roll out S/MIME with their own CA.


Re: Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Phillip Hallam-Baker
Remember the DigiNotar incident? At the time, I thought that pulling the
DigiNotar roots was exactly the right thing to do. I didn't say so as it
isn't proper for people to be suggesting putting their competitors out of
business. But I thought it the right thing to do.

Not long after, I was sitting in a conference at NIST listening to a talk on
how shutting down DigiNotar had shut down the port of Amsterdam and left
meat rotting on the quays, etc. Oops.

The WebPKI is a complicated infrastructure that is used in far more ways
than any of us is aware of. And when it was being developed it wasn't clear
what the intended scope of use was. So it isn't very surprising that it has
been used for a lot of things like point of sale terminals etc.

It is all very well saying that people shouldn't have done these things
after the facts are known. But right now, I don't see any program in place
telling people in the IoT space what they should be doing for devices that
can't be upgraded in the field.

None of the current browser versions support SHA-1. Yes, people could in
theory turn it back on for some browsers, but that isn't an argument,
because the same people can edit their root store themselves as well. Yes,
people are still using obsolete versions of Firefox, etc., but do we really
think that SHA-1 is the weakest point of attack?

If digest functions are so important, perhaps the industry should be
focusing on deployment of SHA-3 as a backup in case SHA-2 is found wanting
in the future.
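
At the code level that kind of agility is the easy part; it is the
deployment that is hard. A sketch:

    import hashlib

    # Keep the digest negotiable so that, if SHA-2 is ever found wanting,
    # new messages can switch algorithms without touching other code.
    DIGESTS = {
        "sha2-256": hashlib.sha256,
        "sha3-256": hashlib.sha3_256,  # the backup, deployed in advance
    }

    def digest(data: bytes, alg: str = "sha2-256") -> bytes:
        return DIGESTS[alg](data).digest()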


Re: StartEncrypt considered harmful today

2016-06-30 Thread Phillip Hallam-Baker
On Thu, Jun 30, 2016 at 12:46 PM, Juergen Christoffel <
juergen.christof...@zv.fraunhofer.de> wrote:

> On 30.06.16 18:24, Phillip Hallam-Baker wrote:
>
>> What makes something easy to hack in Perl does not make for good security
>> architecture.
>>
>
> Bad design, engineering or implementation is not primarily a problem of
> the language used. Or we would never have seen buffer overflows in C.
> Please castigate the implementor instead.


My college tutor, Tony Hoare, used his Turing Award acceptance speech to
warn people why that feature of C was a terrible architectural blunder.

If you are writing security code without strong type checking and robust
memory management with array bounds checking, then you are doing it wrong.


Re: StartEncrypt considered harmful today

2016-06-30 Thread Phillip Hallam-Baker
Argh

As with Ethereum, the whole engineering approach gives me a cold sweat.
Security and scripting languages are not a good mix.

What makes something easy to hack in Perl does not make for good security
architecture.


:(



On Thu, Jun 30, 2016 at 11:30 AM, Rob Stradling 
wrote:

> https://www.computest.nl/blog/startencrypt-considered-harmful-today/
>
> Eddy, is this report correct?  Are you planning to post a public incident
> report?
>
> Thanks.
>
> --
> Rob Stradling
> Senior Research & Development Scientist
> COMODO - Creating Trust Online
>


Re: Should we block Blue Coat's 'test' intermediate CA?

2016-06-10 Thread Phillip Hallam-Baker
On Fri, Jun 10, 2016 at 4:59 PM, Chris Palmer  wrote:

> On Tue, May 31, 2016 at 10:33 AM, Phillip Hallam-Baker <
> ph...@hallambaker.com> wrote:
>
> Intermediates are not independent CAs. That is a myth that EFF has
>> unfortunately chosen to publicize for their own political ends.
>>
>
> They don't stand to gain anything by pointing out that unconstrained
> issuer certs are unconstrained.
>

At the time name constraints were unusable because the NSA BULLRUN troll in
IETF had managed to get PKIX written so that name constraints MUST be
marked critical. Since that would have had a severe impact on a large
number of Apple devices that didn't understand name constraints at the
time, that was unacceptable.

We eventually fixed the problem by declaring the PKIX requirement to be
inapplicable.
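
For anyone curious how an issuer certificate is flagged today, a sketch of
the check using the third-party 'cryptography' package:

    from cryptography import x509

    def name_constraints_critical(pem: bytes) -> bool | None:
        """Report whether a CA certificate marks its name constraints
        critical, as PKIX originally mandated."""
        cert = x509.load_pem_x509_certificate(pem)
        try:
            ext = cert.extensions.get_extension_for_class(
                x509.NameConstraints)
        except x509.ExtensionNotFound:
            return None  # the issuer is not name-constrained at all
        return ext.critical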



> The point of having an intermediate is that it makes it possible to use the
>> path chain as part of the authorization mechanism. So for example, let us
>> say that you have chains:
>>
>> AliceCA -> BobCorpCA -> smtp.BobCorp.com  #1
>> AliceCA -> BobCorpCA -> smtp.BobCorp.com  #2
>> AliceCA -> BobCorpCA -> imap.BobCorp.com  #3
>>
>> An SMTP client could in theory be configured to require the TLS connection
>> to the mail servers to chain through BobCorpCA.
>>
>
> You are talking as if BobCorpCA were name-constrained. Which would be nice
> indeed. But not the case with the BlueCoat certificate.
>
> The constraints that matter are those that the relying party/UA applies at
> run-time.
>

If the customer doesn't have control of the signing key, the use of name
constraints isn't very important. It is a failsafe more than anything.

What it does provide is a check against a reputation attack though.


Re: When good certs do bad things

2016-06-03 Thread Phillip Hallam-Baker
On Fri, Jun 3, 2016 at 2:03 PM, Nick Lamb  wrote:

> On Friday, 3 June 2016 17:25:11 UTC+1, Peter Kurrasch  wrote:
> > Regarding use of the term "bad", what does anyone think about this as an
> alternative: "furtherance of criminal activity"
>
> As far as I'm aware all of the following are examples of criminal activity:
>
> Gambling (in some but not all of the United States of America)
>
> Glorifying Adolf Hitler (in Germany).
>
> Advertising the availability of sexual services such as in-call
> prostitution (United Kingdom)
>
> Insulting the King of Thailand (Thailand)
>
> Maybe you personally don't think any of the above should be permitted on
> the World Wide Web. But this discussion is about the policy of Mozilla's
> trust store and not about you personally, so the question becomes whether
> any Mozilla users expect to be able to "further" any of these activities
> using Firefox and I think the unequivocal answer is yes, yes they do.

The original design of the WebPKI required authentication of the
organization for that exact reason.

If a company is registered in Germany, you probably expect it to follow
German laws. If you are buying from a company, the fact that they are
registered in Germany or Nigeria may affect the expectations you have for
performance of the contract - and the types of assurance you would require.


Re: Should we block Blue Coat's 'test' intermediate CA?

2016-05-31 Thread Phillip Hallam-Baker
Intermediates are not independent CAs. That is a myth that EFF has
unfortunately chosen to publicize for their own political ends.

The point of having an intermediate is that it makes it possible to use the
path chain as part of the authorization mechanism. So for example, let us
say that you have chains:

AliceCA -> BobCorpCA -> smtp.BobCorp.com  #1
AliceCA -> BobCorpCA -> smtp.BobCorp.com  #2
AliceCA -> BobCorpCA -> imap.BobCorp.com  #3

An SMTP client could in theory be configured to require the TLS connection
to the mail servers to chain through BobCorpCA.
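
As a rough illustration of such a configuration, here is a minimal sketch
assuming a recent version of the Python 'cryptography' package, a chain
captured to chain.pem (for example with 'openssl s_client -showcerts'),
and a placeholder pin where the real BobCorpCA fingerprint would go:

    # Minimal sketch: refuse to proceed unless the presented chain passes
    # through a pinned intermediate. The pin below is a placeholder.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    BOBCORP_CA_SHA256 = bytes.fromhex("00" * 32)  # hypothetical pin

    with open("chain.pem", "rb") as f:
        chain = x509.load_pem_x509_certificates(f.read())

    if not any(c.fingerprint(hashes.SHA256()) == BOBCORP_CA_SHA256 for c in chain):
        raise SystemExit("chain does not pass through BobCorpCA; refusing to connect")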

That is the theory at least. And I have sold a lot of PKI on that theory.
After I stopped selling them, customers came and pointed out to me that it
is much less useful than you would hope, because the intermediate is
typically a short-lived cert that you have to roll at least as often as the
CA cert.

What you really want to be able to do in the mail client is to tie to a
root of trust you control as an enterprise. This is one of the things I am
trying to support in the Mathematical Mesh where we use fingerprints of
keys as roots of trust.





On Tue, May 31, 2016 at 12:59 PM, Nick Lamb  wrote:

> On Tuesday, 31 May 2016 16:19:24 UTC+1, Eric Mill  wrote:
> > Mozilla's Salesforce disclosures include the Blue Coat intermediate,
> which
> > is listed as under Symantec's CP and CPS:
> > https://mozillacaprogram.secure.force.com/CA/PublicAllIntermediateCerts
>
> So far as I've seen there's every reason to believe this only became news
> at all because Symantec finally disclosed the existence of this certificate
> earlier in May, and so it was added to the CT logs. Without the carrot +
> stick approach which has been taken for disclosure of intermediates, this
> CA cert would still exist (it was created nine months or so ago) but it
> wouldn't be known, so it wouldn't be news.
>
> If the message sent is "once you disclose an intermediate you'll get
> beaten up for that" there's a powerful disincentive to disclose at all.
> There's plenty of hysteria about this cert based on who it was issued to,
> which is funny because the best example of real trust ecosystem risk we
> have recently is from the Disney subCA [quietly revoked by its issuer when
> it ceased obeying the BRs...], yet I saw precisely zero people freaked out
> that Disney had an unconstrained intermediate when that information was
> first public.
>
> That said, so far as I understand the Mozilla requirement is actually that
> such intermediates be disclosed _and audited_. The present disclosure from
> Symantec asserts that this intermediate is covered by the same audit as for
> all their other intermediates, but the certificate was actually issued
> _long after_ the period that audit covers, so this assertion by Symantec is
> nonsense. We need to get CAs to be honest with us. If the situation is that
> you've got no audit coverage for an intermediate, you need to _fix_ that,
> not just pretend it's covered by an audit report that doesn't even mention
> the intermediate and was written months before it existed.


Re: When good certs do bad things

2016-05-26 Thread Phillip Hallam-Baker
On Thu, May 26, 2016 at 12:23 PM, Ryan Sleevi  wrote:

> On Thu, May 26, 2016 at 7:40 AM, Peter Kurrasch  wrote:
> > My suggestion is to frame the issue‎ as: What is reasonable to expect of
> a
> > CA if somebody sees bad stuff going on? How should CA's be notified? What
> > sort of a response is warranted and in what timeframe? What guidelines
> > should CA's use when determining what their response should be?
> >
> > All of this is worthy of discussion, but it's gonna get complicated.
>
> With all due respect, a number of the items on your list are
> orthogonal to certificates - they're a discussion about "bad" things
> you can do if "encryption" is possible / if "privacy" is possible. I
> don't think it's ignorance about how encryption can be used to do bad
> things, it's a valuation that the *good* things
> encryption/confidentiality/integrity enable far outweigh the bad. We
> saw this in the First Crypto Wars, and we're seeing this now, arguably
> the Second Crypto Wars.
>
> You haven't actually addressed how or why CAs have a role to play here
> - it's presented as a given. You recognize there's nuance about
> expectations, which is an open question, but you're ignoring the more
> fundamental question - do CAs have a role to play in *preventing*
> encryption, or is the only role they have to *enable* encryption.
>
> While not speaking for Mozilla, I think the unquestionable desire from
> some here is to find ways to increase encryption, but not to introduce
> ways to prevent encryption - whether through means of policy or
> technology.
>

What has encryption got to do with it?

The reason the WebPKI exists is for authentication. Encryption is a
secondary concern that is only required because the credit card protocols
are lame and people use passwords for authentication, which is also lame.


The WebPKI model was two stage. First we make it difficult for people to
gain unlimited numbers of credentials. There is a cost to acquire a
certificate that is (hopefully) low for a legitimate user but makes it
uneconomic for a criminal to treat them as disposable.

The second stage is revocation of credentials when the holders do bad
things. Such as running a phishing site, signing malware, or the type of
thing listed above.

The design brief was to make electronic commerce possible. That is why the
system is designed the way it is. In particular, the threshold requirement
was to make online shopping 'as safe' for the consumer as bricks-and-mortar
stores or traditional MOTO transactions.


Now the problem here is that there are also folk who just want to turn on
encryption, and that is all: they don't care about doing online commerce
or banking. They just want to keep their email secret. And that is fine.
But that does not mean that people who only want to do confidentiality
should rip up the infrastructure that is designed to serve a different
purpose.


Re: Private PKIs, Re: Proposed limited exception to SHA-1 issuance

2016-02-29 Thread Phillip Hallam-Baker
On Mon, Feb 29, 2016 at 7:09 AM, Peter Gutmann
 wrote:
> Jürgen Brauckmann  writes:
>
>>Nice example from the consumer electronics world: Android >= 4.4 is quite
>>resistant against private PKIs. You cannot import your own/your corporate
>>private Root CAs for Openvpn- or Wifi access point security without getting
>>persistent, nasty, user-confusing warning messages: "A third party is capable
>>of monitoring your network activity".
>>
>>http://www.howtogeek.com/198811/ask-htg-whats-the-deal-with-androids-persistent-network-may-be-monitored-warning/
>
> Ugh, yuck!  So on the one hand we have numerous research papers showing that
> Android apps that blindly trust any old cert they find are a major problem,
> and then we have Google sabotaging any attempt to build a proper trust chain
> for Android apps.

Not just Android. Windows has all sorts of cool cert chain building
algorithms in its APIs. But they require the certificates to be
installed in the machine cert store.

Which makes them totally useless for my purposes in the Mesh as the
point is to give users a personal PKI with themselves as the root of
trust.


Looking forward: Was: Proposed limited exception to SHA-1 issuance

2016-02-27 Thread Phillip Hallam-Baker
Every problem is simple if you have access to a working time machine.
In this case we could just go back in time and implement SHA-2-256
when it was first published rather than a decade later.

Which is precisely what some of us argued for at the time.


Rather than looking at how to make the best of the SHA-1 situation, I
would like us to spend some time and look at how we avoid ending up in
that situation again. And this is a particularly good time to do so
because CFRG has just delivered some new EC curves using the rigid
construction approach that I think are likely to be widely accepted.

As a general principle, I believe that we should always have support
for exactly two cryptographic algorithms in Internet applications. One
of these should be the current default algorithm, the second a backup
in case there is a problem with the first.

Use of any other algorithms outside that set should be discouraged for
purposes other than experimentation. If some country wants to insist
on its own peculiar crypto, they should be clearly seen to be 'on
their own'.

You do not improve security by adding algorithms that are more secure;
you only improve security by withdrawing weak algorithms.


As far as key sizes go, I see a need for only two work factors, 'more
than adequate' and 'excessive'. I have used AES128 extensively and
these days I am using AES256 extensively as well. I have never, ever
configured a system for AES192 and can't imagine a reason why I would
use it. Either I care about performance or I want the absolute best
guarantee of security possible.


Looking at the current situation for symmetric ciphers, I note that we
have AES and we have something of an emerging consensus on ChaCha20
as a backup (maybe). We also have SHA-2 in widespread use, and SHA-3
has been specified, but we have not really started on SHA-3 deployment
at this point.

I would like to see a push for SHA-3 deployment as a backup to SHA-2.
While the fact that SHA-2 offers a 512-bit output means this is
perhaps not as high a priority as SHA-2 over SHA-1 should have been, I
think it would be prudent to offer both as a matter of course.
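
A sketch of what offering both as a matter of course might look like in
code; Python's hashlib already ships both families, and the registry names
here are illustrative:

    # Minimal sketch of the default-plus-backup digest policy argued for
    # above: exactly two algorithms, switchable with a one-line change.
    import hashlib

    DIGESTS = {
        "default": hashlib.sha256,   # SHA-2, the current default
        "backup": hashlib.sha3_256,  # SHA-3, deployed and ready if SHA-2 falls
    }

    def digest(data: bytes, which: str = "default") -> bytes:
        return DIGESTS[which](data).digest()

    print(digest(b"example").hex())
    print(digest(b"example", "backup").hex())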


On public key algorithms, it will be a while before we can
decommission RSA, but I think that point will come as people start to
understand that the real advantage of the EC systems doesn't lie in
the increased work factor so much as in the greater flexibility of the
Diffie-Hellman crypto system over RSA.

Right now there isn't a good way to support end-to-end security in
WebMail using RSA. You end up with keys that have been spread to far
too many places to consider the system 'end to end'. Until recently I
feared that the problem would be intractable. Then I (re)discovered
Matt Blaze's proxy re-encryption scheme from 20 years ago and I can
now see several ways to deal with the problem.

Given that we already have a move to EC using the NIST curves, adding
support for the CFRG curves will mean support for three algorithms
rather than the two I consider ideal. But I think this is worth it.

If we start deploying the CFRG curves now, we could expect to move to
them as the default algorithms by 2020 and decommission RSA sometime
after 2025 which is round about the time we can expect some robust
QC-resistant approaches to be available.


Re: [FORGED] Re: [FORGED] Re: Nation State MITM CA's ?

2016-01-12 Thread Phillip Hallam-Baker
On Tue, Jan 12, 2016 at 11:46 AM, Jakob Bohm  wrote:
> On 12/01/2016 16:49, Phillip Hallam-Baker wrote:
>>
>> It really isn't a good idea for Mozilla to try to mitigate the
>> security concerns of people living in a police state. The cost of
>> doing so is you will set precedents that others demand be respected.
>>
>> Yes providing crypto with a hole in it will be better than no crypto
>> at all for the people who don't have access to full strength crypto.
>> But if you go that route only crypto with a hole will be available.
>>
>
> No one (except the MiTM CA itself, possibly) is suggesting that Mozilla
> include or authorize any MiTM CA to work in its browsers (or anything else
> using the Mozilla CA list).
>
> The discussion is how to *not* authorize it, without thereby causing too
> much collateral damage.

Yes, that is the issue we should be considering. The issue of
collateral damage isn't just limited to one set of governments though.
Anything we allow a police state, the FBI will demand, and of course
vice versa, which is one of the reasons for rejecting the FBI's demands.


> Questions being seriously discussed:
>
> - Should Mozilla add specific mechanisms that prevent the subjects of a
>  police state from obeying police orders to compromise their own
>  browser?  This is the most hotly debated topic in this thread.
>
> - Should Mozilla find a mechanism to allow only the non-MiTM part of a
>  dual use CA which is being used both as an MiTM CA and as the
>  legitimate CA for accessing government services, such as transit visa
>  applications by foreign travelers planning to cross the territory of
>  the police state on their way somewhere else?
>
> - How to most easily reject requests by the MiTM CAs to become
>  globally trusted CAs in the Mozilla CA store.  Without causing
>  precedents that would hurt legitimate CAs from countries that happen
>  not to be allies of the USA.  So far, the best suggestion (other than
>  to stall them on technicalities) is to find an interpretation of the
>  existing CA rules which cannot be satisfied by any MiTM CA.

Not accepting a demand, and making clear that the demand will never be
accepted, is not the same as giving a refusal.

On the other questions, let us return to what the original basis for
the WebPKI was: Process.

There are existing precedents for revoking certificates that are
demonstrated to be malicious. One of the purposes of the CAA extension
was to provide an objective definition of malicious behavior. There
are at least two parties that have infrastructure that is capable of
detecting certificates that violate CAA constraints.

At the moment we don't have a very large number of domains with CAA
records. The more domain name holders we can persuade to deploy CAA,
the sooner an objective default will be detected.
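
For anyone who wants to experiment, here is a rough sketch of the kind of
check such a party could run, using the dnspython package. It is
deliberately simplified: it ignores the critical flag, issuewild, issue
parameters, and the CNAME/DNAME subtleties of the full RFC 6844 algorithm:

    # Minimal sketch: would this issuer be permitted to issue for this
    # domain under the published CAA records?
    import dns.resolver

    def caa_permits(domain: str, issuer: str) -> bool:
        labels = domain.split(".")
        # Climb toward the root (stopping before the TLD); the first CAA
        # record set found is the relevant one.
        for i in range(len(labels) - 1):
            name = ".".join(labels[i:])
            try:
                answers = dns.resolver.resolve(name, "CAA")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                continue
            issuers = [r.value.decode() for r in answers if r.tag == b"issue"]
            # A record set with no "issue" property does not restrict issuance.
            return (not issuers) or (issuer in issuers)
        return True  # no CAA records anywhere: any CA may issue

    print(caa_permits("www.example.com", "ca.example"))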


Re: [FORGED] Re: [FORGED] Re: Nation State MITM CA's ?

2016-01-12 Thread Phillip Hallam-Baker
It really isn't a good idea for Mozilla to try to mitigate the
security concerns of people living in a police state. The cost of
doing so is you will set precedents that others demand be respected.

Yes providing crypto with a hole in it will be better than no crypto
at all for the people who don't have access to full strength crypto.
But if you go that route only crypto with a hole will be available.


Re: [FORGED] Re: Nation State MITM CA's ?

2016-01-11 Thread Phillip Hallam-Baker
On Mon, Jan 11, 2016 at 1:45 PM, Jakob Bohm  wrote:
> On 09/01/2016 19:22, Kai Engert wrote:
>>
>> On Sat, 2016-01-09 at 14:11 +, Peter Gutmann wrote:
>>>
>>> That would have some pretty bad consequences.  With the MITM CA cert
>>> enabled,
>>> Borat [0] can read every Kazakh user's email, but no-one else can.  With
>>> the
>>> MITM CA blacklisted, Borat can still read every Kazakh user's email, but
>>> so
>>> can everyone else on the planet.  So the choice is between privacy
>>> against
>>> everyone but one party, and privacy against no-one.
>>
>>
>> I don't understand why blacklisting a MITM CA would enable everyone to
>> read the
>> data that passes through the MITM. Could you please explain? (It sounds
>> like
>> there is either a misunderstanding on your or on my side.)
>>
>
> He is obviously referring to the fact that refusing to encrypt using
> the MiTM certificate would force users to access their e-mails (etc.)
> using unencrypted connections (plain HTTP, plain IMAP, plain POP3
> etc.), thus exposing themselves to wiretapping by parties other than
> the government in question.

That does not concern me. What does concern me is that a user of the
Web believes that their communications are encrypted when they are not.

The browser should break when communication is not possible without
interception by a third party. In this particular case the party has
demonstrated its intention to use the CA to create MITM certificates.
I suggest that as soon as evidence of such certificates is seen, the
CA be blacklisted.


Re: Nation State MITM CA's ?

2016-01-08 Thread Phillip Hallam-Baker
On Thu, Jan 7, 2016 at 2:00 PM, Kathleen Wilson  wrote:
> On 1/6/16 3:07 PM, Paul Wouters wrote:
>>
>>
>> As was in the news before, Kazakhstan has issued a national MITM
>> Certificate Agency.
>>
>> Is there a policy on what to do with these? While they are not trusted,
>> would it be useful to explicitely blacklist these, as to make it
>> impossible to trust even if the user "wanted to" ?
>>
>> The CA's are available here:
>> http://root.gov.kz/root_cer/rsa.php
>> http://root.gov.kz/root_cer/gost.php
>>
>> One site that uses these CA's is:
>> https://pki.gov.kz/index.php/en/forum/
>>
>> Paul
>
>
>
> Kazakhstan has submitted the request for root inclusion:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1232689
>
> So, we really do need to have this discussion now.
>
> I will appreciate thoughtful and constructive input into this discussion.

I suggest waiting until they name their auditors before processing the request.


Re: Let's Encrypt Incident Report: Broken CAA Record Checking

2015-12-08 Thread Phillip Hallam-Baker
People are using CAA.

Cool!

On Mon, Dec 7, 2015 at 11:25 PM,  wrote:

> ISRG CPS Section 4.2.1: "The CA checks for relevant CAA records prior to
> issuing certificates. The CA acts in accordance with CAA records if
> present."
>
> At 9:45am U.S. Pacific time on December 7th, 2015, it was reported to us
> that our Certificate Authority Authorization (CAA) record checks were not
> working properly [1]. We determined that the report was accurate.
>
> At 1:11pm U.S. Pacific time on the same day a fix was deployed to
> production. The fix has been verified to be correct.
>
> The cause of the problem was determined to be a bug in our "boulder" CA
> software.
>
> An analysis of logs and our certificate database determined that six
> certificates were improperly issued to domains restricted by CAA. These
> certificates have been revoked.
>
> https://crt.sh/?id=11015552
> https://crt.sh/?id=11129526
> https://crt.sh/?id=11129525
> https://crt.sh/?id=11145944
> https://crt.sh/?id=11146361
> https://crt.sh/?id=11147768
>
> We work hard to make sure that we're issuing in compliance with all
> relevant policies. We will be reviewing our policies and procedures to
> determine how we might best reduce the risk of such a mistake happening
> again.
>
> [1] https://github.com/letsencrypt/boulder/issues/1231


Re: A pragmatic solution for the S/MIME trust bit

2015-10-15 Thread Phillip Hallam-Baker
On Thu, Oct 15, 2015 at 11:24 AM, David E. Ross  wrote:
> On 10/15/2015 5:27 AM, Kai Engert wrote [in part]:
>>
>> (a) Only grant the S/MIME trust bit if a CA has been granted the SSL/TLS
>> trust bit already.
>>
>> If Mozilla decides to remove a SSL/TLS trust bit, the S/MIME trust bit (and
>> potentiall all other trust bits) for that CA will get removed, too.
>>
>> This eliminates the need to work on any CAs that are for the S/MIME purpose,
>> only.
>>
>>
>> (b) Only CAs that explicitly state they'd like to be granted the S/MIME
>> trust bit might potentially get it.
>>
>> This avoids the likelihood that any CA's root gets accidentally used for
>> the non-SSL/TLS purpose.
>
> This might be okay if applied to certification authorities but not to
> individual root certificates.  We should not block the S/MIME trust bit
> when a certification authority chooses to have separate root
> certificates for TLS and S/MIME.
>
> --
> David E. Ross

What is the problem with the current situation?

Changing process takes time and effort. Does changing the process
really save any effort over leaving things as they are?


Re: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-02 Thread Phillip Hallam-Baker
On Fri, Oct 2, 2015 at 12:36 PM, Brian Smith  wrote:

> -- Forwarded message --
> From: Brian Smith 
> Date: Thu, Oct 1, 2015 at 7:15 AM
> Subject: Re: Policy Update Proposal: Remove Code Signing Trust Bit
> To: Gervase Markham 
> Cc: "kirk_h...@trendmicro.com" 
>
>
> On Wed, Sep 30, 2015 at 11:05 PM, Gervase Markham 
> wrote:
>
> > On 01/10/15 02:43, Brian Smith wrote:
> > > Perhaps nobody's is, and the whole idea of using publicly-trusted CAs
> for
> > > code signing and email certs is flawed and so nobody should do this.
> >
> > I think we should divide code-signing and email here. I can see how one
> > might make an argument that using Mozilla's list for code-signing is not
> > a good idea; a vendor trusting code-signing certs on their platform
> > should choose which CAs they trust themselves.
> >
> > But if there is no widely-trusted set of email roots, what will that do
> > for S/MIME interoperability?
> >
>
> First of all, there is a widely-trusted set of email roots: Microsoft's.
> Secondly, there's no indication that having a widely-trusted set of email
> roots *even makes sense*. Nobody has shown any credible evidence that it
> even makes sense to use publicly-trusted CAs for S/MIME. History has shown
> that almost nobody wants to use publicly-trusted CAs for S/MIME, or even
> S/MIME at all.
>
> Further, there's been actual evidence presented that Mozilla's S/MIME
> software is not trustworthy due to lack of maintenance. And, really, what
> does Mozilla even know about S/MIME? IIRC, most of the S/MIME stuff in
> Mozilla products was made by Sun Microsystems. (Note: Oracle acquired Sun
> Microsystems in January 2010. But, I don't remember any Oracle
> contributions related to S/MIME. So, yes, I really mean Sun Microsystems
> that hasn't even existed for almost 6 years.)
>

While working on PRISMPROOF email (details on that next week, hopefully) I
asked around and was surprised to discover that the number of CA-issued
S/MIME certs is about the same as the number of OpenPGP keys on the key
servers. Further, the S/MIME users are paying for the cert, which suggests
it is rather more likely they are using them.

And this does not count the DoD deployment or the parts of the GSA
deployment that are not outsourced.


One of the reasons it has been so hard to deploy end-to-end mail has been
the scorched earth policy of the advocates of both sides and a refusal to
accept that the other side actually had a use case.

If people are serious about trust models and not just posturing for the
sake of it, they should describe the model they use to evaluate the
trust provided. PKI uses cryptography, but that is never the weakest link
in a well-designed system and usually not the weakest link even in a badly
designed one.

The model I have used for the past 20 years is to consider the work factor
for creating a bogus certificate. That is the model I used when we built
the WebPKI. The Web PKI is predicated on the costs associated with
acquiring a certificate being greater than the value to an attacker.
Requiring a corporate registration is not an insuperable obstacle but it
imposes a known cost and doing that on a repeated basis without being
caught or leaving a tell-tale signal is expensive. The point of revocation
was to reduce the window of vulnerability for use of a bogus certificate so
as to limit the value to the attacker.
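
As a toy illustration of that economics, with every number hypothetical and
chosen only to show the shape of the argument:

    # Toy work-factor model: issuance stays uneconomic for the attacker
    # while the cost of acquiring a credential exceeds its expected value
    # before revocation. All numbers are hypothetical.
    cert_cost = 300.0         # cost of acquiring a vetted cert
    value_per_day = 50.0      # attacker revenue per day of use
    days_until_revoked = 3.0  # window before detection and revocation

    expected_value = value_per_day * days_until_revoked
    print("attack is", "economic" if expected_value > cert_cost else "uneconomic")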

Now one of the problems in that system was that it worked too well. And so
people who should have known better decided they could shut off the
controls I and others had predicated the security model on. Then they
blamed us for the consequences.

There has only been one occasion when the WebPKI has not worked within the
design parameters and that was the DigiNotar attack.


Two years ago I extended my model to consider time, because one of the
astonishing things about notary hash chains is that it is actually quite
easy to build one whose work factor against a backdating attack can be
considered infinite.
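
A minimal sketch of why; the entry format is made up, and any
serialization with the same commitment structure behaves the same way:

    # Minimal notary hash chain: each head commits to every earlier entry,
    # so altering or backdating an old entry changes every later head.
    # Once a head has been witnessed by third parties, backdating requires
    # forging all of the witnesses as well.
    import hashlib

    def extend(head: bytes, entry: bytes) -> bytes:
        return hashlib.sha256(head + entry).digest()

    head = b"\x00" * 32  # genesis value
    for entry in [b"2015-10-02 cert A", b"2015-10-03 cert B"]:
        head = extend(head, entry)

    print(head.hex())  # cross-publish this value periodically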

I am aware of the limitations of the PKIX trust model for the general trust
problem, but it does work well within the organizations that it is designed
to serve, and they do in fact use it on a substantial scale. Most people who
are serious about OpenPGP and not merely playing editor wars accept the
fact that the Web of Trust model does not scale. The Moore bound problem
prevents the WOT alone from achieving global scale.
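
A back-of-envelope illustration of the Moore bound point; the degree and
path-length figures are hypothetical:

    # With maximum degree d and diameter k, a graph can contain at most
    # 1 + d * sum((d-1)**i for i in range(k)) nodes (the Moore bound).
    def moore_bound(d: int, k: int) -> int:
        return 1 + d * sum((d - 1) ** i for i in range(k))

    # Even at 30 carefully verified keys per user and trust paths of
    # length 3, the reachable population tops out around 26,000 people,
    # nowhere near global scale.
    print(moore_bound(30, 3))  # 26131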

If, however, you combine the CA-issued cert model, the WOT model and notary
hash chains, it is not only possible to establish a robust, scalable email
PKI, it is reasonably straightforward.


Re: [FORGED] Re: Policy Update Proposal -- Remove Email Trust Bit

2015-09-25 Thread Phillip Hallam-Baker
On Fri, Sep 25, 2015 at 8:47 AM, Peter Gutmann 
wrote:

> Eric Mill  writes:
>
> >can anyone lay out what the steps to doing that would look like so the
> S/MIME
> >community can react in more concrete ways?
>
> Well, first you'll have to tell the S/MIME community what it is you want
> them
> to do...
>

Would people be interested in the suggestion I have?

If we are going to get anywhere with end-to-end secure email, we need to

1) End the silly OpenPGP / S/MIME standards war

2) Adopt a design for end-to-end secure messaging that is as easy to use as
regular mail.

3) Design any infrastructure so there is a compelling adoption incentive
for users when market penetration is less than 5% [currently we have about
2 million users of S/MIME and the same of OpenPGP or about 0.1% of total
Internet users]

4) Support the fact that users now need to be able to read their mail on a
wide range of platforms.


I have code running in the lab that I think meets these needs. And I think
that there is a compelling reason for every necessary stakeholder to
participate:


*Users*: The ability to send E2E mail to 0.1% of mail users is not a
compelling adoption incentive. A really good password manager that allows
the user all the benefits of a cloud-based password manager without relying
on the cloud service for security is probably enough to get to critical
mass.


*S/MIME & OpenPGP Community*: Yes, I get that neither of you wants to admit
defeat. But S/MIME has deployment ubiquity and OpenPGP has mindshare. You
need each other.

Fortunately we are at a technology inflection point. The transition to ECC
is going to make everyone want to throw their existing schemes away and
replace them. Not because of the ECC change but because of Matt Blaze's
work on proxy re-encryption which does not fit well with RSA but fits
marvelously with ECDHE.


*Thunderbird*:

Right now it takes me 20 minutes to configure Thunderbird to do S/MIME. I
can do the same thing for Windows Live Mail with the click of one button.
Not because of what Microsoft has done but because I took the instructions
for applying for a cert and converted them into code.

In general any set of user instructions that does not involve any user
choice can be eliminated and replaced by code.



There is also a big opportunity. Remember what originally made Mozilla the
browser to use? It wasn't being open source, it was having tabbed browsing.
I think there is a similar opportunity here. One of the things I have
noticed with the Internet and Web is that many ideas are tried before their
time has come. I saw dozens of Facebook-like schemes before that particular
one took off. Part of that is execution but another part is that people
take time to adapt to the new technology and be ready for another dose. We
had blogs back in 1994. They only took off in the Fall of 2000.

Right now Thunderbird isn't useful for much more than reading mail. It can
in theory be used for RSS feeds and for NNTP news. But those are withering.

Back when the Web began there was a product called Lotus Notes that did a
lot of very interesting things. That was the application that many of the
Internet mail standards were originally developed to support.

I think we now have most of the pieces in place that make a different type
of mail client possible, one that is a message based workflow system. The
critical piece that is missing is a usable model for security.


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Phillip Hallam-Baker
On Tue, Sep 22, 2015 at 4:47 AM, Brian Smith  wrote:

> Kathleen Wilson  wrote:
>
> > Arguments for removing the Email trust bit:
> > - Mozilla's policies regarding Email certificates are not currently
> > sufficient.
> > - What else?
> >
> >
> * It isn't clear that S/MIME using certificates from publicly-trusted CAs
> is a model of email security that is worth supporting. Alternatives with
> different models exist, such a GPG and TextSecure. IMO, the TextSecure
> model is more in line with what Mozilla is about that the S/MIME model.
>

The idea that there is one trust model that meets every need is completely
wrong.

Hierarchical trust models meet the needs of hierarchical organizations very
well. When I last did a survey I was rather surprised to find that there
are actually about as many CA-issued S/MIME certs as there are keys on the
OpenPGP key servers. And that ignores a huge deployment in the US military
that isn't visible to us.

Governments and many enterprises are hierarchical, which makes that the
preferred trust model for government and business uses. If I get an email
from my broker, I really want it to be from someone who is still a Fidelity
employee.

Hierarchical is not sufficient by itself which is why email clients should
not be limited to a single trust model. It should be possible to specify
S/MIME keys directly by fingerprint.


* It is better to spend energy improving TLS-related work than
> S/MIME-related stuff. The S/MIME stuff distracts too much from the TLS
> work.
>

The TLS model is server-side authentication. Saying that client-side
authentication distracts from server-side makes no sense to me.



> * We can simplify the policy and tighten up the policy language more if the
> policy only has to deal with TLS certificates.
>

You could save even more time if you stopped supporting Thunderbird.

If Mozilla isn't going to do Thunderbird right and keep it up to date, that
might be the right choice of course.


* Mozilla's S/MIME processing isn't well supported. Large parts of it are
> out of date and the people who maintain the certificate validation logic
> aren't required to keeping S/MIME stuff working. In particular, it is OK
> according to current development policies for us to change Gecko's
> certificate validation logic so that it works for SSL but doesn't
> (completely) work for S/MIME. So, basically, Mozilla doesn't implement
> software that can properly use S/MIME certificates, as far as we know.



> Just to make sure people understand the last point: I think it is great
> that people try to maintain Thunderbird. But, it was a huge burden on Gecko
> developers to maintain Thunderbird on top of maintaining Firefox, and some
> of us (including me, when I worked at Mozilla) lobbied for a policy change
> that let us do our work without consideration for Thunderbird. Thus, when
> we completely replaced the certificate verification logic in Gecko last
> year, we didn't check how it affected Thunderbird's S/MIME processing.
> Somebody from the Thunderbird maintenance team was supposed to do so, but I
> doubt anybody actually did. So, it would be prudent to assume that
> Thunderbird's S/MIME certificate validation is broken.
>

The Internet has two killer applications, Mail and the Web. I invented
WebMail (no, really: we had a court case with a patent troll and it turns
out that I did) and I don't think it is the right answer.

Right now there are problems with the specs for OpenPGP, and with S/MIME.
Both are examples of 90/10 engineering from the days when that was
sufficient. Today they just don't make the grade.


If people want to have an email infrastructure that is end-to-end secure,
offers all the capabilities of OpenPGP and S/MIME, is fully backwards
compatible, and makes email and the Web easier to use, then I have an
architecture that does exactly that.

If someone was willing to work with me and help me to integrate with
Thunderbird in the same way that I currently integrate with Windows Live
Mail (and Outlook to come) then we could open with support for all the
major desktop email clients.


At some point, I can do the same thing for WebMail, but it isn't possible
to meet all my goals there until we can move to ECC.


Re: Pre-cert misissuance

2015-09-19 Thread Phillip Hallam-Baker
Before this goes too far, perhaps we should have an in-person meeting down
in the valley on how to deal with this, and do a review of ACME at the same
time, these being somewhat linked.

The controls Tim Mather and co brought over from the NSA worked well for 20
years but it looks like they have been eroded. At this point we are on the
brink of a technology transition to ECC and also deploying CT.

There are options on the table today that we did not know existed in 1995.



On Sat, Sep 19, 2015 at 5:06 PM, Richard Barnes  wrote:

> On Sat, Sep 19, 2015 at 2:12 PM, Brian Smith  wrote:
>
> > On Sat, Sep 19, 2015 at 7:20 AM, Gervase Markham 
> wrote:
> >
> > > Symantec just fired people for mis-issuing a google.com 1-day
> pre-cert:
> > >
> >
> > By the way, Symantec didn't say "pre-cert," they said "certificates".
> >
>
> Well, a "pre-cert" is just a certificate with the poison extension in it.
>
> --Richard
>
>
>
> >
> > Also, I we shouldn't be splitting hairs at the difference between
> > pre-certificates and certificates as far as mis-issuance detection is
> > concerned. If people think there is a meaningful (technical, legal, etc.)
> > distinction between a pre-certificate being logged via CT and the
> > corresponding certificate being logged in CT, then we should consider
> > removing the pre-certificate mechanism from CT so that there's no doubts
> in
> > that. My view is that there is no meaningful difference.
> >
> > Cheers,
> > Brian


Re: Remove Roots used for only Email and CodeSigning?

2015-09-04 Thread Phillip Hallam-Baker
On Mon, Aug 31, 2015 at 7:02 PM, Kathleen Wilson 
wrote:

> Breaking this out into a separate discussion:
>
> ...should Mozilla continue to accept
>> certificates without the "Websites" trust bit? Considering that there are
>> not clear guidelines for how to process either code signing or email, and
>> considering their relevance (or lack thereof) to Mozilla, it would seem
>> wise to carefully consider both whether to accept new applications and
>> what to do with existing applications. My own personal suggestion is to
>> not accept new certificates, and to purge the existing ones.
>>
>
>
> I have always viewed my job as running the NSS root store, which has many
> consumers, including (but not limited to) Mozilla Firefox. So, to remove
> something like root certs that only have the email trust bit enabled
> requires input from the consumers of NSS. It should not be removed just
> because Firefox doesn't use it.
>
> Is the mozilla.dev.security.policy forum the correct place to have this
> discussion about the NSS root store only including root certs with the
> Websites trust bit enabled?
>
> Or should I start the discussion in another forum, such as
> mozilla.dev.tech.crypto?
>

Has Mozilla stopped supporting Thunderbird?

The S/MIME support in Thunderbird has an insane user interface. It took me
over 20 minutes to issue myself a cert. But it is there and it could be
fixed very easily. I would even be willing to do the fixing, only the
instructions for setting up a development version of the library are
utterly incomprehensible, incomplete and wrong, so after a couple of days I
gave up.


To support a world in which everyone is using end-to-end secure mail we
need more than one trust model. The PKIX hierarchical approach works for
enterprises but not for individuals. OpenPGP has two models: the direct
trust model via fingerprints, which works at an individual level, and the
Web of Trust model, which everyone agrees does not scale.

A couple of years ago, when I started work on what has become The Mesh, I
took a look at combining the PKIX and OpenPGP approaches using a 'work
factor' approach to provide an objective measure. Rather surprisingly, I
discovered that it is possible to make the Web of Trust scale if you
combine the Direct trust, CA Trust and Web of Trust concepts.


Right now I am working on a proposal that I think takes email messaging
security to the next level and makes ubiquitous use practical for the first
time. I have been publishing drafts on IETF as I go along but the next
increment should be a quantum leap forward. My goals are

* Make computers easier to use
* Make computers secure at the same time as being easier to use
* Put the user in full control of their security to the maximum extent that
they are able to take that responsibility.


This is not the time for Mozilla to be dropping support for email roots.
Moreover the principle of separating email roots and code signing roots
from TLS roots is sound. If Mozilla were to stop recognizing separate
roots, that would encourage CAs to conflate concerns that should be
separated.


Re: Requirements for CNNIC re-application

2015-04-15 Thread Phillip Hallam-Baker
On Tue, Apr 14, 2015 at 8:09 AM, Kurt Roeckx  wrote:
> On 2015-04-14 13:54, Rob Stradling wrote:
>>
>> On 14/04/15 12:38, Kurt Roeckx wrote:
>>>
>>> On 2015-04-14 01:15, Peter Kurrasch wrote:

>>>> Let's use an example. Suppose CNNIC issues a cert for
>>>> whitehouse[dot]gov and let's further suppose that CNNIC includes this
>>>> cert in the CT data since they have agreed to do that. What happens
>>>> next?
>>>
>>>
>>> What I've been wondering about is whether we need a mechanism where the
>>> CT log should approve the transition from one issuer to an other.
>>
>>
>> Kurt, isn't CAA (RFC6844) the tool for this job?
>
>
> I don't see everybody publishing that.  Or do you want to make it a
> requirement that everybody publishes such a record?

I think that it is from today that CAs are required to state in their CPS
whether they do CAA or not.

Anyone who does not implement CAA and then mis-issues just one cert
that should have been caught is going to look exceptionally stupid.


CAA tells CAs what they should not do.
CT tells everyone whether or not they did it.

Those are the accountability controls.

In addition, HSTS and HPKP provide access controls, which are currently
being distributed through HTTP headers and pre-loaded lists, and I have a
proposal for publishing the exact same info through the DNS as CAA
attributes.


Re: Requirements for CNNIC re-application

2015-04-15 Thread Phillip Hallam-Baker
CT is an accountability control, not an access control.

We need both.

Sent from my difference engine


> On Apr 14, 2015, at 18:05, Matt Palmer  wrote:
> 
>> On Tue, Apr 14, 2015 at 01:38:55PM +0200, Kurt Roeckx wrote:
>>> On 2015-04-14 01:15, Peter Kurrasch wrote:
>>> Let's use an example. Suppose CNNIC issues a cert for whitehouse[dot]gov 
>>> and let's further suppose that CNNIC includes this cert in the CT data 
>>> since they have agreed to do that. What happens next?
>> 
>> What I've been wondering about is whether we need a mechanism where the CT
>> log should approve the transition from one issuer to an other.
> 
> NO.  A CT log is a *log*, not a gatekeeper.
> 
> - Matt
> 


Re: What is the security benefit of certificate transparency?

2015-04-14 Thread Phillip Hallam-Baker
I am coming to the conclusion that 'Why fix X when the attacker can do
Y, so let's not bother with X' is the worst form of security argument.

No security control is a magic bullet. Expecting the control that
addresses X to also address Y is unreasonable. It is an excuse for
inaction.

CT is merely one component in the PKI/2 infrastructure. It is a
measurement device, so don't expect it to change anything on its own;
that is not its purpose. Measurement is not a control system, but
accurate measurement is a requirement for a good control system.


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Phillip Hallam-Baker
OK, so there is policy. But what about enforcement?

At the moment we only have accountability controls. Can we turn them
into access controls?

On Thu, Apr 2, 2015 at 10:45 AM, Richard Barnes  wrote:
>
>
> On Thu, Apr 2, 2015 at 10:34 AM, Phillip Hallam-Baker
>  wrote:
>>
>> On Thu, Apr 2, 2015 at 9:41 AM, Gervase Markham  wrote:
>> > On 02/04/15 12:42, Sebastian Wiesinger wrote:
>> >> the plan would be to continue allowing current certificates (perhaps
>> >> with some sort of whitelist) while not accepting new certificates.
>> >>
>> >> Could you ask Google to share their whitelist?
>> >
>> > Until they announced, we were not aware that Google would be requesting
>> > a whitelist. It is quite possible CNNIC will supply us both with the
>> > same data.
>> >
>> >> As far as I understand it, without an explicit whitelist nothing would
>> >> prevent CNNIC to backdate new certificates so that they would be
>> >> accepted. Is this right or am I missing something?
>> >
>> > Well, if anyone detects them doing this, by e.g. scanning the internet,
>> > the consequences will be serious. I have no reason to believe that they
>> > would backdate certs but if they did, they would need to be very
>> > confident that no-one would notice. If I owned CNNIC, I would not be at
>> > all confident of this.
>>
>> Organizations are funny things.
>>
>> Facing a choice of coming clean, admitting a mistake and moving on
>> versus a cover up, pretty much every rational CEO will choose the
>> first.
>>
>> Faced with a choice between getting fired for making a mistake and
>> making a pathetic attempt to cover up with a small chance of success,
>> a rational junior manager will choose the second.
>>
>>
>> I think we need to rethink how the principle of least privilege
>> applies here and make sure we are doing everything we can to minimize
>> risk.
>>
>> As a matter of policy, no cert should ever issue for a private key
>> that is not under the direct control of a CA unless one of the
>> following apply to the corresponding cert:
>>
>> 1) The other party has CP, CPS and full audit for operating a CA.
>> 2) There is a name constraint.
>> 3) It is an end entity certificate.
>
>
> That's what the Mozilla policy already says!
>
> """
> 10. ... All certificates that are capable of being used to issue new
> certificates, that are not technically constrained, and that directly or
> transitively chain to a certificate included in Mozilla’s CA Certificate
> Program MUST be audited in accordance with Mozilla’s CA Certificate Policy
> and MUST be publicly disclosed by the CA that has their certificate included
> in Mozilla’s CA Certificate Program. The CA with a certificate included in
> Mozilla’s CA Certificate Program MUST disclose this information before any
> such subordinate CA is allowed to issue certificates.
> """
>
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/inclusion/
>
> Indeed, the lack of disclosure and audit is the core of the concern in this
> case.
>
> --Richard
>
>
>>
>> Further no private key should ever be in a network accessible device
>> unless the following apply:
>>
>> 1) There is a path length constraint that limits issue to EE certs.
>> 2) It is an end entity certificate.
>>
>> Perhaps we should take this to the IETF right key list.


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Phillip Hallam-Baker
On Thu, Apr 2, 2015 at 12:50 PM, Kurt Roeckx  wrote:
> On Thu, Apr 02, 2015 at 12:34:55PM -0400, Phillip Hallam-Baker wrote:
>> On Thu, Apr 2, 2015 at 11:05 AM, Kurt Roeckx  wrote:
>> > On 2015-04-02 16:34, Phillip Hallam-Baker wrote:
>> >>
>> >> Further no private key should ever be in a network accessible device
>> >> unless the following apply:
>> >>
>> >> 1) There is a path length constraint that limits issue to EE certs.
>> >> 2) It is an end entity certificate.
>> >
>> > Why 1)?
>>
>> Can you state a use case that requires online issue of Key Signing Certs?
>
> You suggested it, so I'm guessing you're asking yourself?
>
> The only use case I can think of is to be able to MITM people like
> we saw the firewall do here.  If you want to do something like
> that the key should not have been signed by any CA that chains back
> to a root CA in the Mozilla root store, they should use a private

Oh, you mean why permit 1 at all?

If that were not permitted, it would be impossible for a CA to issue any
end entity cert without an offline key ceremony. That is obviously
impractical.


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Phillip Hallam-Baker
On Thu, Apr 2, 2015 at 11:05 AM, Kurt Roeckx  wrote:
> On 2015-04-02 16:34, Phillip Hallam-Baker wrote:
>>
>> Further no private key should ever be in a network accessible device
>> unless the following apply:
>>
>> 1) There is a path length constraint that limits issue to EE certs.
>> 2) It is an end entity certificate.
>
> Why 1)?

Can you state a use case that requires online issue of Key Signing Certs?


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Phillip Hallam-Baker
On Thu, Apr 2, 2015 at 9:41 AM, Gervase Markham  wrote:
> On 02/04/15 12:42, Sebastian Wiesinger wrote:
>> the plan would be to continue allowing current certificates (perhaps
>> with some sort of whitelist) while not accepting new certificates.
>>
>> Could you ask Google to share their whitelist?
>
> Until they announced, we were not aware that Google would be requesting
> a whitelist. It is quite possible CNNIC will supply us both with the
> same data.
>
>> As far as I understand it, without an explicit whitelist nothing would
>> prevent CNNIC to backdate new certificates so that they would be
>> accepted. Is this right or am I missing something?
>
> Well, if anyone detects them doing this, by e.g. scanning the internet,
> the consequences will be serious. I have no reason to believe that they
> would backdate certs but if they did, they would need to be very
> confident that no-one would notice. If I owned CNNIC, I would not be at
> all confident of this.

Organizations are funny things.

Facing a choice of coming clean, admitting a mistake and moving on
versus a cover-up, pretty much every rational CEO will choose the
first.

Faced with a choice between getting fired for making a mistake and
making a pathetic attempt to cover up with a small chance of success,
a rational junior manager will choose the second.


I think we need to rethink how the principle of least privilege
applies here and make sure we are doing everything we can to minimize
risk.

As a matter of policy, no cert should ever issue for a private key
that is not under the direct control of a CA unless one of the
following applies to the corresponding cert:

1) The other party has CP, CPS and full audit for operating a CA.
2) There is a name constraint.
3) It is an end entity certificate.

Further, no private key should ever be in a network-accessible device
unless one of the following applies:

1) There is a path length constraint that limits issue to EE certs.
2) It is an end entity certificate.

Perhaps we should take this to the IETF 'therightkey' list.
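
In the meantime, here is a minimal sketch of auditing the second pair of
rules against a captured chain, assuming a recent version of the Python
'cryptography' package and a chain saved leaf-first in chain.pem. Whether
a key actually sits on a network-accessible device cannot, of course, be
read out of the certificate:

    # Minimal sketch: flag CA certs in a chain that lack the pathlen:0
    # constraint that would limit them to issuing end-entity certs.
    from cryptography import x509

    with open("chain.pem", "rb") as f:
        chain = x509.load_pem_x509_certificates(f.read())

    for cert in chain[1:]:  # everything above the end-entity cert
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
        if bc.ca and bc.path_length != 0:
            print("CA without pathlen:0:", cert.subject.rfc4514_string())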


Re: Name Constraints

2015-03-09 Thread Phillip Hallam-Baker
On Mon, Mar 9, 2015 at 11:38 AM, Michael Ströder 
wrote:

> Ryan Sleevi wrote:
> > Given that sites in consideration already have multiple existing ways to
> > mitigate these threats (among them, Certificate Transparency, Public Key
> > Pinning, and CAA),
>
> Any clients which already make use of CAA RRs in DNS?
>
> Or did you mean something else with the acronym CAA?
>
> Ciao, Michael.
>
>
Sites can use CAA. But the checking is not meant to happen in the client as
the client cannot know what the CAA records looked like when the cert was
issued.

A third party can check the CAA records for each new entry on a CT log
however. And I bet that every CA that implements CAA will immediately start
doing so in the hope of catching out their competitors.


CAA also provides an extensible mechanism that could be used for more
general key distribution if you were so inclined.


Re: Tightening up after the Lenovo and Comodo MITM certificates.

2015-02-25 Thread Phillip Hallam-Baker
On Wed, Feb 25, 2015 at 8:59 AM, Peter Kurrasch  wrote:

> I'm not sure I totally follow here because informed consent requires the
> ability to inform, and I don't think we have that yet.
>
> The way any attacker operates is to find gaps in a system and make use of
> them. In my questions I'm trying the same approach: what are some gaps in
> the Komodia solution and how might we exploit them ourselves?
>

There are multiple problems here. One of them is that what is obvious to
folk in the PKI community is not necessarily obvious to folk in the
Anti-Virus community. Another problem is that following the advice given
out by Harvard Business School and setting up separate arms-length
companies to work on speculative 'disruptive' products means that they are
operating without the usual QA processes you would expect of a larger
company.

I don't want to get into specifics at this point.

We can do finger pointing and blamestorming but what we really need is a
solution. I think informed consent is a major part of the problem.


Malware and crapware are a real problem. My problem with what Lenovo did
isn't just that the code they installed had bugs; it is that they installed
the stuff at all. If I pay $1,000 for a laptop, I do not expect the
manufacturer to fill the inside of the case with manure. It is clearly
worse if the manure carries a disease but the solution to the problem is to
not ship the manure at all rather than trying to pasteurize it.

So one part of the solution here is the Windows Signature edition program
which guarantees customers the crapware free computer they paid for.


Fixing the AV hole is harder. The problem as the Anti-Virus people see it
is how to scan data for potentially harmful content, whether it is mail or
documents or web pages. The AV world regards itself as being a part of the
trusted computing base and thus entitled to have full access to all data in
unencrypted form. AV code has from the start had a habit of hooking
operating system routines at a very low level and taking over the machine.

Now we in the PKI world have a rather different view here. We see the root
store as being the core of the trusted computing base and that the 'user'
should be the only party making changes. We do not accept the excuse that
an AV product is well intentioned. However, recall that it was Symantec
that bought VeriSign, not the other way round. We don't necessarily have
the leverage here.


The fundamental changeable aspect of the current model for managing the
root store is the lack of accountability or provenance. As a user I have
tools that tell me what roots are in the store, but I have no idea how they
got there. On the Windows store (which I am most familiar with), I don't
have any way to distinguish between roots from the Microsoft program and
those added by programs.

One quick fix here would be for all trust root managers to use the CTL
mechanism defined by Microsoft (and pretty much a de facto standard) to
specify the trusted roots in their program, thus enabling people to write
tools that would make it easy to see that this version of Firefox has the
200+ keys from the program plus these other five that are not in the
program.
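
As a hint of how little tooling this would take, here is a minimal sketch
(CPython on Windows only, since ssl.enum_certificates does not exist
elsewhere) that dumps the local ROOT store in a form that could be diffed
against a published program CTL:

    # Minimal sketch: list the local Windows ROOT store by fingerprint so
    # it can be compared against a trust program's published list.
    import hashlib
    import ssl

    for der, encoding, trust in ssl.enum_certificates("ROOT"):
        print(hashlib.sha256(der).hexdigest(), trust)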


Right now it takes a great deal of expertise to even tell if a machine has
been jiggered or not. That is the first step to knowing if the jiggering is
malicious or not and done competently or not.


Re: DSA certificates?

2014-12-23 Thread Phillip Hallam-Baker
DSA was the mandatory-to-implement algorithm originally, since it was out
of patent earlier than RSA.

I would like to kill as many unused crypto implementations as possible. The
algorithm might be sound but an implementation that has never been used in
practice is a huge liability.




On Tue, Dec 23, 2014 at 3:31 AM, Peter Gutmann 
wrote:

> Ryan Sleevi  writes:
>
> >(and for sure, Microsoft's stack _does_ implement it,
>
> Does anyone know the motivation for this?  MS also implemented support for
> X9.42 certificates, which no-one has ever seen in the wild, but it was in
> receive-only mode (it would never generate data using them) and was done
> solely in order to avoid any accusations that they weren't following
> standards
> (there was this antitrust thing going on at the time).  So having it
> present
> in a MS implementation doesn't necessarily mean that it's used or
> supported,
> merely that it's, well, present in a MS implementation.
>
> (I'm just curious, wondering what the story behind this one is).
>
> Peter.


Re: New free TLS CA coming

2014-11-20 Thread Phillip Hallam-Baker
On Thu, Nov 20, 2014 at 6:22 AM, Richard Barnes  wrote:
> I am from Mozilla, and the replies here are exactly right.  From the 
> perspective of the Mozilla root CA program, Let's Encrypt will be treated as 
> any other applicant, should they choose to apply.  No "immediate acceptance", 
> no "less audited" -- same audit requirements and application process as 
> everyone else.

I don't see the issue here. Comodo has been giving away certs for 8
years now. So have other CAs. Mozilla has known about that. It has
never been raised as an issue at rollover.

The issue with CAcert wasn't that they were refused; they withdrew
their application after they realized that they were never going to
meet the audit criteria.

The only different thing here is that this time there is a proposal
for an automated enrollment protocol as well and presumably a
commitment to implementing it.

I have been calling for an automated enrollment protocol for quite a
while. This is the one I wrote for PRISM-PROOF email:

http://tools.ietf.org/html/draft-hallambaker-omnipublish-00


I was considering a wide range of scenarios, ranging from EV certs to
certs for the coffee pot. Paid, unpaid, strong validation, DV, etc. My
model is subtly different, but that is in part because I have worked
with Stephen Farrell, the current Security AD, on five different
enrollment protocols over the years, and I wanted to avoid the 'what,
again?' response.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Client certs

2014-10-20 Thread Phillip Hallam-Baker
A relevant point here is that one of the main reasons for the difficulty
in using client certs was a preposterous patent claim on the implementation
of RSA in a hardware device with a USB serial interface.

I kid you not.

That might not be as much of an issue these days. The patent might have
expired, and even if it hasn't, a sequence of recent SCOTUS rulings has
made those sorts of claims difficult to support.

But then again, since USB tokens are being replaced by smart phones anyway,
perhaps even that is irrelevant.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Client certs

2014-10-06 Thread Phillip Hallam-Baker
On Thu, Sep 25, 2014 at 8:29 AM, Gervase Markham  wrote:
> A question which occurred to me, and I thought I'd put before an
> audience of the wise:
>
> * What advantages, if any, do client certs have over number-sequence
>   widgets such as e.g. the HSBC Secure Key, used with SSL?
>
> http://www.hsbc.co.uk/1/2/customer-support/online-banking-security/secure-key
>
> It seems like they have numerous disadvantages (some subjective):
>
> * Client certs can be invisibly stolen if a machine is compromised
> * Client certs are harder to manage and reason about for an average
>   person
> * Client certs generally expire and need replacing, with no warning
> * Client certs are either single-machine, or need a probably-complex
>   copying process
>
> What are the advantages?

Going back to this thread because nobody seems to have addressed the
real issue - usability.

Right now I am working on email encryption but solving the usability
issues of email requires a general solution so I have worked on these
issues as well. And I think I have solved them.


Passwords have terrible security because they shift the cost to someone
who does not care about the security of the asset, because they don't own
it. I use the same password for my Washington Post, New York Times,
Slashdot, etc. accounts and it was leaked over five years ago. I do not
care because it isn't my asset at risk. So the passwords get forgotten,
and no password is more secure than its recovery system, which is email
over SMTP.

Passwords are also horribly insecure because they have to be disclosed in
order to be validated. Now we could solve this with some clever crypto
scheme, but why bother when it's actually easier to design a sensible PKI
scheme?

Passwords also have pretty horrid usability. But they get away with it
because the implementation of client certificates is really, really bad.

One-time tokens have pretty horrid usability as well. You have to carry
the thing about with you, which I won't do unless I am paid to. So most
of those schemes are migrating onto smart phones. TAAA-DAAA! We are
now emulating 1970s technology with a computer that would have been
supercomputer class in the 1990s.

There is a much better way to use a smartphone: send it a message that
asks "Do you want to pay your gas bill of $503.43?", have the user
answer yes or no, and have the app return a signed message.
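
A sketch of the core of that exchange, assuming an Ed25519 device key
enrolled with the relying party out of band; the challenge format is
invented for illustration (Python, using the pyca/cryptography package):

    # Sketch: phone-side approval of a transaction challenge.
    # The device key would really live in secure storage; the
    # challenge format here is made up.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)

    device_key = Ed25519PrivateKey.generate()  # enrollment: done once
    enrolled_public = device_key.public_key()  # registered with the bank

    challenge = b"pay-gas-bill|amount=503.43|currency=USD|nonce=8f3a01"

    # User taps "yes"; the app signs exactly what it displayed.
    signature = device_key.sign(challenge)

    # The relying party verifies against the enrolled public key.
    try:
        enrolled_public.verify(signature, challenge)
        print("approved")
    except InvalidSignature:
        print("rejected")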


I am currently working on making S/MIME and PGP email really, really
easy to use. As in: no more effort to use than regular email. As part
of that I have written a tool that creates and installs a certificate.
For a self-signed cert the process is just to run the configurator tool
and it's done. For a CA-issued cert, the user will specify the domain
name of the CA and it is done. Not even email links to click on
because the configurator has control of their mail app and can do the
challenge/response automatically. [There are other validation schemes
supported but I won't go into them here]

What I have taken out is all the makework that the user currently has
to suffer. And this is not just bad in Thunderbird, it is
poke-the-user-in-the-eye-with-a-sharp-stick bad. It literally takes a
quarter of an hour to go through all the incantations. And that is with
me doing it, and I know what I am doing. I would expect no more than 60%
of users to follow the instructions correctly. And all the effort is
complete makework: the user has to pull the certificate out of the
Windows store and install it in T-bird. Oh, and repeat once a year.

Client SSL certs are just as bad, and in addition the user interface is
horrid, as it is on every other browser.

The basic problem with most Internet crypto is that the implementation
is 'enterprise grade'. By which I mean terrible usability because the
person who decides what stuff to buy will never use it.

The problems don't require a usability lab to sort out either. In fact,
DON'T go to the lab. If the user is being given work to do then the
design is wrong. I don't need to test my configurator in the usability
lab because there isn't a user interface to test.


OK so how do we solve the usability issues Gerv raised?

* Certs expire after 1 year
* Transferring keys between devices?

Answer: We don't. Look again at the requirements. What are the use
cases that drive them? I can't see any driver for enterprise or
consumer deployments. I can't even see a need to do either in the
government case, but the first is probably inherited from NSA
doctrine.

The first step to sanity is that authentication keys are only ever on
one machine. If a user has two machines then they have two keys. If they
have eight machines then they have eight keys. This solves two problems:
first the key transport problem, and second a large chunk of the
revocation problem. If a device is lost we only need to revoke one
device's key, not every device's.

[Decryption keys are another matter; there are good reasons to have a
single decryption key on multiple devices. And the reason that I got
into the per-device authentication keys

Re: Short-lived certs

2014-09-05 Thread Phillip Hallam-Baker
+1

Short lifetime certs don't solve every problem with revocation. But they
allow us to drastically reduce the problem space. That makes applying other
mechanisms viable.

The end goal has to be to reduce the time window of vulnerability to the
time it takes people to respond to phishing sites and other attacks. That
is minutes, not days.

We are not going to get there soon. But that is where we have to aim.




On Fri, Sep 5, 2014 at 12:43 PM,  wrote:

> Hi Gerv, you've been busy!
>
> The cases Jeremy identified (thanks, Jeremy!) are all good problems to
> address and while I'm not unsympathetic I don't necessarily find them all
> that compelling. The situations involving network meddling by someone in
> power is especially troubling and goes beyond what I'm interested in
> covering in this discussion.
>
> That said, the case for performance is troubling for a couple reasons.
> First is that I've seen many times where someone says "(whatever) would
> work so much better if we could bypass this security stuff". I don't mean
> to suggest that people who want small cert chains are wanting to bypass
> security but a practice such as this does open the door for people who
> might consider such things. So my initial concern is where this might lead,
> and what protections might be needed to ensure it doesn't go further.
>
> The bigger problem I have with this, however, really has nothing to do
> with people who have good server configs and are competent server admins.
> In such cases, we can probably assume there are likely to be fewer mistakes
> made and thus less of an impact to security.
>
> My problem is what happens when the cert holder loses control of the
> private key, no matter what the reason is. Relying on the expiration date
> is only a partial answer for 2 reasons: 1) a user might choose to allow the
> expired cert with the compromised key anyway (hence my asking about its
> treatment); and 2) a short expiry might still be long enough to cause harm.
> Consider that a phishing site might only exist for 2 days, just as an
> example.
>
> So in order to safely proceed with a small cert solution I think we need
> to flesh out how key compromises can be mitigated.
>
>
>   *From: *Gervase Markham
> *Sent: *Friday, September 5, 2014 4:47 AM
> *To: *fhw...@gmail.com; Jeremy Rowley;
> mozilla-dev-security-pol...@lists.mozilla.org
> *Subject: *Re: Short-lived certs
>
> On 04/09/14 19:32, fhw...@gmail.com wrote:
> > Could you (or anyone) elaborate a bit on the use cases where short
> > lived certs are desirable?
>
> See other messages in this thread - it saves a significant amount of
> setup time not to have to wait for a response from the OCSP server.
>
> > I'm also wondering what the plan is for handling an expired short
> > term cert. Will the user be given a choice of allowing an exception
> > or does it get special handling?
>
> What if I say it's treated the same as any other expired cert?
>
> Gerv
>
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-05 Thread Phillip Hallam-Baker
On Fri, Sep 5, 2014 at 5:30 AM, Gervase Markham  wrote:
> On 04/09/14 14:25, Rob Stradling wrote:
>> When attempting to access an HTTPS site with an expired cert on Firefox
>> 32, you'll see a "This Connection is Untrusted" page that lets you add
>> an exception and proceed.
>>
>> But when attempting to access an HTTPS site with a revoked cert, you'll
>> see "Secure Connection Failed" and Firefox 32 does NOT let you proceed.
>>
>> Would it make sense to treat expired certs in the same way as revoked
>> certs?  (And if not, why not?)
>
> Logically, it does make sense. In practice, revocation has a near-zero
> false-positive rate, whereas expired sadly has a much greater
> false-positive rate. Which is why Firefox treats them differently.

Which means that expired short-lived certs probably need to be treated
differently.

We probably need to mark them in some way as being intended to be
short-lived. And we certainly need to fix the problem of getting them
renewed efficiently.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-04 Thread Phillip Hallam-Baker
On Thu, Sep 4, 2014 at 6:43 PM, Ryan Sleevi
 wrote:
> On Thu, September 4, 2014 11:20 am, Phillip Hallam-Baker wrote:
>>  Some constraints:
>>
>>  1) Any new scheme has to work equally well with legacy browsers and
>>  enabled browsers.
>
> Sure. However, this requires a definition of legacy.
>
>>
>>  2) Ditto for legacy servers and this is actually a harder problem as
>>  upgrading a server can force a complete redesign if they are using a
>>  middleware layer that has changed radically.
>
> Respectfully, Phillip, I disagree. CAs MAY offer such short-lived certs as
> an option. No one's requiring they exclusively limit issuance to it.
> There's no concern for legacy servers. If you're a legacy server, you
> don't use this. It's that simple.

It is still a problem.

The point I am trying to get across here is that there are very few
good reasons for an end user to stick with an obsolete browser, and
almost all would upgrade given the choice. This is not the case for
servers: there are lots of folk who will complain if they are forced to
upgrade their server, because that might require them to change their
PHP version, which in turn requires them to completely rework a ton of
spaghetti code piled on top.


>>  Because of (1), the AIA field is going to have to be populated in EV
>>  certs for a very long time and so we probably don't need to raise any
>>  of this in CABForum right now. Lets do the work then let them follow
>>  the deployment. A browser doesn't have to check the AIA field just
>>  because it is there.
>
> I'm not sure I agree with your conclusion for 1. As noted elsewhere, a
> short-lived cert is effectively the same as the maximal attack window for
> a revocation response. That's it. The AIA can be dropped if they're
> equivalent.

It can be dropped as far as security is concerned. But that is only
going to save a few bytes and might cause legacy issues. So why make
being allowed to drop it a major concern at this point?

Dropping AIA is useful for the CA as I don't need to bother with OCSP
at all. But I can only drop AIA if it is not going to cause legacy
browsers to squeak about a missing OCSP distribution point.

If there are browsers that give appropriate treatment to short-lived
certs then I am sure getting CABForum to update the BRs etc. is not
going to be hard. All I am saying here is that it is not a critical-path
concern.


>>  Short lived certs are just as easy in theory BUT they require some new
>>  infrastructure to do the job right. At minimum there needs to be a
>>  mechanism to tell the server how to get its supply of short lived
>>  certificates. And we haven't designed a standard for that yet or
>>  really discussed how to do it and so it isn't ready to press the fire
>>  button on.
>
> I disagree here. What's at stake is not the particular mechanisms of doing
> so, nor would I endorse going down the route of standardizing such
> mechanisms as you do. I would much rather see the relevant frameworks -
> Mozilla and the BRs - altered to support them, and then allow site
> operators and CAs interested in this model to work to develop the
> infrastructure and, based on real world experience, rough consensus, and
> running code, rather than idealized abstractions.

I am not interested in issuing any product until my customers can use
it. And I don't see how they can use it until the cert update process
can be automated.


>>  What I suggest browsers do right now is
>>
>>  1) Join in the IETF discussion on the TLS/PKIX lists saying that you
>>  support my TLS Feature extension proposal aka MUST STAPLE.
>>
>>  2) Read and comment on the proposal you have just committed to.
>>
>>  3) Implement an appropriate response to a certificate that specifies a
>>  MUST STAPLE condition when the server does not staple. This could be
>>  (1) Hard Fail immediately or (2) attempt to do an OCSP lookup and hard
>>  fail if it does not succeed or (3) choose randomly between options 1
>>  and 2 so as to disincentivize CAs from misusing the flag to force
>>  hard fail.
>
> This is something you should nail down before 1 or 2.

OK, if I have to nail it down I will pick 1.

> The correct answer is hard fail. Any other answers and we'll be back here
> again in 5 years with the same issues.

That is my preference.


>>  4) Implement a mechanism that regards certificates with a total
>>  validity interval of 72 hours or less to be valid without checking. I
>>  do not expect this feature to be used very soon but implementing the
>>  feature in the browser is probably a gating function on starting the
>>  server folk thinking 

Re: Short-lived certs

2014-09-04 Thread Phillip Hallam-Baker
On Thu, Sep 4, 2014 at 7:52 AM, Hubert Kario  wrote:
> - Original Message -
>> From: "Gervase Markham" 
>> To: mozilla-dev-security-pol...@lists.mozilla.org
>> Sent: Thursday, September 4, 2014 12:21:50 PM
>> Subject: Short-lived certs
>>
>> Short-lived certs are one plank of our future revocation strategy.[0]
>> Currently, it is not permitted by the CAB Forum Baseline Requirements to
>> revocation pointers out of a cert, ever. However, this is part of the
>> big value of short-lived certs, as it's what unlocks their
>> speed-increasing potential across all browsers. (The logic is that a
>> 3-day expiry misissued cert with no revocation pointers has a similar
>> risk profile to a 1-year expiry misissued cert where the attacker has
>> captured a valid 3-day expiry OCSP response they can staple to it).
>
> It all depends on the exact definition of "short-lived". If the definition
> is basically the same as for OCSP responses or shorter, then yes, they
> provide the same security as regular certs with hard fail for OCSP
> querying/stapling.
>
> I'm not sure what gives us the removal of revocation info from certificate.
>
> I mean, if the recommendation for PKIX is to not check revocation info
> for certificates that have total validity period of less than, say 2 days,
> then inclusion or exclusion of AIA extension is secondary.
>
> There's also the must-staple extension in the works, which can be part of
> the plan: you either get short lived certs or you get a long lived with
> must-staple. They would provide the same security guarantees.

Some constraints:

1) Any new scheme has to work equally well with legacy browsers and
enabled browsers.

2) Ditto for legacy servers and this is actually a harder problem as
upgrading a server can force a complete redesign if they are using a
middleware layer that has changed radically.

3) The status vulnerability window needs to be no longer than 48 hours
for a machine with an accurate clock

4) The scheme must tolerate some degree of clock skew, though the
amount might vary over time.


Because of (1), the AIA field is going to have to be populated in EV
certs for a very long time and so we probably don't need to raise any
of this in CABForum right now. Lets do the work then let them follow
the deployment. A browser doesn't have to check the AIA field just
because it is there.

At worst we reword the requirements on browsers to say that they have
to verify that the status is current and not specify how. Short lived
certs would automatically qualify.


Must-staple and short-lived certs are pretty much the same as far as
the security requirements go. The difference is that the server
requirements for supporting stapling with must-staple are pretty
simple. All that is needed is for the server to specify the must-staple
extension when the certificate is applied for (just a flag on the key
generator) and then to pull the OCSP token from the AIA extension every
n hours, which is already implemented almost everywhere.
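
For that n-hourly refresh, a sketch of the fetch step in Python with the
pyca/cryptography and requests packages; a real server would read the
responder URL out of the certificate's AIA extension, so the URL below
is a placeholder:

    # Sketch: fetch a fresh OCSP token for stapling.
    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    def fetch_staple(cert_pem, issuer_pem, url="http://ocsp.example.com"):
        cert = x509.load_pem_x509_certificate(cert_pem)
        issuer = x509.load_pem_x509_certificate(issuer_pem)
        req = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer, hashes.SHA1())
               .build())
        resp = requests.post(
            url,
            data=req.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"})
        return resp.content  # DER OCSP response, handed to TLS to staple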

Short lived certs are just as easy in theory BUT they require some new
infrastructure to do the job right. At minimum there needs to be a
mechanism to tell the server how to get its supply of short lived
certificates. And we haven't designed a standard for that yet or
really discussed how to do it and so it isn't ready to press the fire
button on.


What I suggest browsers do right now is

1) Join in the IETF discussion on the TLS/PKIX lists saying that you
support my TLS Feature extension proposal aka MUST STAPLE.

2) Read and comment on the proposal you have just committed to.

3) Implement an appropriate response to a certificate that specifies a
MUST STAPLE condition when the server does not staple. This could be
(1) Hard Fail immediately or (2) attempt to do an OCSP lookup and hard
fail if it does not succeed or (3) choose randomly between options 1
and 2 so as to disincentivize CAs from misusing the flag to force
hard fail.

4) Implement a mechanism that regards certificates with a total
validity interval of 72 hours or less as valid without checking. I do
not expect this feature to be used very soon, but implementing the
feature in the browser is probably a gating function for getting the
server folk thinking about the best way to implement the cert update
feature.
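
A sketch of the client-side test for point 4, in Python with
pyca/cryptography (a real implementation would also apply the
clock-skew tolerance of constraint 4 above when checking the validity
dates themselves):

    # Sketch: a cert whose total validity interval is 72 hours or
    # less is treated as valid without a revocation check.
    from datetime import timedelta
    from cryptography import x509

    SHORT_LIVED = timedelta(hours=72)

    def needs_revocation_check(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        lifetime = cert.not_valid_after - cert.not_valid_before
        return lifetime > SHORT_LIVED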


The simplest way to do cert update would be for the server to keep the
same key throughout and just issue fresh certs for the same old key.
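
A sketch of that flow in Python with pyca/cryptography: the long-lived
key is loaded from disk and a fresh CSR for the same key is produced on
each renewal cycle. The file names and subject are placeholders, and
submitting the CSR to the CA is left abstract, that being exactly the
part that still needs a standard:

    # Sketch: renew a short-lived cert by issuing a new CSR for the
    # same old key. File names and subject are placeholders.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509.oid import NameOID

    with open("server.key", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name([
               x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
           .sign(key, hashes.SHA256()))

    with open("renewal.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))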

A much better approach that provides a lot of robustness in all sorts
of ways is to rotate the private key with each new certificate. Under
one scheme the server would have some means of authenticating the cert
update request to a CA (probably a long term RSA or ECC key pair).

But in a large server farm where outsourcing etc is involved you might
want to have a scheme that makes use of trustworthy hardware to bind
unique keys to particular hardware in a manner that prevents
extraction.

I have a scheme of this type described here...

Re: DANE (was Re: Proposal: Switch generic icon to negative feedback for non-https sites)

2014-08-07 Thread Phillip Hallam-Baker
On Thu, Aug 7, 2014 at 3:08 PM, Richard Barnes  wrote:
>
> On Aug 7, 2014, at 2:17 PM, Chris Palmer  wrote:
>
>> On Thu, Aug 7, 2014 at 7:11 AM,   wrote:
>>
>>> I second that: DANE support is the right direction to go! It considerably 
>>> raises the effort required to do MITM attacks, it allows the site ops to 
>>> cut out the CAs and take control back.
>>
>> DANE relies on DNSSEC, which (apart from having had and lost its
>> chance to be widely deployed) ossifies trust in fewer, more powerful
>> third parties who use weaker keys.
>>
>> But now we are off the topic of this thread.
>
> Switched the subject line :)
>
> You're talking as if those fewer, more powerful third parties can't *already* 
> subvert the CA system.
>
> Whether you like the DNS or not, all the many DV certs in the world are 
> attesting to DNS names.  All the CAs are supposed to be doing is verifying 
> that the person requesting a cert is the person that holds a DNS name.  Since 
> the parent zones are authoritative for that, they can already screw you.  
> (Verisign can get a cert for your .com domain; the Libyan government for your 
> .ly domain.)  DANE just centralizes the risk in one place.

That is only the case for DV certs. And it is a situation that is
hardly acceptable.

It isn't really the case that it's a permanent vulnerability either. If
a DNS registry were ever discovered to have acted as you suggest, then I
would expect the CAs and browser providers to establish a new control to
stop it happening again. That can't happen in the DNS system.

Security by analogy almost always fails. Back in the day a lot of Web
sites decided that a 4-digit PIN was sufficient for a password. After
all, ATMs use 4-digit PINs, so it must be OK. They discovered otherwise
the expensive way.


The problems with DV are that (1) it causes the user to be told that
they are safe and (2) the user is expected to check this information.
Both are bad.

I want to get rid of the first problem but to do that I need to
address the second which means that the computer must take over the
responsibility for checking if TLS should be on or not. Which means
having a security policy mechanism.

DANE does have a security policy mechanism. But the security policy
isn't generated by the system that actually secures the service, so
there is a real chance of the two being out of sync.

This is fixable, but it has so far not been fixed.


The way to make this all work acceptably is to automate, in one system,
all the steps required to administer deployment of a new network
service. Right now the network admin has to poke their server with a
stick, then poke the DNS with another stick, and in networks of any
size they also need to poke the firewall and the PKI and the directory.
Which is kind of clueless, since if the directory was any good it would
drive all the rest, but right now that's not what happens and LDAP is a
farce anyway.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New wiki page on certificate revocation plans

2014-08-04 Thread Phillip Hallam-Baker
On Mon, Aug 4, 2014 at 12:12 AM, Jeremy Rowley
 wrote:
> Why does OneCRL seem like a hack?  Considering how infrequently intermediates 
> and roots are revoked, OneCRL seems like a satisfactory way to provide this 
> information long-term, provided that the certs are removed from OneCRL at 
> some point.  I'd think they could safely remove the OneCRL certs after the 
> listed cert expires.  For EE, OneCRL is only necessary where the other 
> methods of revocation are considered insufficient.  If hard-fail OCSP is 
> turned on (the last point), then OneCRL for EE certs becomes obsolete.

Hack or not, it's very important to check revocation there.

We don't have armed guards at the data centers, and if we did, any
attacker could easily come with more.

The only viable defense here is to make sure that what is being
guarded is not worth taking. ATMs are protected the same way - with a
dye-pack that explodes on the cash if someone attempts to remove the
cartridge.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: OCSP and must staple

2014-05-02 Thread Phillip Hallam-Baker
OK, so the state of play is that:

* A new draft was submitted to make it current

* Russ Housley tells me that the transfer of the OID arc back to IANA
is almost complete

* I am waiting for comments from Brian.




On Fri, May 2, 2014 at 12:41 PM, Ben Wilson  wrote:
> Does anyone have any update on the status of the must-staple OID?
>
> -Original Message-
> From: dev-security-policy
> [mailto:dev-security-policy-bounces+ben=digicert@lists.mozilla.org] On
> Behalf Of Brian Smith
> Sent: Thursday, April 10, 2014 5:06 PM
> To: Phillip Hallam-Baker
> Cc: dev-security-policy@lists.mozilla.org
> Subject: Re: OCSP and must staple
>
> On Thu, Apr 10, 2014 at 3:54 PM, Phillip Hallam-Baker
> wrote:
>
>> One of the problems with OCSP is the hardfail issue. Stapling reduces
>> latency when a valid OCSP token is supplied but doesn't allow a server
>> to hardfail if the token isn't provided as there is currently no way
>> for a client to know if a token is missing because the server has been
>> borked or if the server doesn't staple.
>>
>> This draft corrects the problem. It has been in IETF limbo due to the
>> OID registry moving. But I now have a commitment from the AD that they
>> will approve the OID assignment if there is support for this proposal
>> from a browser provider:
>>
>
> David Keeler was working on implementing Must-Staple in Gecko. You can point
> them to these two bugs:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=921907
> https://bugzilla.mozilla.org/show_bug.cgi?id=901698
>
> The work got stalled because we decided to fix some infrastructure issues
> (like the new mozilla::pkix cert verification library) first. Now that work
> is winding down and I think we'll be able to finish the Must-Staple
> implementation soon. Check with David.
>
> Cheers,
> Brian
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Turn on hardfail?

2014-04-24 Thread Phillip Hallam-Baker
If there was a DoS attack it would be the first and the last.

OCSP is only a DoS issue for servers that don't staple. All modern
servers can staple if configured to do so. Further, it is only the
weaker CAs that don't have a DoS-proof OCSP service.

So if there was a DoS attack we would see a sudden upgrade to server
stapling and the OCSP service could probably be phased out after a
short time (except for feeding the cert holders with their tokens).



On Thu, Apr 24, 2014 at 12:39 AM, Daniel Micay  wrote:
> I'm talking about the DoS vulnerability opened up by making a few OCSP
> servers a single point of failure for *many* sites.
>
> It's also not great that you have to let certificate authorities know
> about your browsing habits.
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Turn on hardfail?

2014-04-21 Thread Phillip Hallam-Baker
Given the current Heartbleed situation, wouldn't it be appropriate to
turn on hard fail for revocation checking, so that unknown status
results in the cert being rejected?

I am seeing people suggest that a CA be dropped from the root for
their alleged improper handling of revocation. If revocation matters
so much that it must be enforced on CAs then it matters enough to turn
on hardfail for a major server coding error.

Every platform is vulnerable because the server key can be extracted
in certain situations. A browser does not need to use OpenSSL to be
vulnerable to the OpenSSL bug.



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Convergence.

2014-04-18 Thread Phillip Hallam-Baker
Rather than argue over Convergence, it is much better to consider what
next-gen crypto looks like.

All these projects became possible because of the expiry of the Surety
patent on catenate certificates (chained hash function notaries).


Bitcoin demonstrates the sort of thing that is made possible if there
is a timestamp notary that cannot feasibly default. What it does not
do is achieve that goal. The blockchain is horribly inefficient,
currently burning through electricity at a rate of a quarter of a
billion dollars a year (and that can increase) while genuine bitcoin
commerce transactions are a few tens of millions.

But imagine what we could do if we had twenty notaries and each one
included the outputs of the others every hour. It would be impossible
for one notary to defect without all the others defecting. We could
achieve what the bitcoin notary does for less than a million bucks a
year.
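
A toy sketch of that cross-linking in Python: each notary folds the
other notaries' current chain heads into its own hash chain every
round, so one notary cannot rewrite its history without all the others
rewriting theirs:

    # Toy sketch: notaries cross-certify by hashing each other's
    # chain heads into their own chains each round.
    import hashlib

    class Notary:
        def __init__(self, name):
            self.name = name
            self.head = hashlib.sha256(name.encode()).hexdigest()

        def publish(self, entries, peer_heads):
            # Fold this round's entries and every peer head into
            # the chain.
            h = hashlib.sha256(self.head.encode())
            for item in sorted(entries) + sorted(peer_heads):
                h.update(item.encode())
            self.head = h.hexdigest()
            return self.head

    notaries = [Notary("notary-%d" % i) for i in range(20)]
    heads = [n.head for n in notaries]
    for hour in range(3):  # three rounds of mutual witnessing
        heads = [n.publish(["entry-%d" % hour], heads)
                 for n in notaries]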
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Convergence.

2014-04-16 Thread Phillip Hallam-Baker
Sorry, I got Peter and Moxie mixed up there. Jet lag.

Sovereign Keys and Convergence both suffered from the assumption that
you can change the Internet by just throwing a project over the wall
and waiting for people to implement it. The details are slightly
different, though.

On Tue, Apr 15, 2014 at 8:23 PM, Phillip Hallam-Baker  wrote:
> On Tue, Apr 15, 2014 at 7:08 PM, Daniel Veditz  wrote:
>> On 4/15/2014 7:43 AM, nobody wrote:
>>>
>>> I just wondered... what is the pull back regarding Convergence to put it
>>> in
>>> the webbrowsers by default?
>
> Well one very big problem was that Peter was not prepared to do the
> work of engaging in the standards area himself. Which is an almost
> certain way to ensure that a proposal isn't going to thrive.
>
> Another problem that most PKI alternatives suffer from is the
> 'hydrogen car' mentality. According to a popular Internet meme the
> reason that we aren't driving hydrogen cars RIGHT NOW is that the evil
> Detroit car companies are desperate to kill it at any cost. Now
> imagine that you are a startup trying to build a hydrogen car, would
> that be a productive mindset to base business assumptions on? I think
> it is pretty clear that approaching the problem of building a hydrogen
> car from the point of view that all advice from GM and Ford was
> ill-intentioned attempts at sabotage would cut the startup off from
> essential knowledge.
>
> So Peter's approach of beginning his proposal with a canard against
> existing CAs and refusing to correct the public impression he gave was
> not a good start.
>
> The DNSSEC crowd suffer from this as well. I keep telling them that
> until they start signing the root with something more secure than
> RSA1024 then all they are doing is an extended science project.
> Unfortunately the only way I can get them to change is to raise this
> issue at senior US policy levels which has the unfortunate side effect
> of reinforcing the (entirely justified) belief that ICANN is just a US
> govt. proxy.
>
>
>> The main issue is who are the notaries? If they're simply reflecting back
>> "Yup, I see this valid CA cert" then they aren't adding a whole lot of value
>> for the amount of risk they introduce, and if they're making their own
>> judgement about the validity of the certificates on some other ground they
>> just become a type of Certificate Authority themselves. Who pays for that
>> infrastructure, and what is their motive?
>
> And that leads to the second problem of how reliable is that notary
> infrastructure going to be.
>
>
>> Firefox and Chrome are both working on implementing "key pinning" (and
>> participating in the standardization process for it) which won't "free us
>> from the CA system" but will at least ameliorate one of the worst aspects
>> which is that any two-bit CA anywhere in the world can issue a certificate
>> for any site, anywhere.
>
> And don't forget that we should deploy CAA as part of that solution.
>
> I am also working on ways of bringing the key pinning information into
> the DNS space so that we can get a 'secure on first use'. That is the
> reason I am interested in DNS Encryption. It is a change to the DNS
> ecosystem which will break the ludicrous practice of taking DNS
> service from the resolver advertised in DHCP. Which opens up the
> opportunity for ditching a bunch of legacy DNS stupid (500 byte
> message limit for instance).
>
>
>> The IETF is working on standardizing "Certificate Transparency", Chrome is
>> implementing it, and at least one CA is participating. This again doesn't
>> free us from the CA system, but it does make the public certificates
>> auditable so that mis-issuance could theoretically be detected.
>
> There is certainly a lot of functional overlap. I think that CT has
> pretty much addressed the major use case used to justify Convergence.
> But it does not meet the real goal of Convergence
>
>
>
>
>
> --
> Website: http://hallambaker.com/



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Convergence.

2014-04-15 Thread Phillip Hallam-Baker
On Tue, Apr 15, 2014 at 7:08 PM, Daniel Veditz  wrote:
> On 4/15/2014 7:43 AM, nobody wrote:
>>
>> I just wondered... what is the pull back regarding Convergence to put it
>> in
>> the webbrowsers by default?

Well one very big problem was that Peter was not prepared to do the
work of engaging in the standards area himself. Which is an almost
certain way to ensure that a proposal isn't going to thrive.

Another problem that most PKI alternatives suffer from is the
'hydrogen car' mentality. According to a popular Internet meme the
reason that we aren't driving hydrogen cars RIGHT NOW is that the evil
Detroit car companies are desperate to kill it at any cost. Now
imagine that you are a startup trying to build a hydrogen car, would
that be a productive mindset to base business assumptions on? I think
it is pretty clear that approaching the problem of building a hydrogen
car from the point of view that all advice from GM and Ford was
ill-intentioned attempts at sabotage would cut the startup off from
essential knowledge.

So Peter's approach of beginning his proposal with a canard against
existing CAs and refusing to correct the public impression he gave was
not a good start.

The DNSSEC crowd suffer from this as well. I keep telling them that
until they start signing the root with something more secure than
RSA-1024, all they are doing is an extended science project.
Unfortunately the only way I can get them to change is to raise this
issue at senior US policy levels which has the unfortunate side effect
of reinforcing the (entirely justified) belief that ICANN is just a US
govt. proxy.


> The main issue is who are the notaries? If they're simply reflecting back
> "Yup, I see this valid CA cert" then they aren't adding a whole lot of value
> for the amount of risk they introduce, and if they're making their own
> judgement about the validity of the certificates on some other ground they
> just become a type of Certificate Authority themselves. Who pays for that
> infrastructure, and what is their motive?

And that leads to the second problem: how reliable is that notary
infrastructure going to be?


> Firefox and Chrome are both working on implementing "key pinning" (and
> participating in the standardization process for it) which won't "free us
> from the CA system" but will at least ameliorate one of the worst aspects
> which is that any two-bit CA anywhere in the world can issue a certificate
> for any site, anywhere.

And don't forget that we should deploy CAA as part of that solution.

I am also working on ways of bringing the key pinning information into
the DNS space so that we can get a 'secure on first use'. That is the
reason I am interested in DNS Encryption. It is a change to the DNS
ecosystem which will break the ludicrous practice of taking DNS
service from the resolver advertised in DHCP. Which opens up the
opportunity for ditching a bunch of legacy DNS stupid (500 byte
message limit for instance).


> The IETF is working on standardizing "Certificate Transparency", Chrome is
> implementing it, and at least one CA is participating. This again doesn't
> free us from the CA system, but it does make the public certificates
> auditable so that mis-issuance could theoretically be detected.

There is certainly a lot of functional overlap. I think that CT has
pretty much addressed the major use case used to justify Convergence.
But it does not meet the real goal of Convergence.





-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


OCSP and must staple

2014-04-10 Thread Phillip Hallam-Baker
One of the problems with OCSP is the hardfail issue. Stapling reduces
latency when a valid OCSP token is supplied, but it doesn't allow a
server to hardfail if the token isn't provided, as there is currently
no way for a client to know whether a token is missing because the
server has been borked or because the server doesn't staple.

This draft corrects the problem. It has been in IETF limbo due to the
OID registry moving. But I now have a commitment from the AD that they
will approve the OID assignment if there is support for this proposal
from a browser provider:

https://tools.ietf.org/html/draft-hallambaker-tlsfeature-02

So, anyone in Mozilla space willing to co-author?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Revocation Policy

2014-04-10 Thread Phillip Hallam-Baker
Before we get any further into this conversation, I'll just point out
that business models are not something we can discuss in CABForum.

We can 'probably' tell you what we believe the rules to be but we
can't make any comment on what they should be either in CABForum or
here.




On Thu, Apr 10, 2014 at 10:28 AM, Rob Stradling
 wrote:
> The Mozilla CA Certificate Maintenance Policy (Version 2.2) [1] says
> (emphasis mine):
>
> "CAs _must revoke_ Certificates that they have issued upon the occurrence of
> any of the following events:
> ...
>   - the CA obtains _reasonable evidence_ that the subscriber’s private key
> (corresponding to the public key in the certificate) has been compromised or
> is _suspected of compromise_ (e.g. Debian weak keys)"
>
> I think that's pretty clear!
>
> The CABForum BRs go one step further, demanding that the CA revoke _within
> 24 hours_.
>
> AFAICT, non-payment by the Subscriber does not release the CA from this
> obligation to revoke promptly.
>
> Anyone disagree with my interpretation?
>
>
> [1]
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/maintenance/
>
>
> On 10/04/14 15:16, fhw...@gmail.com wrote:
>>
>> This an interesting issue Kaspar and I appreciate you raising it. I also
>> personally appreciate you framing it in terms of trust because that's really
>> what is at issue here.
>>
>> The whole idea of revocation is a gaping hole in the PKI landscape. The
>> ability to say "don't trust me" is so poorly implemented throughout PKI as
>> to be effectively non-existent. If for some reason you need to revoke a
>> cert, you should do so because it's the right thing to do, but the best you
>> can hope for is that some anti-security person doesn't figure out a way to
>> use it anyway.
>>
>> This means that theft and other compromises of private keys remain viable
>> attack vectors for those who wish to do so (government sponsored
>> organizations and otherwise). Private keys and the certs that go with them
>> could be usable well after when people think they become invalid.
>>
>> This also means that we should not be surprised to see an underground
>> market appear that seeks to sell "revoked" certs. Given that "high value"
>> internet destinations might have been impacted by the Heartbleed
>> vulnerability this could definitely become a concern. Should such a place
>> appear I would think StartCom - issued certs would easily be included for
>> sale.
>>
>> This also means that all "pay to revoke" policies should be viewed as
>> anti-security and we need to "strongly encourage" they be discontinued in
>> short order. If a CA wishes to continue such policies I would question their
>> trustworthiness.
>>
>> Further I think we are reaching the point where browsers have to refuse
>> SSL connections when OCSP validation fails. I think it's getting harder to
>> argue otherwise, but I'll let the Mozilla folks speak to that.
>>
>>
>> -  Original Message  -
>> From: Kaspar Janßen
>> Sent: Thursday, April 10, 2014 4:12 AM
>>
>> On 10/04/14 10:08, Peter Eckersley wrote:
>>>
>>> Kaspar, suppose that Mozilla followed your suggestion and removed
>>> StartCom's root certificates from its trust store (or revoked them!).
>>> What
>>> would the consequences of that decision be, for the large number of
>>> domains
>>> that rely on StartCom certs?
>>
>> I hope that an appropriate policy will force authorities to reconsider
>> their revocation principle. I don't want to harm someone nor I want to
>> work off in any way.
>>
>> The key is that anybody should be able to shout out "don't trust me
>> anymore!" without a fee. Isn't that part of the trustchain idea?
>>
>> I read a few times that Chrome doesn't even check if a certificate is
>> revoked or not (at least not the default settings). That leads me to the
>> question: Is it mandatory for a CA in mozilla's truststore to have to
>> ability to revoke a certificate or is is only an optional feature
>> provided by some CAs?
>> ___
>> dev-security-policy mailing list
>> dev-security-policy@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security-policy
>>
>
> --
> Rob Stradling
> Senior Research & Development Scientist
> COMODO - Creating Trust Online
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Which SHA-2 algorithm should be used?

2014-01-08 Thread Phillip Hallam-Baker
On Wed, Jan 8, 2014 at 8:34 PM, Peter Gutmann wrote:

> "Man Ho (Certizen)"  writes:
>
> >If there is no constraints on choosing SHA-256, SHA-384 or SHA-512, why
> CAs
> >are so conservative and prefer SHA-256 rather than SHA-512? I think going
> >directly to a higher security strength should be preferable.
>
> What extra security does -512 give you that -256 doesn't?  I mean actual
> security against real threats, rather than just "it has a bigger number so
> it
> must be better"?  What I've heard was that the extra-sized hashes were
> added
> mostly for political reasons, in the same way that AES-192 and -256 were
> added
> for political reasons (there was a perceived need to have a "keys go to 10"
> and a "keys go to 11" form for Suite B, since government users would look
> over
> at non-suite-B crypto with keys that went to 11 and wonder why they
> couldn't
> have that too).
>

The main advantage is more rounds: SHA-512 uses 80 rounds where SHA-256 uses 64.

In PPE I use SHA-512 and truncate to 128 bits for Phingerprints.
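
In code, that truncation is a one-liner; a sketch (the function name is
mine):

    # Sketch: a 128-bit fingerprint by truncating SHA-512.
    import hashlib

    def phingerprint(data):
        # 16 bytes = 128 bits
        return hashlib.sha512(data).digest()[:16].hex()

    print(phingerprint(b"example key data"))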

-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Stop using SHA1 in certificates

2014-01-03 Thread Phillip Hallam-Baker
The hashclash attack requires the CA to do more than just use SHA-1. They
have to use a predictable serial number.

That is not an argument against withdrawing SHA-1 with all haste. It
is, however, a reason for folk not to do the usual headless-chicken
thing.
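
The serial-number half of the fix is cheap; a sketch using
pyca/cryptography's helper for CSPRNG-derived serials:

    # Sketch: an unpredictable serial number denies the attacker the
    # content prediction that the hashclash collision attack needs.
    from cryptography import x509

    serial = x509.random_serial_number()  # large random int, os.urandom
    print(hex(serial))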


Striking out SHA-1 effectively means the end of RSA1024 because every
browser that can do SHA2 can almost certainly do RSA2048.

There will probably be some niche cases that call for continuing to issue
SHA-1 certs but only the genuine niche applications will want them at all
once the browsers start rejecting them.


On Fri, Jan 3, 2014 at 1:15 PM, Kurt Roeckx  wrote:

> Hi,
>
> Microsoft has proposed to stop issueing new certificates using
> SHA1 by 2016 in certificates.
> (
> http://blogs.technet.com/b/pki/archive/2013/11/12/sha1-deprecation-policy.aspx
> ).
>
> Mozilla also has a bug that even suggest to stop accepting some
> new certificates in 3 months and stop accepting any in 2017.
> https://bugzilla.mozilla.org/show_bug.cgi?id=942515
>
> But it's unclear if this is really a policy or just what some
> people think should happen.
>
> This seems to also recently have been discussed in the CA/Browser
> forum, but I have a feeling not everybody sees the need for this.
> https://cabforum.org/2013/12/19/2013-12-19-minutes/
>
> I want to point out the that SHA1 is broken for what it is used in
> certificates.  SHA1 should have a collision resistance of about
> 2^80 but the best known attack reduces this to about 2^60.  In
> 2012 it costs about 3M USD to break SHA-1, in 2015 this will only be
> about 700K USD.  See
> https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
>
> With a collision it's possible to create a rogue CA.  See:
> http://www.win.tue.nl/hashclash/rogue-ca/
>
> This is only based on what is the best know attack currently
> publicly known.  There might be other attacks that we don't
> know about yet even further reducing the cost, specialised
> hardware and so on.
>
> This is just waiting to either happen or until someone finds out
> that it did happen.
>
> I would like to encourage everybody to start using SHA2 in
> certificates as soon as possible, since that's clearly the
> weakest part of the whole chain.
>
> This is more important that stopping to use 1024 RSA keys since
> they still have a complexity of 2^80.  But you really should
> also stop using that.
>
> Can someone please try to convince the CAB forum about the need
> for this?
>
>
> Kurt
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Exceptions to 1024-bit cert revocation requirement

2013-12-23 Thread Phillip Hallam-Baker
On Mon, Dec 23, 2013 at 8:54 AM, Rob Stradling wrote:

> On 21/12/13 22:57, Phillip Hallam-Baker wrote:
>
>> I thought that what we were trying to do here is break a deadlock
>> where Cas wait for browsers and vice versa.
>>
>> I have no trouble telling a customer with a 15 year 512 bit cert that
>> they need to change for a new one if they want it to work for ssl with
>> the browsers
>>
>
> Indeed.  Everyone agrees.
>
>
>  Revoking it without their consent is a problem though.
>>
>
> Indeed.  The subject of this thread is misleading.  Kathleen's last post
> clearly confirmed...
>
> Rob: Will CAs need to revoke all unexpired 1024-bit certs by the cut-off
> date?
> Kathleen: No.
>
>
It would be good if the sequence of operations to follow was documented for
future reference.

One of the problems that we have had in the industry is people assuming the
decision lies with another party. When I was with my last employer I had to
keep telling people not to follow our lead in choice of crypto because we
are forced to follow rather than lead the market. A CA can't introduce
a new crypto algorithm without the browsers having implemented it five
years, preferably a decade, earlier.

-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Exceptions to 1024-bit cert revocation requirement

2013-12-21 Thread Phillip Hallam-Baker
I thought that what we were trying to do here is break a deadlock
where CAs wait for browsers and vice versa.

I have no trouble telling a customer with a 15-year 512-bit cert that
they need to change it for a new one if they want it to work for SSL
with the browsers.

Revoking it without their consent is a problem though.


Sent from my difference engine


> On Dec 21, 2013, at 5:23 PM, Kathleen Wilson  wrote:
>
>> On 12/20/13 11:45 AM, Rob Stradling wrote:
>> To me, "cert revocation" means replying "revoked" via OCSP for that
>> cert's serial number, and also adding that cert's serial number to the CRL.
>>
>> I understand that new versions of browsers will stop accepting 1024-bit
>> certs and that site operators will naturally stop using 1024-bit certs.
>>  But neither stopping using nor stopping accepting are the same thing
>> as revocation.
>>
>> My question is simple: Will CAs need to revoke all unexpired 1024-bit
>> certs by the cut-off date?
>>
>> If "Yes", where is this requirement written?
>>
>> If "No", please simply reply "No".
>
> No.
> To my knowledge there is not a written requirement for CAs to revoke all 
> unexpired 1024-bit certs by a cut-off date.
>
> But everyone should keep the following in mind...
>
> https://wiki.mozilla.org/CA:MD5and1024
> "All end-entity certificates with RSA key size smaller than 2048 bits must 
> expire by the end of 2013.
> Under no circumstances should any party expect continued support for RSA key 
> size smaller than 2048 bits past December 31, 2013. This date could get moved 
> up substantially if necessary to keep our users safe. We recommend all 
> parties involved in secure transactions on the web move away from 1024-bit 
> moduli as soon as possible."
>
> Some long-lived certs were issued before the statement was made and 
> communicated.
>
> Some CAs have needed to re-issue 1024-bit certs that are valid beyond 2013 in 
> order for their customers to maintain operation while transitioning to new 
> software and hardware that will support 2048-bit certs. (I am OK with this)
>
> At this point in time, I think the 1024-bit certs will work in Mozilla 
> products until the April 2014 time frame. But, as per 
> https://wiki.mozilla.org/CA:MD5and1024, "Mozilla will take these actions 
> earlier and at its sole discretion if necessary to keep our users safe."
>
> Kathleen
>
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Revoking Trust in one ANSSI Certificate

2013-12-10 Thread Phillip Hallam-Baker
On Mon, Dec 9, 2013 at 2:17 PM, Jan Schejbal wrote:

>
> I would really love to see the explanation how someone accidentally
> issues and deploys a MitM Sub-CA...
>

I think it will turn out to be essentially the same reason that Microsoft
got burned with the Flame attack.

Just because an organization has PKI expertise does not mean that it is
evenly shared in the organization or that everyone understands what the
constraints are.

The organization does not have managing crypto as its primary goal so the
processes that manage the CA do not include awareness of current crypto
affairs as a requirement.

I have similar concerns about DANE. The expectations that are placed on the
registries and registrars are quite interesting.

-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Microsoft deprecating SHA-1 certs by 2016

2013-11-13 Thread Phillip Hallam-Baker
On Wed, Nov 13, 2013 at 6:37 AM, Jan Schejbal wrote:

> Am 2013-11-13 13:47, schrieb Gervase Markham:
> > We could update our program requirements to be identical to theirs, but
> > the effect on actual CA operations would be fairly small, I fancy -
> > because they are all doing it anyway. Is that what you are suggesting,
> > or something else?
>
> Wouldn't it make sense to add this in the CAB Forum Baseline Requirements?
>

Not really.

Putting a commitment in the Baseline requirements is necessary to break a
deployment deadlock situation where browser providers can't act without
support from CAs and CAs can't act without the browser providers taking the
first step.

Once it is clear that SHA-1 certs are not going to work on a large number
of browsers, demand for such certificates is going to fall rapidly.

The only people left using SHA-1 certs are going to be a handful of corner
case non-browser applications who mostly understand the risks of their
approach. I don't mind shooting those folk in the foot if that is the only
way to get a change to happen in the wider browser use case but I don't
think it is necessary to shoot them in the foot just for the sake of it.


One major consequence of this change is going to be that a huge number of
older browsers will just stop working with SSL. Which is good for browser
providers and CAs but is likely to require some people to upgrade their
computer so they can run a modern OS. It is also likely to brick a large
number of cell phones as far as online commerce goes.

The second is actually a big concern in large parts of the world where
renting a mobile phone with Internet access is many people's way of earning
a living.


-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla not compliant with RFC 5280

2013-11-08 Thread Phillip Hallam-Baker
I don't believe there are any parties who you would want as CAs that
support the idea of getting rid of revocation checking.




On Fri, Nov 8, 2013 at 9:35 AM, Jeremy Rowley wrote:

> I imagine every CA would agree with you.  OCSP stapling is a great idea,
> but the number of servers deploying it are very low.  I don’t believe any
> CAs support the idea of getting rid of revocation checking.
>
>
>
> From: dev-security-policy [mailto:
> dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org]
> On Behalf Of fhw...@gmail.com
> Sent: Friday, November 08, 2013 6:42 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Mozilla not compliant with RFC 5280
>
>
>
> I was hoping to see more responses on this issue. Does that mean people
> agree it's a problem but aren't sure what to do about it? Is it a small
> problem because Firefox already does OCSP and all the CA's do too?  Or...?
>
>
>
> Thanks.
>
>
> From: fhw...@gmail.com
>
> Sent: Friday, November 1, 2013 5:50 PM
>
> To: Matthias Hunstock; mozilla-dev-security-pol...@lists.mozilla.org
>
> Subject: Re: Mozilla not compliant with RFC 5280
>
>
>
> I think that is correct, Matthias.
>
>
>
> What's more, without OCSP set up, anyone who issues an end-entity cert
> will be unable to stop FF from honoring that cert until its expiration
> date. (I'll need someone to correct me on that.)
>
> 
>
> I gotta believe there are people out there who issue(d) CRLs thinking
> that they are now protected when in reality they are not.
>
>
>
>
> From: Matthias Hunstock
>
> Sent: Friday, November 1, 2013 10:46 AM
>
> To: mozilla-dev-security-pol...@lists.mozilla.org
>
> Subject: Re: Mozilla not compliant with RFC 5280
>
>
>
> Am 29.10.2013 19:37, schrieb Kathleen Wilson:
> > The goal is for the revocation-push mechanism to be used instead of
> > traditional CRL checking, for reasons described in the wiki page and the
> > research paper.
>
> Everyone with a "self-made" CA will be completely cut off from
> revocation checking, unless they run an OCSP responder?
>
>
>
> Matthias
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
>
>
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>



-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Root Certificates of USA CAs still trustworthy?

2013-10-17 Thread Phillip Hallam-Baker
On Thu, Oct 17, 2013 at 6:04 AM, Gervase Markham  wrote:

> On 17/10/13 00:07, Phillip Hallam-Baker wrote:
> > Each HSM vendor has its own security controls, but a FIPS 140 Level 4
> > device won't release keys except to another FIPS 140 device. There is no
> > way to extract the key from the system unencrypted.
>
> Phil: what prevents a government just turning up with such a device and
> saying "copy your private key into here, please"?
>
> Gerv
>

They can do that, but it would first require the new device to be
credentialed into the correct cryptographic device group. The vendors all
have roughly the same scheme, though the nomenclature varies.
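
To make the principle concrete (this illustrates key wrapping in general,
not any particular vendor's protocol): key material moves between devices
only wrapped under a key-encryption key shared by the device group, so a
device outside the group receives nothing it can use. A sketch using the
RFC 3394 AES key wrap from the Python cryptography package, with random
stand-in keys:

    # Sketch of wrap-for-transport (RFC 3394 AES key wrap).
    # Real HSMs enforce this inside the hardware boundary; keys here are
    # random stand-ins, not real key material.
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    kek = os.urandom(32)      # key-encryption key shared by the device group
    secret = os.urandom(32)   # stand-in for the key being transferred

    wrapped = aes_key_wrap(kek, secret)   # the only form that leaves a device
    assert aes_key_unwrap(kek, wrapped) == secret
    # Without the KEK, the wrapped blob is just opaque ciphertext.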

This type of attack is of course the one that Ben Laurie and co are trying
to defeat.

I wrote the following draft in an attempt to formalize the model of
hardening systems against PRISM-class attacks.

http://tools.ietf.org/html/draft-hallambaker-prismproof-trust-00


We cannot completely prevent this type of attack, but Transparency does
increase the Social Work Factor over time.

-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Root Certificates of USA CAs still trustworthy?

2013-10-16 Thread Phillip Hallam-Baker
On Wed, Oct 16, 2013 at 5:26 PM, Oliver Loch  wrote:

> Hi,
>
> these devices are nothing more than modified servers that run some
> special OS or services. The keys are stored inside and can be
> transferred for backup or clustering reasons. So there are at least two
> ways to get your fingers on those keys, even if they are still encrypted:
> the password for decryption needs to be known to be able to restore the
> backup on a vanilla system (and I don't think all systems of one vendor use
> the same password on all of them and for every backup).
>
> I also think that bigger CAs have multiple devices in at least two
> different locations to prevent any kind of physical damage to the CA like
> fire, power outage, missiles from NSA drones (ok, I admit the last one is a
> bit sci-fi, isn't it?).
>

Rather than speculate, try reading the Certification Practice Statements of
the CAs. They all describe how the private keys are managed.

Each HSM vendor has its own security controls, but a FIPS 140 Level 4
device won't release keys except to another FIPS 140 device. There is no way
to extract the key from the system unencrypted.


-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Root Certificates of USA CAs still trustworthy?

2013-10-15 Thread Phillip Hallam-Baker
On Tue, Oct 15, 2013 at 6:59 AM, Oliver Loch  wrote:

> Hi,
>
> as we all know from the NSA disclosures of Edward Snowden, the NSA is
> collecting data and has access to any data that is available in the USA.
> We've also learned that companies located on US soil must hand the NSA
> and other governmental institutions any requested data.
>
> This raises the question of whether the root certificates of CAs located
> on US soil are still trustworthy, or whether the private keys of those
> certificates have been handed over to the NSA, allowing it to generate
> VALID certificates for any situation and in any form necessary.
>
> I'm talking about MITM attacks and redirects to web servers that do not
> belong to the domain the certificate was issued for and that have been
> manipulated to install spyware and stuff. There are tons of other
> possibilities imaginable…
>
> So are they still trustworthy?
>

I don't think any US-based company is going to be considered trustworthy
until the use of National Security Letters is ruled unconstitutional by the
courts.

Especially not browser companies based in Mountain View California.


For what it is worth, our CA is based in the UK, but any corporation that
has any part of its operations in the US could come under pressure.

Reading through the powers granted, I think the chance of an NSL being used
to suborn a CA is very small, since such an attack would be highly
observable. The browser is a far better point of attack.


But the idea that the NSA is going around suborning companies on a
widespread basis seems a little silly to me, since there is no way they
could expect to keep the engineers quiet.

It is possible that some of the crypto-anarchy cypherpunk folk are plants,
but I have known most of them for twenty years now. I rather doubt that they
have all been turned. If the NSA can't keep its own employees quiet, it can
hardly keep non-employees quiet.

That is the handwavy explanation anyway.

I have a more mathematical treatment if anyone is interested.
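
A sketch of the flavour, under some admittedly strong independence
assumptions: suppose each of n insiders independently leaks in a given year
with probability p. Then

    \Pr[\text{secret after } t \text{ years}] = (1 - p)^{n t}

which decays exponentially in both headcount and time. With n = 100 and
p = 0.01, even a single year of silence has probability 0.99^100, which is
roughly 0.37.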

-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy