Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Anne van Kesteren
On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost  wrote:
> 1. You do not need to register a domain name to have a Web site (IP address)

Name one site you visit regularly that doesn't have a domain name. And
even then, you can get certificates for public IP addresses.


> 2. You do not need to register a domain name to run a local blah.test.site

We should definitely allow whitelisting of sorts for developers. As a
start localhost will be a privileged context by default. We also have
an override in place for Service Workers.

This is not a reason not to do HTTPS. This is something we need to
improve along the way.


-- 
https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread david . a . p . lloyd
>   http://sockpuppet.org/blog/2015/01/15/against-dnssec/
>   http://sockpuppet.org/stuff/dnssec-qa.html
>   https://www.imperialviolet.org/2015/01/17/notdane.html

Yawn - those were all terrible articles.  To summarise their points: "NSA is 
bad, some DNS servers are out of date, DNSSEC may still be using shorter 
1024-bit RSA key lengths (hmm... much like TLS then)"

The trouble is:  Just because something isn't perfect, doesn't make it a bad 
idea.  Certificates are not perfect, but they are not a bad idea.  Putting 
certificate thumbprints in DNS is not perfect, but it's not half a *good* idea.

Think about it: if your completely clear-text, unauthenticated DNS connection 
is compromised, then your browser is going to go to the wrong server anyway.  
If it goes to the wrong server, so will your email, as will the identity 
verification messages from your CA.

Your browser needs to retrieve A and AAAA records from DNS anyway, so why not 
pull TLSA certificate hashes at the same time?  Even without DNSSEC, this could 
only improve things.
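
(For readers who haven't seen one: a TLSA association lives in DNS next to 
the address records. A hypothetical zone-file entry in the style of RFC 6698 
would look something like

  _443._tcp.www.example.com. IN TLSA 3 1 1 <hex-encoded SHA-256 of the server's public key>

where "3 1 1" means: match the end-entity certificate (3), by its 
SubjectPublicKeyInfo (1), hashed with SHA-256 (1). The name and parameter 
choices here are illustrative only.)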

Case in point, *absolutely* due to the frankly incomprehensible refusal to do 
this:

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

There is nothing you can do to fix this with traditional X509, or any single 
chain of trust.  You need multiple, independent proofs of identity.  A 
combination of X509 and a number of different signed DNS providers seems like a 
good way to approach this.

Finally - you can audit DNSSEC/TLSA responses programmatically, as the response 
records are cached publicly in globally dispersed DNS servers; it's really hard 
to do the equivalent of "send a different chain when IP address 1.2.3.4 
connects".  

I have my own opinions why TLSA certificate pinning records are not being 
checked and, having written an implementation myself, I can guarantee you that 
it isn't due to any technical complexity.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Anne van Kesteren
On Tue, Apr 14, 2015 at 9:29 AM, david.a.p.lloyd wrote:
> The trouble is:  Just because something isn't perfect, doesn't make it a bad 
> idea.

I think it's a pretty great idea and it's one people immediately think
of. However, as those articles explain in detail, it's also a far from
realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
and is the focus of this thread. Whether we can achieve similar
guarantees through DNS at some point is orthogonal and is best
discussed elsewhere:

  https://tools.ietf.org/wg/dane/


-- 
https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread immibis
> Secondly the proposal to restrain unrelated new features like CSS attributes 
> to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine 
> but authoritarianism is not. Please consider that everyone is capable of 
> making their own decisions. 

One might note that this has already been tried, *and succeeded*, with SPDY and 
then HTTP/2.

HTTP/2 is faster than HTTP/1.1, but both Mozilla and Google are refusing to allow 
unencrypted HTTP/2 connections. Sites like http://httpvshttps.com/ 
intentionally mislead users into thinking that TLS improves connection speed, 
when actually the increased speed is from HTTP/2.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread david . a . p . lloyd
> realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
> and is the focus of this thread. 

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/
 

Sure it works :)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread lorenzo . keller
> The goal of this thread is to determine whether there is support in the
> Mozilla community for a plan of this general form.  Developing a precise
> plan will require coordination with the broader web community (other
> browsers, web sites, etc.), and will probably happen in the W3C.
> 

From the user/sysadmin point of view it would be very helpful to have 
information on how the following issues will be handled:

1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy 
that doesn't use MITM certificates. If all users must move to HTTPS there will 
be no way to re-use content downloaded for one user to accelerate another user. 
This is an important issue for locations with many users and poor internet 
connectivity. 

2) Self signed certificates: in many situations it is hard/impossible to get 
certificates signed by a CA (e.g. provisioning embedded devices). The current 
approach in many of these situations is not to use HTTPS. If the plan goes into 
effect what other solution could be used?

Regarding problem 1: I guess that allowing HTTP for resources loaded with 
subresource integrity could be some sort of alternative, but it would require 
collaboration from the server owner. Since it is more work than simply letting 
the webserver send out caching headers automatically, I wonder how many sites 
will implement it.

Regarding problem 2: in my opinion it can be mitigated by offering the user a 
new standard way to validate self-signed certificates: the user is prompted to 
enter the fingerprint of the certificate, which she must have received 
out-of-band; if the user enters the correct fingerprint, the certificate is 
marked as trusted (see [1]). This clearly opens up some attacks that should be 
carefully assessed.
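
(To illustrate the out-of-band step: assuming the device's certificate can 
be exported, the vendor could print the output of something like

  openssl x509 -in device-cert.pem -noout -fingerprint -sha256

on the box or in the manual, and the browser would ask the user to type 
that string in. The file name and tool choice here are hypothetical, not 
part of the proposal in [1].)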

Best,
Lorenzo


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1012879
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Yoav Weiss
On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren  wrote:

> On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss  wrote:
> > Limiting new features does absolutely nothing in that aspect.
>
> Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
> Workers as a reason to start deploying HTTPS:
>
>   http://open.blogs.nytimes.com/2014/11/13/embracing-https/


I stand corrected. So it's the 8th reason out of 9, right before technical
debt.

I'm not saying using new features is not an incentive, and I'm definitely
not saying HTTP2 and SW should have been enabled on HTTP.
But, when done without any real security or deployment issues that mandate
it, you're subjecting new features to significant adoption friction that is
unrelated to the feature itself, in order to apply some indirect pressure
on businesses to do the right thing.
You're inflicting developer pain without any real justification. A sort of
collective punishment, if you will.

If you want to apply pressure, apply it where it makes the most impact with
the least cost. Limiting new features to HTTPS is not the place, IMO.


>
> (And anecdotally, I find it easier to convince developers to deploy
> HTTPS on the basis of some feature needing it than on merit. And it
> makes sense, if they need their service to do X, they'll go through
> the extra trouble to do Y to get to X.)
>
>
Don't convince the developers. Convince the business. Drive users away to
secure services by displaying warnings, etc.
Anecdotally on my end, I saw small Web sites that care very little about
security move to HTTPS overnight after Google added HTTPS as a (weak)
ranking signal
<http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html>
(reason #4 in that NYT article).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread immibis
Another note:

Nobody, to within experimental error, uses IP addresses to access public 
websites.

But plenty of people use them for test servers, temporary servers, and embedded 
devices. (My home router is http://192.168.1.254/ - do they need to get a 
certificate for 192.168.1.254? Or do home routers need to come with 
installation CDs that install the router's root certificate? How is that not a 
worse situation, where every web user has to trust the router manufacturer?)

And even though nobody uses IP addresses, and many public websites don't work 
with IP addresses (because vhosts), nobody in their right mind would ever 
suggest removing the possibility of accessing web servers without domain names.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Anne van Kesteren
On Tue, Apr 14, 2015 at 9:51 AM, lorenzo.keller wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy 
> that doesn't use MITM certificates. If all users must move to HTTPS there 
> will be no way to re-use content downloaded for one user to accelerate 
> another user. This is an important issue for locations with many users and 
> poor internet connectivity.

Where is the evidence that this is a problem in practice? What do
these environments do for YouTube?


> 2) Self signed certificates: in many situations it is hard/impossible to get 
> certificates signed by a CA (e.g. provisioning embedded devices). The current 
> approach in many of these situations is not to use HTTPS. If the plan goes 
> into effect what other solution could be used?

Either something like
https://bugzilla.mozilla.org/show_bug.cgi?id=1012879 as you mentioned
or overrides for local devices. This definitely needs more research
but shouldn't preclude rolling out HTTPS on public resources.


-- 
https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Anne van Kesteren
On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss  wrote:
> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.

Why is it that you think there is no justification in deprecating HTTP?


>> (And anecdotally, I find it easier to convince developers to deploy
>> HTTPS on the basis of some feature needing it than on merit. And it
>> makes sense, if they need their service to do X, they'll go through
>> the extra trouble to do Y to get to X.)
>
> Don't convince the developers. Convince the business.

Why not both? There's no reason to only attack this top-down.


-- 
https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Yoav Weiss
On Tue, Apr 14, 2015 at 10:07 AM, Anne van Kesteren 
wrote:

> On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss  wrote:
> > You're inflicting developer pain without any real justification. A sort
> of
> > collective punishment, if you will.
>
> Why is it that you think there is no justification in deprecating HTTP?
>

Deprecating HTTP is totally justified. Enabling some features on HTTP but
not others is not, unless there's a real technical reason why these new
features shouldn't be enabled.


>
>
> >> (And anecdotally, I find it easier to convince developers to deploy
> >> HTTPS on the basis of some feature needing it than on merit. And it
> >> makes sense, if they need their service to do X, they'll go through
> >> the extra trouble to do Y to get to X.)
> >
> > Don't convince the developers. Convince the business.
>
> Why not both? There's no reason to only attack this top-down.
>
>
> --
> https://annevankesteren.nl/
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Anne van Kesteren
On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss  wrote:
> Deprecating HTTP is totally justified. Enabling some features on HTTP but
> not others is not, unless there's a real technical reason why these new
> features shouldn't be enabled.

I don't follow. If HTTP is no longer a first-class citizen, why do we
need to treat it as such?


-- 
https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Alex C
On Tuesday, April 14, 2015 at 8:44:25 PM UTC+12, Anne van Kesteren wrote:
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

When it would take more effort to disable a feature on HTTP than to let it 
work, and yet the feature is disabled anyway, that's more than just HTTP being 
"not a first class citizen".
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Alex C
On Tuesday, April 14, 2015 at 8:44:25 PM UTC+12, Anne van Kesteren wrote:
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

When it takes *more* effort to disable certain features on HTTP, than to let 
them work.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Yoav Weiss
On Tue, Apr 14, 2015 at 10:43 AM, Anne van Kesteren 
wrote:

> On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss  wrote:
> > Deprecating HTTP is totally justified. Enabling some features on HTTP but
> > not others is not, unless there's a real technical reason why these new
> > features shouldn't be enabled.
>
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?
>

I'm afraid the second class citizens in that scenario would be the new
features, rather than HTTP.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread intellectueel
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

Each organisation has its own reasons to move away from HTTP.
It doesn't mean that each of those reasons is ethical. 


> In order to encourage web developers to move from HTTP to HTTPS

Why?
Large multinationals do not allow HTTPS traffic within the border gateways of 
their own infrastructure; why make it harder for them?

Why give people the impression that, because they are using HTTPS, they will be 
much safer in the future, when the implications are much larger? (No 
dependability anymore, being forced to trust root CAs, etc.) 

Why burden hosting companies and webmasters with extra costs?


Do not forget that the most-used webmaster/webhosting control panels do not 
support SNI, so each HTTPS site has to have its own unique IP address. 
Here in Europe we are still using IPv4, and RIPE can't issue new IPv4 addresses 
because they are all gone. So as long as that isn't resolved, it can't be done. 


IMHO HTTPS would be safer if no large companies or governments were involved 
in issuing the certificates, and if the certificates were free or otherwise 
somehow compensated. 

The countries where people profit less from HTTPS, because human rights are 
better respected, have the means to pay for SSL certificates; but the people 
you want to protect don't, and even if they did, they always have 
government(s) to deal with. 

As long as you think that root CAs are 100% trustworthy and governments can't 
manipulate them or do a replay attack afterwards, HTTPS is the way to go... 
but until those issues (and SNI/IPv4) are handled, don't, because it will cause 
more harm in the long run. 

Do not get me wrong, the intention is good. But trying to protect humanity from 
humanity also means keeping in mind the issues surrounding it.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Mike de Boer

> On 14 Apr 2015, at 11:42, intellectu...@gmail.com wrote:
> 

Something entirely off-topic: I’d like to inform people that replying to 
popular threads like this unsigned, with only a notion of identity in an 
obscure email address, makes me - and I’m sure others too - skip your message, 
or worse, not take it seriously. In my mind I fantasize your message signed off 
with something like:

"Cheers, mYLitTL3P0nIEZLuLZrAinBowZ.

 - Sent from a Galaxy Tab Nexuzzz Swift Super, Gold & Ruby Edition by an 8yr 
old stuck in Kindergarten."

… which doesn’t feel like the identity anyone would prefer to assume.

Best, Mike.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Henri Sivonen
On Mon, Apr 13, 2015 at 5:57 PM, Richard Barnes  wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

I agree that we should get the Web onto https and I'm very happy to
see this proposal.

> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> Some earlier threads on this list [5] and elsewhere [6] have discussed
> deprecating insecure HTTP for "powerful features".  We think it would be a
> simpler and clearer statement to avoid the discussion of which features are
> "powerful" and focus on moving all features to HTTPS, powerful or not.

I understand that especially in debates about crypto, there's a strong
desire to avoid ratholing and bikeshedding. However, I think avoiding
the discussion on which features are "powerful" is the wrong way to
get from the current situation to where we want to be.

Specifically:

 1) I expect that withholding from http origins e.g. new CSS effects
that are no more privacy-sensitive than existing CSS effects, merely as
a way to force sites onto https, will create resentment among Web devs.
That resentment would be better avoided in order to have Web devs
support the cause of encrypting the Web, and it could be avoided by
withholding features from http only on grounds that tie clearly to the
downsides of http relative to https.

 2) I expect withholding certain *existing* privacy-sensitive features
from http to have greater leverage to push sites to https than
withholding privacy-neutral *new* features.

Specifically, on point #2, I think we should start by forgetting, by
default, all cookies that don't have the "secure" flag set when the
Firefox session ends. Persistent cookies have two main use
cases:
 * On login-requiring sites, not requiring the user to have to
re-enter credentials in every browser session.
 * Behavioral profiling.

The first has a clear user-facing benefit. The second is something
that users typically don't want, and breaking it has no obvious
user-visible Web-compat impact for the browser.

Fortunately, the most-used login-requiring sites use https already, so
forgetting insecure cookies at the end of the session would have no
adverse effect on the most-user-visible use of persistent cookies.
Also, if a login-requiring site is not already using https, it's
pretty non-controversial that they are Doing It Wrong and should
migrate to https.
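
(For concreteness: the "secure" flag here is the Secure attribute from
RFC 6265. A login cookie that would keep its persistence under this
proposal would be set along the lines of

  Set-Cookie: SID=31d4d96e407aad42; Max-Age=2592000; Secure; HttpOnly

while the same header without "Secure" would be forgotten when the
session ends. The cookie name and value are illustrative only.)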

One big reason why mostly content-oriented sites, such as news sites,
haven't migrated to https is that they are ad-funded and the
advertising networks are lagging behind in https deployment. Removing
persistence from insecure cookies would give a reason for the ad
networks to accelerate https deployment and do so in a way that
doesn't break the Web in user-visible ways during the transition. That
is, if ad networks want to track users, at least they shouldn't enable
collateral tracking by network eavesdroppers while doing so.

So I think withholding cookie persistence from insecure cookies could
well be way more effective per unit of disruption of user-perceived
Web compat than anything in your proposal.

In addition to persistent cookies, I think we should seek to be more
aggressive in making other features that allow sites to store
persistent state on the client https-only than in making new features
in general https-only. (I realize that applying this consistently to
the HTTP cache could be infeasible on performance grounds in the near
future at least.)

Furthermore, I think this program should have a UI aspect to it:
Currently, the UI designation for http is neutral while the UI
designation for mixed content is undesirable. I think we should make
the UI designation of plain http undesirable once x% of the sites that
users encounter on a daily basis are https. Since users don't interact
with the whole Web equally, this means that the UI for http would be
made undesirable much earlier than the time when x% of Web sites
migrate to https. x should be chosen to be high enough to avoid
warning fatigue that'd desensitize users to the undesirable UI
designation.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Boris Zbarsky

On 4/14/15 3:28 AM, Anne van Kesteren wrote:
> On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost wrote:
>> 1. You do not need to register a domain name to have a Web site (IP address)
>
> Name one site you visit regularly that doesn't have a domain name.

My router's configuration UI.  Here "regularly" is probably once a month 
or so.

> And even then, you can get certificates for public IP addresses.

It's not a public IP address.

We do need a solution for this space, which I expect includes the 
various embedded devices people are bringing up; I expect those are 
behind firewalls more often than on the publicly routable internet.


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread david . a . p . lloyd
> Something entirely off-topic: I'd like to inform people that replying to 
> popular threads like this unsigned, with only a notion of identity in an 
> obscure email address, makes me - and I'm sure others too - skip your message, 
> or worse, not take it seriously. 


Not everyone has the luxury of being public on the Internet.  Especially in 
discussions about default Internet encryption.  The real decision makers won't 
be posting at all.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Eric Shepherd

Joshua Cranmer 🐧 wrote:
> If you actually go to read the details of the proposal rather than
> relying only on the headline, you'd find that there is an intent to
> actually let you continue to use http for, e.g., localhost. The exact
> boundary between "secure" HTTP and "insecure" HTTP is being actively
> discussed in other forums.

My main concern with the notion of phasing out unsecured HTTP is that 
doing so will cripple or eliminate Internet access by older devices that 
aren't generally capable of handling encryption and decryption on such a 
massive scale in real time.


While it may sound silly, those of us who are into classic computers 
and making them do fun new things use HTTP to connect 10 MHz (or even 1 
MHz) machines to the Internet. These machines can't handle the demands 
of SSL. So this is a step toward making their Internet connections go away.


This may not be enough of a reason to save HTTP, but it's something I 
wanted to point out.


--

Eric Shepherd
Senior Technical Writer
Mozilla 
Blog: http://www.bitstampede.com/
Twitter: http://twitter.com/sheppy
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Gervase Markham
On 14/04/15 01:57, northrupthebandg...@gmail.com wrote:
> * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> treated as less secure than HTTP is - to put this as politely and
> gently as possible - a pile of bovine manure 

http://gerv.net/security/self-signed-certs/ , section 3.

But also, Firefox is implementing opportunistic encryption, which AIUI
gives you a lot of what you want here.

Gerv

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Gervase Markham
On 14/04/15 08:47, david.a.p.ll...@gmail.com wrote:
>> realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
>> and is the focus of this thread. 
> 
> http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/
>  
> 
> Sure it works :)

Yep. That's the system working. CA does something they shouldn't, we
find out, CA is no longer trusted (perhaps for a time).

Or do you have an alternative system design where no-one ever makes a
mistake and all the actors are trustworthy?

Gerv

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread enaeher
On Tuesday, April 14, 2015 at 3:05:09 AM UTC-5, Anne van Kesteren wrote:

> This definitely needs more research
> but shouldn't preclude rolling out HTTPS on public resources.

The proposal as presented is not limited to public resources. The W3C 
Privileged Context draft which it references exempts only localhost and 
file:/// resources, not resources on private networks. There are hundreds of 
millions of home routers and similar devices with web UIs on private networks, 
and no clear path under this proposal to keep them fully accessible (without 
arbitrary feature limitations) except to set up your own local CA, which is 
excessively burdensome.

Eli Naeher
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Gervase Markham
On 14/04/15 08:51, lorenzo.kel...@gmail.com wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by
> a proxy that doesn't use MITM certificates. If all users must move to
> HTTPS there will be no way to re-use content downloaded for one user
> to accelerate another user. This is an important issue for locations
> with many users and poor internet connectivity.

Richard talked, IIRC, about not allowing subloads over HTTP with
subresource integrity. This is one argument to the contrary. Sites could
use HTTP-with-integrity to provide an experience which allowed for
better caching, with the downside being some loss of coarse privacy for
the user. (Cached resources, by their nature, are not going to be
user-specific, so there won't be leak of PII. But it might leak what you
are reading or what site you are on.)
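
(For concreteness, "HTTP-with-integrity" means subloads declared along
the lines of

  <script src="http://cdn.example.com/lib.js"
          integrity="sha384-...base64-digest..."></script>

so a shared proxy can cache the bytes while the browser still verifies
that they match the hash the HTTPS page vouched for. The URL and digest
here are placeholders.)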

Gerv
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread david . a . p . lloyd
> Yep. That's the system working. CA does something they shouldn't, we
> find out, CA is no longer trusted (perhaps for a time).
> 
> Or do you have an alternative system design where no-one ever makes a
> mistake and all the actors are trustworthy?
> 
> Gerv

Yes - as I said previously.  Do the existing certificate checks to a trusted CA 
root, then do a TLSA DNS lookup for the certificate pin and check that *as 
well*.  If you did this (and Google published their SHA512 hashes in DNS) you 
could have had lots of copies of Firefox ringing back "potential compromise" 
messages.  Who knows how long those certificates were out there (or what other 
ones are currently out there that you could find just by implementing TLSA).

The more routes to trust, the better.  A trusted root CA is "all eggs in one 
basket"; DANE is "all eggs in one basket"; DNSSEC is "all eggs in one basket".

Put them all together and you have a pretty reliable basket :)

This is what I mean by working towards a security rating (A, B, C, D, Fail) - 
not just a "yes/no" answer.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to deprecate: doppler computation in the PannerNode/AudioListener

2015-04-14 Thread Paul Adenot
Continuing on the path to a less broken panning model on the Web Audio
API, this is a followup to [0]

The Web Audio API has a number of APIs that allow setting the velocity of
a "listener" and a "panner", along with the speed of sound and a
"doppler factor", to be able to automatically pitch up or down connected
audio sources according to their speed.

This feature has a number of issues:
- It only pitches up or down AudioBufferSourceNodes (not other sources:
OscillatorNode, HTMLMediaElement, ScriptProcessorNode, AudioWorkerNode,
MediaStreams) connected to a PannerNode
- It's not clear what should happen if a single AudioBufferSourceNode is
connected to more than one PannerNode with different velocities, and the
right thing to do would depend on the use case
- By using methods (setVelocity(x, y, z), called on the main thread, at
requestAnimationFrame rate, for example), and not AudioParams (a Web
Audio API object that lets authors schedule smooth transitions at audio
rate on the audio thread, say 44100Hz), this mechanism was producing
noticeable glitches when relative velocities were high

For those reasons, the W3C Audio WG has agreed a while back to remove
those members entirely. Authors that need this feature can replicate the
effect in a much more generic and glitch-free way in their application.
I'm planning to do a writeup on how to do so on MDN.
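
For reference, the effect being removed is the classic Doppler shift. In
the terms of the (OpenAL-derived) model the spec used, the pitch factor
is roughly

  shift = (c - k * v_listener) / (c - k * v_source)

where c is the speed of sound, k the doppler factor, and v_listener /
v_source the velocities projected onto the listener-source axis (take
this as a sketch of the model in my notation, not the spec's exact
clamping rules). Applying that factor to the source's playbackRate
AudioParam, with scheduled transitions, is what makes the replacement
glitch-free.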

This feature has been shipped in Gecko for some time, although I've been
looking at a large amount of code on GitHub (searching for the
to-be-deprecated members' names and such), and it's not being used a lot,
so I went ahead and wrote two patches: one to remove the members, and one
to warn authors that those members are deprecated [1].

Blink currently marks those members as deprecated [2]. The plan is to
mark those as deprecated in Gecko as well for now, and then remove the
deprecated members from both implementations at roughly the same time
(Blink developers are targeting a version in 6 months). They will be
adding usage counters to watch the usage of those members in the wild.

Additionally, if someone has an idea on what to link to/what to say in
the actual deprecation notice, I'm all ears; the current message is more
or less a placeholder.

Thanks,
Paul.

[0]:
https://groups.google.com/forum/#!topic/mozilla.dev.platform/5pxl_Ht04sY
[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1148354
[2]:
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/-1SI1GoHYO8
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-14 Thread Ehsan Akhgari

On 2015-04-10 9:43 PM, Gregory Szorc wrote:
> On Fri, Apr 10, 2015 at 11:46 AM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
>> I would like to propose that we should ban the usage of refcounted
>> objects inside lambdas in Gecko.  Here is the reason:
>>
>> Consider the following code:
>>
>> nsINode* myNode;
>> TakeLambda([&]() {
>>   myNode->Foo();
>> });
>>
>> There is nothing that guarantees that the lambda passed to TakeLambda is
>> executed before myNode is destroyed somehow.  While it's possible to
>> verify on a case by case basis that this code is indeed safe, I think
>> it's too error prone to keep these invariants over time.  Given the fact
>> that if things go wrong we'd deal with a UAF bug which will be
>> sec-critical, I think it's better to be safe than sorry here.
>>
>> If nobody objects, I'll update the style guide in the next few days.  If
>> you do object, please make a case for why the above analysis is
>> incorrect and why the usage of refcounted objects in lambdas in general
>> is safe.
>
> Why do we merely update the style guide? We have a custom Clang compiler
> plugin. I think we should have the Clang compiler plugin enforce our
> style guidelines by refusing to compile code that is against policy.
> This would enable reviewers to stop focusing on policing the style guide
> and enable them to focus more on the correctness of code. This will
> result in faster reviews and/or higher quality code.

Yes, I'm aware of the clang plugin, as I've written most of the analyses 
in it.  ;-)  See bug 1153304.  But that is orthogonal, since that plugin 
doesn't cover every single line of code in the code base.  Obvious 
examples are Windows/B2G specific code.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-14 Thread Ehsan Akhgari

On 2015-04-13 1:55 PM, Trevor Saunders wrote:

On Mon, Apr 13, 2015 at 01:28:05PM -0400, Ehsan Akhgari wrote:

On 2015-04-13 5:26 AM, Nicolas B. Pierron wrote:

On 04/10/2015 07:47 PM, Ehsan Akhgari wrote:

On 2015-04-10 1:41 PM, Nicolas B. Pierron wrote:

Also, what is the alternative? Acquiring a nsCOMPtr/nsRefPtr inside the
Lambda constructor (or whatever it's called)?


Yes, another option would be to ensure that the lambda cannot be used
after a specific point.

nsINode* myNode;
auto callFoo = MakeScopedLambda([&]() {
  myNode->Foo();
});
TakeLambda(callFoo);

Any reference to the lambda after the end of the innermost scope where
MakeScopedLambda is used can cause a MOZ_CRASH.


How would you detect that at compile/run time?



Simply by replacing the reference to the lambda inside callFoo at the
end of the scope with a dummy function which expects the same type of
arguments as the lambda, but calls MOZ_CRASH instead.


Sorry, my question was: how do you implement this with C++?  (As in, how
would an actual implementation work?)


That actually seems kind of straightforward.  You want to have an
object that wraps the provided lambda in callFoo and then nukes the
wrapping when the scope exits.  So I guess it would look something like
this (totally untested).


Does this work though?  The |operator LambdaHolder| part is wrong; that 
operator will never be used, since there is nothing which would call it 
when you want to pass the lambda to a different function.  Also, I don't 
know what the master member variable is supposed to do.


If someone can come up with an implementation of this idea which 
actually compiles and works fine, then by all means please do file a bug 
and add it.


But such a class, while useful, will not actually prevent the issues 
that this proposal is intended to prevent (since for example in the 
above code, other things besides just returning from the function can 
still invalidate myNode.)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread hugoosvaldobarrera
I'm curious as to what would happen with things that cannot have TLS 
certificates: routers and similar web-configurable-only devices (like small 
PBX-like devices, etc).

They don't have a proper domain, and may grab an IP via radvd (or dhcp on 
IPv4), so there's no certificate to be had.

They'd have to use self-signed, which seems to be treated pretty badly (warning 
message, etc).

Would we be getting rid of the self-signed warning when visiting a website?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Aryeh Gregor
On Tue, Apr 14, 2015 at 3:36 PM, Gervase Markham  wrote:
> Yep. That's the system working. CA does something they shouldn't, we
> find out, CA is no longer trusted (perhaps for a time).
>
> Or do you have an alternative system design where no-one ever makes a
> mistake and all the actors are trustworthy?

No, but it would make sense to require that sites be validated through
a single specific CA, rather than allowing any CA to issue a
certificate for any site.  That would drastically reduce the scope of
attacks: an attacker would have to compromise a single specific CA,
instead of any one of hundreds.  IIRC, HSTS already allows this on an
opt-in basis.  If validation was done via DNSSEC instead of the
existing CA system, this would follow automatically, without sites
having to commit to a single CA.  It also avoids the bootstrapping
problem with HSTS, unless someone has solved that in some other way
and I didn't notice.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-14 Thread Ehsan Akhgari

On 2015-04-13 1:36 PM, Jan-Ivar Bruaroey wrote:

On 4/10/15 2:09 PM, Seth Fowler wrote:

On Apr 10, 2015, at 8:46 AM, Ehsan Akhgari 
wrote:

I would like to propose that we should ban the usage of refcounted
objects
inside lambdas in Gecko.  Here is the reason:

Consider the following code:

nsINode* myNode;
TakeLambda([&]() {
  myNode->Foo();
});

There is nothing that guarantees that the lambda passed to TakeLambda is
executed before myNode is destroyed somehow.


The above is a raw pointer bug, not a lambda bug.


Yes, I'm aware.  Please see bug 1114683.


IMHO the use of lambdas helps spot the problem, by

  1. Being more precise (less boilerplate junk for bugs to hide in), and

  2. Lambda captures using safer copy construction by default (hence the
 standout [&] above for reviewers).


There is nothing safe about copying raw pointers to refcounted objects. 
 So the assertion that lambda capture is safe by default is clearly 
false.  Point #1 is objective, and I don't think has any bearing on what 
is safe and what unsafe operations need to be banned.



 > Lambdas will be much less useful if they can’t capture refcounted
 > objects, so I’m strongly against banning that.

+1.

Case in point, we use raw pointers with most runnables, a practice
established in NS_DispatchToMainThread [2]. Look in mxr/dxr for the 100+
uses of NS_DispatchToMainThread(new SomeRunnable()).


That is a terrible pattern that needs to be eliminated and not 
replicated in new code.  You can't argue against doing more terrible 
things because we do bad things elsewhere in the code.



The new ban would prevent us from passing runnables to lambdas, like [3]

   MyRunnableBackToMain* runnable = new MyRunnableBackToMain();

   auto p = SomeAsyncFunc();
   p->Then([runnable](nsCString result) mutable {
 runnable->mResult = result;
 NS_DispatchToMainThread(runnable);

   // future Gecko hacker comes around and does:
   runnable->FooBar(); // oops, is runnable still valid? maybe not!

   });

So I think this ban is misguided. These are old sins not new to lambdas.


Thanks for the great example of this unsafe code.  I modified it to 
inject a possible use after free in it.


I think you're missing the intent of my proposal.  My goal is to prevent 
a new class of sec-critical bugs that will creep into our code base 
through this pattern.  You seem to think that this is somehow a ban of 
lambda usage; please re-read the original post.  The only thing that you 
need to do in the code above to be able to still use lambdas is to make 
runnable an nsRefPtr, and that makes the code safe against the issue 
under discussion.
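
To spell that out, a minimal sketch of the fixed capture (using the same
hypothetical SomeAsyncFunc()/Then() types as in the example above):

   nsRefPtr<MyRunnableBackToMain> runnable = new MyRunnableBackToMain();
   auto p = SomeAsyncFunc();
   p->Then([runnable](nsCString result) mutable {
     runnable->mResult = result;
     NS_DispatchToMainThread(runnable);
     runnable->FooBar();  // fine now: the lambda's copy holds a strong ref
   });

The by-value capture copies the nsRefPtr, so the runnable stays alive at
least as long as the lambda does.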


Cheers,
Ehsan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 5:11 PM,  wrote:

> One limiting factor is that Firefox doesn't treat form data the same on
> HTTPS sites.
>
> Examples:
>
>
> http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor
>
>
> http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button
>
> After losing a few forum posts or wiki edits to this bug in Firefox, you
> quickly insist on using unsecured HTTP as often as possible.
>

Interesting observation.  ISTM that that's a bug in HTTPS.  At least I
don't see an obvious security reason for the behavior to be that way.

More generally: I expect that this process will turn up bugs in HTTPS
behavior, either "actual" bugs in terms of implementation errors, or
"logical" bugs where the intended behavior does not meet the expectations
or needs of websites.  So we should be open to adapting our HTTPS behavior
some (within the bounds of the security requirements) in order to
facilitate this transition.

--Richard



> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 7:03 PM, Martin Thomson  wrote:

> On Mon, Apr 13, 2015 at 3:53 PM, Eugene 
> wrote:
> > In addition to APIs, I'd like to propose prohibiting caching any
> resources loaded over insecure HTTP, regardless of Cache-Control header, in
> Phase 2.N.
>
> This has some negative consequences (if only for performance).  I'd
> like to see changes like this properly coordinated.  I'd rather just
> treat "caching" as one of the features for Phase 2.N.
>

That seem sensible.

I was about to propose a lifetime limit on caching (say a few hours?) to
limit the persistence scope of MitM, i.e., require periodic re-infection.
There may be ways to circumvent this (e.g., the MitM's code sending cache
priming requests), but it seems incrementally better.

--Richard



> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Eric Rescorla
On Tue, Apr 14, 2015 at 7:01 AM, Aryeh Gregor  wrote:

> On Tue, Apr 14, 2015 at 3:36 PM, Gervase Markham  wrote:
> > Yep. That's the system working. CA does something they shouldn't, we
> > find out, CA is no longer trusted (perhaps for a time).
> >
> > Or do you have an alternative system design where no-one ever makes a
> > mistake and all the actors are trustworthy?
>
> No, but it would make sense to require that sites be validated through
> a single specific CA, rather than allowing any CA to issue a
> certificate for any site.  That would drastically reduce the scope of
> attacks: an attacker would have to compromise a single specific CA,
> instead of any one of hundreds.  IIRC, HSTS already allows this on an
> opt-in basis.


This is called "pinning".

https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning
https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21


>   If validation was done via DNSSEC instead of the
> existing CA system, this would follow automatically, without sites
> having to commit to a single CA.


Note that pinning does not require sites to commit to a single CA. You can
pin
multiple CAs.
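
(For concreteness, the header defined by the key-pinning draft linked
above looks roughly like

  Public-Key-Pins: pin-sha256="base64-of-primary-SPKI-hash=";
                   pin-sha256="base64-of-backup-SPKI-hash=";
                   max-age=5184000

with illustrative placeholder values; listing several pin-sha256 entries
is how a site pins multiple CAs or keeps a backup key.)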

Using DNS and DNSSEC for this purpose is described in
http://tools.ietf.org/html/rfc6698.
However, to my knowledge no mainstream browser presently accepts DANE/TLSA
authentication for reasons already described upthread.

-Ekr
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 9:43 PM, imfasterthanneutr...@gmail.com wrote:

> On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com
> wrote:
> >
> > * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with
> HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less
> secure than HTTP is - to put this as politely and gently as possible - a
> pile of bovine manure
>
> This feature (i.e. opportunistic encryption) was implemented in Firefox
> 37, but unfortunately an implementation bug made HTTPS insecure too. But I
> guess Mozilla will fix it and make this feature available in a future
> release.
>
> > * Support for a decentralized (blockchain-based, ala Namecoin?)
> certificate authority
> >
> > Basically, the current CA system is - again, to put this as gently and
> politely as possible - fucking broken.  Anything that forces the world to
> rely on it exclusively is not a solution, but is instead just going to make
> the problem worse.
>
> I don't think the current CA system is broken. The domain name
> registration is also centralized, but almost every website has a hostname,
> rather than using IP address, and few people complain about this.
>

I would also note that Mozilla is contributing heavily to Let's Encrypt,
which is about as close to a decentralized CA as we can get with current
technology.

If people have ideas for decentralized CAs, I would be interested in
listening, and possibly adding support in the long run.  But unfortunately,
the state of the art isn't quite there yet.

--Richard




> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 11:26 PM,  wrote:

> > * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with
> HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less
> secure than HTTP is - to put this as politely and gently as possible - a
> pile of bovine manure
>
> I am against this. Both are insecure and should be treated as such. How is
> your browser supposed to know that gmail.com is intended to serve a
> self-signed cert? It's not, and it cannot possibly know it in the general
> case. Thus it must be treated as insecure.
>

This is a good point.  This is exactly why the opportunistic security
feature in Firefox 37 enables encryption without certificate checks for
*http* resources.

--Richard



> > * Support for a decentralized (blockchain-based, ala Namecoin?)
> certificate authority
>
> No. Namecoin has so many other problems that it is not feasible.
>
> > Basically, the current CA system is - again, to put this as gently and
> politely as possible - fucking broken.  Anything that forces the world to
> rely on it exclusively is not a solution, but is instead just going to make
> the problem worse.
>
> Agree that it's broken. The fact that any CA can issue a cert for any
> domain is stupid, always was and always will be. It's now starting to bite
> us.
>
> However, HTTPS and the CA system don't have to be tied together. Let's
> ditch the immediately insecure plain HTTP, then add ways to authenticate
> trusted certs in HTTPS by means other than our current CA system. The two
> problems are orthogonal, and trying to solve both at once will just leave
> us exactly where we are: trying to argue for a fundamentally different
> system.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 10:10 PM, Karl Dubost  wrote:

>
> On 14 Apr 2015, at 10:43, imfasterthanneutr...@gmail.com wrote:
> > I don't think the current CA system is broken.
>
> The current CA system creates issues for certain categories of population.
> It is broken in some ways.
>
> > The domain name registration is also centralized, but almost every
> website has a hostname, rather than using IP address, and few people
> complain about this.
>
> Two points:
>
> 1. You do not need to register a domain name to have a Web site (IP
> address)
> 2. You do not need to register a domain name to run a local blah.test.site
>
> Both are still working and not deprecated in browsers ^_^
>
> Now, the fact that you have to rent your domain name ($$$) and that all the
> URIs are tied to it has strong social consequences in terms of permanent
> identifiers and the fabric of time on information. But that's another
> debate than the one of this thread on deprecating HTTP in favor of HTTPS.
>

This is a fair point, and we should probably figure out a way to
accommodate these.  My inclination is to mostly punt this to manual
configuration (e.g., installing a new trusted cert/override), since we're
not talking about a generally available public service on the Internet.  But
if there are more elegant solutions that don't reduce security, I would be
interested to hear them.



> I would love to see this discussion happening in Whistler too.
>

Agreed.  That sounds like an excellent opportunity to hammer out details
here, assuming we can agree on overall  direction in the meantime.

--Richard



>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 3:55 AM, Yoav Weiss  wrote:

> On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren 
> wrote:
>
> > On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss  wrote:
> > > Limiting new features does absolutely nothing in that aspect.
> >
> > Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
> > Workers as a reason to start deploying HTTPS:
> >
> >   http://open.blogs.nytimes.com/2014/11/13/embracing-https/
>
>
> I stand corrected. So it's the 8th reason out of 9, right before technical
> debt.
>
> I'm not saying using new features is not an incentive, and I'm definitely
> not saying HTTP2 and SW should have been enabled on HTTP.
> But, when done without any real security or deployment issues that mandate
> it, you're subjecting new features to significant adoption friction that is
> unrelated to the feature itself, in order to apply some indirect pressure
> on businesses to do the right thing.
>

Please note that there is no inherent security reason to limit HTTP/2 to be
used only over TLS (as there is for SW), at least not any more than the
security reasons for carrying HTTP/1.1 over TLS.  They're semantically
equivalent; HTTP/2 is just faster.  So if you're OK with limiting HTTP/2 to
TLS, you've sort of already bought into the strategy we're proposing here.



> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.
>
> If you want to apply pressure, apply it where it makes the most impact with
> the least cost. Limiting new features to HTTPS is not the place, IMO.
>

I would note that these options are not mutually exclusive :)  We can apply
pressure with feature availability at the same time that we work on the
ecosystem problems.  In fact, I had a call with some advertising folks last
week about how to get the ad industry upgraded to HTTPS.

--Richard



>
>
> >
> > (And anecdotally, I find it easier to convince developers to deploy
> > HTTPS on the basis of some feature needing it than on merit. And it
> > makes sense, if they need their service to do X, they'll go through
> > the extra trouble to do Y to get to X.)
> >
> >
> Don't convince the developers. Convince the business. Drive users away to
> secure services by displaying warnings, etc.
> Anecdotally on my end, I saw small Web sites that care very little about
> security, move to HTTPS over night after Google added HTTPS as a (weak)
> ranking signal
> <
> http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html
> >.
> (reason #4 in that NYT article)
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 8:32 AM, Eric Shepherd 
wrote:

> Joshua Cranmer 🐧 wrote:
>
>> If you actually go to read the details of the proposal rather than
>> relying only on the headline, you'd find that there is an intent to
>> actually let you continue to use http for, e.g., localhost. The exact
>> boundary between "secure" HTTP and "insecure" HTTP is being actively
>> discussed in other forums.
>>
> My main concern with the notion of phasing out unsecured HTTP is that
> doing so will cripple or eliminate Internet access by older devices that
> aren't generally capable of handling encryption and decryption on such a
> massive scale in real time.
>
> While it may sound silly, those of us who are into classic computers and
> making them do fun new things use HTTP to connect 10 MHz (or even 1 MHz)
> machines to the Internet. These machines can't handle the demands of SSL.
> So this is a step toward making their Internet connections go away.
>
> This may not be enough of a reason to save HTTP, but it's something I
> wanted to point out.


As the owner of a Mac SE/30 with a 100MB Ethernet card, I sympathize.
However, consider it part of the challenge!  :)  There are definitely TLS
stacks that work on some pretty small devices.

--Richard



>
>
> --
>
> Eric Shepherd
> Senior Technical Writer
> Mozilla 
> Blog: http://www.bitstampede.com/
> Twitter: http://twitter.com/sheppy
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 9:57 AM,  wrote:

> I'm curious as to what would happen with things that cannot have TLS
> certificates: routers and similar web-configurable-only devices (like small
> PBX-like devices, etc).
>
> They don't have a proper domain, and may grab an IP via radvd (or dhcp on
> IPv4), so there's no certificate to be had.
>
> They'd have to use self-signed, which seems to be treated pretty badly
> (warning message, etc).
>
> Would we be getting rid of the self-signed warning when visiting a website?
>

Well, no. :)

Note that the primary difference between opportunistic security (which is
HTTP) and HTTPS is authentication.  We should think about what sorts of
expectations people have for these devices, and to what degree those
expectations can be met.

Since you bring up IPv6, there might be some possibility that devices could
authenticate their IP addresses automatically, using cryptographically
generated addresses and self-signed certificates using the same public key.
http://en.wikipedia.org/wiki/Cryptographically_Generated_Address
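A toy sketch of the idea, for concreteness. FNV-1a stands in for SHA-1 and
the field layout is simplified, so treat this as an illustration of the
RFC 3972 approach rather than an implementation of it:

#include <cstdint>
#include <cstdio>
#include <string>

// Toy hash: FNV-1a purely to keep the sketch self-contained; a real
// implementation must follow RFC 3972 exactly (SHA-1, modifier, Sec).
static uint64_t ToyHash(const std::string& data)
{
  uint64_t h = 14695981039346656037ull;
  for (unsigned char c : data) {
    h = (h ^ c) * 1099511628211ull;
  }
  return h;
}

// Derive an IPv6 interface identifier from the host's public key, so the
// address itself commits to the key a self-signed cert will present.
static uint64_t DeriveInterfaceId(const std::string& publicKeyBytes)
{
  uint64_t iid = ToyHash(publicKeyBytes);
  iid &= ~(0x3ull << 56); // clear the u/g bits, as RFC 3972 requires
  return iid;
}

int main()
{
  std::string presentedKey = "(public key bytes from the self-signed cert)";
  uint64_t addressIid = DeriveInterfaceId(presentedKey);
  // A verifier recomputes the identifier from the presented key and
  // compares it against the address it actually connected to.
  bool match = (DeriveInterfaceId(presentedKey) == addressIid);
  printf("key matches address: %s\n", match ? "yes" : "no");
  return 0;
}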

--Richard




> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Byron Jones
bugzilla has a set of fields, "hardware" and "operating system", that 
i'll collectively call "platform" in this post.  their default values 
are detected from the reporter's user-agent string when a bug is created.


unfortunately on bmo, the platform fields have two distinctly different 
meanings: the reporter's platform and the platform a bug applies to. for 
too long have these two conflicting meanings coexisted within the same 
field, leading to confusion and a field that on many bugs is wrong or 
useless.


thanks to bug 579089 we plan on making the following changes early next 
week:


* each product gains the ability to set their default platform

* the default platform for all products initially will be all / all

* a "use my platform" action will be added to enter-bug, allowing the 
bug reporter to quickly change from the product's default


* a "from reporter" button will be visible when viewing untriaged bugs, 
which sets the platform to the reporter's



--
byron jones - :glob - bugzilla.mozilla.org team lead -

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Mon, Apr 13, 2015 at 7:13 PM, Karl Dubost  wrote:

> Richard,
>
> Le 13 avr. 2015 à 23:57, Richard Barnes  a écrit :
> > There's pretty broad agreement that HTTPS is the way forward for the web.
>
> Yes, but that doesn't make deprecation of HTTP a consensus.
>
> > In order to encourage web developers to move from HTTP to HTTPS, I would
> > like to propose establishing a deprecation plan for HTTP without
> security.
>
> This is not encouragement. This is called forcing. ^_^ Just so we are
> using the right terms for the right thing.
>

If so, then it's about the most gentle forcing we could do.  If your web
page works today over HTTP, it will continue working for a long time,
O(years) probably, until we get around to removing features you care about.

The idea of this proposal is to start communicating to web site operators
that in the *long* run, HTTP will no longer be viable, while giving them
time to transition.



> In the document
> >
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> You say:
> Phase 3: Essentially all of the web is HTTPS.
>
> I understand this is the last hypothetical step, but it sounds a bit like
> "let's move the Web to XML". That didn't work out very well.
>

The lack of XML doesn't enable things like the Great Cannon.
https://citizenlab.org/2015/04/chinas-great-cannon/



> I would love to have a more secure Web, but this can not happen without a
> few careful consideration.
>
> * A mandatory third party for certificates is a no-go. It
> creates a system of authority and power, an additional layer of hierarchy
> which deeply modifies the ability for anyone to publish and might in some
> circumstances increase the security risk.
>
> * If we have to rely on certificates, their cost must be zero, for the
> simple reason that not everyone is living in a rich industrialized country.
>

There are already multiple sources of free publicly-trusted certificates,
with more on the way.
https://www.startssl.com/
https://buy.wosign.com/free/
https://blog.cloudflare.com/introducing-universal-ssl/
https://letsencrypt.org/



> * Setup and publication through HTTPS should be as easy as HTTP. The Web
> brought publishing power to individuals. Imagine cases where you need
> to create a local network, develop the web on your own computer, or hack a
> server for your school, community, etc. If it relies on a heavy process, it
> will not happen.
>

I agree that we should work on this, and Let's Encrypt is making a big push
in this direction.  However, we're not that far off today.   Most hosting
platforms already allow HTTPS with only a few more clicks.  If you're
running your own server, there's lots of documentation, including
documentation provided by Mozilla:

https://mozilla.github.io/server-side-tls/ssl-config-generator/?1

In other words, this is a gradual plan, and while you've raised some
important things to work on, they shouldn't block us getting started.

--Richard




>
>
> So instead of a plan based on technical features, I would love to see a:
> "Let's move to a secure Web. What are the user scenarios we need to solve
> to achieve that?"
>
> These user scenarios are economical, social, etc.
>
>
> my 2 cents.
> So yes, but not the way it is introduced and planned now.
>
>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Dave Townsend
Are the platform fields actually useful? Most bugs apply to all platforms
and in the cases that don't it is normally clear from the bug conversation
that it is platform specific. It seems like we rarely go and update the
platform fields to match the actual state of the bug. And then there is the
problem that OS doesn't allow for multi-selections where, say, a bug affects
a few versions of Windows, or Windows and OSX. I've gotten used to just
ignoring these fields and reading the bugs instead. I wouldn't feel any
loss if they were just removed from display entirely.

On Tue, Apr 14, 2015 at 8:17 AM, Byron Jones  wrote:

> bugzilla has a set of fields, "hardware" and "operating system", that i'll
> collectively call "platform" in this post.  their default values are
> detected from the reporter's user-agent string when a bug is created.
>
> unfortunately on bmo, the platform fields have two distinctly different
> meanings: the reporter's platform and the platform a bug applies to. for
> too long have these two conflicting meanings coexisted within the same
> field, leading to confusion and a field that on many bugs is wrong or
> useless.
>
> thanks to bug 579089 we plan on making the following changes early next
> week:
>
> * each product gains the ability to set their default platform
>
> * the default platform for all products initially will be all / all
>
> * a "use my platform" action will be added to enter-bug, allowing the bug
> reporter to quickly change from the product's default
>
> * a "from reporter" button will be visible when viewing untriaged bugs,
> which sets the platform to the reporter's
>
>
> --
> byron jones - :glob - bugzilla.mozilla.org team lead -
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread david . a . p . lloyd

> There are already multiple sources of free publicly-trusted certificates,
> with more on the way.
> https://www.startssl.com/
> https://buy.wosign.com/free/
> https://blog.cloudflare.com/introducing-universal-ssl/
> https://letsencrypt.org/
> 

I think that you should avoid making this an exercise in marketing Mozilla's 
"Let's Encrypt" initiative.  "Let's Encrypt" is a great idea and definitely has 
a place in the world, but it's very important to be impartial.

In my mind there is no particular advantage in swapping lock-in from one CA to 
another.  Even if the Mozilla one is free.  
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread justin . kruger
Dynamic DNS might be difficult to run on HTTPS as the IP address needs to 
change when, say, your cable modem IP updates.  HTTPS-only would make running 
personal sites more difficult for individuals, and would make the internet 
slightly less democratic.

On Monday, April 13, 2015 at 7:57:58 AM UTC-7, Richard Barnes wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.
> 
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
> Broadly speaking, this plan would entail  limiting new features to secure
> contexts, followed by gradually removing legacy features from insecure
> contexts.  Having an overall program for HTTP deprecation makes a clear
> statement to the web community that the time for plaintext is over -- it
> tells the world that the new web uses HTTPS, so if you want to use new
> things, you need to provide security.  Martin Thomson and I drafted a
> one-page outline of the plan with a few more considerations here:
> 
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
> 
> Some earlier threads on this list [5] and elsewhere [6] have discussed
> deprecating insecure HTTP for "powerful features".  We think it would be a
> simpler and clearer statement to avoid the discussion of which features are
> "powerful" and focus on moving all features to HTTPS, powerful or not.
> 
> The goal of this thread is to determine whether there is support in the
> Mozilla community for a plan of this general form.  Developing a precise
> plan will require coordination with the broader web community (other
> browsers, web sites, etc.), and will probably happen in the W3C.
> 
> Thanks,
> --Richard
> 
> [1] https://tools.ietf.org/html/rfc7258
> [2]
> https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
> [3] https://w3ctag.github.io/web-https/
> [4] https://https.cio.gov/
> [5]
> https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
> [6]
> https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 10:53, justin.kru...@gmail.com wrote:

Dynamic DNS might be difficult to run on HTTPS as the IP address needs to 
change when, say, your cable modem IP updates.  HTTPS-only would make running 
personal sites more difficult for individuals, and would make the internet 
slightly less democratic.


I'm not sure I follow. I have a cert for a web site running on a dynamic 
address using DynDNS, and it works just fine. Certs are bound to names, 
not addresses.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: doppler computation in the PannerNode/AudioListener

2015-04-14 Thread Ehsan Akhgari

On 2015-04-14 9:07 AM, Paul Adenot wrote:

Additionally, if someone has an idea on what to link to/what to say in
the actual deprecation notice, I'm all ears, the current message is more
or less a placeholder.


Perhaps we can document this in more detail on MDN and link to the docs 
page?


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Boris Zbarsky

On 4/14/15 11:53 AM, justin.kru...@gmail.com wrote:

Dynamic DNS might be difficult to run on HTTPS as the IP address needs to 
change when, say, your cable modem IP updates.


Justin, I'm not sure I follow the problem here.  If I understand 
correctly, you're talking about a domain name, say "foo.bar", which is 
mapped to different IPs via dynamic DNS, and a website running on the 
machine behind the relevant cable modem, right?


Is the site being accessed directly via the IP address or via the 
foo.bar hostname?  Because if it's the latter, then a cert issued to 
foo.bar would work fine as the IP changes; certificates are bound to a 
hostname string (which can happen to have the form "123.123.123.123", of 
course), not an IP address.
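To illustrate the point, here is a toy check -- not real TLS name
validation, which also handles wildcards and subjectAltNames -- showing
why the changing IP never enters the comparison:

#include <string>

// Toy illustration: certificate validation compares the certificate's
// name against the hostname the user asked for, never against the IP
// address that name currently resolves to.
static bool NameMatches(const std::string& certName,
                        const std::string& requestedHost)
{
  return certName == requestedHost;
}

int main()
{
  // The cert for "foo.bar" stays valid while the A record flips between
  // 203.0.113.7 and 198.51.100.9 -- no IP address is ever compared.
  return NameMatches("foo.bar", "foo.bar") ? 0 : 1;
}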


And if the site is being accessed via (changeable) IP address, then how 
is dynamic DNS relevant?


I would really appreciate an explanation of what problem you're seeing 
here that I'm missing.


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Benjamin Smedberg



On 4/14/2015 11:40 AM, Dave Townsend wrote:

Are the platform fields actually useful? Most bugs apply to all platforms
and in the cases that don't it is normally clear from the bug conversation
that it is platform specific. It seems like we rarely go and update the
platform fields to match the actual state of the bug. And then there is the
problem that OS doesn't allow for multi-selections where, say, a bug affects
a few versions of Windows, or Windows and OSX. I've gotten used to just
ignoring these fields and reading the bugs instead. I wouldn't feel any
loss if they were just removed from display entirely.


I've suggested this before (and still think that's the right user 
experience). In fact this turns out to be really difficult to do within 
bugzilla because those fields are baked into bugzilla guts and removing 
them would require not just hiding them on the edit-bug page but also 
from the query pages and various other locations.


I do think we should try to stop using these fields for anything important.

--BDS
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Milan Sreckovic
Having that field be wrong is worse than not having it, no doubt about that, 
but if we had a way of making sure that field actually contains correct 
information, it would be extremely useful to the graphics team.

I don’t have the numbers, but it certainly feels like a large majority of the 
bugs we get are in fact platform specific.  Even with the current limitation of 
“if it’s more than one, it’s all”, that’s still better than no information at 
all.

If I was going with the “what can we do that’s fairly simple and improves 
things”, I’d create a new value for that field, “Not specified” (or something 
like that) and make that the default.  That way, we differentiate between the 
values that were explicitly set, in which case, I have to hope they will be 
more correct than they are today, and values that were just left at the default 
value.
—
- Milan



On Apr 14, 2015, at 12:18 , Benjamin Smedberg  wrote:

> 
> 
> On 4/14/2015 11:40 AM, Dave Townsend wrote:
>> Are the platform fields actually useful? Most bugs apply to all platforms
>> and in the cases that don't it is normally clear from the bug conversation
>> that it is platform specific. It seems like we rarely go and update the 
>> platform fields to match the actual state of the bug. And then there is the 
>> problem that OS doesn't allow for multi-selections where, say, a bug affects 
>> a few versions of Windows, or Windows and OSX. I've gotten used to just 
>> ignoring these fields and reading the bugs instead. I wouldn't feel any
>> loss if they were just removed from display entirely.
> 
> I've suggested this before (and still think that's the right user 
> experience). In fact this turns out to be really difficult to do within 
> bugzilla because those fields are baked into bugzilla guts and removing them 
> would require not just hiding them on the edit-bug page but also from the 
> query pages and various other locations.
> 
> I do think we should try to stop using these fields for anything important.
> 
> --BDS
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Andrew McCreight
On Tue, Apr 14, 2015 at 8:40 AM, Dave Townsend 
wrote:

> Are the platform fields actually useful? Most bugs apply to all platforms
> and in the cases that don't it is normally clear from the bug conversation
> that it is platform specific. It seems like we rarely go and update the
> platform fields to match the actual state of the bug. And then there is the
> problem that OS doesn't allow for multi-selections where, say, a bug affects
> a few versions of Windows, or Windows and OSX. I've gotten used to just
> ignoring these fields and reading the bugs instead. I wouldn't feel any
> loss if they were just removed from display entirely.
>

I find them useful occasionally, though now that I think about it,
specifying affected operating systems through keywords would probably be
more useful.  Hopefully they will become more useful with the bugzilla
changes, so you don't get things set to random OSes when they don't need to
be.

Andrew


>
> On Tue, Apr 14, 2015 at 8:17 AM, Byron Jones  wrote:
>
> > bugzilla has a set of fields, "hardware" and "operating system", that
> i'll
> > collectively call "platform" in this post.  their default values are
> > detected from the reporter's user-agent string when a bug is created.
> >
> > unfortunately on bmo, the platform fields have two distinctly different
> > meanings: the reporter's platform and the platform a bug applies to. for
> > too long have these two conflicting meanings coexisted within the same
> > field, leading to confusion and a field that on many bugs is wrong or
> > useless.
> >
> > thanks to bug 579089 we plan on making the following changes early next
> > week:
> >
> > * each product gains the ability to set their default platform
> >
> > * the default platform for all products initially will be all / all
> >
> > * a "use my platform" action will be added to enter-bug, allowing the bug
> > reporter to quickly change from the product's default
> >
> > * a "from reporter" button will be visible when viewing untriaged bugs,
> > which sets the platform to the reporter's
> >
> >
> > --
> > byron jones - :glob - bugzilla.mozilla.org team lead -
> >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-14 Thread Jan-Ivar Bruaroey

On 4/14/15 10:06 AM, Ehsan Akhgari wrote:

On 2015-04-13 1:36 PM, Jan-Ivar Bruaroey wrote:

The above is a raw pointer bug, not a lambda bug.


Yes, I'm aware.  Please see bug 1114683.


Thanks, I was not aware of this larger effort. This somewhat addresses 
my concern that we seem overly focused on lambdas and calling them 
unsafe when they're just another heap instantiation pattern.



  2. lambda capture use safer copy construction by default (hence the
 standout [&] above for reviewers).


There is nothing safe about copying raw pointers to refcounted objects.


There's nothing safe about copying raw pointers to heap objects, period.
Not buying that refcounted objects or lambdas are worse.


  So the assertion that lambda capture is safe by default is clearly
false.


I said "safer", as in lambdas' defaults are safer for many copyable 
types like nsRefPtr than manual instantiation is. They are not less safe 
than other things because of raw pointers.
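To spell that out with a compilable toy -- Foo and RefPtrLike below are
stand-ins, not Gecko classes, so the sketch builds on its own:

// RefPtrLike approximates an intrusive smart pointer such as nsRefPtr.
struct Foo {
  int mRefCnt = 1;
  void AddRef() { ++mRefCnt; }
  void Release() { if (--mRefCnt == 0) delete this; }
  void Bar() {}
};

template <class T>
struct RefPtrLike {
  T* p;
  explicit RefPtrLike(T* aPtr) : p(aPtr) {}                   // adopts a ref
  RefPtrLike(const RefPtrLike& aOther) : p(aOther.p) { p->AddRef(); }
  ~RefPtrLike() { if (p) p->Release(); }
  T* operator->() const { return p; }
};

int main()
{
  RefPtrLike<Foo> foo(new Foo());
  // By-value capture copies the smart pointer, which AddRefs: the object
  // lives at least as long as the lambda does.
  auto byValue = [foo] { foo->Bar(); };
  // By-reference capture (or capturing a raw Foo*) holds no reference at
  // all -- the unsafe pattern under discussion in this thread.
  auto byRef = [&foo] { foo->Bar(); };
  byValue();
  byRef(); // only safe here because foo is still in scope
  return 0;
}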



 > Lambdas will be much less useful if they can’t capture refcounted
objects, so I’m strongly against banning that.

+1.

Case in point, we use raw pointers with most runnables, a practice
established in NS_DispatchToMainThread [2]. Look in mxr/dxr for the 100+
uses of NS_DispatchToMainThread(new SomeRunnable()).


That is a terrible pattern that needs to be eliminated and not
replicated in new code.


This is a terrible pattern that we should not grandfather any code in 
under, so why is https://bugzil.la/833797 WONTFIX?


NS_DispatchToMainThread encourages use of raw pointers, so let's fix that 
for everyone. Until then, this seems to be the pattern.



> You can't argue [I think you mean 'for'] doing more terrible things
> because we do bad things elsewhere in the code.

I write non-exception C++ code every day for that very reason.


The new ban would prevent us from passing runnables to lambdas, like [3]

   MyRunnableBackToMain* runnable = new MyRunnableBackToMain();

   nsRefPtr<Promise<nsCString>> p = SomeAsyncFunc();
   p->Then([runnable](nsCString result) mutable {
     runnable->mResult = result;
     NS_DispatchToMainThread(runnable);

     // future Gecko hacker comes around and does:
     runnable->FooBar(); // oops, is runnable still valid? maybe not!
   });

So I think this ban is misguided. These are old sins not new to lambdas.


Thanks for the great example of this unsafe code.  I modified it to
inject a possible use after free in it.


Thanks for making my point! The ban would not catch the general problem:

MyRunnableBackToMain* runnable = new MyRunnableBackToMain();

NS_DispatchToMainThread(runnable);
// future Gecko hacker comes around and does:
runnable->FooBar(); // oops, is runnable still valid? maybe not!

I hope this shows that lambdas are a red herring here.


I think you're missing the intent of my proposal.  My goal is to prevent
a new class of sec-critical bugs that will creep into our code base
through this pattern.


It's not a new class, and the new kids on the bus are not why the bus is 
broken.



You seem to think that this is somehow a ban of
lambda usage; please re-read the original post.  The only thing that you
need to do in the code above to be able to still use lambdas is to make
runnable an nsRefPtr, and that makes the code safe against the issue
under discussion.


Can't hold nsRefPtr to main-destined runnables off main-thread.

I've filed https://bugzil.la/1154337 to discuss a solution to that.


Cheers,
Ehsan


.: Jan-Ivar :.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread jww
Hi Mozilla friends. Glad to see this proposal! As Richard mentions, we over on 
Chromium are working on a similar plan, albeit limited to "powerful features." 

I just wanted to mention that regarding subresource integrity 
(https://w3c.github.io/webappsec/specs/subresourceintegrity/), the general 
consensus over here is that we will not treat origins as secure if they are 
over HTTP but loaded with integrity. We believe that security includes 
confidentiality, which that approach would lack.
--Joel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread mh . in . england
> We believe that security includes confidentiality, which that approach
> would lack.

Hey Joel,

SSL already leaks which domain name you are visiting anyway, so the most 
confidentiality this can bring you is hiding the specific URL involved in a 
cache miss. That's a fairly narrow upgrade to confidentiality.

A scenario where it would matter: a MITM wishes to block viewing of a specific 
video on a video hosting site, but is unwilling to block the whole site. In 
such cases you would indeed want full SSL, assuming the host can afford it.

A scenario where it would not matter: some country wishes to fire a Great 
Cannon. There integrity is enough.

I think the case for requiring integrity for all connections is strong: malware 
injection is simply not on. The case for confidentiality of user data and 
cookies is equally clear. The case for confidentiality of cache misses of 
static assets is a bit less clear:  sites that host a lot of very different 
content like YouTube might care and a site where all the content is the same 
(e.g. a porn site) might feel the difference between a URL and a domain name is 
so tiny that it's irrelevant - they'd rather have the performance improvements 
from caching proxies. Sites that have a lot of users in developing countries 
might also feel differently to Google engineers with workstations hard-wired 
into the internet backbone ;)

Anyway, just my 2c.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Martin Thomson
On Tue, Apr 14, 2015 at 3:29 AM, Henri Sivonen  wrote:
> Specifically, on point #2, I think we should start by, by default,
> forgetting all cookies that don't have the "secure" flag set at the
> end of the Firefox session. Persistent cookies have two main use
> cases:
>  * On login-requiring sites, not requiring the user to have to
> re-enter credentials in every browser session.
>  * Behavioral profiling.

This is a reasonable proposal.  I think that this, as well as the
caching suggestion up-thread, fall into the general category of things
we've identified as "persistence" features.  Persistence has been
identified as one of the most dangerous aspects of the unsecured web.

I like this sort of approach, because it can be implemented at a much
lower https:// adoption rate (i.e., today's rate) than other more
obvious things.
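
A sketch of what the cookie half could look like -- Cookie and the
session-end hook are illustrative names, not Gecko's actual cookie
service:

#include <algorithm>
#include <string>
#include <vector>

// Illustrative model only -- not Gecko's real cookie service.
struct Cookie {
  std::string host;
  std::string name;
  bool isSecure; // set with the "secure" attribute over HTTPS
};

// At session end, keep only cookies carrying the "secure" flag;
// everything set over plain HTTP is forgotten, per Henri's proposal.
static void ForgetNonSecureCookies(std::vector<Cookie>& aJar)
{
  aJar.erase(std::remove_if(aJar.begin(), aJar.end(),
                            [](const Cookie& c) { return !c.isSecure; }),
             aJar.end());
}

int main()
{
  std::vector<Cookie> jar = {{"bank.example", "sess", true},
                             {"news.example", "track", false}};
  ForgetNonSecureCookies(jar);
  return jar.size() == 1 ? 0 : 1; // only the secure cookie survives
}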
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Eric Shepherd

Richard Barnes wrote:
As the owner of a Mac SE/30 with a 100MB Ethernet card, I 
sympathize.  However, consider it part of the challenge!  :)  There 
are definitely TLS stacks that work on some pretty small devices.
That's a lot faster machine than the ones I play with. My fastest retro 
machine is an 8-bit unit with a 10 MHz processor and 4 MB of memory, 
with a 10 Mbps ethernet card. And the ethernet is underutilized because 
the bus speed of the computer is too slow to come anywhere close to 
saturating the bandwidth available. :)


--

Eric Shepherd
Senior Technical Writer
Mozilla 
Blog: http://www.bitstampede.com/
Twitter: http://twitter.com/sheppy
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


MemShrink Meeting - Today, 14 Apr 2015 at 1:00pm PDT

2015-04-14 Thread Jet Villegas
The next MemShrink meeting is brought to you by delayed Desktop Reader
parsing:
https://bugzilla.mozilla.org/show_bug.cgi?id=1142183

The wiki page for this meeting is at:
   https://wiki.mozilla.org/Performance/MemShrink

Agenda:
* Prioritize unprioritized MemShrink bugs.
* Discuss how we measure progress.
* Discuss approaches to getting more data.

Meeting details:

* Tue, 14 Apr 2015, 1:00 PM PDT
*
http://arewemeetingyet.com/Los%20Angeles/2015-04-14/13:00/MemShrink%20Meeting
* Vidyo: Memshrink
* Dial-in Info:
   - In office or soft phone: extension 92
   - US/INTL: 650-903-0800 or 650-215-1282 then extension 92
   - Toll-free: 800-707-2533 then password 369
   - Conference num 98802
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-14 Thread Ehsan Akhgari

On 2015-04-14 12:41 PM, Jan-Ivar Bruaroey wrote:

  2. lambda capture use safer copy construction by default (hence the
 standout [&] above for reviewers).


There is nothing safe about copying raw pointers to refcounted objects.


There's nothing safe about copying raw pointers to heap objects, period.
Not buying that refcounted objects or lambdas are worse.


OK, feel free to disagree.  There is data that supports my position, 
but I cannot talk about it on a public list.  If you have 
security access, feel free to go through the list of sec-critical bugs 
that we fix to get a sense of how common the pattern of UAF with 
refcounted objects is.



  So the assertion that lambda capture is safe by default is clearly
false.


I said "safer", as in lambdas' defaults are safer for many copyable
types like nsRefPtr than manual instantiation is. They are not less safe
than other things because of raw pointers.


I have no position for or against what the lambda's defaults are. 
Again, it seems like you're under the impression that I'm against the 
usage of lambdas and are trying to defend lambdas.



 > Lambdas will be much less useful if they can’t capture refcounted
objects, so I’m strongly against banning that.

+1.

Case in point, we use raw pointers with most runnables, a practice
established in NS_DispatchToMainThread [2]. Look in mxr/dxr for the 100+
uses of NS_DispatchToMainThread(new SomeRunnable()).


That is a terrible pattern that needs to be eliminated and not
replicated in new code.


This is a terrible pattern that we should not grandfather any code in
under, so why is https://bugzil.la/833797 WONTFIX?


I don't know, I have not been involved in that.


NS_DispatchToMainThread encourages use of raw pointers,


Not sure why you think that.  Can you clarify why it does?  (Perhaps in 
a new thread since this is way off topic.)


so let's fix that for everyone. Until then, this seems to be the pattern.


I agree.  If there is a problem with NS_DispatchToMainThread, it needs 
to be fixed.  I have no idea why you think that should block fixing the 
issue that this thread is discussing.



The new ban would prevent us from passing runnables to lambdas, like [3]

   MyRunnableBackToMain* runnable = new MyRunnableBackToMain();

   nsRefPtr<Promise<nsCString>> p = SomeAsyncFunc();
   p->Then([runnable](nsCString result) mutable {
     runnable->mResult = result;
     NS_DispatchToMainThread(runnable);

     // future Gecko hacker comes around and does:
     runnable->FooBar(); // oops, is runnable still valid? maybe not!
   });

So I think this ban is misguided. These are old sins not new to lambdas.


Thanks for the great example of this unsafe code.  I modified it to
inject a possible use after free in it.


Thanks for making my point! The ban would not catch the general problem:

 MyRunnableBackToMain* runnable = new MyRunnableBackToMain();

 NS_DispatchToMainThread(runnable);
 // future Gecko hacker comes around and does:
 runnable->FooBar(); // oops, is runnable still valid? maybe not!

I hope this shows that lambdas are a red herring here.


No.  It just shows that my proposal doesn't completely fix all memory 
safety issues that you can have in C++.  ;-)  That's true, but it's not 
really relevant to the topic under discussion here.



I think you're missing the intent of my proposal.  My goal is to prevent
a new class of sec-critical bugs that will creep into our code base
through this pattern.


It's not a new class, and the new kids on the bus are not why the bus is
broken.


We're just talking past each other, I think.  Do you agree that the 
topic of this thread is a memory safety issue?  And if you agree on 
that, what exact issues do you see with my proposal?



You seem to think that this is somehow a ban of
lambda usage; please re-read the original post.  The only thing that you
need to do in the code above to be able to still use lambdas is to make
runnable an nsRefPtr, and that makes the code safe against the issue
under discussion.


Can't hold nsRefPtr to main-destined runnables off main-thread.


You can if the object has thread-safe reference counting.  For example, 
nsRunnable does, so you can hold nsRefPtrs to anything that inherits 
from it from any thread you want.


For the places where you have main-thread only objects besides 
runnables, all you need to do is to ensure they are released on the 
right thread, as bug 1154389 shows.
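
For example -- a sketch against Gecko's XPCOM APIs, just the earlier
MyRunnableBackToMain example rewritten on top of nsRunnable:

#include "nsThreadUtils.h" // nsRunnable, NS_IsMainThread

// Because nsRunnable's refcounting is thread-safe, an nsRefPtr to this
// class may be held -- and released -- on any thread; only Run() itself
// must execute on the main thread.
class MyRunnableBackToMain final : public nsRunnable
{
public:
  NS_IMETHOD Run() override
  {
    MOZ_ASSERT(NS_IsMainThread());
    // consume mResult on the main thread here
    return NS_OK;
  }

  nsCString mResult;
};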

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread connor . behan
HTTPS has its moments, but the majority of the web does not need it. I 
certainly wouldn't appreciate the encryption overhead just for visiting David's 
lolcats website. As one of the most important organizations related to free 
software, it's sad to see Mozilla developers join the war on plaintext: 
http://arc.pasp.de/ The owners of websites like this have a right to serve 
their pages in formats that do not make hypocrites of themselves.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread emmanueldeloget53
Hello, 

On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
>
> 
> 
> Thanks,
> --Richard

While I fully understand what's at stake here and the reasoning behind this, 
I'd like to ask an admittedly troll-like question : 

  Will Mozilla start to offer certificates to every single domain name owner?

Without that, your proposal tells me: either you pay for a certificate or you 
don't use the latest supported features on your personal (or professional) web 
site. This is a call for a revival of the "best viewed with XXX browser" 
banners. 

Making the warning page easier to bypass is a very, very bad idea. The warning 
page is here for a very good reason, and its primary function is to scare 
non-technical literate people so that they don't put themselves in danger. Make 
it less scary and you'll get the infamous Windows Vista UAC dialog boxes where 
people click OK without even reading the content.

The proposal fails to foresee another consequence of a full HTTPS web: the rise 
and fall of root CAs. If everyone needs to buy a certificate you can be sure 
that some companies will sell them for a low price, with limited background 
checks. These companies will be spotted - and their root CA will be revoked by 
browser vendors (this already happened in the past and I fail to see any reason 
why it would not happen again). Suddenly, a large portion of the web will be 
seen as even worse than "insecure HTTP" - it will be seen as "potentially 
dangerous HTTPS". The only way to avoid this situation is to put all the power 
in a very limited number of hands - then you'll witness a sharp rise in 
certificate prices.

Finally, Mozilla's motto is to keep the web open. Requiring one to pay a fee - 
even if it's a small one - in order to allow him to have a presence on the 
Intarweb is not helping.

Best regards, 

-- Emmanuel Deloget
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 15:35, emmanueldeloge...@gmail.com wrote:

Will Mozilla start to offer certificates to every single domain name owner?


Yes [1].

https://letsencrypt.org/



[1] I'll note that Mozilla is only one of several organizations involved 
in making this effort happen.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread northrupthebandgeek
On Monday, April 13, 2015 at 8:26:59 PM UTC-7, ipar...@gmail.com wrote:
> > * Less scary warnings about self-signed certificates (i.e. treat 
> > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with 
> > HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less 
> > secure than HTTP is - to put this as politely and gently as possible - a 
> > pile of bovine manure
> 
> I am against this. Both are insecure and should be treated as such. How is 
> your browser supposed to know that gmail.com is intended to serve a 
> self-signed cert? It's not, and it cannot possibly know it in the general 
> case. Thus it must be treated as insecure.

Except that one is encrypted, and the other is not.  *By logical measure*, the 
one that is encrypted but unauthenticated is more secure than the one that is 
neither encrypted nor authenticated, and the fact that virtually every 
HTTPS-supporting browser assumes the precise opposite is mind-boggling.

I agree that authentication/verification is necessary for security, but to 
pretend that encryption is a non-factor when it's the only actual subject of 
this thread as presented by its creator is asinine.

> 
> > * Support for a decentralized (blockchain-based, ala Namecoin?) certificate 
> > authority
> 
> No. Namecoin has so many other problems that it is not feasible.

Like?

And I'm pretty sure none of those problems (if they even exist) even remotely 
compare to the clusterfsck that is our current CA system.

> 
> > Basically, the current CA system is - again, to put this as gently and 
> > politely as possible - fucking broken.  Anything that forces the world to 
> > rely on it exclusively is not a solution, but is instead just going to make 
> > the problem worse.
> 
> Agree that it's broken. The fact that any CA can issue a cert for any domain 
> is stupid, always was and always will be. It's now starting to bite us.
> 
> However, HTTPS and the CA system don't have to be tied together. Let's ditch 
> the immediately insecure plain HTTP, then add ways to authenticate trusted 
> certs in HTTPS by means other than our current CA system. The two problems 
> are orthogonal, and trying to solve both at once will just leave us exactly 
> where we are: trying to argue for a fundamentally different system.

Indeed they don't, but with the current ecosystem they are, which is my point; 
by deprecating HTTP *and* continuing to treat self-signed certs as literally 
worse than Hitler *and* relying on the current CA system exclusively for 
verification of certificates, we're doing nothing to actually solve anything.

As orthogonal as those problems may seem, an HTTPS-only world will fail rather 
spectacularly without significant reform and refactoring on the CA side of 
things.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Chris Peterson

On 4/14/15 3:29 AM, Henri Sivonen wrote:

Specifically, on point #2, I think we should start by, by default,
forgetting all cookies that don't have the "secure" flag set at the
end of the Firefox session. Persistent cookies have two main use
cases:
  * On login-requiring sites, not requiring the user to have to
re-enter credentials in every browser session.
  * Behavioral profiling.


I searched for an existing bug to treat non-secure cookies as session 
cookies, but I couldn't find one.


However, I did find bug 530594 ("eternalsession"). Firefox's session 
restore, as the name suggests, restores session cookies even after the 
user quits and restarts the browser. This is somewhat surprising, but 
the glass-half-full perspective is that the negative effects of Henri's 
suggestion would be lessened (until bug 530594 is fixed).

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Runnables and thread-unsafe members

2015-04-14 Thread Randell Jesup
(was: Re: Proposal to ban the usage of refcounted objects inside C++
lambdas in Gecko)

tl;dr: We should make DispatchToMainThread take already_AddRefed and
move away from raw ptrs for Dispatches in general.

So:

What I want to avoid is this pattern for runnables that hold
thread-restricted pointers (mostly MainThread-restricted, such as JS
callbacks):

 FooRunnable::FooRunnable(nsCOMPtr<nsIFoo>& aFoo)
 {
   mFoo.swap(aFoo); // Never AddRef/Release off MainThread
 }
...
{
  nsRefPtr<FooRunnable> foo = new FooRunnable(myFoo);
  // RefCnt == 1, myFoo.get() == nullptr now
  NS_DispatchToMainThread(foo); // infallible (soon), takes a ref to foo (refcnt 2)

  // XXX what is foo's refcnt here?  You don't know!
}

The reason is that foo may go out of scope here *after* it has Run() and
had the event ref dropped on mainthread.  I.e.: at the XXX comment, it
may "usually" be 2, but sometimes may be 1, and when foo goes out of
scope we delete it here, and violate thread-safety for mFoo.

Blindly taking an NS_DispatchToMainThread(new FooRunnable) and
converting it to the pattern above (with nsRefPtr) will introduce a
timing-based thread safety violation IF it holds thread-locked items
(which isn't super-common).  But also, it's extra thread-safe
addref/releasing when we don't need to.

A safe pattern is this:
{
  nsRefPtr<FooRunnable> foo = new FooRunnable(myFoo);
  // RefCnt == 1, myFoo.get() == nullptr now
  // This requires adding a version of
  // NS_DispatchToMainThread(already_AddRefed) or some such
  NS_DispatchToMainThread(foo.forget()); // infallible (soon)
  // foo is nullptr
}

And to be honest, that's generally a better pattern for runnables
anyways - we *want* to give them away on Dispatch()/DispatchToMainThread().

Note that you can also do ehsan's trick for safety instead:

   ~FooRunnable()
   {
     if (!NS_IsMainThread()) {
       // get mainthread
       NS_ProxyRelease(mainthread, mFoo);
     }
   }

I don't love this though.  It adds almost-never-executed code, we
addref/release extra times, it could bitrot or forget to grow new
mainthread-only refs.  And it wastes code and adds semantic boilerplate
to ignore when reading code.

You could add a macro to build these and hide the boilerplate some, but
that's not wonderful either.

So, if I have my druthers:  (And really only #1 is needed, probably)

1) Add already_AddRefed variants of Dispatch/DispatchToMainThread, and
   convert things as needed to .forget() (Preferably most
   Dispatch/DispatchToMainThreads would become this.)  If the Dispatch()
   can fail and it's not a logic error somewhere (due to Dispatch to a
   doomed thread perhaps in some race conditions), then live with a leak
   - we can't send it there to die.  DispatchToMainThread() isn't
   affected by this and will be infallible soon.

2) If you're building a runnable with a complex lifetime and a
   MainThread or thread-locked object, also add the ProxyRelease
   destructor.  Otherwise consider simple boilerplate (for
   MainThread-only runnables) to do:
   
 ~FooRunnable() { if (!NS_IsMainThread()) { MOZ_CRASH(); } }

   (Perhaps DESTROYED_ON_MAINTHREAD_ONLY(FooRunnable)).  Might not be a
   bad thing to have in our pockets for other reasons.
   
3) Consider replacing nsCOMPtr<nsIFoo> mFoo in the runnable with
   nsReleaseOnMainThread<nsCOMPtr<nsIFoo>> mFoo and have
   ~nsReleaseOnMainThread() do NS_ProxyRelease, and also have
   nsReleaseOnMainThread block attempts to do assignments that AddRef
   (only set via swap or assign from already_AddRefed).  Unlike the
   MainThreadPtrHolder this could be created on other threads since it
   never AddRefs; it only inherits references and releases them on MainThread.

I wonder if we could move to requiring already_AddRefed for
DispatchToMainThread (and Dispatch?), and thus block all attempts to do
DispatchToMainThread(new FooRunnable), etc.  :-)
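
For concreteness, a rough sketch of what #3 could look like (names and
details are illustrative, not a finished patch):

#include "nsCOMPtr.h"
#include "nsProxyRelease.h"
#include "nsThreadUtils.h"

// A holder that never AddRefs -- it can only inherit a reference -- and
// that releases that reference on MainThread no matter where it dies.
template <class T>
class nsReleaseOnMainThread
{
public:
  nsReleaseOnMainThread() : mPtr(nullptr) {}
  explicit nsReleaseOnMainThread(already_AddRefed<T> aPtr)
    : mPtr(aPtr.take()) {}

  // No copying: a copy would require an AddRef, which this class forbids.
  nsReleaseOnMainThread(const nsReleaseOnMainThread&) = delete;
  nsReleaseOnMainThread& operator=(const nsReleaseOnMainThread&) = delete;

  ~nsReleaseOnMainThread()
  {
    if (!mPtr) {
      return;
    }
    if (NS_IsMainThread()) {
      NS_RELEASE(mPtr);
    } else {
      nsCOMPtr<nsIThread> mainThread;
      NS_GetMainThread(getter_AddRefs(mainThread));
      NS_ProxyRelease(mainThread, mPtr); // hands our ref to MainThread
    }
  }

  T* get() const { return mPtr; }

private:
  T* mPtr;
};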

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Runnables and thread-unsafe members

2015-04-14 Thread Kyle Huey
On Tue, Apr 14, 2015 at 2:39 PM, Randell Jesup  wrote:
> (was: Re: Proposal to ban the usage of refcounted objects inside C++
> lambdas in Gecko)
>
> tl;dr: We should make DispatchToMainThread take already_AddRefed and
> move away from raw ptrs for Dispatches in general.

Agreed.

- Kyle
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Runnables and thread-unsafe members

2015-04-14 Thread Bobby Holley
+1.

On Tue, Apr 14, 2015 at 2:42 PM, Kyle Huey  wrote:

> On Tue, Apr 14, 2015 at 2:39 PM, Randell Jesup 
> wrote:
> > (was: Re: Proposal to ban the usage of refcounted objects inside C++
> > lambdas in Gecko)
> >
> > tl;dr: We should make DispatchToMainThread take already_AddRefed and
> > move away from raw ptrs for Dispatches in general.
>
> Agreed.
>
> - Kyle
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Cameron Kaiser

On 4/14/15 10:38 AM, Eric Shepherd wrote:

Richard Barnes wrote:

As the owner of a Mac SE/30 with a 100MB Ethernet card, I
sympathize.  However, consider it part of the challenge!  :)  There
are definitely TLS stacks that work on some pretty small devices.

That's a lot faster machine than the ones I play with. My fastest retro
machine is an 8-bit unit with a 10 MHz processor and 4 MB of memory,
with a 10 Mbps ethernet card. And the ethernet is underutilized because
the bus speed of the computer is too slow to come anywhere close to
saturating the bandwidth available. :)


Candidly, and not because I still run such a site, I've always found 
Gopher to be a better fit for resource-constrained computing. The 
Commodore 128 sitting next to me does very well for that because the 
protocol and menu parsing conventions are incredibly trivial.


What is your 10MHz 8-bit system?

Cameron Kaiser
gopher://gopher.floodgap.com/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 16:32, northrupthebandg...@gmail.com wrote:

*By logical measure*, the [connection] that is encrypted but unauthenticated is 
more secure than the one that is neither encrypted nor authenticated, and the 
fact that virtually every HTTPS-supporting browser assumes the precise opposite 
is mind-boggling.


That depends on what kind of resource you're trying to access. If the 
resource you're trying to reach (in both circumstances) isn't demanding 
security -- i.e., it is an "http" URL -- then your logic is sound. 
That's the basis for enabling OE.


The problem here is that you're comparing:

 * Unsecured connections working as designed

with

 * Supposedly secured connections that have a detected security flaw


An "https" URL is a promise of encryption _and_ authentication; and when 
those promises are violated, it's a sign that something has gone wrong 
in a way that likely has stark security implications.


Resources loaded via an "http" URL make no such promises, so the 
situation isn't even remotely comparable.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread northrupthebandgeek
On Tuesday, April 14, 2015 at 5:39:24 AM UTC-7, Gervase Markham wrote:
> On 14/04/15 01:57, northrupt...@gmail.com wrote:
> > * Less scary warnings about self-signed certificates (i.e. treat
> > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> > with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> > treated as less secure than HTTP is - to put this as politely and
> > gently as possible - a pile of bovine manure 
> 
> http://gerv.net/security/self-signed-certs/ , section 3.

That whole article is just additional shovelfuls of bovine manure slopped onto 
the existing heap.

The article assumes that when folks connect to something via SSH and something 
changes - causing MITM-attack warnings and a refusal to connect - folks default 
to just removing the existing entry in ~/.ssh/known_hosts without actually 
questioning anything.  This conveniently ignores the fact that - when people do 
this - it's because they already know there's been a change (usually due to a 
server replacement); most folks (that I've encountered at least) *will* stop 
and think before editing their known_hosts if it's an unexpected change.

"The first important thing to note about this model is that key changes are an 
expected part of life."

Only if they've been communicated first.  In the vast majority of SSH 
deployments, a host key will exist at least as long as the host does (if not 
longer).  If one is going to criticize SSH's model, one should, you know, 
actually freaking understand it first.

"You can't provide [Joe Public] with a string of hex characters and expect it 
to read it over the phone to his bank."

Sure you can.  Joe Public *already* has to do this with social security 
numbers, credit card numbers, checking/savings account numbers, etc. on a 
pretty routine basis, whether it's over the phone, over the Internet, by mail, 
in person, or what have you.  What makes an SSH fingerprint any different?  The 
fact that now you have the letters A through F to read?  Please.

"Everyone can [install a custom root certificate] manually or the IT department 
can use the Client Customizability Kit (CCK) to make a custom Firefox. "

I've used the CCK in the past for Firefox customizations in enterprise 
settings.  It's a royal pain in the ass, and is not nearly as viable a solution 
as the article suggests (and the alternate suggestion of "oh just use the 
broken, arbitrarily-trusted CA system for your internal certs!" is a hilarious 
joke at best; the author of the article would do better as a comedian than as a 
serious authority when it comes to security best practices).

A better solution might be to do this on a client workstation level, but it's 
still a pain and usually not worth the trouble for smaller enterprises v. just 
sticking to the self-signed cert.

The article, meanwhile, also assumes (in the section before the one you've 
cited) that the CA system is immune to being compromised while DNS is 
vulnerable.  Anyone with a number of brain cells greater than or equal to one 
should know better than to take that assumption at face value.

> 
> But also, Firefox is implementing opportunistic encryption, which AIUI
> gives you a lot of what you want here.
> 
> Gerv

Then that needs to happen first.  Otherwise, this whole discussion is moot, 
since absolutely nobody in their right mind would want to be shoehorned into 
our current broken CA system without at least *some* alternative.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Richard Barnes
On Tue, Apr 14, 2015 at 5:59 PM,  wrote:

> On Tuesday, April 14, 2015 at 5:39:24 AM UTC-7, Gervase Markham wrote:
> > On 14/04/15 01:57, northrupt...@gmail.com wrote:
> > > * Less scary warnings about self-signed certificates (i.e. treat
> > > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> > > with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> > > treated as less secure than HTTP is - to put this as politely and
> > > gently as possible - a pile of bovine manure
> >
> > http://gerv.net/security/self-signed-certs/ , section 3.
>
> That whole article is just additional shovelfuls of bovine manure slopped
> onto the existing heap.
>
> The article assumes that when folks connect to something via SSH and
> something changes - causing MITM-attack warnings and a refusal to connect -
> folks default to just removing the existing entry in ~/.ssh/known_hosts
> without actually questioning anything.  This conveniently ignores the fact
> that - when people do this - it's because they already know there's been a
> change (usually due to a server replacement); most folks (that I've
> encountered at least) *will* stop and think before editing their
> known_hosts if it's an unexpected change.
>
> "The first important thing to note about this model is that key changes
> are an expected part of life."
>
> Only if they've been communicated first.  In the vast majority of SSH
> deployments, a host key will exist at least as long as the host does (if
> not longer).  If one is going to criticize SSH's model, one should, you
> know, actually freaking understand it first.
>
> "You can't provide [Joe Public] with a string of hex characters and expect
> it to read it over the phone to his bank."
>
> Sure you can.  Joe Public *already* has to do this with social security
> numbers, credit card numbers, checking/savings account numbers, etc. on a
> pretty routine basis, whether it's over the phone, over the Internet, by
> mail, in person, or what have you.  What makes an SSH fingerprint any
> different?  The fact that now you have the letters A through F to read?
> Please.
>
> "Everyone can [install a custom root certificate] manually or the IT
> department can use the Client Customizability Kit (CCK) to make a custom
> Firefox. "
>
> I've used the CCK in the past for Firefox customizations in enterprise
> settings.  It's a royal pain in the ass, and is not nearly as viable a
> solution as the article suggests (and the alternate suggestion of "oh just
> use the broken, arbitrarily-trusted CA system for your internal certs!" is
> a hilarious joke at best; the author of the article would do better as a
> comedian than as a serious authority when it comes to security best
> practices).
>
> A better solution might be to do this on a client workstation level, but
> it's still a pain and usually not worth the trouble for smaller enterprises
> v. just sticking to the self-signed cert.
>
> The article, meanwhile, also assumes (in the section before the one you've
> cited) that the CA system is immune to being compromised while DNS is
> vulnerable.  Anyone with a number of brain cells greater than or equal to
> one should know better than to take that assumption at face value.
>
> >
> > But also, Firefox is implementing opportunistic encryption, which AIUI
> > gives you a lot of what you want here.
> >
> > Gerv
>
> Then that needs to happen first.  Otherwise, this whole discussion is
> moot, since absolutely nobody in their right mind would want to be
> shoehorned into our current broken CA system without at least *some*
> alternative.
>

OE shipped in Firefox 37.  It's currently turned off pending a bugfix, but
it will be back soon.

--Richard



> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Joshua Cranmer 🐧

On 4/14/2015 4:59 PM, northrupthebandg...@gmail.com wrote:
> The article assumes that when folks connect to something via SSH and
> something changes - causing MITM-attack warnings and a refusal to
> connect - folks default to just removing the existing entry in
> ~/.ssh/known_hosts without actually questioning anything.  This
> conveniently ignores the fact that - when people do this - it's
> because they already know there's been a change (usually due to a
> server replacement); most folks (that I've encountered at least)
> *will* stop and think before editing their known_hosts if it's an
> unexpected change.
I've had an offending key at least 5 times. Only once did I seriously 
think to consider what specifically had changed to cause the ssh key to 
change. The other times, I assumed there was a good reason and deleted it.
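
(For readers who haven't lived with SSH: the model under debate is
trust-on-first-use - pin the key fingerprint seen on first contact, and
refuse to connect if it ever changes, until the user deliberately
removes the stored entry. A minimal sketch of that logic, not OpenSSH's
actual implementation:)

    #include <iostream>
    #include <map>
    #include <string>

    // First contact pins the fingerprint; any later mismatch is treated
    // as a possible MITM until the stored entry is removed by hand.
    static std::map<std::string, std::string> gKnownHosts;

    bool CheckHostKey(const std::string& aHost, const std::string& aFingerprint)
    {
      auto it = gKnownHosts.find(aHost);
      if (it == gKnownHosts.end()) {
        gKnownHosts[aHost] = aFingerprint;  // first use: trust and pin
        return true;
      }
      return it->second == aFingerprint;
    }

    int main()
    {
      CheckHostKey("server.example", "SHA256:aaaa");  // pinned on first use
      bool ok = CheckHostKey("server.example", "SHA256:bbbb");
      std::cout << (ok ? "key matches\n" : "WARNING: host key changed!\n");
    }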


This illustrates a very, very, very important fact about UX: the more 
often people see a dialog, the more routine it becomes to deal with 
it--you stop considering whether or not it applies, because it's always 
applied and it's just yet another step you have to go through to do it.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread commodorejohn
On Tuesday, April 14, 2015 at 2:51:32 PM UTC-7, Cameron Kaiser wrote:
> Candidly, and not because I still run such a site, I've always found 
> Gopher to be a better fit for resource-constrained computing. The 
> Commodore 128 sitting next to me does very well for that because the 
> protocol and menu parsing conventions are incredibly trivial.
Certainly true on a "how well can it keep up?" level, but unfortunately 
precious few sites support Gopher these days, so while it may be a better 
fit, it offers vastly more constricted access to online resources.
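
(To make the "incredibly trivial" point concrete: per RFC 1436, a Gopher
menu line is one item-type character plus four tab-separated fields -
display string, selector, host, port. A sketch of a parser, not any real
client's code:)

    #include <iostream>
    #include <sstream>
    #include <string>

    struct GopherItem
    {
      char type;  // e.g. '0' text file, '1' submenu
      std::string display, selector, host, port;
    };

    // Split one menu line on tabs; this is everything a client needs to
    // render a menu entry and fetch what it points at.
    bool ParseMenuLine(const std::string& aLine, GopherItem& aOut)
    {
      if (aLine.empty()) {
        return false;
      }
      aOut.type = aLine[0];
      std::istringstream rest(aLine.substr(1));
      return std::getline(rest, aOut.display, '\t') &&
             std::getline(rest, aOut.selector, '\t') &&
             std::getline(rest, aOut.host, '\t') &&
             std::getline(rest, aOut.port) ? true : false;
    }

    int main()
    {
      GopherItem item;
      if (ParseMenuLine("1Mozilla news\t/news\tgopher.example.com\t70", item)) {
        std::cout << item.display << " -> " << item.host << ":" << item.port << "\n";
      }
    }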
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Justin Dolske

On 4/14/15 8:40 AM, Dave Townsend wrote:

I've gotten used to just
ignoring these fields and reading the bugs instead. I wouldn't feel any
loss if they were just removed from display entirely.


+1. The fields are broadly unreliable, and we should just remove them. I 
think I've seen the whiteboard or summary used more often as a way to 
indicate that a bug is specific to a particular platform.


Justin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Eric Rescorla
+1

On Tue, Apr 14, 2015 at 3:59 PM, Justin Dolske  wrote:

> On 4/14/15 8:40 AM, Dave Townsend wrote:
>
>> I've gotten used to just
>> ignoring these fields and reading the bugs instead. I wouldn't feel any
>> loss if they were just removed from display entirely.
>>
>
> +1. The fields are broadly unreliable, and we should just remove them. I
> think I've seen the whiteboard or summary used more often as a way to
> indicate that a bug is specific to a particular platform.
>
> Justin
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Lawrence Mandel
+1 to Milan's suggestion. These fields are used somewhat consistently on
stability and graphics bugs, which release management pays attention to. If
we are going to continue with the fields, I like the idea of "Not
specified" as that makes it clear that no value was set whereas "All" is
currently used if the bug affects all platforms or if we don't know.

Lawrence

On Tue, Apr 14, 2015 at 12:36 PM, Milan Sreckovic 
wrote:

> Having that field be wrong is worse than not having it, no doubt about
> that, but if we had a way of making sure that field actually contains
> correct information, it would be extremely useful to the graphics team.
>
> I don’t have the numbers, but it certainly feels like a large majority of
> the bugs we get are in fact platform specific.  Even with the current
> limitation of “if it’s more than one, it’s all”, that’s still better than
> no information at all.
>
> If I was going with the “what can we do that’s fairly simple and improves
> things”, I’d create a new value for that field, “Not specified” (or
> something like that) and make that the default.  That way, we differentiate
> between the values that were explicitly set, in which case, I have to hope
> they will be more correct than they are today, and values that were just
> left at the default value.
> —
> - Milan
>
>
>
> On Apr 14, 2015, at 12:18 , Benjamin Smedberg 
> wrote:
>
> >
> >
> > On 4/14/2015 11:40 AM, Dave Townsend wrote:
> >> Are the platform fields actually useful? Most bugs apply to all
> >> platforms and in the cases that don't it is normally clear from the bug
> >> conversation that it is platform specific. It seems like we rarely go
> >> and update the platform fields to match the actual state of the bug.
> >> And then there is the problem that OS doesn't allow for multi-selections
> >> where say a bug affects a few versions of Windows or Windows and OSX.
> >> I've gotten used to just ignoring these fields and reading the bugs
> >> instead. I wouldn't feel any loss if they were just removed from display
> >> entirely.
> >
> > I've suggested this before (and still think that's the right user
> > experience). In fact this turns out to be really difficult to do within
> > bugzilla, because those fields are baked into bugzilla's guts and
> > removing them would require hiding them not just on the edit-bug page
> > but also on the query pages and in various other locations.
> >
> > I do think we should try to stop using these fields for anything
> > important.
> >
> > --BDS
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Cameron Kaiser

On 4/14/15 3:47 PM, commodorej...@gmail.com wrote:

On Tuesday, April 14, 2015 at 2:51:32 PM UTC-7, Cameron Kaiser wrote:

Candidly, and not because I still run such a site, I've always found
Gopher to be a better fit for resource-constrained computing. The
Commodore 128 sitting next to me does very well for that because the
protocol and menu parsing conventions are incredibly trivial.


Certainly true on a "how well can it keep up?" level, but unfortunately
precious few sites support Gopher these days, so while it may be a better
fit it offers vastly more constricted access to online resources.


The counter argument is, of course, that the "modern Web" (however you 
define it) is effectively out of reach of computers older than a decade 
or so, let alone an 8-bit system, due to loss of vendor or browser 
support, or just plain not being up to the task. So even if they could 
access the pages, meaningfully displaying them is another thing 
entirely. I won't dispute the much smaller amount of content available 
in Gopherspace, but it's still an option that has *some* support, and 
that support is often in the retrocomputing community already.


Graceful degradation went out the window a couple years back, unfortunately.

Anyway, I'm derailing the topic, so I'll put a sock in it now.
Cameron Kaiser
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Karl Dubost
Henri,
great points, about…

On 14 Apr 2015, at 19:29, Henri Sivonen  wrote:
> Currently, the UI designation for http is neutral while the UI
> designation for mixed content is undesirable. I think we should make
> the UI designation of plain http undesirable once x% the sites that
> users encounter on a daily basis are https.

What about changing the grey world icon for http into something more 
telling - an icon that would mean "eavesdropping possible"? But yes, UI 
should be part of the work.

About mixed content, insecure Web sites, wrong certificates, etc., people 
should head to these bugs to understand that it's not that simple.

* https://bugzilla.mozilla.org/show_bug.cgi?id=1126620
  [Bug 1126620] [META] TLS 1.1/1.2 version intolerant sites
* https://bugzilla.mozilla.org/show_bug.cgi?id=1138101
  [Bug 1138101] [META] Sites that still haven't upgraded to something better 
than RC4
* https://bugzilla.mozilla.org/show_bug.cgi?id=844556
  [Bug 844556] [tracking] compatibility issues with mixed content blocker on 
non-Mozilla websites


For Web Compatibility, dropping non-secure cookies would be an interesting 
experiment, to survey how much it breaks (or not) the Web and the user 
experience.
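
(For clarity, "non-secure cookies" here means cookies set without the
Secure attribute of RFC 6265, so they are also sent over plain http://,
where they can be read or altered in transit. Hypothetical values:)

    Set-Cookie: SID=31d4d96e407aad42                    <- also sent over http://
    Set-Cookie: SID=31d4d96e407aad42; Secure; HttpOnly  <- sent only over https://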


-- 
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to ship: Enabling TSF mode in release builds (Vista or later)

2015-04-14 Thread Masayuki Nakano
TSF (Text Services Framework) is a newer API for IME on Windows. It 
supports not only CJK IMEs but also other input methods, e.g., speech 
input systems and handwriting systems.

https://msdn.microsoft.com/en-us/library/windows/desktop/ms629032%28v=vs.85%29.aspx

This will be enabled only on Vista or later since TSF on WinXP (and 
WinServer 2k3) has a lot of problems.


Additionally, we currently disable TSF when e10s is enabled, since there 
are some critical bugs when using IME in that mode.
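
(A sketch of the gating just described - the names are illustrative, not 
the actual widget code:)

    bool ShouldUseTSF(bool aIsVistaOrLater, bool aIsE10sEnabled)
    {
      // Use TSF only on Vista or later, and only when e10s is off;
      // otherwise keep the legacy IMM32 path.
      return aIsVistaOrLater && !aIsE10sEnabled;
    }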


This has already been enabled in Nightly builds for a long time, and in 
Aurora builds since 38. So, if any critical bugs are reported, we can 
still disable it before release.


The bug to turn on:
https://bugzilla.mozilla.org/show_bug.cgi?id=478029

--
Masayuki Nakano 
Manager, Internationalization, Mozilla Japan.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Runnables and thread-unsafe members

2015-04-14 Thread Jan-Ivar Bruaroey

On 4/14/15 5:39 PM, Randell Jesup wrote:

I wonder if we could move to requiring already_AddRefed for
DispatchToMainThread (and Dispatch?), and thus block all attempts to do
DispatchToMainThread(new FooRunnable), etc.  :-)


Yes! +1.

I like the .forget() semantics, but just to have options, here's a 
variation that might even be simpler: https://bugzil.la/1154337#c13
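
To make the shape of this concrete, a minimal sketch (illustrative 
declarations, not the actual xpcom ones, and FooRunnable is just a 
stand-in):

    #include "nsAutoPtr.h"      // nsRefPtr (2015-era Gecko)
    #include "nsThreadUtils.h"  // nsRunnable

    // Illustrative only: taking already_AddRefed means a bare
    // `new FooRunnable()` no longer compiles, so callers must transfer
    // the reference explicitly.
    nsresult DispatchToMainThread(already_AddRefed<nsIRunnable> aRunnable);

    class FooRunnable final : public nsRunnable
    {
      NS_IMETHOD Run() override { return NS_OK; }
    };

    void Example()
    {
      nsRefPtr<FooRunnable> runnable = new FooRunnable();
      // DispatchToMainThread(new FooRunnable());  // would now fail to compile
      DispatchToMainThread(runnable.forget());     // reference handed off
    }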


.: Jan-Ivar :.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: changing the default platform and operating-system on bugzilla.mozilla.org to all / all

2015-04-14 Thread Byron Jones

Lawrence Mandel wrote:
+1 to Milan's suggestion. These fields are used somewhat consistently 
on stability and graphics bugs, which release management pays 
attention to. If we are going to continue with the fields, I like the 
idea of "Not specified" as that makes it clear that no value was set 
whereas "All" is currently used if the bug affects all platforms or if 
we don't know.
thanks for the feedback.  defaulting to "unspecified" instead of "all" 
is a great idea.


i've updated the bug and will proceed along that path unless there are 
strong objections (keeping in mind that individual products will still 
be able to default to all/all should the owners desire that).



-glob

--
byron jones - :glob - bugzilla.mozilla.org team lead -

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform