Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-13 Thread Jan-Ivar Bruaroey

On 4/10/15 4:26 PM, smaug wrote:

I'd say that is rather painful for reviewers, since both Move() (I
prefer .swap()) and lambda hide what is actually happening to the refcnt.


Wanna ban copy construction? ;)

Higher-level constructs inherently hide something, but I disagree they 
make things harder to understand and read, quite the opposite.


And nsRefPtr's copy constructor is probably the safest part of that class.


So easy to forget to use nsCOMPtr explicitly there.

We should emphasize easy-to-read-and-understand code over fast-to-write.


I agree with your emphasis, however I draw the opposite conclusion.

The death to readability is the boilerplate that lambdas replace IMHO.

I find our existing runnable code hard to reason about because I have to 
jump between indirections to lots of disjunct classes, just to follow 
the flow of code, not to mention all the boilerplate needed simply to 
pass values forward.


I find a parallel here with callbacks vs promises in JavaScript.  The 
Promise chaining pattern relies on inline function definitions to make 
the flow of code match the flow of reading, making it easier to reason 
about and review.


.: Jan-Ivar :.


-Olli










___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Frederik Braun
On 13.04.2015 20:52, david.a.p.ll...@gmail.com wrote:
 
 2) Protected by subresource integrity from a secure host

 This would allow website operators to securely serve static assets from 
 non-HTTPS servers without MITM risk, and without breaking transparent 
 caching proxies.
 
 Is that a complicated word for SHA512 HASH? :)  You could envisage a new http 
 URL pattern http://video.vp9?SHA512-HASH

I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -

But, note that this will not give you extra security UI (or fewer
warnings): Browsers will still disable scripts served over HTTP on an
HTTPS page - even if the integrity matches.

This is because HTTPS promises integrity, authenticity and
confidentiality. SRI only provides the first of these.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Gervase Markham
On 13/04/15 15:57, Richard Barnes wrote:
 Martin Thomson and I drafted a
 one-page outline of the plan with a few more considerations here:
 
 https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Are you sure "privileged contexts" is the right phrase? Surely contexts
are "secure", and APIs or content is "privileged" by being only
available in a secure context?

There's nothing wrong with your plan, but that's partly because it's
hard to disagree with your principle, and the plan is pretty high level.
I think the big arguments will be over when and what features require a
secure context, and how much breakage we are willing to tolerate.

I know the Chrome team have a similar plan; is there any suggestion that
we might coordinate on feature re-privilegings?

Would we put an error on the console when a privileged API was used in
an insecure context?

Gerv

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread mh . in . england
 In order to encourage web developers to move from HTTP to HTTPS, I would
 like to propose establishing a deprecation plan for HTTP without security.

May I suggest defining security here as either:

1) A secure host (SSL)

or

2) Protected by subresource integrity from a secure host

This would allow website operators to securely serve static assets from 
non-HTTPS servers without MITM risk, and without breaking transparent caching 
proxies.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-13 Thread Jan-Ivar Bruaroey

On 4/10/15 2:09 PM, Seth Fowler wrote:

On Apr 10, 2015, at 8:46 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

I would like to propose that we should ban the usage of refcounted objects
inside lambdas in Gecko.  Here is the reason:

Consider the following code:

nsINode* myNode;
TakeLambda([&]() {
  myNode->Foo();
});

There is nothing that guarantees that the lambda passed to TakeLambda is
executed before myNode is destroyed somehow.


The above is a raw pointer bug, not a lambda bug. The above w/o lambdas:

class MyRunnable
{
public:
  MyRunnable(nsINode* aMyNode) : mMyNode(aMyNode) {}
  void Run() { mMyNode->Foo(); }
private:
  nsINode* mMyNode;
};

nsINode* myNode;
TakeFunc(new MyRunnable(myNode));

That's just as bad, and harder to spot! [1]

IMHO the use of lambdas helps spot the problem, by

 1. Being more precise (less boilerplate junk for bugs to hide in), and

 2. lambda captures use safer copy construction by default (hence the
standout [&] above for reviewers) - see the sketch below.
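
For illustration, a minimal sketch of the copy-capture case (GetNode() and
TakeLambda are illustrative stand-ins; assume TakeLambda stores the callable
and runs it later):

  // nsCOMPtr's copy constructor addrefs, so the by-value capture keeps the
  // node alive for as long as the stored lambda exists.
  nsCOMPtr<nsINode> myNode = GetNode();  // GetNode() is hypothetical
  TakeLambda([myNode]() {                // capture by value, not by reference
    myNode->Foo();                       // safe: the captured nsCOMPtr holds a ref
  });
  // When the stored lambda is destroyed, the captured nsCOMPtr releases the node.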

 Lambdas will be much less useful if they can’t capture refcounted 
objects, so I’m strongly against banning that.


+1.

Case in point, we use raw pointers with most runnables, a practice 
established in NS_DispatchToMainThread [2]. Look in mxr/dxr for the 100+ 
uses of NS_DispatchToMainThread(new SomeRunnable()).


The new ban would prevent us from passing runnables to lambdas, like [3]

  MyRunnableBackToMain* runnable = new MyRunnableBackToMain();

  nsRefPtr<media::ChildPledge<nsCString>> p = SomeAsyncFunc();
  p->Then([runnable](nsCString result) mutable {
    runnable->mResult = result;
    NS_DispatchToMainThread(runnable);
  });

So I think this ban is misguided. These are old sins not new to lambdas.


- Seth


.: Jan-Ivar :.

[1] Bonus if you caught that new MyRunnable() is a raw pointer as well.

[2] 
http://mxr.mozilla.org/mozilla-central/source/xpco/glue/nsThreadUtils.cpp?mark=164-168#164


[3] 
http://mxr.mozilla.org/mozilla-central/source/dom/media/MediaManager.cpp?mark=1516-1521#1501


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread david . a . p . lloyd

 2) Protected by subresource integrity from a secure host
 
 This would allow website operators to securely serve static assets from 
 non-HTTPS servers without MITM risk, and without breaking transparent caching 
 proxies.

Is that a complicated word for SHA512 HASH? :)  You could envisage a new http 
URL pattern http://video.vp9?SHA512-HASH
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Gervase Markham
On 13/04/15 18:40, DDD wrote:
 I think that you'll need to define a number of levels of security, and decide 
 how to distinguish them in the Firefox GUI:
 
 - Unauthenticated/Unencrypted [http]
 - Unauthenticated/Encrypted   [https ignoring untrusted cert warning]
 - DNS based auth/Encrypted[TLSA certificate hash in DNS]
 - Ditto with TLSA/DNSSEC 
 - Trusted CA Authenticated[Any root CA]
 - EV Trusted CA   [Special policy certificates]

I'm not quite sure what this has to do with the proposal you are
commenting on, but I would politely ask you how many users you think are
both interested in, able to understand, and willing to take decisions
based on _six_ different security states in a browser?

The entire point of this proposal is to reduce the web to 1 security
state - secure.

Gerv


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-13 Thread Jan-Ivar Bruaroey

On 4/13/15 1:36 PM, Jan-Ivar Bruaroey wrote:

[2]
http://mxr.mozilla.org/mozilla-central/source/xpco/glue/nsThreadUtils.cpp?mark=164-168#164


working link I hope:

http://mxr.mozilla.org/mozilla-central/source/xpcom/glue/nsThreadUtils.cpp?mark=164-168#164

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread DDD
 
 Note that Firefox does not presently support either DANE or DNSSEC,
 so we don't need to distinguish these.
 
 -Ekr
 

Nor does Chrome, and look what happened to both browsers...

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

...the keys to the castle are in the DNS registration process.  It is illogical 
not to add TLSA support.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread DDD
I think that you'll need to define a number of levels of security, and decide 
how to distinguish them in the Firefox GUI:

- Unauthenticated/Unencrypted [http]
- Unauthenticated/Encrypted   [https ignoring untrusted cert warning]
- DNS based auth/Encrypted[TLSA certificate hash in DNS]
- Ditto with TLSA/DNSSEC 
- Trusted CA Authenticated[Any root CA]
- EV Trusted CA   [Special policy certificates]

Ironically, your problem is more a GUI thing.  All the security technology you 
need actually exists already...
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement and ship: document.scrollingElement

2015-04-13 Thread Boris Zbarsky
Summary: A property that makes it possible for web pages to tell which 
element's scroll* attributes reflect the viewport scroll state.  This is 
needed because currently web pages have different codepaths (using 
document.body vs document.documentElement) for different browsers based 
on UA sniffing, which keeps Blink and WebKit from switching to our (and 
the standard's) behavior.  With this property the page could have a 
single codepath using document.scrollingElement and ditch the UA 
sniffing and then Blink/WebKit could switch their behavior, as long as 
they change both scroll* and .scrollingElement at the same time.


Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1153322 which has 
links to some of the other discussion.


Spec: http://dev.w3.org/csswg/cssom-view/#dom-document-scrollingelement

Platforms: All of them.

Preference: none.

DevTools bug: Don't think this needs devtools work.

Support in other UAs: Chrome is actively working on this, so they can 
update to the spec for scroll* sooner rather than later.


Target release: Gecko 40.

-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Eric Rescorla
On Mon, Apr 13, 2015 at 10:40 AM, DDD david.a.p.ll...@gmail.com wrote:

 I think that you'll need to define a number of levels of security, and
 decide how to distinguish them in the Firefox GUI:

 - Unauthenticated/Unencrypted [http]
 - Unauthenticated/Encrypted   [https ignoring untrusted cert warning]
 - DNS based auth/Encrypted[TLSA certificate hash in DNS]
 - Ditto with TLSA/DNSSEC


Note that Firefox does not presently support either DANE or DNSSEC,
so we don't need to distinguish these.

-Ekr




 - Trusted CA Authenticated[Any root CA]
 - EV Trusted CA   [Special policy certificates]

 Ironically, your problem is more a GUI thing.  All the security technology
 you need actually exists already...
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Martin Thomson
On Mon, Apr 13, 2015 at 12:11 PM, Gervase Markham g...@mozilla.org wrote:
 Are you sure privileged contexts is the right phrase? Surely contexts
 are secure, and APIs or content is privileged by being only
 available in a secure context?

There was a long-winded group bike-shed-painting session on the
public-webappsec list and this is the term they ended up with.  I
don't believe that it is the right term either, FWIW.

 There's nothing wrong with your plan, but that's partly because it's
 hard to disagree with your principle, and the plan is pretty high level.
 I think the big arguments will be over when and what features require a
 secure context, and how much breakage we are willing to tolerate.

Not much, but maybe more than we used to.

 I know the Chrome team have a similar plan; is there any suggestion that
 we might coordinate on feature re-privilegings?

Yes, the intent is definitely to collaborate, as the original email
stated.  Chrome isn't the only stakeholder, which is why we suggested
that we go to the W3C so that the browser formerly known as IE and
Safari are included.

 Would we put an error on the console when a privileged API was used in
 an insecure context?

Absolutely.  That's likely to be a first step once the targets have
been identified.  That pattern has already been established for bad
crypto and a bunch of other things that we don't like but are forced
to tolerate for compatibility reasons.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread david . a . p . lloyd
 I would politely ask you how many users you think are
 both interested in, able to understand, and willing to take decisions
 based on _six_ different security states in a browser?

I think this thread is about deprecating things and moving developers onto more 
secure platforms.  To do that, you'll need to tell me *why* I need to make the 
effort.  The only thing that I am going to care about is to get users closer to 
that magic green bar and padlock icon.

You may hope that security is black and white, but in practice it isn't.  There 
is always going to be a sliding scale.  Do you show me a green bar and padlock 
if I go to www.google.com, but the certificate is issued by my intranet?  Do 
you show me the same certificate error I'd get as if I was connecting to a 
clearly malicious certificate?

What if I go to www.google.com, but the certificate has been issued incorrectly 
because Firefox ships with 500 equally trusted root certificates? 


So - yeah, you're going to need a rating system for your security:  A, B, C, D, 
Fail.  You're going to have to explain what situations get you into what group, 
how as a developer I can move to a higher group (e.g. add a certificate hash 
into DNS, get an EV certificate costing $10,000, implement DNSSEC, use PFS 
ciphersuites and you get an A rating). I'm sure that there'll be new security 
vulnerabilities and best practice in future, too.

Then it is up to me as a developer to decide how much effort I can 
realistically put into this...

...for my web-site containing pictures of cats...
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread commodorejohn
Great, peachy, more authoritarian dictation of end-user behavior by the Elite 
is just what the Internet needs right now. And hey, screw anybody trying to use 
legacy systems for anything, right? Right!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread ipartola
I have given this a lot of thought lately, and to me the only way forward is to 
do exactly what is suggested here: phase out and eventually drop plain HTTP 
support. There are numerous reasons for doing this:

- Plain HTTP allows someone to snoop on your users.

- Plain HTTP allows someone to misrepresent your content to the users.

- Plain HTTP is a great vector for phishing, as well as injecting malicious 
code that comes from your domain.

- Plain HTTP provides no guarantees of identity to the user. Arguably, the 
current HTTPS implementation doesn't do much to fix this, but more on this 
below.

- Lastly, arguing that HTTP is cheaper than HTTPS is going to be much harder 
once there are more providers giving away free certs (looking at StartSSL and 
Let's Encrypt).

My vision would be that HTTP should be marked with the same warning (except for 
wording of course) as an HTTPS site secured by a self-signed cert. In terms of 
security, they are more or less equivalent, so there is no reason to treat them 
differently. This should be the goal.

There are problems with transitioning to giving a huge scary warning for HTTP. 
They include:

- A large number of sites that don't support HTTPS. To fix this, I think the 
best method is to show the "http://" part of the URL in red, and publicly 
announce that over the next X months Firefox is moving to the model of giving a 
big scary warning a la self-signed cert warning if HTTPS is not enabled.

- A large number of corporate intranets that run plain HTTP. Perhaps a 
build-time configuration could be enabled that would enable system 
administrators to ignore the warning for certain subdomains or the RFC 1918 
addresses as well as localhost. Note that carrier grade NAT in IPv4 might make 
the latter a bad choice by default.

- Ad supported sites report a drop in ad revenue when switching to HTTPS. I 
don't know what the problem or solution here is, but I am certain this is a big 
hurdle for some sites.

- Lack of free wildcard certificates. Ideally, Let's Encrypt should provide 
these.

- Legacy devices that cannot be upgraded to support HTTPS or only come with 
self-signed certificates. This is a problem that can be addressed by letting 
the user bypass the scary warning (just like with self-signed certs).

Finally, some people conflate the idea of a global transition from plain HTTP 
to HTTPS as a move by CA's to make more money. They might argue that first, we 
need to get rid of CA's or provide an alternative path for obtaining 
certificates. I disagree. Switching from plain HTTP to HTTPS is step one. Step 
two might include adding more avenues for establishing trust and 
authentication. There is no reason to try to add additional methods of 
authenticating the servers while still allowing them to use no encryption at 
all. Let's kill off plain HTTP first, then worry about how to fix the CA 
system. Let's Encrypt will of course make this a lot easier by providing free 
certs.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread bryan . beicker
One limiting factor is that Firefox doesn't treat form data the same on HTTPS 
sites.

Examples:

http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor

http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button

After losing a few forum posts or wiki edits to this bug in Firefox, you 
quickly insist on using unsecured HTTP as often as possible.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Joshua Cranmer 

On 4/13/2015 3:29 PM, stu...@testtrack4.com wrote:

HTTP should remain optional and fully-functional, for the purposes of 
prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a 
development server running on my local machine, or to debug which point before 
hitting the TLS layer is corrupting requests.


If you actually go to read the details of the proposal rather than 
relying only on the headline, you'd find that there is an intent to 
actually let you continue to use http for, e.g., localhost. The exact 
boundary between secure HTTP and insecure HTTP is being actively 
discussed in other forums.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread byuusan
On Monday, April 13, 2015 at 3:36:56 PM UTC-4, commod...@gmail.com wrote:
 Great, peachy, more authoritarian dictation of end-user behavior by the Elite 
 is just what the Internet needs right now. And hey, screw anybody trying to 
 use legacy systems for anything, right? Right!

Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open 
season for someone to fork/make a new browser with HTTP support, and gain an 
instant 30% market share. These guys have run amok with major decisions (like 
the HTTP/2 TLS mandate) because of a lack of competition.

These guys can go around thinking they're secure while trusting root CAs like 
CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track 
with a new, sane browser. While we're at it, we could start treating 
self-signed certs like we do SSH, rather than as being *infinitely worse* than 
HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary 
public to accept a self-signed cert yet. But I shouldn't give them any ideas 
...)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Boris Zbarsky

On 4/13/15 5:11 PM, bryan.beic...@gmail.com wrote:

After losing a few forum posts or wiki edits to this bug in Firefox, you 
quickly insist on using unsecured HTTP as often as possible.


This is only done in cases in which the page explicitly requires that 
nothing about the page be cached (no-cache), yes?


That said, we should see if we can stop doing the state-not-saving thing 
for SSL+no-cache and tell banks who want it to use no-store.


-Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread ipartola
On Monday, April 13, 2015 at 4:43:25 PM UTC-4, byu...@gmail.com wrote:

 These guys can go around thinking they're secure while trusting root CAs like 
 CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on 
 track with a new, sane browser. While we're at it, we could start treating 
 self-signed certs like we do SSH, rather than as being *infinitely worse* 
 than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a 
 notary public to accept a self-signed cert yet. But I shouldn't give them any 
 ideas ...)

A self-signed cert is worse than HTTP, in that you cannot know if the site you 
are accessing is supposed to have a self-signed cert or not. If you know that, 
you can check the fingerprint and bypass the warning. But let's say you go to 
download a fresh copy of Firefox, just to find out that 
https://www.mozilla.org/ is serving a self-signed cert. How can you possibly be 
sure that you are not being MITM'ed? Arguably, it's worse if we simply ignore 
the fact that the cert is self-signed, and simply let you download the 
compromised version, vs giving you some type of indication that the connection 
is not secure (e.g.: no green bar because it's plain HTTP).

That is not to say that we should continue as is. HTTP is insecure, and should 
give the same warning as HTTPS with a self-signed cert.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Non-UTF-8 file paths on Gtk platforms

2015-04-13 Thread Robert O'Callahan
The argument for suggestion #1 seems irrefutable.

Rob
-- 
oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
owohooo
osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o
oioso
oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
owohooo
osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro
ooofo
otohoeo ofoioroeo ooofo ohoeololo.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread stuart
HTTP should remain optional and fully-functional, for the purposes of 
prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a 
development server running on my local machine, or to debug which point before 
hitting the TLS layer is corrupting requests.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Eugene
I fully support this proposal. In addition to APIs, I'd like to propose 
prohibiting caching any resources loaded over insecure HTTP, regardless of 
Cache-Control header, in Phase 2.N. The reasons are:
1) MITM can pollute users' HTTP cache, by modifying some JavaScript files with 
a long Cache-Control max-age.
2) It won't break any websites, just some performance penalty for them.
3) Many website operators and users avoid using HTTPS, since they believe HTTPS 
is much slower than plaintext HTTP. After deprecating HTTP cache, this argument 
will be more wrong.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread northrupthebandgeek
On Monday, April 13, 2015 at 7:57:58 AM UTC-7, Richard Barnes wrote:
 In order to encourage web developers to move from HTTP to HTTPS, I would
 like to propose establishing a deprecation plan for HTTP without security.
 Broadly speaking, this plan would entail  limiting new features to secure
 contexts, followed by gradually removing legacy features from insecure
 contexts.  Having an overall program for HTTP deprecation makes a clear
 statement to the web community that the time for plaintext is over -- it
 tells the world that the new web uses HTTPS, so if you want to use new
 things, you need to provide security.

I'd be fully supportive of this if - and only if - at least one of the 
following is implemented alongside it:

* Less scary warnings about self-signed certificates (i.e. treat 
HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with 
HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less 
secure than HTTP is - to put this as politely and gently as possible - a pile 
of bovine manure
* Support for a decentralized (blockchain-based, ala Namecoin?) certificate 
authority

Basically, the current CA system is - again, to put this as gently and politely 
as possible - fucking broken.  Anything that forces the world to rely on it 
exclusively is not a solution, but is instead just going to make the problem 
worse.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread imfasterthanneutrino
On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com wrote:
 
 * Less scary warnings about self-signed certificates (i.e. treat 
 HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with 
 HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less 
 secure than HTTP is - to put this as politely and gently as possible - a pile 
 of bovine manure

This feature (i.e. opportunistic encryption) was implemented in Firefox 37, but 
unfortunately an implementation bug made HTTPS insecure too. But I guess 
Mozilla will fix it and make this feature available in a future release.

 * Support for a decentralized (blockchain-based, ala Namecoin?) certificate 
 authority
 
 Basically, the current CA system is - again, to put this as gently and 
 politely as possible - fucking broken.  Anything that forces the world to 
 rely on it exclusively is not a solution, but is instead just going to make 
 the problem worse.

I don't think the current CA system is broken. The domain name registration is 
also centralized, but almost every website has a hostname, rather than using IP 
address, and few people complain about this.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Karl Dubost
Richard,

On Apr 13, 2015, at 23:57, Richard Barnes rbar...@mozilla.com wrote:
 There's pretty broad agreement that HTTPS is the way forward for the web.

Yes, but that doesn't make deprecation of HTTP a consensus.

 In order to encourage web developers to move from HTTP to HTTPS, I would
 like to propose establishing a deprecation plan for HTTP without security.

This is not encouragement. This is called forcing. ^_^ Just so that we are using the 
right terms for the right thing.


In the document
 https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

You say:
Phase 3: Essentially all of the web is HTTPS.  

I understand this is the last hypothetical step, but it sounds a bit like "let's 
move the Web to XML. It didn't work out very well.

I would love to have a more secure Web, but this cannot happen without a few 
careful considerations.

* A mandatory third party for certificates is a no-go. It creates a 
system of authority and power, an additional layer of hierarchy which deeply 
modifies the ability for anyone to publish and might in some circumstances 
increase the security risk.

* If we have to rely on certificates, their cost must be zero. This for the simple 
reason that not everyone is living in a rich industrialized country.

* Setup and publication through HTTPS should be as easy as HTTP. The Web 
brought publishing power to any individual. Imagine cases where you need to 
create a local network, do web development on your computer, set up a server for 
your school, community, etc. If it relies on a heavy process, it will not 
happen.


So instead of a plan based on technical features, I would love to see a: "Let's 
move to a secure Web. What are the user scenarios we need to solve to achieve 
that?"

These user scenarios are economical, social, etc.


my 2 cents.
So yes, but not the way it is introduced and planned now.


-- 
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Karl Dubost

On Apr 14, 2015, at 10:43, imfasterthanneutr...@gmail.com wrote:
 I don't think the current CA system is broken.

The current CA system creates issues for certain categories of population. It 
is broken in some ways.

 The domain name registration is also centralized, but almost every website 
 has a hostname, rather than using IP address, and few people complain about 
 this.

Two points:

1. You do not need to register a domain name to have a Web site (IP address)
2. You do not need to register a domain name to run a local blah.test.site

Both are still working and not deprecated in browsers ^_^

Now, the fact that you have to rent your domain name ($$$) and that all your URIs 
are tied to it has strong social consequences in terms of permanent identifiers 
and the fabric of time on information. But that's another debate than the one of 
this thread on deprecating HTTP in favor of HTTPS.

I would love to see this discussion happening in Whistler too. 

-- 
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Martin Thomson
On Mon, Apr 13, 2015 at 3:53 PM, Eugene imfasterthanneutr...@gmail.com wrote:
 In addition to APIs, I'd like to propose prohibiting caching any resources 
 loaded over insecure HTTP, regardless of Cache-Control header, in Phase 2.N.

This has some negative consequences (if only for performance).  I'd
like to see changes like this properly coordinated.  I'd rather just
treat caching as one of the features for Phase 2.N.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread david . a . p . lloyd
 * If we have to rely, cost of certificates must be zero. These for the simple 
 reason than not everyone is living in a rich industrialized country.

Certificates (and paying for them) is an artificial economy.  If I register a 
DNS address, I should get a certificate to go with it.  Heck, last time I got 
an SSL certificate, they effectively bootstrapped the trust based on my DNS MX 
record...

Hence IMO TLS should be:
- DANE for everyone
- DANE & Trusted Third Party CAs for the few
- DANE & TTP & EV for sites that accept financial and medical details

The Firefox opportunistic encryption feature is a good first step towards this 
goal.  If they could just nslookup the TLSA certificate hash, we'd be a long 
way down the road.  
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread ipartola
 * Less scary warnings about self-signed certificates (i.e. treat 
 HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with 
 HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less 
 secure than HTTP is - to put this as politely and gently as possible - a pile 
 of bovine manure

I am against this. Both are insecure and should be treated as such. How is your 
browser supposed to know that gmail.com is intended to serve a self-signed 
cert? It's not, and it cannot possibly know it in the general case. Thus it 
must be treated as insecure.

 * Support for a decentralized (blockchain-based, ala Namecoin?) certificate 
 authority

No. Namecoin has so many other problems that it is not feasible.

 Basically, the current CA system is - again, to put this as gently and 
 politely as possible - fucking broken.  Anything that forces the world to 
 rely on it exclusively is not a solution, but is instead just going to make 
 the problem worse.

Agree that it's broken. The fact that any CA can issue a cert for any domain is 
stupid, always was and always will be. It's now starting to bite us.

However, HTTPS and the CA system don't have to be tied together. Let's ditch 
the immediately insecure plain HTTP, then add ways to authenticate trusted 
certs in HTTPS by means other than our current CA system. The two problems are 
orthogonal, and trying to solve both at once will just leave us exactly where 
we are: trying to argue for a fundamentally different system.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread ben
On Tuesday, April 14, 2015 at 12:27:22 AM UTC-4, commod...@gmail.com wrote:
 On Monday, April 13, 2015 at 1:43:25 PM UTC-7, byu...@gmail.com wrote:
  Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be 
  open season for someone to fork/make a new browser with HTTP support, and 
  gain an instant 30% market share.
 Or, more likely, it'll be a chance for Microsoft and Apple to laugh all the 
 way to the bank. Because seriously, what else would you expect to happen when 
 the makers of a web browser announce that, starting in X months, they'll be 
 phasing out compatibility with the vast majority of existing websites?

This isn't at all what Richard was trying to say. The original discussion 
states that the plan will be to make all new browser features only work under 
HTTPS, to help developers and website owners to migrate to HTTPS only. This 
does not mean these browsers will ever remove support for HTTP; they will simply 
deprecate it. Browsers still support many legacy and deprecated features.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread vic
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
 HTTP deprecation

I'm strongly against the proposal as it is described here. I work with small 
embedded devices (think sensor network) that are accessed over HTTP. These 
devices have very little memory, only a few kB, implementing SSL is simply not 
possible. Who are you to decree these devices become unfit hosts?

Secondly the proposal to restrain unrelated new features like CSS attributes to 
HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine but 
authoritarianism is not. Please consider that everyone is capable of making 
their own decisions.

Lastly deprecating HTTP in the current state of the certificate authority 
business is completely unacceptable. These are *not* separate issues, to 
implement HTTPS without warnings you must be able to obtain certificates 
(including wildcard ones) easily and affordably and not only to rich western 
country citizens. The "let's go ahead and we'll figure this out later" attitude 
is irresponsible considering the huge impact that this change will have.

I would view this proposal favorably if 1) you didn't try to force people to 
adopt the One True Way and 2) the CA situation was fixed.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread ben
On Tuesday, April 14, 2015 at 1:16:25 AM UTC-4, vic wrote:
 On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
  HTTP deprecation
 
 I'm strongly against the proposal as it is described here. I work with small 
 embedded devices (think sensor network) that are accessed over HTTP. These 
 devices have very little memory, only a few kB, implementing SSL is simply 
 not possible. Who are you to decree these devices become unfit hosts?
 
 Secondly the proposal to restrain unrelated new features like CSS attributes 
 to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine 
 but authoritarianism is not. Please consider that everyone is capable of 
 making their own decisions.
 
 Lastly deprecating HTTP in the current state of the certificate authority 
 business is completely unacceptable. These are *not* separate issues, to 
 implement HTTPS without warnings you must be able to obtain certificates 
 (including wildcard ones) easily and affordably and not only to rich western 
 country citizens. The let's go ahead and we'll figure this out later 
 attitude is irresponsible considering the huge impact that this change will 
 have.
 
 I would view this proposal favorably if 1) you didn't try to force people to 
 adopt the One True Way and 2) the CA situation was fixed.

An embedded device would not be using a web browser such as Firefox, so this 
isn't really much of a concern. The idea would be to only enforce HTTPS 
deprecation from browsers, not web servers. You can continue to use HTTP on 
your own web services and therefore use it through your embedded devices.

As all technology protocols change over time, enforcing encryption is a natural 
and logical step to evolve web technology. Additionally, while everyone is able 
to make their own decisions, it doesn't mean people make the right choice. A 
website that uses sensitive data insecurely over HTTP and the users are 
unaware, as most web consumers are not even aware what the difference of HTTP 
vs HTTPS means, is not worth the risk. It'd be better to enforce security and 
reduce the risks that exist with internet privacy. Mozilla though never truly 
tries to operate anything with an authoritarianism approach, but this 
suggestion is to protect the consumers of the web, not the developers of the 
web.

Mozilla is trying to get https://letsencrypt.org/ started, which will be free, 
removing all price arguments from this discussion.

IMHO, this debate should be focused on improving the way HTTP is deprecated, 
but I do not believe there are any valid concerns that HTTP should not be 
deprecated.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread ipartola
On Monday, April 13, 2015 at 10:10:44 PM UTC-4, Karl Dubost wrote:

 Now the fact to have to rent your domain name ($$$) and that all the URIs are 
 tied to this is in terms of permanent identifiers and the fabric of time on 
 information has strong social consequences. But's that another debate than 
 the one of this thread on deprecating HTTP in favor of HTTPS.

The registrars are, as far as I'm concerned, where the solution to the CA 
problem lies. You buy a domain name from someone, you are already trusting them 
with it. They can simply redirect your nameservers elsewhere and you can't do 
anything about it. Remember, you never buy a domain name, you lease it.

What does this have to do with plain HTTP to HTTPS transition? Well, why are we 
trusting CA's at all? Why not have the registrar issue you a wildcard cert with 
the purchase of a domain, and add restrictions to the protocol such that only 
your registrar can issue a cert for that domain?

Or even better, have the registrar sign a CA cert for you that is good for your 
domain only. That way you can issue unlimited certs for domains you own and 
*nobody but you can do that*.

However, like you said that's a separate discussion. We can solve the CA 
problem after we solve the plain HTTP problem.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Yoav Weiss
IMO, limiting new features to HTTPS only, when there's no real security
reason behind it, will only end up limiting feature adoption.
It directly punishes developers and adds friction to using new features,
but only influences business in a very indirect manner.

If we want to move more people to HTTPS, we can do any or all of the
following:
* Show user warnings when the site they're on is insecure
* Provide an opt-in don't display HTTPS mode as an integral part of the
browser. Make it extremely easy to opt in.

Search engines can also:
* Downgrade ranking of insecure sites in a significant way
* Provide a "don't show me insecure results" button

If you're limiting features to HTTPS with no reason you're implicitly
saying that developer laziness is what's stalling adoption. I don't believe
that's the case.

There's a real eco-system problem with 3rd party widgets and ad networks
that makes it hard for large sites to switch until all of their site's
widgets have. Developers have no say here. Business does.

What you want is to make the business folks threaten that out-dated 3rd
party widget that if it doesn't move to HTTPS, the site would switch to the
competition. For that you need to use a stick that business folks
understand: If you're on HTTP, you'd see less and less traffic. Limiting
new features does absolutely nothing in that aspect.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread commodorejohn
On Monday, April 13, 2015 at 1:43:25 PM UTC-7, byu...@gmail.com wrote:
 Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be 
 open season for someone to fork/make a new browser with HTTP support, and 
 gain an instant 30% market share.
Or, more likely, it'll be a chance for Microsoft and Apple to laugh all the way 
to the bank. Because seriously, what else would you expect to happen when the 
makers of a web browser announce that, starting in X months, they'll be phasing 
out compatibility with the vast majority of existing websites?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-13 Thread Trevor Saunders
On Mon, Apr 13, 2015 at 01:28:05PM -0400, Ehsan Akhgari wrote:
 On 2015-04-13 5:26 AM, Nicolas B. Pierron wrote:
 On 04/10/2015 07:47 PM, Ehsan Akhgari wrote:
 On 2015-04-10 1:41 PM, Nicolas B. Pierron wrote:
 Also, what is the alternative? Acquiring a nsCOMPtr/nsRefPtr inside the
 Lambda constructor (or whatever it's called)?
 
 Yes, another option would be to ensure that the lambda cannot be used
 after a specific point.
 
 nsINode* myNode;
 auto callFoo = MakeScopedLambda([&]() {
 myNode->Foo();
 })
 TakeLambda(callFoo);
 
 Any reference to the lambda after the end of the innermost scope where
 MakeScopedLambda is used can cause a MOZ_CRASH.
 
 How would you detect that at compile/run time?
 
 
 Simply by replacing the reference to the lambda inside callFoo at the
  end of the scope with a dummy function
 which expects the same type of arguments as the lambda, but calls
 MOZ_CRASH instead.
 
 Sorry, my question was: how do you implement this with C++?  (As in, how
 would an actual implementation work?)

That actually seems kind of straightforward.  You want to have an
object that wraps the provided lambda in callFoo and then nukes the
wrapping when the scope exits.  So I guess it would look something like
this (totally untested).

template<typename T>
class ScopedLambda
{
  template<typename U>
  class LambdaHolder
  {
    LambdaHolder(const LambdaHolder& other)
    {
      if (!other.valid) {
        valid = false;
        return;
      }

      other.master->Add(this);
      mLambda = other.mLambda;
      valid = true;
    }

    ...

    void Revoke() { valid = false; }

    void Call()
    {
      if (valid) /* do magic with vargs to pass
                    arguments to mLambda */ ;
    }

  private:
    T mLambda;
    ScopedLambda<T>* master;
    bool valid;
  };

  ScopedLambda(T lambda) : mLambda(lambda) {}
  ~ScopedLambda()
  {
    for (LambdaHolder* h : holders) { h->Revoke(); }
  }

  // Try to force passing ScopedLambda foo to a function to result in the
  // function getting a LambdaHolder.
  operator LambdaHolder()
  {
    LambdaHolder l;
    l.master = this;
    l.mLambda = mLambda;
    l.valid = true;
    return l;
  }

  ScopedLambda(const ScopedLambda&) = delete;
  ScopedLambda(ScopedLambda&&) = delete;

  T mLambda;
  List<LambdaHolder<T>*> holders;
};

or actually you might be able to do the same thing by having
ScopedLambda allocate an object on the heap that's ref counted.
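
A rough sketch of that refcounted-heap-object variant (names are illustrative;
std::shared_ptr stands in for a refcounted Gecko class, and a no-argument
lambda keeps the example short):

#include <functional>
#include <memory>
#include "mozilla/Assertions.h"  // MOZ_CRASH

template<typename T>
class RefCountedScopedLambda
{
public:
  explicit RefCountedScopedLambda(T aLambda)
    : mLambda(aLambda), mAlive(std::make_shared<bool>(true)) {}

  // End of scope: every outstanding holder sees the flag flip.
  ~RefCountedScopedLambda() { *mAlive = false; }

  // Hand out a plain callable that shares ownership of the heap flag.
  std::function<void()> Holder() const
  {
    auto alive = mAlive;
    auto lambda = mLambda;
    return [alive, lambda]() {
      if (!*alive) {
        MOZ_CRASH("scoped lambda called after its scope ended");
      }
      lambda();
    };
  }

private:
  T mLambda;
  std::shared_ptr<bool> mAlive;
};

// Usage sketch:
//   RefCountedScopedLambda<std::function<void()>> callFoo([&]() { myNode->Foo(); });
//   TakeLambda(callFoo.Holder());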

Trev

 
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Richard Barnes
On Mon, Apr 13, 2015 at 3:00 PM, Frederik Braun fbr...@mozilla.com wrote:

 On 13.04.2015 20:52, david.a.p.ll...@gmail.com wrote:
 
  2) Protected by subresource integrity from a secure host
 
  This would allow website operators to securely serve static assets from
 non-HTTPS servers without MITM risk, and without breaking transparent
 caching proxies.
 
  Is that a complicated word for SHA512 HASH? :)  You could envisage a new
 http URL pattern http://video.vp9?SHA512-HASH

 I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -

 But, note that this will not give you extra security UI (or less
 warnings): Browsers will still disable scripts served over HTTP on an
 HTTPS page - even if the integrity matches.

 This is because HTTPS promises integrity, authenticity and
 confidentiality. SRI only provides the former.


I agree that we should probably not allow insecure HTTP resources to be
looped in through SRI.

There are several issues with this idea, but the one that sticks out for me
is the risk of leakage from HTTPS through these http-schemed resource
loads.  For example, the fact that you're loading certain images might
reveal which Wikipedia page you're reading.

--Richard


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Non-UTF-8 file paths on Gtk platforms

2015-04-13 Thread Zack Weinberg
Given that everyone else working in this area agrees that UTF-8 file
paths are the Right Thing and wants to desupport legacy encodings, I
would now vote for suggestion 1 (contra what I said last year in bug
960957, which amounts to a variation on your suggestion 2).  However,
I think it might be a good idea to add some minimal telemetry so we
know about it if and when users encounter non-UTF-8 paths and in what
context.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-13 Thread Ehsan Akhgari

On 2015-04-13 5:26 AM, Nicolas B. Pierron wrote:

On 04/10/2015 07:47 PM, Ehsan Akhgari wrote:

On 2015-04-10 1:41 PM, Nicolas B. Pierron wrote:

Also, what is the alternative? Acquiring a nsCOMPtr/nsRefPtr inside the
Lambda constructor (or whatever it's called)?


Yes, another option would be to ensure that the lambda cannot be used
after a specific point.

nsINode* myNode;
auto callFoo = MakeScopedLambda([&]() {
myNode->Foo();
})
TakeLambda(callFoo);

Any reference to the lambda after the end of the innermost scope where
MakeScopedLambda is used can cause a MOZ_CRASH.


How would you detect that at compile/run time?



Simply by replacing the reference to the lambda inside callFoo at the
end of the scope with a dummy function
which expects the same type of arguments as the lambda, but calls
MOZ_CRASH instead.


Sorry, my question was: how do you implement this with C++?  (As in, how 
would an actual implementation work?)


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-13 Thread Nicolas B. Pierron

On 04/10/2015 07:47 PM, Ehsan Akhgari wrote:

On 2015-04-10 1:41 PM, Nicolas B. Pierron wrote:

Also, what is the alternative? Acquiring a nsCOMPtr/nsRefPtr inside the
Lambda constructor (or whatever it's called)?


Yes, another option would be to ensure that the lambda cannot be used
after a specific point.

nsINode* myNode;
auto callFoo = MakeScopedLambda([&]() {
myNode->Foo();
})
TakeLambda(callFoo);

Any reference to the lambda after the end of the innermost scope where
MakeScopedLambda is used can cause a MOZ_CRASH.


How would you detect that at compile/run time?



Simply by replacing the reference to the lambda inside callFoo at the end of 
the scope with a dummy function which expects 
the same type of arguments as the lambda, but calls MOZ_CRASH instead.


I guess I would have written the auto as  ScopedLambda<void (*)()>  for the 
example.
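
For what it's worth, one way to read that "replace it with a dummy" idea,
sketched with illustrative names (MOZ_CRASH is the usual Gecko abort macro):

#include <functional>
#include <memory>
#include "mozilla/Assertions.h"  // MOZ_CRASH

class ScopedVoidLambda
{
public:
  explicit ScopedVoidLambda(std::function<void()> aLambda)
    : mSlot(std::make_shared<std::function<void()>>(std::move(aLambda))) {}

  // At the end of the scope, swap the real lambda for a dummy with the same
  // signature that crashes, so a call through any retained handle is caught.
  ~ScopedVoidLambda()
  {
    *mSlot = []() { MOZ_CRASH("lambda used after its scope ended"); };
  }

  // Handle that a TakeLambda-style consumer may store and call later.
  std::function<void()> Handle() const
  {
    auto slot = mSlot;
    return [slot]() { (*slot)(); };
  }

private:
  std::shared_ptr<std::function<void()>> mSlot;
};

// Usage sketch:
//   ScopedVoidLambda callFoo([&]() { myNode->Foo(); });
//   TakeLambda(callFoo.Handle());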


--
Nicolas B. Pierron
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to deprecate: Insecure HTTP

2015-04-13 Thread Richard Barnes
There's pretty broad agreement that HTTPS is the way forward for the web.
In recent months, there have been statements from IETF [1], IAB [2], W3C
[3], and even the US Government [4] calling for universal use of
encryption, which in the case of the web means HTTPS.

In order to encourage web developers to move from HTTP to HTTPS, I would
like to propose establishing a deprecation plan for HTTP without security.
Broadly speaking, this plan would entail  limiting new features to secure
contexts, followed by gradually removing legacy features from insecure
contexts.  Having an overall program for HTTP deprecation makes a clear
statement to the web community that the time for plaintext is over -- it
tells the world that the new web uses HTTPS, so if you want to use new
things, you need to provide security.  Martin Thomson and I drafted a
one-page outline of the plan with a few more considerations here:

https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Some earlier threads on this list [5] and elsewhere [6] have discussed
deprecating insecure HTTP for powerful features.  We think it would be a
simpler and clearer statement to avoid the discussion of which features are
powerful and focus on moving all features to HTTPS, powerful or not.

The goal of this thread is to determine whether there is support in the
Mozilla community for a plan of this general form.  Developing a precise
plan will require coordination with the broader web community (other
browsers, web sites, etc.), and will probably happen in the W3C.

Thanks,
--Richard

[1] https://tools.ietf.org/html/rfc7258
[2]
https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
[3] https://w3ctag.github.io/web-https/
[4] https://https.cio.gov/
[5]
https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
[6]
https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform