Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Fri, Nov 21, 2014 at 5:44 PM, Patrick McManus mcma...@ducksong.com wrote:
> On Fri, Nov 21, 2014 at 10:09 AM, Anne van Kesteren ann...@annevk.nl wrote:
>> Why would they be allowed to use OE?
>
> The reasons why any individual resource has to be http:// and may (or may
> not) be able to run OE vary by resource. Of course only the content
> provider can know their reason for sure. I've tried to point out in this
> thread what some of those reasons can be, and it's really not very
> relevant whether you or I agree with them unless we own the content.

Well, you brought up the nosslsearch example, presumably in defense of OE.
It seems reasonable, then, to ask whether they could use OE in such a
scenario.

> They are doing this with opportunistic encryption (via the
> Alternate-Protocol response header) for http:// over QUIC from Chrome. In
> the past they did this with SPDY too (I'm unsure of the current status of
> that). They don't want the open and standards-track web to participate.
> It seems we can't be trusted to do what they're already proprietarily
> doing for their own services.

Is Google unwilling to standardize QUIC? Or are you saying that because
Google experiments with OE in QUIC, including in services today through
Chrome, it is weird for them to oppose OE in HTTP?

--
https://annevankesteren.nl/
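(For concreteness, the Alternate-Protocol mechanism discussed above is an
unauthenticated advertisement carried on an ordinary plaintext response.
The exact value below is recalled from Chrome's implementation and should
be treated as an assumption rather than an authoritative citation:

    HTTP/1.1 200 OK
    Alternate-Protocol: 443:quic

Nothing authenticates the advertisement itself, which is what makes the
resulting upgrade opportunistic rather than enforced.)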
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
Hi Anne,

On Tue, Nov 25, 2014 at 9:13 AM, Anne van Kesteren ann...@annevk.nl wrote:
>> They are doing this with opportunistic encryption (via the
>> Alternate-Protocol response header) for http:// over QUIC from Chrome. In
>
> Or are you saying that because Google experiments with OE in QUIC,
> including in services today through Chrome, it is weird for them to
> oppose OE in HTTP?

It's interesting because of what it says about the actual options instead
of the arguments we make about them. Google is trying hard to be https://
everywhere, and yet they still have to run http:// services. That
illustrates how hard a full transition is - most people can't match the
kind of resources Google has to spend on the problem, and yet Google
hasn't been 100% successful. The rest of the web does far worse - heck, we
just launched our new H.264 Cisco addon download over http:// (with an
external integrity check).

When running http://, Google has twice made an engineering decision to do
so with OE and something better than h1. The result is better than
plaintext h1, and we should also be striving to bring our users and the
whole web the same benefits. "This site runs better in Chrome" sucks.

What we're going to do is make https better, faster, and cheaper as the
long play to ubiquitous real security, and in the short term offer folks
more encryption and better transports on http:// too, because we hope to
reach more of them that way. Plaintext is the last choice and is
maintained strictly for compatibility - nobody wins when we do that.

-P

[I think we're firmly into the recycling phase again :)]
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Fri, Nov 21, 2014 at 4:53 PM, Patrick McManus mcma...@ducksong.com wrote:
> Hi -
>
> On Fri, Nov 21, 2014 at 5:41 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
>> Indeed. Huge thanks to everyone who is making Let's Encrypt happen.
>>
>>> regulatory compliance,
>>
>> What's this about?
>
> nosslsearch.google.com is an example of the weight of regulatory
> compliance in action.

It's not clear what the regulation in question is. It seems that the
stated use case for nosslsearch was allowing schools to MITM searches to
filter out adult content, but Google seems to be transitioning to
addressing this use case by allowing schools to make Safe Search at
Google's end non-user-togglable.

Anyway, the question I was trying to ask was: For what regulation do all
of the following hold?

1) The regulation disallows crypto that is unMITMable without stealing
   keys or compromising/fooling a CA.
2) The regulation doesn't disallow all crypto but allows MITMable crypto.
3) The way the MITMability is used is decrypting and re-encrypting, and
   not just inhibiting the upgrade. (If the upgrade is inhibited, OE is
   just plain old HTTP 1.1 after all.)
4) It is congruent with Mozilla's goals to accommodate the regulation
   beyond just letting the servers subject to the regulation stay on
   plain old HTTP 1.1.

>>> non-access to webpki.
>>
>> Does this mean intranets?
>
> mostly..

This is basically the Microsoft argument against https. It would be
easier to accept if intranet-motivated things didn't leak to the public
Web, e.g. by being limited to RFC 1918 addresses. A limitation to RFC
1918 addresses would, for sure, irk some intranet admins who use
end-to-end IP addressing, but, OTOH, they are better positioned to get
publicly trusted certs for their stuff. If the intranet case can't be
isolated, it seems bad to make the public network worse off in order to
give intranet admins more convenience. (Also, as an intranet gets larger,
trusting the network becomes a worse and worse idea even on an intranet.)

> but more generally things that don't bind well to the global dns that
> the webpki relies on.. so potentially peer to peer and mesh
> interactions too..

How are these relevant to Firefox? (Also, if the thread on the CA/Browser
Forum list about issuing a cert for Facebook's .onion address shows
anything, I think it shows that it's bad that the built-in validation of
a DNS alternative sticks to a layer below http: and doesn't extend https:
validation in a way that'd result in a consistent UX [the visible URL
scheme being part of the UX] for all authenticated sites--PKI or not.)

--
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Wed, Nov 19, 2014 at 4:50 PM, Patrick McManus mcma...@ducksong.com wrote:
> On Wed, Nov 19, 2014 at 1:45 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
>> Does Akamai's logo appearing on the Let's Encrypt announcements change
>> Akamai's need for OE? (Seems *really* weird if not.)
>
> let's encrypt is awesome - more https is awesome.

Indeed. Huge thanks to everyone who is making Let's Encrypt happen.

> regulatory compliance,

What's this about?

> CA-risk,

I.e. Let's Encrypt going away somehow?

> non-access to webpki.

Does this mean intranets?

> A hosting or CDN provider doesn't control all of those things -
> especially the legacy and mixed content.

Yes, OE definitely allows CDNs and hosting providers to make things
better without getting their customers to take action. But the customers
feeling they don't need to take action is the problem I'm worried about.

> There are basically 2 arguments against OE here: 1] you don't need OE
> because everyone can run https and 2] OE somehow undermines https
>
> I don't buy them because [1] is contradicted by a substantial body of
> data and [2] is unsubstantiated speculation and borders on untested FUD.

Of course [2] is speculation. The notion that OE wouldn't harm the
adoption of https is speculation, too. Both are fundamentally about
guessing how the future would go in different circumstances, without a
way, in the future, to check how things would have gone with the other
option. However, it seems reasonable and believable that shortening the
perceived distance between what you get with http URLs and what you get
with https URLs makes some set of admins feel less urgency to move from
http URLs to https URLs, so I think it's rather an exaggeration to call
it FUD. It would be remarkably counterintuitive if OE didn't take away
some of the momentum of https (at least if OE were adopted broadly by
browsers).

--
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Fri, Nov 21, 2014 at 3:53 PM, Patrick McManus mcma...@ducksong.com wrote:
> nosslsearch.google.com is an example of the weight of regulatory
> compliance in action. Google talks loudly about all https (and has the
> leading track record), yet there it is. And google isn't special in
> that regard.

Why would they be allowed to use OE?

>> I.e. Let's Encrypt going away somehow?
>
> More generally being dependent on a CA is an additional third party
> operational risk when comparing http:// vs https://.. you're already
> dependent on your DNS provider and an ISP and now your fate is also
> linked to the CA that signed your cert too. e.g. at the most basic
> level not revoking it on you - but also not doing something dumb
> unrelated to you that gets the signing cert your CA used tossed out of
> UAs (again).

That risk seems tiny compared to the risk of having an end user
man-in-the-middled.

>>> non-access to webpki.
>>
>> Does this mean intranets?
>
> mostly.. but more generally things that don't bind well to the global
> dns that the webpki relies on.. so potentially peer to peer and mesh
> interactions too..

But that would no longer be about HTTP, at least as far as the things
we've been talking about exposing in browsers are concerned.

--
https://annevankesteren.nl/
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On 2014-11-21, at 08:19, Justin Dolske dol...@mozilla.com wrote:
> Is that a direct or indirect cause? AFAIK nothing directly requires
> Google to offer this, but the alternative would be organizations and
> networks who do want/need to see traffic simply blocking Google
> services. And so Google has made the voluntary choice to support
> nosslsearch (presumably for a mix of reasons like revenue impact and
> perception of service availability).

Why would the precise form of the motivating force be relevant? The point
is that there are pressures on content other than the ones that we might
like to think push them toward https://.
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On 18/11/14 04:03, voracity wrote:
> The issue isn't that people are cheapskates, and will lose 'a few
> dollars'. The issue is that transaction costs
> (http://en.wikipedia.org/wiki/Transaction_cost) can be crippling.

https://letsencrypt.org/ .

Gerv
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Wed, Nov 19, 2014 at 1:45 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
> Does Akamai's logo appearing on the Let's Encrypt announcements change
> Akamai's need for OE? (Seems *really* weird if not.)

Let's Encrypt is awesome - more https is awesome. The availability of
Let's Encrypt (or something like it) was certainly taken into
consideration in the OE thinking. The idea has been kicking around for a
while from lots of orgs, so it was foreseeable someone would pull it off -
but huge kudos to our partnership for doing it, as that really is powerful
and will help the web. It's also a feather in Mozilla's cap. I'm really
excited about it.

OE plus Let's Encrypt is exactly the manifestation of walking and chewing
gum at the same time that I referred to earlier. We're working hard to
improve things on multiple fronts, and the ideas are not at odds with each
other.

Ciphertext as the new plaintext is meant to cover situations where people
won't run https. Kudos to Let's Encrypt for helping make that a smaller
market, but it doesn't solve all the use cases of http:// (nor does OE -
but it reaches potentially more of them). These include legacy content and
URLs, third-party mixed content, regulatory compliance, CA-risk, and
non-access to the webpki. A hosting or CDN provider doesn't control all of
those things - especially the legacy and mixed content. But they can
compatibly improve the transport experience, and they're interested in
doing that. So to answer your question without having a partner discussion
on dev-platform: the folks interested in deploying OE foresaw Let's
Encrypt (or something like it) and are still interested in OE.

There are basically 2 arguments against OE here: 1] you don't need OE
because everyone can run https and 2] OE somehow undermines https. I
don't buy them because [1] is contradicted by a substantial body of data
and [2] is unsubstantiated speculation and borders on untested FUD.

I understand that Google is the loudest voice - yet these realities
impact them as well if you look at their actions on google.com. Google,
despite being the leading industry player in making admirable, herculean
efforts at deploying sophisticated https, still also runs lots of http://
services such as nosslsearch, gstatic, and google-analytics. The cost of
a cert isn't what is holding them back from making those services https
only - and they are the best-case scenario for a party being both
interested and capable.

fwiw - nobody would be happier than me if [1] dwindled to 0 and OE was
moot. I just think that will be a super long time in coming, and in the
interim we can substitute some of that plaintext with ciphertext, and
that's a win for our users.

-P
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On 11/19/14 04:50, Patrick McManus wrote:
> There are basically 2 arguments against OE here: 1] you don't need OE
> because everyone can run https and 2] OE somehow undermines https
>
> I don't buy them because [1] is contradicted by a substantial body of
> data and [2] is unsubstantiated speculation and borders on untested FUD.

I agree, and find the assertion of [2] to be further perplexing: it
completely discounts the fact that OE can (and ideally will) be opt-out
for most server configurations, while HTTPS remains opt-in -- even for
the Let's Encrypt setup. There's a radical difference in penetration
between opt-in and opt-out, and we base substantial portions of our
privacy decisions on this fact. I'm a bit baffled that it's not
immediately obvious to everyone in this conversation that this
distinction translates to the deployment of encryption.

I'm all for the drive to have authenticated encryption everywhere, and am
very excited about the Let's Encrypt initiative. But there's no reason to
leave traffic gratuitously unencrypted while we drive towards 100% HTTPS
penetration.

--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Wednesday, November 19, 2014 11:12:42 PM UTC+11, Gervase Markham wrote:
> https://letsencrypt.org/ .

When I first saw Let's Encrypt (the very next day after my post) I got
excited, and when I read how it works, I got even more excited. There are
still things it doesn't (seem to) solve (localhost/intranet apps and
possibly internet-of-things, as well as CA centralisation), but coupled
with good TOFU, it covers most of the things that matter to the little
people of the web.

I still object to carrot-sticking people into using https. We don't
carrot-stick people into using open source, even though many of the key
arguments for doing so would be quite similar.
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On 11/17/14 1:48 AM, Henri Sivonen wrote:
> As for cat and mouse, I'd prefer putting our cat-and-mouse energies into
> patching up https PKI instead of introducing a new cat-and-mouse
> situation to pay attention to. (Despite being able to walk and chew gum,
> our end isn't 100% immune to opportunity cost issues, either.)

Given Mozilla's announcements around Let's Encrypt, are there still use
cases for HTTP+OE?

https://letsencrypt.org/2014/11/18/announcing-lets-encrypt.html

chris
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Wed, Nov 19, 2014 at 1:20 PM, Chris Peterson cpeter...@mozilla.com wrote:
> On 11/17/14 1:48 AM, Henri Sivonen wrote:
>> As for cat and mouse, I'd prefer putting our cat-and-mouse energies
>> into patching up https PKI instead of introducing a new cat-and-mouse
>> situation to pay attention to. (Despite being able to walk and chew
>> gum, our end isn't 100% immune to opportunity cost issues, either.)
>
> Given Mozilla's announcements around Let's Encrypt, are there still use
> cases for HTTP+OE?
> https://letsencrypt.org/2014/11/18/announcing-lets-encrypt.html

Given that Richard Barnes is listed as the editor of Let's Encrypt's ACME
spec
(https://github.com/letsencrypt/acme-spec/blob/master/draft-barnes-acme.md)
and has also been advocating HTTP+OE ...

Rob
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On Wed, Nov 19, 2014 at 1:20 PM, Chris Peterson cpeter...@mozilla.com wrote:
> Given Mozilla's announcements around Let's Encrypt, are there still use
> cases for HTTP+OE?
> https://letsencrypt.org/2014/11/18/announcing-lets-encrypt.html

In particular,
https://wiki.mozilla.org/Platform/2014-10-14#Necko_.28dougt.2Fjduell.29
says: "Opportunistic Encryption (OE) for HTTP/2 (i.e. if server opts-in
we'll upgrade http to use TLS w/o certs) has landed (bug 1003448). Akamai
will be our first main use case."

Does Akamai's logo appearing on the Let's Encrypt announcements change
Akamai's need for OE? (Seems *really* weird if not.)

On Wed, Nov 19, 2014 at 3:46 AM, Robert O'Callahan rob...@ocallahan.org wrote:
> Given Richard Barnes is listed as the editor of Let's Encrypt's ACME
> spec
> (https://github.com/letsencrypt/acme-spec/blob/master/draft-barnes-acme.md)
> and has also been advocating HTTP+OE ...

So what are the remaining use cases? HTTP+OE requires you to have TLS set
up on the server. Let's Encrypt is about to take away the argument
"boohoo, certs are too expensive and hard to get". AFAICT, the arguments
that remain are:

1) Home routers or NAS boxes don't have a DNS name, so they can't get
   publicly trusted certs.
2) Making sure the right keys are on the right servers at the right time
   is too hard.
3) It's too hard to change old content with third-party includes not to
   get broken by the mixed content blocker.

For case #1, you want https+TOFU--not http+OE. I think we should make the
self-signed cert warning different (more situation-appropriate) for RFC
1918 addresses (192.168, etc.).

Argument #2 seems silly: if you have enough servers for it to be a
problem, you should have the staff/tools/knowhow to solve it.

As for argument #3, getting the Web encrypted in an authenticated manner
seems so important that it seems reasonable to tell admins of sites with
legacy content that if they want HTTP/2 speed, they need to revise those
old includes.

--
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
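(For concreteness, the server opt-in that bug 1003448 implements is an
alternative-service advertisement on an ordinary http:// response. A
minimal sketch, assuming the Alt-Svc syntax from the alt-svc and
http2-encryption drafts; the exact protocol token varied across the h2
draft versions:

    HTTP/1.1 200 OK
    Alt-Svc: h2=":443"

The browser may then retry the same http:// origin over TLS on port 443
without validating the certificate. Since the advertisement travels in
cleartext, anything on the path can strip or corrupt it.)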
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Fri, Nov 14, 2014 at 8:00 PM, Patrick McManus mcma...@ducksong.com wrote:
> On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi wrote:
>> The part that's hard to accept is: Why is the countermeasure
>> considered effective for attacks like these, when the level of how
>> active the MITM needs to be to foil the countermeasure (by inhibiting
>> the upgrade by messing with the initial HTTP/1.1 headers) is less than
>> the level of active these MITMs already are when they inject new
>> HTTP/1.1 headers or inject JS into HTML?
>
> There are a few pieces here -
>
> 1] I totally expect what you describe about signalling stripping to
> happen to some subset of the traffic, but an active cleartext carrier
> based MITM is not the only opponent. Many of these systems are tee'd
> read only dragnets. Especially the less sophisticated scenarios.

I agree that http+OE is effective in the case of mere read-only fiber
splitters when no hop on the way inhibits the upgrade. (The flipside is,
of course, that if you have an ISP inhibiting the upgrade as a small-time
attack to inject ads, a fiber split at another hop gets to see the
un-upgraded traffic.)

> 1a] not all of the signalling happens in band especially wrt mobility.

The notion that devices move between networks that change the contents of
IP packets and networks that deliver IP packets without changing their
contents, with upgrade signals seen in the latter kind of network getting
remembered for the former kind, makes sense, yes. But this isn't really
about whether there exist some cases where OE works but about whether OE
distracts from https.

> 2] When the basic ciphertext technology is proven, I expect to see
> other ways to signal its use. I casually mentioned a tofu pin yesterday
> and you were rightly concerned about pin fragility - but in this case
> the pin needn't be hard fail (and pin was a poor word choice) - its an
> indicator to try OE. That can be downgraded if you start actively
> resetting 443, sure - but that's a much bigger step to take that may
> result in generally giving users of your network a bad experience. And
> if you go down this road you find all manner of other interesting ways
> to bootstrap OE - especially if what you are bootstrapping is an
> opportunistic effort that looks a lot like https on the wire: gossip
> distribution of known origins, optimistic attempts on your top-N
> frecency sites, DNS (sec?).. even h2 https sessions can be used to
> carry http schemed traffic (the h2 protocol finally explicitly carries
> scheme as part of the transaction instead of making all transactions on
> the same connection carry the same scheme) which might be a very good
> thing for folks with mixed content problems. Most of this can be
> explored asynchronously at the cost of some plaintext usage in the
> interim. Its opportunistic afterall.
>
> There is certainly some cat and mouse here - as Martin says, its really
> just a small piece. I don't think of it as more than replacing some
> plaintext with some encryption - that's not perfection, but I really do
> think its significant.

I think the idea that there might be other signals is a bad sign: it's a
sign that incrementally patching up OE signaling will end up taking more
and more effort while still falling short of https, which is already
available for adoption--even in legacy browsers.

Also, it's a bad sign in the sense that some of the things you mention as
possibilities are problems in themselves: While DNSSEC-based signaling to
use encryption in a legacy protocol whose baseline is unencrypted makes
some sense for protocols where connection latency is not an important
part of the user experience, such as server-to-server SMTP, it seems
pretty clear that, with all the focus on initial connection latency,
browsers won't start making additional DNS queries--especially ones that
might fail thanks to middleboxes--before connecting. (Though, I suppose
when the encryption is *opportunistic* anyway, you could query DNSSEC
lazily and let the first few HTTP requests go in the clear.)

As for prescanning your top-N frecency sites, that's a privacy leak in
itself, since an eavesdropper could tell what the top-N frecency is by
looking at the DNS traffic the browser generates (DNSSEC infamously not
providing confidentiality...). (Also, at least if you don't have a huge
legacy of third-party includes that would become mixed content,
https+HSTS is way easier to deploy than DNSSEC in terms of setting it up,
in terms of keeping it running without interruption, and in terms of not
having middleboxes mess with it.)

But I think the fundamental problem is still opportunity cost and sapping
the current momentum of https. The idea "this is effective against a
read-only dragnet some of the time, therefore let's do it to improve
things even some of the time" might make sense if it was an action to be
taken just by the browser without the participation of server admins and
had no effect on how server admins perceive https in
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Friday, November 14, 2014 6:25:43 PM UTC+11, Henri Sivonen wrote:
> This is obvious to everyone reading this mailing list. My concern is
> that if the distinction between http and https gets fuzzier, people who
> want encryption but who want to avoid ever having to pay a penny to a
> CA will think that http+OE is close enough to https that they deploy
> http+OE when, if http+OE didn't exist, they'd hold their nose, pay a
> few dollars to a CA and deploy https with a publicly trusted cert (now
> that there's more awareness of the need for encryption).

Could I just interject at this point (while apologising for my general
rudeness and lack of technical security knowledge)? The issue isn't that
people are cheapskates and will lose 'a few dollars'. The issue is that
transaction costs (http://en.wikipedia.org/wiki/Transaction_cost) can be
crippling.

Another problem is that the whole CA system is equivalent to a walled
garden, in which a small set of 'trusted' individuals (ultimately)
restrict or permit what everyone else can see. It hasn't caused problems
in the history of the internet so far, because a non-centralised
alternative exists. (An alternative that is substantially more popular
*precisely* *because* of transaction costs and independence.) This means
it's currently a difficult environment for a few mega-CAs (and
governments) to exercise any power. A CA-only internet changes that
environment radically.

I'm unsurprised that Google doesn't think this is an issue. If they do
something that (largely invisibly but substantially) increases the
internet's barriers to entry
(http://en.wikipedia.org/wiki/Barriers_to_entry), it reduces diversity on
the internet but otherwise doesn't affect Google very much. (Actually, it
may do, since it will make glorified hosting services like Facebook much
more popular still over independent websites.)

However, there is a special onus on Mozilla to think through *all* the
social implications of what it does. Security is *never* pure win; there
is *always* a trade-off that society has to make, and I don't see this
being considered properly here.
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On 2014-11-13, at 21:25, Henri Sivonen hsivo...@hsivonen.fi wrote:
>> Your argument relies on there being no prior session that was not
>> intermediated by the attacker. I’ll concede that this is a likely
>> situation for a large number of clients, and not all servers will opt
>> for protection against that school of attack.
>
> What protection are you referring to?

HTTP-TLS (which seems to be confused with Alt-Svc in some of the
discussion I’ve seen). If you ever get a clean session, you can commit to
being authenticated and thereby avoid any MitM until that timer lapses. I
appreciate that you think that this is worthless, and it may well be of
marginal or even no use. That’s why it’s labelled as an experiment.

>>> I haven't been to the relevant IETF sessions myself, but assume that
>>> https://twitter.com/sleevi_/status/509954820300472320 is true.
>>
>> That’s pure FUD as far as I can tell.
>
> How so, given that
> http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
> exists and explicitly seeks to defeat the defense that TLS traffic
> arising from https and TLS traffic arising from already-upgraded OE
> http look pretty much alike to an operator?

That is a direct attempt to water down the protections of the
opportunistic security model to make MitM feasible by signaling its use.
That received a strongly negative reaction, and E/// and operators have
since distanced themselves from that line of solution.

> What about
> http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
> ? What about
> http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
> ?

Opportunistic security is a small part of our response to that. I don’t
understand why this is difficult to comprehend. A simple server upgrade
with no administrator intervention is very easy, and the protection that
affords is, for small-time attacks like these, what I consider to be an
effective countermeasure.

>> I’ve been talking regularly to operators and they are concerned about
>> opportunistic security. It’s less urgent for them given that we are
>> the only ones who have announced an intent to deploy it (and its
>> current status).
>
> Concerned in what way? (Having concerns suggests they aren't seeking to
> merely carry IP packets unaltered.)

Concerned in the same way that they are about all forms of increasing use
of encryption. They want in. To enhance content. To add services. To
collect information. To decorate traffic to include markers for their
partners. To do all the things they are used to doing with cleartext
traffic. You suggest that they can just strip this stuff off if we add
it. It’s not that easy.
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Fri, Nov 14, 2014 at 10:51 AM, Martin Thomson m...@mozilla.com wrote:
>> How so, given that
>> http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
>> exists and explicitly seeks to defeat the defense that TLS traffic
>> arising from https and TLS traffic arising from already-upgraded OE
>> http look pretty much alike to an operator?
>
> That is a direct attempt to water down the protections of the
> opportunistic security model to make MitM feasible by signaling its
> use. That received a strongly negative reaction and E/// and operators
> have since distanced themselves from that line of solution.

Seems to be an indication of what some operators want nonetheless.

>> What about
>> http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
>> ? What about
>> http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
>> ?
>
> Opportunistic security is a small part of our response to that. I don’t
> understand why this is difficult to comprehend. A simple server upgrade
> with no administrator intervention is very easy, and the protection
> that affords is, for small time attacks like these, what I consider to
> be an effective countermeasure.

The part that's hard to accept is: Why is the countermeasure considered
effective for attacks like these, when the level of how active the MITM
needs to be to foil the countermeasure (by inhibiting the upgrade by
messing with the initial HTTP/1.1 headers) is less than the level of
active these MITMs already are when they inject new HTTP/1.1 headers or
inject JS into HTML?

>>> I’ve been talking regularly to operators and they are concerned
>>> about opportunistic security. It’s less urgent for them given that
>>> we are the only ones who have announced an intent to deploy it (and
>>> its current status).
>>
>> Concerned in what way? (Having concerns suggests they aren't seeking
>> to merely carry IP packets unaltered.)
>
> Concerned in the same way that they are about all forms of increasing
> use of encryption. They want in. To enhance content. To add services.
> To collect information. To decorate traffic to include markers for
> their partners. To do all the things they are used to doing with
> cleartext traffic. You suggest that they can just strip this stuff off
> if we add it. It’s not that easy.

Why isn't stripping the HTTP/1.1 headers that signal the upgrade that
easy? Rendering the upgrade-signaling headers unrecognizable, without
stretching or contracting the bytes, seems easier than adding HTTP
headers or adding JS.

--
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi wrote:
> The part that's hard to accept is: Why is the countermeasure considered
> effective for attacks like these, when the level of how active the MITM
> needs to be to foil the countermeasure (by inhibiting the upgrade by
> messing with the initial HTTP/1.1 headers) is less than the level of
> active these MITMs already are when they inject new HTTP/1.1 headers or
> inject JS into HTML?

There are a few pieces here -

1] I totally expect what you describe about signalling stripping to
happen to some subset of the traffic, but an active cleartext
carrier-based MITM is not the only opponent. Many of these systems are
tee'd, read-only dragnets. Especially in the less sophisticated
scenarios.

1a] Not all of the signalling happens in band, especially wrt mobility.

2] When the basic ciphertext technology is proven, I expect to see other
ways to signal its use. I casually mentioned a TOFU pin yesterday and you
were rightly concerned about pin fragility - but in this case the pin
needn't be hard-fail (and pin was a poor word choice) - it's an indicator
to try OE. That can be downgraded if you start actively resetting port
443, sure - but that's a much bigger step to take, and one that may
result in generally giving users of your network a bad experience. And if
you go down this road you find all manner of other interesting ways to
bootstrap OE - especially if what you are bootstrapping is an
opportunistic effort that looks a lot like https on the wire: gossip
distribution of known origins, optimistic attempts on your top-N frecency
sites, DNS(sec?).. even h2 https sessions can be used to carry
http-schemed traffic (the h2 protocol finally explicitly carries the
scheme as part of the transaction instead of making all transactions on
the same connection carry the same scheme), which might be a very good
thing for folks with mixed content problems. Most of this can be explored
asynchronously at the cost of some plaintext usage in the interim. It's
opportunistic, after all.

There is certainly some cat and mouse here - as Martin says, it's really
just a small piece. I don't think of it as more than replacing some
plaintext with some encryption - that's not perfection, but I really do
think it's significant.
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
I’m not all that enthused by the blow-by-blow here. Nonetheless, there
are some distortions to correct.

On 2014-11-12, at 20:23, Henri Sivonen hsivo...@hsivonen.fi wrote:
> That's true if the server presents a publicly trusted cert for the
> wrong hostname (as is common if you try to see what happens if you
> change the scheme for a random software download URL to https and get a
> cert for Akamai--I'm mentioning Akamai because of the [unmentioned on
> the draft] affiliation of the other author). However, if the site
> presents a self-signed cert, the MITM could check the chain and treat
> self-signed certs differently from publicly trusted certs. (While
> checking the cert chain takes more compute, it's not outlandish
> considering that an operator bothers to distinguish OpenVPN from
> IMAP-over-TLS on the same port per
> https://grepular.com/Punching_through_The_Great_Firewall_of_TMobile .)

This is true for TLS <= 1.2, but will not be true for TLS 1.3.
Certificates are available to a MitM currently, but in future versions,
that sort of attack will be detectable.

> But even so, focusing on what the upgraded sessions look like is rather
> beside the point when it's trivial for the MITM to inhibit the upgrade
> in the first place. In an earlier message to this thread, I talked
> about overwriting the relevant header in the initial HTTP/1.1 traffic
> with spaces. I was thinking too complexly. All it takes is changing one
> letter in the header name to make it unrecognized. In that case, the
> MITM doesn't even need to maintain the context of two adjacent TCP
> packets but can, with little risk of false positives, look for the full
> header string in the middle of the packet or a tail of at least half
> the string at the start of a packet or at least half the string at the
> end of a packet and change one byte to make the upgrade never
> happen--all on the level of looking at individual IP packets without
> bothering to have any cross-packet state.

Your argument relies on there being no prior session that was not
intermediated by the attacker. I’ll concede that this is a likely
situation for a large number of clients, and not all servers will opt for
protection against that school of attack.

> I haven't been to the relevant IETF sessions myself, but assume that
> https://twitter.com/sleevi_/status/509954820300472320 is true.

That’s pure FUD as far as I can tell. I’ve been talking regularly to
operators and they are concerned about opportunistic security. It’s less
urgent for them given that we are the only ones who have announced an
intent to deploy it (and its current status).
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
I haven't really waded into this iteration of the discussion because
there isn't really new information to talk about. But I know everyone is
acting in good faith, so I'll offer my pov again. We're all trying to
serve our users and the Internet - same team :)

OE means ciphertext is the new plaintext. This is a transport detail.

Of course https:// is more secure than http:// of any form. This isn't
controversial - OE proponents believe this too :) It's a matter of
opinion exactly how common, comprehensive, and easy downgrade to
cleartext will be in practice - but it's trivially easy to show an
existence proof. Therefore, given the choice, you should be running
https://. Full stop.

However, in my opinion https deployment is not trivially easy to do all
the time and in all environments, and as a result TLS-based ciphertext is
an improvement on the de facto cleartext alternative. Particularly at
scale, using forward-secret suites mixed in with https:// traffic, it
creates an obstacle to dragnet interception. TOFU pinning is another
possibility that helps, especially wrt mobility. It's a matter of opinion
how big of an obstacle that is. I get feedback from people that I know
are collecting cleartext right now who don't want us to do it. That's
encouraging.

https:// has seen very welcome growth - but Ilya's post is a bit generous
in its implications on that front, and even the most optimistic reading
leaves tons of plaintext http://. If you measure by HTTP transaction, you
get an amount of https in the mid 50%s (this is closer to Ilya's
approach), and our metrics match the post about Chrome. However, if you
measure by page load or by origin, you get numbers much, much lower, with
slower growth. (We have metrics on the former - origin numbers are based
on web crawlers.) If you measure by byte count, you get ridiculously low
amounts of https. I want to see those numbers higher, we all do, but I
also think that bringing some transport confidentiality to the fraction
you can't bring over to the https:// camp is a useful thing for the
confidentiality of our users, and it doesn't ignore the reality of the
situation.

There are lots of reasons people don't run https://. The most unfortunate
one, which OE doesn't help with in any sense, is that this choice is
wholly in the hands of the content operator while the cost of
confidentiality loss is borne at least partially (and perhaps completely)
by the user. But that's not the only reason - mixed content, cert
management, application integration, SNI problems, PKI distrust, OCSP
risk, and legacy markup are just various parts of the story of why some
content owners don't deploy https://. OE can help with those - those
sites aren't run by folks with google.com-like resources to overcome them
all. There are other barriers OE can't help with, such as hosting premium
charges.

It's a false dichotomy to suggest we can't work on mitigations to those
problems to encourage https and also provide OE for scenarios that can't
be satisfied that way. This isn't hypothetical - we absolutely are both
walking and chewing gum at the same time already on this front. I don't
really believe many in the position to choose between OE and https would
choose OE - I expect it to be used by the folks that can't quite get
there.

OE doesn't change the semantics of web security, so if I'm wrong about
OE's relationship to https transition rates, we can disable it - it has
no semantic meaning to worry about compatibility with: ciphertext is the
new plaintext, but the web (security and other) model is completely
unchanged, as this is a transport detail. Reversion is effectively a
safety valve that I would have no problem using if it were necessary.

Thanks.
-Patrick
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Thu, Nov 13, 2014 at 8:29 PM, Martin Thomson m...@mozilla.com wrote:
> This is true for TLS <= 1.2, but will not be true for TLS 1.3.
> Certificates are available to a MitM currently, but in future versions,
> that sort of attack will be detectable.

Great. I was unaware of this. (This is particularly nice to hear after
the move from NPN to ALPN going the other way.)

> Your argument relies on there being no prior session that was not
> intermediated by the attacker. I'll concede that this is a likely
> situation for a large number of clients, and not all servers will opt
> for protection against that school of attack.

What protection are you referring to? The draft has only this:

   Once a server has indicated that it will support authenticated TLS, a
   client MAY use key pinning [I-D.ietf-websec-key-pinning] or any other
   mechanism that would otherwise be restricted to use with HTTPS URIs,
   provided that the mechanism can be restricted to a single HTTP origin.

...which seems too vague to lead to interoperable implementations.

Also, it seems that the set of sites that have the operational maturity
to deploy key pinning, but for whom provisioning publicly trusted certs
is too hard/expensive, is going to be a very small set--likely a handful
of CDNs who haven't yet responded to the competitive pressure from
Cloudflare to buy publicly trusted certs wholesale but will eventually
have to anyway.

>> I haven't been to the relevant IETF sessions myself, but assume that
>> https://twitter.com/sleevi_/status/509954820300472320 is true.
>
> That's pure FUD as far as I can tell.

How so, given that
http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01 exists
and explicitly seeks to defeat the defense that TLS traffic arising from
https and TLS traffic arising from already-upgraded OE http look pretty
much alike to an operator?

What about
http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
? What about
http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
?

> I've been talking regularly to operators and they are concerned about
> opportunistic security. It's less urgent for them given that we are the
> only ones who have announced an intent to deploy it (and its current
> status).

Concerned in what way? (Having concerns suggests they aren't seeking to
merely carry IP packets unaltered.)

On Thu, Nov 13, 2014 at 10:08 PM, Patrick McManus mcma...@ducksong.com wrote:
> Of course https:// is more secure than http:// of any form. This isn't
> controversial - OE proponents believe this too :) Its a matter of
> opinion exactly how common, comprehensive, and easy downgrade to
> cleartext will be in practice - but its trivially easy to show an
> existence proof. Therefore, given the choice, you should be running
> https://. full stop.

This is obvious to everyone reading this mailing list. My concern is that
if the distinction between http and https gets fuzzier, people who want
encryption but who want to avoid ever having to pay a penny to a CA will
think that http+OE is close enough to https that they deploy http+OE
when, if http+OE didn't exist, they'd hold their nose, pay a few dollars
to a CA and deploy https with a publicly trusted cert (now that there's
more awareness of the need for encryption).

> However, in my opinion https deployment is not trivially easy to do all
> the time and in all environments and as a result tls based ciphertext
> is an improvement on the defacto cleartext alternative.

OE is a strict improvement over cleartext only if the existence of OE
doesn't cause sites that, in the absence of OE, would have migrated to
https in the next couple of years to migrate only to OE. That is, things
that are technically improvements can still be distractions that harm the
deployment of the further improvements that are really important (in this
case, real https). OTOH, point 8 at
http://open.blogs.nytimes.com/2014/11/13/embracing-https/ suggests that
holding better performance hostage works as a way to drive https
adoption.

> Particularly at scale using forward-secret suites mixed in with
> https:// traffic it creates an obstacle to dragnet interception.

If the upgrade takes place, yes.

> tofu pinning is another possibility that helps especially wrt mobility.

TOFU pinning seems rather hand-wavy at this point, so I think it's not
well enough defined to base assessments of the merit of http+OE on. Also,
if TOFU pinning for http+OE existed, it would mean that server admins who
deploy http+OE have to care about key management in order to avoid TOFU
failures arising from random rekeying. This would bring the deployability
concerns of http+OE even closer to those of real https, which would make
it even sillier not to just do the real thing. Specifically, https+HSTS
requires you to:

* Configure the server to do TLS.
* Bear the performance burden of the server doing TLS.
* Add a header to
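(For scale, the HSTS opt-in being enumerated above is a single response
header. A minimal sketch, assuming an nginx server block that already
terminates TLS; the max-age value is an arbitrary illustrative choice of
one year:

    # nginx: tell browsers to use https for this host from now on
    add_header Strict-Transport-Security "max-age=31536000";

Compared with OE's alternative-service advertisement, the marginal cost
once TLS is working is roughly one configuration line.)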
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
> The whole line of argumentation that web browsers and servers should be
> taking advantage of opportunistic encryption is explicitly informed by
> what's actually happening elsewhere. Because what's *actually*
> happening is an overly-broad dragnet of personal information by a wide
> variety of both private and governmental agencies -- activities that
> would be prohibitively expensive in the face of opportunistic
> encryption.

ISPs are doing it already, it turns out. Governments getting to ISPs has
already happened. I think continuing to support opportunistic encryption
in Firefox and the IETF is harmful to our mission.

> Google's laser focus on preventing active attackers to the exclusion of
> any solution that thwarts passive attacks is a prime example of
> insisting on a perfect solution, resulting instead in substantial
> deployments of nothing. They're naïvely hoping that finding just the
> right carrot will somehow result in mass adoption of an approach that
> people have demonstrated, with fourteen years of experience,
> significant reluctance to deploy universally.

Where are you getting your data from?
https://plus.google.com/+IlyaGrigorik/posts/7VSuQ66qA3C shows a very
different view of what's happening.

--
https://annevankesteren.nl/
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Nov 12, 2014, at 4:35 AM, Anne van Kesteren ann...@annevk.nl wrote:
> On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
>> The whole line of argumentation that web browsers and servers should
>> be taking advantage of opportunistic encryption is explicitly informed
>> by what's actually happening elsewhere. Because what's *actually*
>> happening is an overly-broad dragnet of personal information by a wide
>> variety of both private and governmental agencies -- activities that
>> would be prohibitively expensive in the face of opportunistic
>> encryption.
>
> ISPs are doing it already it turns out. Governments getting to ISPs has
> already happened. I think continuing to support opportunistic
> encryption in Firefox and the IETF is harmful to our mission.

You're missing Adam's point. From the attacker's perspective,
opportunistic sessions are indistinguishable from

>> Google's laser focus on preventing active attackers to the exclusion
>> of any solution that thwarts passive attacks is a prime example of
>> insisting on a perfect solution, resulting instead in substantial
>> deployments of nothing. They're naïvely hoping that finding just the
>> right carrot will somehow result in mass adoption of an approach that
>> people have demonstrated, with fourteen years of experience,
>> significant reluctance to deploy universally.
>
> Where are you getting your data from?
> https://plus.google.com/+IlyaGrigorik/posts/7VSuQ66qA3C shows a very
> different view of what's happening.

Be careful how you count. Ilya's stats are equivalent to the Firefox
HTTP_TRANSACTION_IS_SSL metric [1], which counts things like search-box
background queries; in particular, it greatly over-samples Google. A more
realistic number is HTTP_PAGELOAD_IS_SSL [2], for which HTTPS adoption is
still around 30%. That's consistent with other measures of how many sites
out there support HTTPS.

--Richard

[1] http://telemetry.mozilla.org/#filter=release%2F32%2FHTTP_TRANSACTION_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph
[2] http://telemetry.mozilla.org/#filter=release%2F32%2FHTTP_PAGELOAD_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Wed, Nov 12, 2014 at 11:12 PM, Richard Barnes rbar...@mozilla.com wrote:
> On Nov 12, 2014, at 4:35 AM, Anne van Kesteren ann...@annevk.nl wrote:
>> On Mon, Sep 15, 2014 at 7:56 PM, Adam Roach a...@mozilla.com wrote:
>>> The whole line of argumentation that web browsers and servers should
>>> be taking advantage of opportunistic encryption is explicitly
>>> informed by what's actually happening elsewhere. Because what's
>>> *actually* happening is an overly-broad dragnet of personal
>>> information by a wide variety of both private and governmental
>>> agencies -- activities that would be prohibitively expensive in the
>>> face of opportunistic encryption.
>>
>> ISPs are doing it already it turns out. Governments getting to ISPs
>> has already happened. I think continuing to support opportunistic
>> encryption in Firefox and the IETF is harmful to our mission.
>
> You're missing Adam's point. From the attacker's perspective,
> opportunistic sessions are indistinguishable from

I assume you meant to say "indistinguishable from https sessions", so the
MITM risks breaking some https sessions in a noticeable way if the MITM
tries to inject itself into an opportunistic session.

That's true if the server presents a publicly trusted cert for the wrong
hostname (as is common if you try to see what happens when you change the
scheme of a random software download URL to https and get a cert for
Akamai--I'm mentioning Akamai because of the [unmentioned on the draft]
affiliation of the other author). However, if the site presents a
self-signed cert, the MITM could check the chain and treat self-signed
certs differently from publicly trusted certs. (While checking the cert
chain takes more compute, it's not outlandish considering that an
operator bothers to distinguish OpenVPN from IMAP-over-TLS on the same
port per
https://grepular.com/Punching_through_The_Great_Firewall_of_TMobile .)

But even so, focusing on what the upgraded sessions look like is rather
beside the point when it's trivial for the MITM to inhibit the upgrade in
the first place. In an earlier message to this thread, I talked about
overwriting the relevant header in the initial HTTP/1.1 traffic with
spaces. I was thinking too complexly. All it takes is changing one letter
in the header name to make it unrecognized. In that case, the MITM
doesn't even need to maintain the context of two adjacent TCP packets but
can, with little risk of false positives, look for the full header string
in the middle of a packet, or a tail of at least half the string at the
start of a packet, or at least half the string at the end of a packet,
and change one byte to make the upgrade never happen--all on the level of
looking at individual IP packets, without bothering to have any
cross-packet state. This is not a theoretical concern. See
https://www.eff.org/deeplinks/2014/11/starttls-downgrade-attacks for an
analogous attack being carried out against email by ISPs.

If we kept http URLs strictly HTTP 1.1, it would be clear that if you
want the fast new stuff, you have to do confidentiality, integrity and
authenticity properly. Sites want the fast new stuff, so this would be an
excellent carrot. By offering an upgrade to unauthenticated TLS, people
both at our end and at the server end expend effort to support MITMable
encryption, which is bad in two ways: 1) that effort would be better
spent on proper https [i.e. provisioning certs properly, as far as the
sites are concerned; you already need a TLS setup for the opportunistic
stuff] and 2) it makes the line between the MITMable and the real thing
less clear, so people are likely to mistake the MITMable for the real
thing and feel less urgency to do the real thing.

I haven't been to the relevant IETF sessions myself, but assume that
https://twitter.com/sleevi_/status/509954820300472320 is true. If even
only some operators show a preference for opportunistic encryption over
real https, that alone should be a huge red flag that they intend to keep
MITMing what's MITMable. Therefore, we should allocate our finite
resources to pushing https to be better instead of diverting effort to
MITMable things.

--
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
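(To make the one-byte downgrade concrete, here is a hedged sketch of the
stateless corruption described above, in Python. The header name Alt-Svc
is an assumption about the upgrade signal; a real middlebox would operate
on raw packets, fix up the TCP checksum, and also handle a header name
split across two packets, as discussed:

    SIGNAL = b"Alt-Svc:"  # assumed upgrade-signalling header name

    def inhibit_upgrade(payload: bytes) -> bytes:
        """Corrupt one byte of the signalling header name, if present.

        The payload length is unchanged, so no TCP stream state needs
        rewriting; the header simply becomes unrecognizable.
        """
        i = payload.find(SIGNAL)
        if i == -1:
            return payload  # no whole signal in this packet
        mutated = bytearray(payload)
        mutated[i] = ord("X")  # "Xlt-Svc:" is not a recognized header
        return bytes(mutated)

Note that merely flipping case would not work, since HTTP header names
are case-insensitive; the byte has to become a different letter
entirely.)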
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, Sep 15, 2014 at 11:34 AM, Anne van Kesteren ann...@annevk.nl wrote:
> It seems very bad if those kind of devices won't use authenticated
> connections in the end. Which makes me wonder, is there some activity
> at Mozilla for looking into an alternative to the CA model?

What happened to serving certs over DNSSEC? If browsers supported that
well, it seems it has enough deployment on TLDs and registrars to be
usable by a large fraction of sites.
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Sun, Sep 21, 2014 at 1:14 PM, Aryeh Gregor a...@aryeh.name wrote: What happened to serving certs over DNSSEC? If browsers supported that well, it seems it has enough deployment on TLDs and registrars to be usable by a large fraction of sites. DNSSEC does not help with authentication of domains and establishing a secure communication channel as far as I know. Is there a particular proposal you are referring to? -- https://annevankesteren.nl/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
Pretty sure that what he's referring to is called DANE. It lets a domain holder assert a certificate or key pair, using DNSSEC to bind it to the domain instead of PKIX (or in addition to PKIX). https://tools.ietf.org/html/rfc6698 On Sep 21, 2014, at 8:01 AM, Anne van Kesteren ann...@annevk.nl wrote: On Sun, Sep 21, 2014 at 1:14 PM, Aryeh Gregor a...@aryeh.name wrote: What happened to serving certs over DNSSEC? If browsers supported that well, it seems it has enough deployment on TLDs and registrars to be usable by a large fraction of sites. DNSSEC does not help with authentication of domains and establishing a secure communication channel as far as I know. Is there a particular proposal you are referring to? -- https://annevankesteren.nl/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
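For concreteness, a sketch of the RFC 6698 matching step (the record values below are hypothetical, and DNSSEC validation of the record itself is assumed to have already happened):

  # DANE/TLSA per RFC 6698: the record lives at _port._proto.host and its
  # four fields (usage, selector, matching type, association data) say
  # what to compare against the certificate the server presented. Only
  # the comparison is sketched; chain-constraining usages are not modeled.
  import hashlib

  def tlsa_owner(host, port=443, proto="tcp"):
      return "_%d._%s.%s" % (port, proto, host)  # e.g. _443._tcp.example.com

  def tlsa_matches(usage, selector, mtype, assoc_data, cert_der, spki_der):
      data = cert_der if selector == 0 else spki_der  # 0 = full cert, 1 = SPKI
      if mtype == 0:
          digest = data                               # 0 = exact match
      elif mtype == 1:
          digest = hashlib.sha256(data).digest()      # 1 = SHA-256
      elif mtype == 2:
          digest = hashlib.sha512(data).digest()      # 2 = SHA-512
      else:
          return False
      return digest == assoc_data

A commonly cited deployment is usage 3 (DANE-EE), where the record pins the server's own certificate or key, with or without a PKIX chain behind it.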
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On 15/09/14 16:34, Anne van Kesteren wrote: It seems very bad if those kinds of devices won't use authenticated connections in the end. Which makes me wonder, is there some activity at Mozilla for looking into an alternative to the CA model? What makes you think that switching away from the CA model will significantly reduce the amount of crypto operations such devices have to do? Gerv ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Fri, Sep 12, 2014 at 6:07 PM, Trevor Saunders trev.saund...@gmail.com wrote: Do we really want all servers to have to authenticate themselves? On the level of DV, yes, I think. (I.e. the user has a good reason to believe that the [top-level] page actually comes from the host named in the location bar.) In most cases they probably should, but I suspect there are cases where you want to run a server, but have plausible deniability. I haven't gone looking for legal precedent, but it seems to me cryptographically signing material makes it much harder to reasonably believe a denial. It seems to me this concern would have more weight if you actually had found precedent of someone successfully repudiating what they've allegedly served on the grounds of the absence of authenticated https. (In general, the way things work is that the absence of cryptographic evidence doesn't create enough doubt. Whenever there is a scandal over a famous person's SMSs, those SMSs haven't been cryptographically signed...) Is it really the right call for the Web to let people get the performance characteristics without making them do the right thing with authenticity (and, therefore, integrity and confidentiality)? On the face of things, it seems to me we should be supporting HTTP/2 only with https URLs even if one buys Theodore Ts'o's reasoning about anonymous ephemeral Diffie–Hellman. The combination of https://twitter.com/sleevi_/status/509954820300472320 and http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/ is pretty alarming. I agree that's bad, but I tend to believe anonymous ephemeral Diffie–Hellman is good enough to deal with the Comcasts of the world. I agree that anonymous ephemeral Diffie–Hellman as the baseline would probably reduce ISP MITMing by making it more costly. My point is that with https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 , the baseline isn't anonymous ephemeral Diffie–Hellman but unencrypted HTTP 1.1. If a major American ISP has the capacity to inject some JS into HTTP 1.1 for all users, they definitely have the capacity to strip a header from HTTP 1.1 (to make the upgrade to HTTP/2 not take place) *and* inject some JS for all users. It would have a performance impact on those connections (the delta between HTTP 1.1 and HTTP/2), but it seems that you get to remain a major American ISP even if you are widely perceived as providing slow connections... (Note that ad injection can happen on the edge and the logic of having to perform operations on Internet exchange traffic volumes doesn't apply. Making a copy of all traffic on the edge is harder, since there's a need to move the copy somewhere from the edge. However, if the edge makes sure the connections never upgrade in order to keep doing HTTP 1.1 ad injection, then the connection is unupgraded at all hops, including the hops that are suitable for moving a copy elsewhere.) On Fri, Sep 12, 2014 at 7:06 PM, Martin Thomson m...@mozilla.com wrote: The view that encryption is expensive is a prevailing meme, and it’s certainly true that some sites have reasons not to want the cost of TLS, but the costs are tiny, and getting smaller (https://www.imperialviolet.org/2011/02/06/stillinexpensive.html). I will concede that certain outliers will exist where this marginal cost remains significant (Netflix, for example), but I don’t think that’s generally applicable. As the above post shows, it’s not that costly (even less on modern hardware). 
And HTTP/2 and TLS 1.3 will remove a lot of the performance concerns. Yeah, I think the best feature of https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is that anyone who deploys it loses the argument that they can't deploy https due to TLS being too slow (since they already deployed TLS--just not with publicly trusted certs). The current consensus view in the IETF (at least) is that the all-or-nothing approach has not done enough to materially improve security. It's worth noting that the historical data is from a situation where you have two alternatives: on one hand unencrypted and unauthenticated, and on the other hand encrypted and authenticated, and the latter is always slower (maybe not enough slower to truly matter technically, but measurably slower, so that anyone who ignores the magnitude can make a knee-jerk decision not to use the slower thing). What the Chrome folks suggest for HTTP/2 would give rise to a situation where your alternatives are still on one hand unencrypted and unauthenticated and on the other hand encrypted and authenticated *but* the latter is *faster*. So the performance argument is reversed compared to the historical data. What if the IETF consensus is based on an attribution error and the historical data is actually attributable to the speed difference (not its magnitude, but the perception that there is a difference)?
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, 15 Sep 2014, Henri Sivonen wrote: What the Chrome folks suggest for HTTP/2 would give rise to a situation where your alternatives are still on one hand unencrypted and unauthenticated and on the other hand encrypted and authenticated *but* the latter is *faster*. You mess up that reversal of the speed argument if you let unauthenticated be as fast as authenticated. In my view that is a very depressing argument. That's favouring *not* improving something just to make sure the other option runs faster in comparison. Shouldn't we strive to make the user experience better for all users, even those accessing HTTP sites? In a world with millions and billions of printers, fridges, TVs, set-top boxes, elevators, nannycams or whatever all using embedded web servers - the amount of certificate handling for all those devices to run and use fully authenticated HTTPS is enough to make a large share of them just not go there. With opp-sec we could still up the level and make pervasive monitoring of a lot of such network connections much more expensive. -- / daniel.haxx.se ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, Sep 15, 2014 at 10:24 AM, Daniel Stenberg dan...@haxx.se wrote: Shouldn't we strive to make the user experience better for all users, even those accessing HTTP sites? Well, the question is whether we want HTTP in the end. E.g. we are opting to not enable new powerful features such as service workers on them, and we also want the whole web to work offline (in theory). In a world with millions and billions of printers, fridges, TVs, set-top boxes, elevators, nannycams or whatever all using embedded web servers - the amount of certificate handling for all those devices to run and use fully authenticated HTTPS is enough to make a large share of them just not go there. With opp-sec we could still up the level and make pervasive monitoring of a lot of such network connections much more expensive. It seems very bad if those kinds of devices won't use authenticated connections in the end. Which makes me wonder, is there some activity at Mozilla for looking into an alternative to the CA model? -- http://annevankesteren.nl/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, Sep 15, 2014 at 11:24 AM, Daniel Stenberg dan...@haxx.se wrote: On Mon, 15 Sep 2014, Henri Sivonen wrote: What the Chrome folks suggest for HTTP/2 would give rise to a situation where your alternatives are still on one hand unencrypted and unauthenticated and on the other hand encrypted and authenticated *but* the latter is *faster*. You mess up that reversal of the speed argument if you let unauthenticated be as fast as authenticated. In my view that is a very depressing argument. That's favouring *not* improving something just to make sure the other option runs faster in comparison. Shouldn't we strive to make the user experience better for all users, even those accessing HTTP sites? I think the primary way for making the experience better for users currently accessing http sites should be getting the sites to switch to https so that subsequently people accessing those sites would be accessing https sites. That way, the user experience not only benefits from HTTP/2 performance but also from the absence of ISP-injected ads or other MITMing. In a world with millions and billions of printers, fridges, TVs, set-top boxes, elevators, nannycams or whatever all using embedded web servers - the amount of certificate handling for all those devices to run and use fully authenticated HTTPS is enough to make a large share of them just not go there. It seems like a very bad idea not to have authenticated security for devices that provide access to privacy-sensitive data (nannycams, fridges, DVRs) or that allow intruders to effect unwanted physical-world behaviors (printers, elevators). For devices like this that are exposed to the public network, I think it would be worthwhile to make it feasible for dynamic DNS providers to run a publicly trusted sub-CA that's constrained to issuing certs only to hosts under their domain (i.e. not allowed to sign all names on the net). For devices that aren't exposed to the public network, maybe we should make the TOFU interstitial for self-signed certs different for RFC1918 IP addresses or at least 192.168.*.*. (Explain that if you are on your home network and accessing an appliance for the first time, it's OK and expected to create an exception to pin that particular public key for that IP address. However, if you are on a hotel or coffee shop network, don't.) -- Henri Sivonen hsivo...@hsivonen.fi https://hsivonen.fi/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
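A sketch of the address test such a softened interstitial would hinge on (the policy names are hypothetical; only the RFC 1918 check itself comes from the suggestion above):

  # Relax the self-signed TOFU interstitial only when the host is at an
  # RFC 1918 (private) address; keep the hard warning everywhere else.
  import ipaddress

  RFC1918 = [ipaddress.ip_network(n)
             for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

  def interstitial_kind(addr):
      ip = ipaddress.ip_address(addr)
      if any(ip in net for net in RFC1918):
          return "offer-tofu-pin"   # home appliance case: offer to pin the key
      return "hard-warning"         # public host presenting an untrusted cert

  # interstitial_kind("192.168.1.20") -> "offer-tofu-pin"

As the message notes, the hard part isn't the check but explaining to the user when pinning is appropriate (home network) and when it isn't (hotel or coffee shop network).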
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Sep 15, 2014, at 5:11 AM, Henri Sivonen hsivo...@hsivonen.fi wrote: On Mon, Sep 15, 2014 at 11:24 AM, Daniel Stenberg dan...@haxx.se wrote: On Mon, 15 Sep 2014, Henri Sivonen wrote: What the Chrome folks suggest for HTTP/2 would give rise to a situation where your alternatives are still on one hand unencrypted and unauthenticated and on the other hand encrypted and authenticated *but* the latter is *faster*. You mess up that reversal of the speed argument if you let unauthenticated be as fast as authenticated. In my view that is a very depressing argument. That's favouring *not* improving something just to make sure the other option runs faster in comparison. Shouldn't we strive to make the user experience better for all users, even those accessing HTTP sites? I think the primary way for making the experience better for users currently accessing http sites should be getting the sites to switch to https so that subsequently people accessing those sites would be accessing https sites. That way, the user experience not only benefits from HTTP/2 performance but also from the absence of ISP-injected ads or other MITMing. Just turn on HTTPS is not as trivial as you seem to think. For example, mixed content blocking means that you can't upgrade until all of your external dependencies have too. --Richard In a world with millions and billions of printers, fridges, TVs, set-top boxes, elevators, nannycams or whatever all using embedded web servers - the amount of certificate handling for all those devices to run and use fully authenticated HTTPS is enough to make a large share of them just not go there. It seems like a very bad idea not to have authenticated security for devices that provide access to privacy-sensitive data (nannycams, fridges, DVRs) or that allow intruders to effect unwanted physical-world behaviors (printers, elevators). For devices like this that are exposed to the public network, I think it would be worthwhile to make it feasible for dynamic DNS providers to run a publicly trusted sub-CA that's constrained to issuing certs only to hosts under their domain (i.e. not allowed to sign all names on the net). For devices that aren't exposed to the public network, maybe we should make the TOFU interstitial for self-signed certs different for RFC1918 IP addresses or at least 192.168.*.*. (Explain that if you are on your home network and accessing an appliance for the first time, it's OK and expected to create an exception to pin that particular public key for that IP address. However, if you are on a hotel or coffee shop network, don't.) -- Henri Sivonen hsivo...@hsivonen.fi https://hsivonen.fi/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, Sep 15, 2014 at 5:59 PM, Richard Barnes rbar...@mozilla.com wrote: On Sep 15, 2014, at 5:11 AM, Henri Sivonen hsivo...@hsivonen.fi wrote: I think the primary way for making the experience better for users currently accessing http sites should be getting the sites to switch to https so that subsequently people accessing those sites would be accessing https sites. That way, the user experience not only benefits from HTTP/2 performance but also from the absence of ISP-injected ads or other MITMing. Just turn on HTTPS is not as trivial as you seem to think. For example, mixed content blocking means that you can't upgrade until all of your external dependencies have too. I don't think anyone is suggesting it's trivial. We're saying that a) it's necessary if you want to prevent MITM, ad-injection, etc. and b) it's required for new features such as service workers (which in turn are required if you want to make your site work offline). At the moment setting up TLS is quite a bit of hassle and requires dealing with CAs to get a certificate. But given that there's no way around TLS becoming the bottom line for interesting new features in browsers, we need to start looking into how we can simplify that process. Looking into how we can prolong the non-TLS infrastructure should have much less priority I think. Google seems to have the right trade off and the IETF consensus seems to be unaware of what is happening elsewhere. -- https://annevankesteren.nl/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Mon, Sep 15, 2014 at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote: On Mon, Sep 15, 2014 at 5:59 PM, Richard Barnes rbar...@mozilla.com wrote: On Sep 15, 2014, at 5:11 AM, Henri Sivonen hsivo...@hsivonen.fi wrote: I think the primary way for making the experience better for users currently accessing http sites should be getting the sites to switch to https so that subsequently people accessing those sites would be accessing https sites. That way, the user experience not only benefits from HTTP/2 performance but also from the absence of ISP-injected ads or other MITMing. Just turn on HTTPS is not as trivial as you seem to think. For example, mixed content blocking means that you can't upgrade until all of your external dependencies have too. I don't think anyone is suggesting it's trivial. We're saying that a) it's necessary if you want to prevent MITM, ad-injection, etc. and b) it's required for new features such as service workers (which in turn are required if you want to make your site work offline). At the moment setting up TLS is quite a bit of hassle and requires dealing with CAs to get a certificate. But given that there's no way around TLS becoming the bottom line for interesting new features in browsers, we need to start looking into how we can simplify that process. Looking into how we can prolong the non-TLS infrastructure should have much less priority I think. I'm not really sure what's being debated here. There seem to be several questions, each of which has both a standards and an implementation answer.
- Should there be HTTP/2 w/o authenticated TLS (i.e., HTTPS)? [Standards answer: yes. Chrome answer: no. Firefox answer: no HTTP/2 w/o TLS, but support opportunistic unauthenticated TLS.]
- Should there be new Web features on non-HTTPS origins? Specifically:
* ServiceWorkers [Standards answer: HTTPS only. Chrome/Firefox answer: same.]
* WebCrypto [Standards answer: yes. Google answer: no. Firefox answer: yes.]
* gUM [Standards answer: yes. Chrome/Firefox answer: same.]
Generally, I think it's useful to distinguish between settings where TLS is especially necessary for security reasons (e.g., gUM persistent permissions) and those where it's merely desirable as part of a general raising of the security bar (arguably gUM). It seems like much of the debate about WebCrypto is where it falls in this taxonomy. Google seems to have the right trade off and the IETF consensus seems to be unaware of what is happening elsewhere. I don't think it's clear that Google has a general position, seeing as they are doing gUM without HTTPS. I'd also be interested in what is happening elsewhere that you think the IETF consensus is unaware of. Maybe I too am unaware of it. Perhaps you could enlighten me? -Ekr ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On 9/15/14 11:08, Anne van Kesteren wrote: Google seems to have the right trade off and the IETF consensus seems to be unaware of what is happening elsewhere. You're confused. The whole line of argumentation that web browsers and servers should be taking advantage of opportunistic encryption is explicitly informed by what's actually happening elsewhere. Because what's *actually* happening is an overly-broad dragnet of personal information by a wide variety of both private and governmental agencies -- activities that would be prohibitively expensive in the face of opportunistic encryption. Google's laser focus on preventing active attackers to the exclusion of any solution that thwarts passive attacks is a prime example of insisting on a perfect solution, resulting instead in substantial deployments of nothing. They're naïvely hoping that finding just the right carrot will somehow result in mass adoption of an approach that people have demonstrated, with fourteen years of experience, significant reluctance to deploy universally. This is something far worse than being simply unaware of what's happening elsewhere: it's an acknowledgement that pervasive passive monitoring is taking place, and a conscious decision not to care. -- Adam Roach Principal Platform Engineer a...@mozilla.com +1 650 903 0800 x863 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Fri, Sep 12, 2014 at 1:55 AM, Henri Sivonen hsivo...@hsivonen.fi wrote: It only addresses the objection to https that obtaining, provisioning and replacing certificates is too expensive. Related concepts are at the core of why I'm going to give Opportunistic Security a try with http/2. The issues you cite are real issues in practice, but they become magnified in other environments where the PKI doesn't apply well (e.g. behind firewalls, in embedded devices, etc.), and then, perhaps most convincingly for me, there remains a lot of legacy web content that can't easily migrate to the vanilla https:// scheme we all want them on (e.g. third-party dependencies or SNI dependencies), and this is a compatibility measure for them. Personally I expect any failure mode here will be that nobody uses it, not that it drives out https. But establishment is all transparent to the web security model and asynchronous, so if that does happen we can easily remove support. The potential upside is that a lot of http:// traffic will be encrypted and protected against passive monitoring. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Fri, Sep 12, 2014 at 08:55:51AM +0300, Henri Sivonen wrote: On Thu, Sep 11, 2014 at 9:00 PM, Richard Barnes rbar...@mozilla.com wrote: On Sep 11, 2014, at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote: On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes rbar...@mozilla.com wrote: Most notably, even over non-secure origins, application-layer encryption can provide resistance to passive adversaries. See https://twitter.com/sleevi_/status/509723775349182464 for a long thread on Google's security people not being particularly convinced by that line of reasoning. Reasonable people often disagree in their cost/benefit evaluations. As Adam explains much more eloquently, the Google security team has had an all-or-nothing attitude on security in several contexts. For example, in the context of HTTP/2, Mozilla and others have been working to make it possible to send http-schemed requests over TLS, because we think it will result in more of the web getting some protection. It's worth noting, though, that anonymous ephemeral Diffie–Hellman* as the baseline (as advocated in http://www.ietf.org/mail-archive/web/ietf/current/msg82125.html ) and unencrypted as the baseline with a trivial indicator to upgrade to anonymous ephemeral Diffie–Hellman (as https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 ) are very different things. If the baseline was that there's no unencrypted mode and every connection starts with anonymous ephemeral Diffie–Hellman, a passive eavesdropper would never see content and to pervasively monitor content, the eavesdropper would have to not only have the capacity to compute Diffie–Hellman for each connection handshake but would also have to maintain state about the symmetric keys negotiated for each connection and keep decrypting and re-encrypting data for the duration of each connection. This might indeed lead to the cost outcomes that Theodore Ts'o postulates. https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is different. A passive eavesdropper indeed doesn't see content after the initial request/response pair, but to see all content, the level of "active" that the eavesdropper needs to upgrade to is pretty minimal. To continue to see content, all the MITM needs to do is to overwrite the relevant HTTP headers with space (0x20) bytes. There's no need to maintain state beyond dealing with one of those headers crossing a packet boundary. There's no need to adjust packet sizes. There's no compute or state maintenance requirement for the whole duration of the connection. I have a much easier time believing that anonymous ephemeral Diffie–Hellman as the true baseline would make a difference in terms of pervasive monitoring, but I have a much more difficult time believing that an opportunistic encryption solution that can be defeated by overwriting some bytes with 0x20 with minimal maintenance of state would make a meaningful difference. Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 has the performance overhead of TLS, so it doesn't really address the "TLS takes too much compute power" objection to https, which is the usual objection from big sites that might particularly care about the performance carrot of HTTP/2. It only addresses the objection to https that obtaining, provisioning and replacing certificates is too expensive. (And that's getting less expensive with HTTP/2, since HTTP/2 clients support SNI and SNI makes the practice of having to get host names from seemingly unrelated domains certified together obsolete.) 
It seems to me that this undermines the performance carrot of HTTP/2 as a vehicle of moving the Web to https pretty seriously. It allows people to get the performance characteristics of HTTP/2 while still falling short of the last step of making the TLS connection properly authenticated. Do we really want all servers to have to authenticate themselves? In most cases they probably should, but I suspect there are cases where you want to run a server, but have plausible deniability. I haven't gone looking for legal precedent, but it seems to me cryptographically signing material makes it much harder to reasonably believe a denial. Is it really the right call for the Web to let people get the performance characteristics without making them do the right thing with authenticity (and, therefore, integrity and confidentiality)? On the face of things, it seems to me we should be supporting HTTP/2 only with https URLs even if one buys Theodore Ts'o's reasoning about anonymous ephemeral Diffie–Hellman. The combination of https://twitter.com/sleevi_/status/509954820300472320 and http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/ is pretty alarming. I agree that's bad, but I tend to believe anonymous ephemeral Diffie–Hellman is good enough to deal with the Comcasts of the world.
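To make the "anonymous ephemeral Diffie–Hellman" baseline concrete, here is a minimal unauthenticated exchange, using X25519 from the third-party cryptography package (the curve is an illustrative choice; as the footnote in the original message says, DH here means the general concept):

  # Anonymous ephemeral Diffie-Hellman: both sides derive the same shared
  # secret from fresh per-connection keys, but nothing binds either public
  # key to an identity. A passive eavesdropper is locked out; an active
  # MITM can simply run this exchange separately with each end.
  from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

  client_priv = X25519PrivateKey.generate()  # ephemeral: new key per connection
  server_priv = X25519PrivateKey.generate()

  client_secret = client_priv.exchange(server_priv.public_key())
  server_secret = server_priv.exchange(client_priv.public_key())
  assert client_secret == server_secret      # shared key, zero authentication

That asymmetry (passive attackers excluded, active ones merely inconvenienced) is exactly what the cost argument above turns on.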
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On 2014-09-11, at 22:55, Henri Sivonen hsivo...@hsivonen.fi wrote: Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 has the performance overhead of TLS, so it doesn't really address the "TLS takes too much compute power" objection to https, which is the usual objection from big sites that might particularly care about the performance carrot of HTTP/2. It only addresses the objection to https that obtaining, provisioning and replacing certificates is too expensive. (And that's getting less expensive with HTTP/2, since HTTP/2 clients support SNI and SNI makes the practice of having to get host names from seemingly unrelated domains certified together obsolete.) It seems to me that this undermines the performance carrot of HTTP/2 as a vehicle of moving the Web to https pretty seriously. It allows people to get the performance characteristics of HTTP/2 while still falling short of the last step of making the TLS connection properly authenticated. The view that encryption is expensive is a prevailing meme, and it’s certainly true that some sites have reasons not to want the cost of TLS, but the costs are tiny, and getting smaller (https://www.imperialviolet.org/2011/02/06/stillinexpensive.html). I will concede that certain outliers will exist where this marginal cost remains significant (Netflix, for example), but I don’t think that’s generally applicable. As the above post shows, it’s not that costly (even less on modern hardware). And HTTP/2 and TLS 1.3 will remove a lot of the performance concerns. I’ve seen it suggested a couple of times (largely by Google employees) that an opportunistic security option undermines HTTPS adoption. That’s hardly a testable assertion, and I think that Adam (Roach) explained the current preponderance of opinion there. The current consensus view in the IETF (at least) is that the all-or-nothing approach has not done enough to materially improve security. One reason that you missed for the -encryption draft is the problem with content migration. A great many sites have a lot of content with http:// origins that can’t easily be rewritten. And the restrictions on the Referer header field also mean that some resources can’t be served over HTTPS (the URL shortener is apparently the last hold-out for http:// at Twitter). There are options in -encryption for authentication that can be resistant to some active attacks. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Fri, Sep 12, 2014 at 6:06 PM, Martin Thomson m...@mozilla.com wrote: And the restrictions on the Referer header field also mean that some resources can’t be served over HTTPS (the URL shortener is apparently the last hold-out for http:// at Twitter). That is something that we should have fixed a long time ago. It's called <meta name=referrer> and is these days also part of CSP. -- http://annevankesteren.nl/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS
On 12/09/14 13:37, Anne van Kesteren wrote: That is something that we should have fixed a long time ago. It's called <meta name=referrer> and is these days also part of CSP. I'll forward that on to those involved. Thanks. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On 9/12/14 10:07, Trevor Saunders wrote: [W]hen it comes to the NSA we're pretty much just not going to be able to force everyone to use something strong enough they can't beat it. Not to get too far off onto this sidebar, but you may find the following illuminating; not just for potentially adjusting your perception of what the NSA can and cannot do (especially in the coming years), but as a cogent analysis of how even the thinnest veneer of security can temper intelligence agencies' overreach into collecting information about non-targets: http://justsecurity.org/7837/myth-nsa-omnipotence/ While not the thesis of the piece, a highly relevant conclusion the author draws is: [T]hose engineers prepared to build defenses against bulk collection should not be deterred by the myth of NSA omnipotence. That myth is an artifact of the post-9/11 era that may now be outdated in the age of austerity, when NSA will struggle to find the resources to meet technological challenges. (I'm hesitant to appeal to authority here, but I do want to point out the About the Author section as being important for understanding Marshall's qualifications to hold forth on these matters.) -- Adam Roach Principal Platform Engineer a...@mozilla.com +1 650 903 0800 x863 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)
On Thu, Sep 11, 2014 at 9:00 PM, Richard Barnes rbar...@mozilla.com wrote: On Sep 11, 2014, at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote: On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes rbar...@mozilla.com wrote: Most notably, even over non-secure origins, application-layer encryption can provide resistance to passive adversaries. See https://twitter.com/sleevi_/status/509723775349182464 for a long thread on Google's security people not being particularly convinced by that line of reasoning. Reasonable people often disagree in their cost/benefit evaluations. As Adam explains much more eloquently, the Google security team has had an all-or-nothing attitude on security in several contexts. For example, in the context of HTTP/2, Mozilla and others have been working to make it possible to send http-schemed requests over TLS, because we think it will result in more of the web getting some protection. It's worth noting, though, that anonymous ephemeral Diffie–Hellman* as the baseline (as advocated in http://www.ietf.org/mail-archive/web/ietf/current/msg82125.html ) and unencrypted as the baseline with a trivial indicator to upgrade to anonymous ephemeral Diffie–Hellman (as https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 ) are very different things. If the baseline was that there's no unencrypted mode and every connection starts with anonymous ephemeral Diffie–Hellman, a passive eavesdropper would never see content and to pervasively monitor content, the eavesdropper would have to not only have the capacity to compute Diffie–Hellman for each connection handshake but would also have to maintain state about the symmetric keys negotiated for each connection and keep decrypting and re-encrypting data for the duration of each connection. This might indeed lead to the cost outcomes that Theodore Ts'o postulates. https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is different. A passive eavesdropper indeed doesn't see content after the initial request/response pair, but to see all content, the level of "active" that the eavesdropper needs to upgrade to is pretty minimal. To continue to see content, all the MITM needs to do is to overwrite the relevant HTTP headers with space (0x20) bytes. There's no need to maintain state beyond dealing with one of those headers crossing a packet boundary. There's no need to adjust packet sizes. There's no compute or state maintenance requirement for the whole duration of the connection. I have a much easier time believing that anonymous ephemeral Diffie–Hellman as the true baseline would make a difference in terms of pervasive monitoring, but I have a much more difficult time believing that an opportunistic encryption solution that can be defeated by overwriting some bytes with 0x20 with minimal maintenance of state would make a meaningful difference. Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 has the performance overhead of TLS, so it doesn't really address the "TLS takes too much compute power" objection to https, which is the usual objection from big sites that might particularly care about the performance carrot of HTTP/2. It only addresses the objection to https that obtaining, provisioning and replacing certificates is too expensive. (And that's getting less expensive with HTTP/2, since HTTP/2 clients support SNI and SNI makes the practice of having to get host names from seemingly unrelated domains certified together obsolete.) 
It seems to me that this undermines the performance carrot of HTTP/2 as a vehicle of moving the Web to https pretty seriously. It allows people to get the performance characteristics of HTTP/2 while still falling short of the last step of making the TLS connection properly authenticated. Is it really the right call for the Web to let people get the performance characteristics without making them do the right thing with authenticity (and, therefore, integrity and confidentiality)? On the face of things, it seems to me we should be supporting HTTP/2 only with https URLs even if one buys Theodore Ts'o's reasoning about anonymous ephemeral Diffie–Hellman. The combination of https://twitter.com/sleevi_/status/509954820300472320 and http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/ is pretty alarming. * In this message, I mean the general concept of DH, not necessarily the original discrete log flavor. -- Henri Sivonen hsivo...@hsivonen.fi https://hsivonen.fi/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
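The space-overwrite variant described above can be sketched the same way; the header line here is an assumption modeled on the Alt-Svc mechanism this family of drafts builds on, not necessarily the -00 draft's exact syntax:

  # Blank the upgrade header's bytes with 0x20: nothing shifts, so packet
  # sizes, TCP sequence numbers and offsets all stay valid. A lenient
  # parser is assumed to skip the resulting whitespace-only line.
  UPGRADE = b'Alt-Svc: h2=":443"'  # hypothetical advertisement line

  def strip_upgrade(stream: bytes) -> bytes:
      i = stream.find(UPGRADE)
      if i == -1:
          return stream
      return stream[:i] + b" " * len(UPGRADE) + stream[i + len(UPGRADE):]

  response = (b"HTTP/1.1 200 OK\r\n"
              b'Alt-Svc: h2=":443"\r\n'
              b"Content-Length: 0\r\n\r\n")
  assert b"Alt-Svc" not in strip_upgrade(response)

As the later one-letter refinement shows, even this is more machinery than an attacker actually needs.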