Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-17 Thread Henri Sivonen
On Fri, Nov 14, 2014 at 8:00 PM, Patrick McManus mcma...@ducksong.com wrote:

 On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi
 wrote:

 The part that's hard to accept is: Why is the countermeasure
 considered effective against attacks like these, when the level of
 activity the MITM needs in order to foil the countermeasure (by
 inhibiting the upgrade by messing with the initial HTTP/1.1 headers)
 is less than the level of activity these MITMs already exhibit when
 they inject new HTTP/1.1 headers or inject JS into HTML?

 There are a few pieces here -
 1] I totally expect the signal stripping you describe to happen to
 some subset of the traffic, but an active cleartext carrier-based MITM
 is not the only opponent. Many of these systems are tee'd, read-only
 dragnets - especially the less sophisticated scenarios.

I agree that http+OE is effective in the case of mere read-only fiber
splitters when no hop on the way inhibits the upgrade. (The flipside
is, of course, that if you have an ISP inhibiting the upgrade as a
small-time attack to inject ads, a fiber split at another hop still
gets to see the un-upgraded traffic.)
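
(To make the stripping scenario concrete: a minimal Python sketch of
the cleartext Alt-Svc upgrade hint and how a header-rewriting MITM
inhibits it. The Alt-Svc value follows the spec; the surrounding
functions are hypothetical illustration, not Firefox code.)

def origin_response_headers():
    # Over cleartext http://, the server advertises an h2 alternative.
    return {
        "Content-Type": "text/html",
        # "Try HTTP/2 on port 443 of this host, remember for 24 hours."
        "Alt-Svc": 'h2=":443"; ma=86400',
    }

def mitm_rewrite(headers):
    # A MITM that already rewrites headers (e.g. to inject ads) needs
    # to drop exactly one more header to keep the traffic un-upgraded.
    headers.pop("Alt-Svc", None)
    return headers

def client_should_try_upgrade(headers):
    # An opportunistic client upgrades only if the hint survived.
    return "Alt-Svc" in headers

print(client_should_try_upgrade(mitm_rewrite(origin_response_headers())))
# -> False: the upgrade was inhibited without touching any page content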

 1a] not all of the signalling happens in band, especially wrt mobility.

The notion makes sense, yes: devices move between networks that change
the contents of IP packets and networks that deliver IP packets
unmodified, and an upgrade signal seen in the latter kind of network
gets remembered for use in the former kind. But this isn't really
about whether there exist some cases where OE works; it's about
whether OE distracts from https.

 2] When the basic ciphertext technology is proven, I expect to see other
 ways to signal its use.

 I casually mentioned a tofu pin yesterday and you were rightly concerned
 about pin fragility - but in this case the pin needn't be hard-fail (and
 pin was a poor word choice) - it's an indicator to try OE. That can be
 downgraded if you start actively resetting 443, sure - but that's a much
 bigger step to take, one that may result in generally giving users of
 your network a bad experience.

 And if you go down this road you find all manner of other interesting
 ways to bootstrap OE - especially if what you are bootstrapping is an
 opportunistic effort that looks a lot like https on the wire: gossip
 distribution of known origins, optimistic attempts on your top-N
 frecency sites, DNS (sec?)... even h2 https sessions can be used to
 carry http-schemed traffic (the h2 protocol finally explicitly carries
 the scheme as part of each transaction instead of making all
 transactions on the same connection carry the same scheme), which
 might be a very good thing for folks with mixed content problems.
 Most of this can be explored asynchronously at the cost of some
 plaintext usage in the interim. It's opportunistic, after all.

 There is certainly some cat and mouse here - as Martin says, it's
 really just a small piece. I don't think of it as more than replacing
 some plaintext with some encryption - that's not perfection, but I
 really do think it's significant.

I think the idea that there might be other signals is a bad sign: it's
a sign that incrementally patching up OE signaling will end up taking
more and more effort while still falling short of https, which is
already available for adoption, even in legacy browsers.

Also, it's a bad sign in the sense that some of the things you mention
as possibilities are problems in themselves: while DNSSEC-based
signaling to use encryption in a legacy protocol whose baseline is
unencrypted makes some sense for protocols where connection latency is
not an important part of the user experience, such as server-to-server
SMTP, it seems pretty clear that, with all the focus on initial
connection latency, browsers won't start making additional DNS
queries--especially ones that might fail thanks to
middleboxes--before connecting. (Though, I suppose when the encryption
is *opportunistic* anyway, you could query DNSSEC lazily and let the
first few HTTP requests go in the clear.) As for prescanning your
top-N frecency sites, that's a privacy leak in itself, since an
eavesdropper could tell what the top N are by looking at the DNS
traffic the browser generates (DNSSEC infamously not providing
confidentiality...). (Also, at least if you don't have a huge legacy
of third-party includes that would become mixed content, https+HSTS is
way easier to deploy than DNSSEC in terms of setting it up, keeping it
running without interruption, and not having middleboxes mess with
it.)
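
(To make the lazy-query idea concrete: a sketch in which the client
starts fetching in the clear immediately and runs the DNS-based
upgrade lookup off the critical path, upgrading later requests only if
an answer arrives. The lookup is a stand-in; no such record type is
being proposed here.)

import concurrent.futures, socket

def dns_upgrade_hint(host):
    # Stand-in for a DNSSEC-validated lookup of a hypothetical upgrade
    # record; a middlebox that breaks the query just leaves us in the
    # clear, which is tolerable when encryption is opportunistic.
    try:
        socket.getaddrinfo(host, 443)
        return True
    except OSError:
        return False

def fetch(host, path, encrypted):
    print(("https" if encrypted else "http") + "://" + host + path)

def opportunistic_session(host, paths):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        hint = pool.submit(dns_upgrade_hint, host)  # off critical path
        for path in paths:
            # The first few requests may go out in the clear while the
            # lookup is in flight; later ones upgrade if it succeeded.
            fetch(host, path, encrypted=hint.done() and hint.result())

opportunistic_session("example.com", ["/", "/style.css", "/app.js"])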

But I think the fundamental problem is still opportunity cost and
sapping the current momentum of https. The idea "this is effective
against read-only dragnets some of the time, so let's do it to improve
things even some of the time" might make sense if it were an action to
be taken just by the browser without the participation of server
admins and had no effect on how server admins perceive https in
Firefox 34 release date moving to Dec 1/2

2014-11-17 Thread Lawrence Mandel
The Firefox 34 release date will move out one week from Nov 25 to Dec 1/2. This 
change impacts Firefox Desktop, Firefox for Android, Firefox ESR, and 
Thunderbird. The purpose of this change is to allow for an additional week of 
stabilization during the 34 cycle.

Details of the change:
- Release date change from Nov 25 to Dec 1/2 (need to determine the date that 
works best given the work week)
- Merge date change from Tue, Nov 25 to Fri, Nov 28
- Two additional desktop betas (10 and 11) will be added to the calendar this 
week on our usual beta build schedule (build Mon and Thu, release Tue and Fri)
- One additional mobile beta (beta 11) will be added to the schedule. Note that 
mobile beta 10 will go to build (gtb) on schedule on Mon. Mobile beta 11 will 
gtb on Thu with desktop in order to be ready early the following week.
- RC builds will happen on Mon, Nov 24

Note that we are effectively moving into the 34 Beta cycle an extra week that 
we had previously added to the 35 Beta cycle. As a result, 35 will have a 
7-week Aurora cycle instead of a 7-week Beta cycle.

Follow-up questions to dev-planning please.

Lawrence


Test Informant Report - Week ending Nov 15

2014-11-17 Thread Test Informant

Test Informant report for 2014-11-15.

State of test manifests at revision a52bf59965a0.
Using revision d380166816dd as a baseline for comparisons.
Showing tests enabled or disabled between 2014-11-08 and 2014-11-15.

87% of tests across all suites and configurations are enabled.

Summary
---
suite                         - enabled↑/disabled↓ - % enabled
marionette                    - ↑0↓0    - 92%
mochitest-a11y                - ↑0↓0    - 99%
mochitest-browser-chrome      - ↑62↓32  - 94%
mochitest-browser-chrome-e10s - ↑156↓4  - 46%
mochitest-chrome              - ↑40↓0   - 97%
mochitest-plain               - ↑223↓70 - 86%
mochitest-plain-e10s          - ↑58↓40  - 79%
xpcshell                      - ↑84↓0   - 92%

Full Report
---
http://brasstacks.mozilla.com/testreports/weekly/2014-11-15.informant-report.html




Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-17 Thread voracity
On Friday, November 14, 2014 6:25:43 PM UTC+11, Henri Sivonen wrote:
 This is obvious to everyone reading this mailing list. My concern is
 that if the distinction between http and https gets fuzzier, people
 who want encryption but who want to avoid ever having to pay a penny
 to a CA will think that http+OE is close enough to https that they
 deploy http+OE when, if http+OE didn't exist, they'd hold their nose,
 pay a few dollars to a CA and deploy https with a publicly trusted
 cert (now that there's more awareness of the need for encryption).

Could I just interject at this point (while apologising for my general 
rudeness and lack of technical security knowledge)?

The issue isn't that people are cheapskates unwilling to part with 'a few 
dollars'. The issue is that transaction costs 
(http://en.wikipedia.org/wiki/Transaction_cost) can be crippling.

Another problem is that the whole CA system is equivalent to a walled garden, 
in which a small set of 'trusted' individuals (ultimately) restrict or permit 
what everyone else can see. It hasn't caused problems in the history of the 
internet so far because a non-centralised alternative exists. (An alternative 
that is substantially more popular *precisely* *because* of transaction costs 
and independence.) This means it's currently a difficult environment for a few 
mega-CAs (and governments) to exercise any power. A CA-only internet changes 
that environment radically.

I'm unsurprised that Google doesn't think this is an issue. If they do 
something that (largely invisibly but substantially) raises the internet's 
barriers to entry (http://en.wikipedia.org/wiki/Barriers_to_entry), it reduces 
diversity on the internet but otherwise doesn't affect Google very much. 
(Actually, it may affect Google, since it will make glorified hosting services 
like Facebook more popular still relative to independent websites.) However, 
there is a special onus on Mozilla to think through *all* the social 
implications of what it does. Security is *never* a pure win; there is *always* 
a trade-off that society has to make, and I don't see that being considered 
properly here.