Re: WebCrypto for http:// origins

2014-09-12 Thread Henri Sivonen
On Thu, Sep 11, 2014 at 6:56 PM, Richard Barnes rbar...@mozilla.com wrote:
 No, WebCrypto on an http:// origin is not a replacement for TLS.

Addressing confusion on this point seems to be the main driver of
Chrome's restriction of Web Crypto to authenticated origins. Is there
any way to quantify in advance how damaging it would be to fail to
actively undermine this confusion? (I.e. how broadly will the
enablement of Web Crypto on http origins cause the non-adoption of TLS
with proper certs?)

BTW, regarding https://twitter.com/sleevi_/status/509939303045005313 ,
when running with e10s, is the NSS instance backing Web Crypto in the
content process separate from the NSS instance doing TLS in the
chrome process?

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: WebCrypto for http:// origins

2014-09-12 Thread helpcrypto helpcrypto
On Thu, Sep 11, 2014 at 6:58 PM, Adam Roach a...@mozilla.com wrote:

 When you force people into an all or nothing situation regarding
 security,


Nature finds its own way: since nothing was invented for doing JavaScript
cryptography, people started using Java applets. Java applets are much
less secure than the first option, but consumers NEED something to do
crypto on the web.

In other words: it's security vs. usability, but at least you have to give
them an option!


Re: Restricting gUM to authenticated origins only

2014-09-12 Thread Frederik Braun
On 11.09.2014 19:04, Anne van Kesteren wrote:
 On Thu, Sep 11, 2014 at 6:58 PM, Martin Thomson m...@mozilla.com wrote:
 On 2014-09-11, at 00:56, Anne van Kesteren ann...@annevk.nl wrote:
 Are we actually partitioning permissions per top-level browsing
 context or could they already accomplish this through an iframe?

 As far as I understand it, permissions are based on domain name only, they 
 don’t include scheme or port from the origin.  So it’s probably less 
 granular than that.
 
 That seems somewhat bad.
 

Yes.

AFAIU (I might be terribly wrong), this is because all of those
permissions (gUM, Geolocation, Offline Storage, Fullscreen) are using
the Permission manager we still have from the Popup Blocker/Cookie
Manager. This is domain based. Not origin :(
You can see this in about:permissions.



Re: Restricting gUM to authenticated origins only

2014-09-12 Thread Henri Sivonen
On Fri, Sep 12, 2014 at 12:39 PM, Frederik Braun fbr...@mozilla.com wrote:
 On 11.09.2014 19:04, Anne van Kesteren wrote:
 On Thu, Sep 11, 2014 at 6:58 PM, Martin Thomson m...@mozilla.com wrote:
 On 2014-09-11, at 00:56, Anne van Kesteren ann...@annevk.nl wrote:
 Are we actually partitioning permissions per top-level browsing
 context or could they already accomplish this through an iframe?

 As far as I understand it, permissions are based on domain name only, they 
 don’t include scheme or port from the origin.  So it’s probably less 
 granular than that.

 That seems somewhat bad.


 Yes.

 AFAIU (I might be terribly wrong), this is because all of those
 permissions (gUM, Geolocation, Offline Storage, Fullscreen) are using
 the Permission manager we still have from the Popup Blocker/Cookie
 Manager. This is domain based. Not origin :(
 You can see this in about:permissions.

This is shocking. Making the fundamental design bug of cookies affect
everything else is *really* bad. Is there a bug on file for fixing
this?

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Frederik Braun
On 12.09.2014 11:51, Henri Sivonen wrote:
 On Fri, Sep 12, 2014 at 12:39 PM, Frederik Braun fbr...@mozilla.com wrote:
 On 11.09.2014 19:04, Anne van Kesteren wrote:
 On Thu, Sep 11, 2014 at 6:58 PM, Martin Thomson m...@mozilla.com wrote:
 On 2014-09-11, at 00:56, Anne van Kesteren ann...@annevk.nl wrote:
 Are we actually partitioning permissions per top-level browsing
 context or could they already accomplish this through an iframe?

 As far as I understand it, permissions are based on domain name only, they 
 don’t include scheme or port from the origin.  So it’s probably less 
 granular than that.

 That seems somewhat bad.


 Yes.

 AFAIU (I might be terribly wrong), this is because all of those
 permissions (gUM, Geolocation, Offline Storage, Fullscreen) are using
 the Permission manager we still have from the Popup Blocker/Cookie
 Manager. This is domain based. Not origin :(
 You can see this in about:permissions.
 
 This is shocking. Making the fundamental design bug of cookies affect
 everything else is *really* bad. Is there a bug on file for fixing
 this?
 

Yes and no. I identified this while working on a thesis on the Same
Origin Policy in 2012 and filed this only for Geolocation in bug
https://bugzilla.mozilla.org/show_bug.cgi?id=812147.

But the general solution might be a permission manager rewrite, I suppose?


Re: Intent to implement: Touchpad event

2014-09-12 Thread Kershaw Chang
Hi Jonas,

That’s a good point.
I agree with you that we should only expose this to certified or
privileged apps.

Thanks and regards,
Kershaw

On 2014/9/12 at 1:22 AM, Jonas Sicking jo...@sicking.cc wrote:

Hi Kershaw,

Has there been any discussions with other browser vendors about this
API? Or is there an official standard somewhere for them?

If not, I don't think that we'll want to expose this to the web at
large. It would still be fine to expose to certified apps, or even to
expose to privileged apps under a permission.

Does this sound ok?

/ Jonas

On Wed, Sep 10, 2014 at 11:18 PM, Kershaw Chang kech...@mozilla.com
wrote:
 Hi All,

 Summary:
 A touchpad (trackpad) is a common feature on laptop computers. Currently,
 finger activity on the touchpad is translated into touch events and mouse
 events. However, the coordinates of touch events and mouse events are
 relative to the display [1]. For some use cases, we need to expose to the
 application absolute coordinates that are relative to the touchpad itself.
 That's why AOSP also defines another input source type for touchpads [2].
 The x and y coordinates of a touchpad event are relative to the size of
 the touchpad.

 Use case:
 Handwriting recognition applications would benefit from this touchpad
 event. OS X already supports handwriting input via the touchpad [3].

 Idea of implementation:
 The WebIDL for the touchpad event is like that of the touch event, except
 that the x and y coordinates are relative to the touchpad rather than the
 display.

 --- /dev/null
 +++ b/dom/webidl/Touchpad.webidl
 +
 +[Func=mozilla::dom::Touchpad::PrefEnabled]
 +interface Touchpad {
 +  readonly attribute long identifier;
 +  readonly attribute EventTarget? target;
 +  readonly attribute long touchpadX;
 +  readonly attribute long touchpadY;
 +  readonly attribute long radiusX;
 +  readonly attribute long radiusY;
 +  readonly attribute float rotationAngle;
 +  readonly attribute float force;
 +};

 --- /dev/null
 +++ b/dom/webidl/TouchpadEvent.webidl
 +
 +interface WindowProxy;
 +
 +[Func=mozilla::dom::TouchpadEvent::PrefEnabled]
 +interface TouchpadEvent : UIEvent {
 +  readonly attribute TouchpadList touches;
 +  readonly attribute TouchpadList targetTouches;
 +  readonly attribute TouchpadList changedTouches;
 +
 +  readonly attribute short   button;
 +  readonly attribute boolean altKey;
 +  readonly attribute boolean metaKey;
 +  readonly attribute boolean ctrlKey;
 +  readonly attribute boolean shiftKey;
 +
 +  [Throws]
 +  void initTouchpadEvent(DOMString type,
 + boolean canBubble,
 + boolean cancelable,
 + WindowProxy? view,
 + long detail,
 + short button,
 + boolean ctrlKey,
 + boolean altKey,
 + boolean shiftKey,
 + boolean metaKey,
 + TouchpadList? touches,
 + TouchpadList? targetTouches,
 + TouchpadList? changedTouches);
 +};

 --- /dev/null
 +++ b/dom/webidl/TouchpadList.webidl
 +
 +[Func=mozilla::dom::TouchpadList::PrefEnabled]
 +interface TouchpadList {
 +  [Pure]
 +  readonly attribute unsigned long length;
 +  getter Touchpad? item(unsigned long index);
 +};
 +
 +/* Mozilla extension. */
 +partial interface TouchpadList {
 +  Touchpad? identifiedTouch(long identifier);
 +};
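
A minimal consumer sketch, assuming a hypothetical 'touchpadmove' event
name; the draft WebIDL above does not name its event types, and the
TypeScript shims below mirror the draft for illustration only:

  // Hypothetical consumer of the proposed API. The event name
  // "touchpadmove" and the shapes below are assumptions mirroring the
  // draft WebIDL, not a shipped interface.
  interface Touchpad {
    readonly identifier: number;
    readonly touchpadX: number; // relative to the touchpad, not the display
    readonly touchpadY: number;
  }
  interface TouchpadEventLike extends UIEvent {
    readonly changedTouches: ArrayLike<Touchpad>;
  }

  window.addEventListener('touchpadmove', (e: Event) => {
    const ev = e as TouchpadEventLike;
    for (let i = 0; i < ev.changedTouches.length; i++) {
      const t = ev.changedTouches[i];
      // Pad-relative coordinates could feed e.g. a handwriting recognizer.
      console.log(`finger ${t.identifier} at (${t.touchpadX}, ${t.touchpadY})`);
    }
  });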

 Platform coverage: all

 Welcome for any suggestion or feedback.
 Thanks.

 [1]
 http://developer.android.com/reference/android/view/InputDevice.html#SOURCE_CLASS_POINTER
 [2]
 http://developer.android.com/reference/android/view/InputDevice.html#SOURCE_CLASS_POSITION
 [3] http://support.apple.com/kb/HT4288

 Best regards,
 Kershaw






Re: Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Anne van Kesteren
On Fri, Sep 12, 2014 at 11:56 AM, Frederik Braun fbr...@mozilla.com wrote:
 Yes and no. I identified this while working on a thesis on the Same
 Origin Policy in 2012 and filed this only for Geolocation in bug
 https://bugzilla.mozilla.org/show_bug.cgi?id=812147.

 But the general solution might be a permission manager rewrite, I suppose?

That seems like a good idea. TLS permissions leaking to non-TLS seems
really bad. Cross-port also does not seem ideal. I hope it's not as
bad as cookies in that it also depends on Public Suffix?

If we rewrite I think it would be good to take top-level browsing
context partitioning under consideration. That is, if I navigate to
https://example/ and grant it the ability to do X, and then navigate
to https://elsewhere.invalid/ which happens to embed https://example/,
the embedded https://example/ should not have the ability to do X.
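
A sketch of what such partitioning could look like, assuming permission
grants are double-keyed on the requesting origin and the top-level origin
(all names here are illustrative, not the actual permission manager API):

  // Illustrative only: a permission store double-keyed on the requesting
  // origin and the top-level browsing context's origin, so a grant made
  // while https://example/ is the top-level page does not carry over to
  // https://example/ embedded inside https://elsewhere.invalid/.
  type PermissionType = 'geolocation' | 'camera' | 'fullscreen';

  class PartitionedPermissions {
    private grants = new Set<string>();

    private key(requester: string, topLevel: string, type: PermissionType) {
      return `${type}|${requester}|${topLevel}`;
    }

    grant(requester: string, topLevel: string, type: PermissionType) {
      this.grants.add(this.key(requester, topLevel, type));
    }

    check(requester: string, topLevel: string, type: PermissionType): boolean {
      return this.grants.has(this.key(requester, topLevel, type));
    }
  }

  const perms = new PartitionedPermissions();
  perms.grant('https://example', 'https://example', 'geolocation');
  perms.check('https://example', 'https://example', 'geolocation');           // true
  perms.check('https://example', 'https://elsewhere.invalid', 'geolocation'); // false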


-- 
http://annevankesteren.nl/


Re: Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Frederik Braun
On 12.09.2014 12:22, Anne van Kesteren wrote:
 On Fri, Sep 12, 2014 at 11:56 AM, Frederik Braun fbr...@mozilla.com wrote:
 Yes and no. I identified this while working on a thesis on the Same
 Origin Policy in 2012 and filed this only for Geolocation in bug
 https://bugzilla.mozilla.org/show_bug.cgi?id=812147.

 But the general solution might be a permission manager rewrite, I suppose?
 
 That seems like a good idea. TLS permissions leaking to non-TLS seems
 really bad. Cross-port also does not seem ideal. I hope it's not as
 bad as cookies in that it also depends on Public Suffix?
 
 If we rewrite I think it would be good to take top-level browsing
 context partitioning under consideration. That is, if I navigate to
 https://example/ and grant it the ability to do X, and then navigate
 to https://elsewhere.invalid/ which happens to embed https://example/,
 the embedded https://example/ should not have the ability to do X.
 
 

I filed bug https://bugzilla.mozilla.org/show_bug.cgi?id=1066517 for this.


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Patrick McManus
On Fri, Sep 12, 2014 at 1:55 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 It only addresses the objection to https
 that obtaining, provisioning and replacing certificates is too
 expensive.


Related concepts are at the core of why I'm going to give Opportunistic
Security a try with http/2. The issues you cite are real issues in
practice, but they become magnified in environments where the PKI doesn't
apply well (e.g. behind firewalls, in embedded devices, etc.). And then,
perhaps most convincingly for me, there remains a lot of legacy web
content that can't easily migrate to the vanilla https:// scheme we all
want it to run on (e.g. because of third-party dependencies or SNI
dependencies), and this is a compatibility measure for that content.

Personally I expect any failure mode here will be that nobody uses it, not
that it drives out https. But establishment is all transparent to the web
security model and asynchronous, so if that does happen we can easily
remove support. The potential upside is that a lot of http:// traffic will
be encrypted and protected against passive monitoring.


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Trevor Saunders
On Fri, Sep 12, 2014 at 08:55:51AM +0300, Henri Sivonen wrote:
 On Thu, Sep 11, 2014 at 9:00 PM, Richard Barnes rbar...@mozilla.com wrote:
 
  On Sep 11, 2014, at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
  On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes rbar...@mozilla.com 
  wrote:
  Most notably, even over non-secure origins, application-layer encryption 
  can provide resistance to passive adversaries.
 
  See https://twitter.com/sleevi_/status/509723775349182464 for a long
  thread on Google's security people not being particularly convinced by
  that line of reasoning.
 
  Reasonable people often disagree in their cost/benefit evaluations.
 
  As Adam explains much more eloquently, the Google security team has had an 
  all-or-nothing attitude on security in several contexts.  For example, in 
  the context of HTTP/2, Mozilla and others have been working to make it 
  possible to send http-schemed requests over TLS, because we think it will 
  result in more of the web getting some protection.
 
 It's worth noting, though, that anonymous ephemeral Diffie–Hellman as
 the baseline (as advocated in
 http://www.ietf.org/mail-archive/web/ietf/current/msg82125.html ) and
 unencrypted as the baseline with a trivial indicator to upgrade to
 anonymous ephemeral Diffie–Hellman (as
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 )
 are very different things.
 
 If the baseline was that there's no unencrypted mode and every
 connection starts with anonymous ephemeral Diffie–Hellman, a passive
 eavesdropper would never see content and to pervasively monitor
 content, the eavesdropper would have to not only have the capacity to
 compute Diffie–Hellman for each connection handshake but would also
 have to maintain state about the symmetric keys negotiated for each
 connection and keep decrypting and re-encrypting data for the duration
 of each connection. This might indeed lead to the cost outcomes that
 Theodore Ts'o postulates.
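
To make the cost model concrete, here is a toy anonymous ephemeral
Diffie–Hellman exchange in TypeScript (illustrative only: a 64-bit toy
prime and fixed secrets, no real security; real deployments use 2048-bit
groups or elliptic curves). A passive observer holding only A and B faces
a discrete-log problem, and an active attacker has to do this handshake
work, plus ongoing re-encryption, separately for every connection:

  // Toy anonymous ephemeral Diffie-Hellman. The public values A and B
  // travel in the clear; the shared key never does.
  const p = 0xffffffffffffffc5n; // toy 64-bit prime, NOT a safe real-world group
  const g = 5n;

  function modexp(base: bigint, exp: bigint, mod: bigint): bigint {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
      if (exp & 1n) result = (result * base) % mod;
      base = (base * base) % mod;
      exp >>= 1n;
    }
    return result;
  }

  const a = 0x1234567890abcdefn; // client's ephemeral secret exponent
  const b = 0xfedcba0987654321n; // server's ephemeral secret exponent
  const A = modexp(g, a, p);     // client -> server, visible on the wire
  const B = modexp(g, b, p);     // server -> client, visible on the wire
  console.log(modexp(B, a, p) === modexp(A, b, p)); // true: shared key agreed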
 
 https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is
 different. A passive eavesdropper indeed doesn't see content after the
 initial request/response pair, but to see all content, the level of
 "active" that the eavesdropper needs to upgrade to is pretty minimal.
 To continue to see content, all the MITM needs to do is to overwrite
 the relevant HTTP headers with space (0x20) bytes. There's no need to
 maintain state beyond dealing with one of those headers crossing a
 packet boundary. There's no need to adjust packet sizes. There's no
 compute or state maintenance requirement for the whole duration of the
 connection.
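
To make the asymmetry concrete, a sketch of the active attack described
above (illustrative: it assumes the upgrade is advertised via something
like an Alt-Svc response header, which is not spelled out in the thread,
and the only state kept is at most one partial line per connection):

  // Blank the upgrade advertisement with 0x20 bytes as traffic streams by.
  // Buffering at most one partial line handles a header straddling a packet
  // boundary; there is no per-connection crypto and no re-encryption.
  function makeHeaderBlanker() {
    let carry = ''; // trailing partial line from the previous chunk
    return (chunk: string): string => {
      const data = carry + chunk;
      const lastBreak = data.lastIndexOf('\r\n');
      carry = lastBreak === -1 ? data : data.slice(lastBreak + 2);
      const complete = lastBreak === -1 ? '' : data.slice(0, lastBreak + 2);
      // Overwrite the whole header line (name and value) with spaces, so
      // the stream length is unchanged. A real filter would also stop
      // looking after the header block and flush `carry` at stream end.
      return complete.replace(/^alt-svc:[^\r\n]*/gim, (m) => ' '.repeat(m.length));
    };
  }

  const blank = makeHeaderBlanker();
  blank('HTTP/1.1 200 OK\r\nAlt-S');  // partial header line is held back
  blank('vc: h2=":443"\r\n\r\n');     // ...and comes out blanked to spaces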
 
 I have a much easier time believing that anonymous ephemeral
 Diffie–Hellman as the true baseline would make a difference in terms
 of pervasive monitoring, but I have a much more difficult time
 believing that an opportunistic encryption solution that can be
 defeated by overwriting some bytes with 0x20 with minimal maintenance
 of state would make a meaningful difference.
 
 Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
 has the performance overhead of TLS, so it doesn't really address the
 "TLS takes too much compute power" objection to https, which is the
 usual objection from big sites that might particularly care about the
 performance carrot of HTTP/2. It only addresses the objection to https
 that obtaining, provisioning and replacing certificates is too
 expensive. (And that's getting less expensive with HTTP/2, since
 HTTP/2 clients support SNI and SNI makes the practice of having to get
 host names from seemingly unrelated domains certified together
 obsolete.)
 
 It seems to me that this undermines the performance carrot of HTTP/2
 as a vehicle of moving the Web to https pretty seriously. It allows
 people to get the performance characteristics of HTTP/2 while still
 falling short of the last step of making the TLS connection properly
 authenticated.

 Do we really want all servers to have to authenticate themselves?  In
 most cases they probably should, but I suspect there are cases where
 you want to run a server, but have plausible deniability.  I haven't
 gone looking for legal precedent, but it seems to me cryptographically
 signing material makes it much harder to reasonably believe a denial.

 Is it really the right call for the Web to let people get the
 performance characteristics without making them do the right thing
 with authenticity (and, therefore, integrity and confidentiality)?
 
 On the face of things, it seems to me we should be supporting HTTP/2
 only with https URLs even if one buys Theodore Ts'o's reasoning about
 anonymous ephemeral Diffie–Hellman.
 
 The combination of
 https://twitter.com/sleevi_/status/509954820300472320 and
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 is pretty alarming.

I agree that's bad, but I tend to believe anonymous ephemeral
Diffie–Hellman is good enough to 

Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Martin Thomson
On 2014-09-11, at 22:55, Henri Sivonen hsivo...@hsivonen.fi wrote:

 Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
 has the performance overhead of TLS, so it doesn't really address the
 "TLS takes too much compute power" objection to https, which is the
 usual objection from big sites that might particularly care about the
 performance carrot of HTTP/2. It only addresses the objection to https
 that obtaining, provisioning and replacing certificates is too
 expensive. (And that's getting less expensive with HTTP/2, since
 HTTP/2 clients support SNI and SNI makes the practice of having to get
 host names from seemingly unrelated domains certified together
 obsolete.)
 
 It seems to me that this undermines the performance carrot of HTTP/2
 as a vehicle of moving the Web to https pretty seriously. It allows
 people to get the performance characteristics of HTTP/2 while still
 falling short of the last step of making the TLS connection properly
 authenticated.

The view that encryption is expensive is a prevailing meme, and it’s certainly 
true that some sites have reasons not to want the cost of TLS, but the costs 
are tiny, and getting smaller 
(https://www.imperialviolet.org/2011/02/06/stillinexpensive.html).  I will 
concede that certain outliers will exist where this marginal cost remains 
significant (Netflix, for example), but I don’t think that’s generally 
applicable.  As the above post shows, it’s not that costly (even less on modern 
hardware).  And HTTP/2 and TLS 1.3 will remove a lot of the performance 
concerns.

I’ve seen it suggested a couple of times (largely by Google employees) that an 
opportunistic security option undermines HTTPS adoption.  That’s hardly a 
testable assertion, and I think that Adam (Roach) explained the current 
preponderance of opinion there.  The current consensus view in the IETF (at 
least) is that the all-or-nothing approach has not done enough to materially 
improve security.

One reason that you missed for the -encryption draft is the problem with 
content migration.  A great many sites have a lot of content with http:// 
origins that can’t easily be rewritten.  And the restrictions on the Referer 
header field also mean that some resources can’t be served over HTTPS (their 
URL shortener is apparently the last hold-out for http:// at Twitter).  There 
are options in -encryption for authentication that can be resistant to some 
active attacks.


Re: web-platform-tests now running in automation

2014-09-12 Thread James Graham
On 10/09/14 19:32, Aryeh Gregor wrote:
 On Tue, Sep 9, 2014 at 3:44 PM, James Graham ja...@hoppipolla.co.uk wrote:
 Yes, I agree too. One option I had considered was making a suite
 web-platform-tests-mozilla for things that we can't push upstream e.g.
 because the APIs aren't (yet) undergoing meaningful standardisation.
 Putting the editing tests into this bucket might make some sense.
 
 That definitely sounds like a great idea, but I think it would be even
 better if upstream had a place for these tests, so we could share them
 with other engines (and hopefully they would reciprocate).  Anyone
 who's just interested in conformance test figures would be free not to
 run these extra tests, of course.  I don't see why upstream would mind
 hosting these tests.

I tend to agree, but I suggest that you bring this up on public-test-infra.

 In the longer term, I think it would be very interesting if all simple
 mochitests were written in a shareable format, and if other engines
 did similarly.  I imagine we'd find lots of interesting regressions if
 we ran a large chunk of WebKit/Blink tests as part of our regular test
 suite, even if many of the tests will expect the wrong results from
 our perspective.

Yes, insofar as "written in a sharable format" means written in one of
the formats that are accepted into wpt. We should strive to make sharing
our tests just as fundamental a part of our culture as working with open
standards is today.


Re: Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Ehsan Akhgari

On 2014-09-12, 6:22 AM, Anne van Kesteren wrote:

On Fri, Sep 12, 2014 at 11:56 AM, Frederik Braun fbr...@mozilla.com wrote:

Yes and no. I identified this while working on a thesis on the Same
Origin Policy in 2012 and filed this only for Geolocation in bug
https://bugzilla.mozilla.org/show_bug.cgi?id=812147.

But the general solution might be a permission manager rewrite, I suppose?


That seems like a good idea. TLS permissions leaking to non-TLS seems
really bad. Cross-port also does not seem ideal. I hope it's not as
bad as cookies in that it also depends on Public Suffix?


The permission manager was originally used to store the permissions of 
websites that are allowed to set third-party cookies if you turn on that 
pref; it's not used for storing the cookies themselves.  As such, it 
is fortunately oblivious to the Public Suffix List.



If we rewrite I think it would be good to take top-level browsing
context partitioning under consideration. That is, if I navigate to
https://example/ and grant it the ability to do X, and then navigate
to https://elsewhere.invalid/ which happens to embed https://example/,
the embedded https://example/ should not have the ability to do X.


The permission manager itself is unaware of browsing contexts, it is the 
consumer which decides how to query it.




Re: Intent to implement: Touchpad event

2014-09-12 Thread Ehsan Akhgari
On Thu, Sep 11, 2014 at 7:02 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Sep 11, 2014 at 3:21 PM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:
  On 2014-09-11, 5:54 PM, smaug wrote:
  If we just need new coordinates, couldn't we extend the existing event
 interfaces with some new properties?
 
  Yeah, this seems like the way to go to me as well.

 Do we currently dispatch pointer events on desktop when the user
 places a finger on a laptop touchpad? Or do we just dispatch events
 when the user then moves the finger and thus move the on-screen
 pointer?


I'm not sure about our implementation, but if I'm reading the spec
correctly, we should be firing an event when you place your first finger on
the touchpad.


 If we only dispatch events to indicate pointer movement, then I don't
 think simply extending existing interfaces will be possible.

 Similarly, what happens if you touch multiple fingers to the touchpad?
 Do we fire events as the second and third finger is placed on the
 touchpad? If not, we similarly need additional events to be fired.


Again, if I'm reading the spec correctly, we should be firing multiple
events in that case.

-- 
Ehsan


Re: Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Jonas Sicking
On Fri, Sep 12, 2014 at 11:44 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 If we rewrite I think it would be good to take top-level browsing
 context partitioning under consideration. That is, if I navigate to
 https://example/ and grant it the ability to do X. And then navigate
 to https://elsewhere.invalid/ which happens to embed https://example/,
 the embedded https://example/ does not have the ability to do X.

 The permission manager itself is unaware of browsing contexts, it is the
 consumer which decides how to query it.

This is one of the bad things with the permission manager. It leads to
that we end up with different policies for different permissions.

It's actually even worse than that. Because it is the *reader* that
sets the policy, it means that a cookie policy written to the
permission manager could be interpreted in different ways depending on
which exact code is checking the permission manager.

What we really should do is to enable writing into the permission
manager "set this cookie policy for domain and subdomains", "set
this cookie policy for this domain", or "set this cookie policy for
this origin".

And then make the reading side simply ask "can this principal send
cookies?", rather than the current "can this principal send cookies,
assuming that the stored data should use policy X?".

We can probably expand this pattern to also handle 3rd party iframes.

Note that there are use cases for both narrow and broad policies. At
the very least it seems useful to be able to say both "deny all of
*.adnetwork.com from using cookies" as well as "allow
https://google.com/ to use camera".

/ Jonas
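
A sketch of that shape (illustrative names only, not the existing
nsIPermissionManager API): the writer fixes the scope when the entry is
stored, and every reader asks the same single question.

  // Illustrative only: the writer picks the scope when an entry is stored,
  // so no consumer can reinterpret stored entries under its own policy.
  type Scope = 'origin' | 'domain' | 'domain-and-subdomains';

  interface Entry { scope: Scope; key: string; type: string; allow: boolean; }

  const entries: Entry[] = [
    { scope: 'domain-and-subdomains', key: 'adnetwork.com', type: 'cookies', allow: false },
    { scope: 'origin', key: 'https://google.com', type: 'camera', allow: true },
  ];

  // The single question every reader asks: may this principal use <type>?
  function isPermitted(principalOrigin: string, type: string): boolean {
    const url = new URL(principalOrigin);
    for (const e of entries) {
      if (e.type !== type) continue;
      const match =
        (e.scope === 'origin' && e.key === url.origin) ||
        (e.scope === 'domain' && e.key === url.hostname) ||
        (e.scope === 'domain-and-subdomains' &&
          (url.hostname === e.key || url.hostname.endsWith('.' + e.key)));
      if (match) return e.allow;
    }
    return false; // fall through to the default policy
  }

  isPermitted('https://google.com', 'camera');         // true
  isPermitted('https://ads.adnetwork.com', 'cookies'); // false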


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Anne van Kesteren
On Fri, Sep 12, 2014 at 6:06 PM, Martin Thomson m...@mozilla.com wrote:
 And the restrictions on the Referer header field also mean that some 
 resources can’t be served over HTTPS (their URL shortener is apparently the 
 last hold-out for http:// at Twitter).

That is something that we should have fixed a long time ago. It's
called <meta name=referrer> and is these days also part of CSP.
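
For reference, a sketch of the mechanism Anne names, assuming a page that
wants to keep sending its origin (though never the full URL) as the
referrer across an https-to-http transition; the equivalent markup is
<meta name="referrer" content="origin">:

  // Opt the page into a referrer policy via <meta name=referrer>.
  // The "origin" value sends only scheme://host:port, never the full URL.
  const meta = document.createElement('meta');
  meta.name = 'referrer';
  meta.content = 'origin';
  document.head.appendChild(meta);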


-- 
http://annevankesteren.nl/


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS

2014-09-12 Thread Martin Thomson

On 12/09/14 13:37, Anne van Kesteren wrote:

That is something that we should have fixed a long time ago. It's
called <meta name=referrer> and is these days also part of CSP.


I'll forward that on to those involved.  Thanks.


Re: Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Anne van Kesteren
On Fri, Sep 12, 2014 at 8:44 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 The permission manager itself is unaware of browsing contexts, it is the
 consumer which decides how to query it.

But shouldn't it be aware of this so you can adequately scope the
permission? E.g. I could grant https://amazingmaps.example/ when
embedded through https://okaystore.invalid/ permission to use my
location. But it would not be given out if it were embedded through
https://evil.invalid/ later on.

Or e.g. I could allow YouTube embedded through reddit to go
fullscreen, but not necessarily YouTube itself or when embedded
elsewhere.


-- 
http://annevankesteren.nl/


Re: Per-origin versus per-domain restrictions (Re: Restricting gUM to authenticated origins only)

2014-09-12 Thread Martin Thomson

On 12/09/14 13:59, Anne van Kesteren wrote:

But shouldn't it be aware of this so you can adequately scope the
permission? E.g. I could grant https://amazingmaps.example/ when
embedded through https://okaystore.invalid/ permission to use my
location. But it would not be given out if it were embedded through
https://evil.invalid/ later on.

Or e.g. I could allow YouTube embedded through reddit to go
fullscreen, but not necessarily YouTube itself or when embedded
elsewhere.


In most cases (though here sicking's comment regarding what should 
happen remains especially applicable), the actor is the only thing that 
matters.


That is, it's the principal of the JS compartment, which is the origin 
you see in the bar at the top.  The location that script is loaded from 
doesn't matter.  An iframe embed is different, but in that context, the 
framed site retains complete control over its content and is arguably 
competent to ensure that it isn't abused; more importantly, the outer 
site has no visibility other than what the framed site grants it.



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Adam Roach

On 9/12/14 10:07, Trevor Saunders wrote:

[W]hen it comes to the NSA we're pretty much just not going to be able
to force everyone to use something strong enough they can't beat it.


Not to get too far off onto this sidebar, but you may find the following 
illuminating; not just for potentially adjusting your perception of what 
the NSA can and cannot do (especially in the coming years), but as a 
cogent analysis of how even the thinnest veneer of security can temper 
intelligence agencies' overreach into collecting information about 
non-targets:


http://justsecurity.org/7837/myth-nsa-omnipotence/

While not the thesis of the piece, a highly relevant conclusion the 
author draws is: "[T]hose engineers prepared to build defenses against 
bulk collection should not be deterred by the myth of NSA omnipotence. 
That myth is an artifact of the post-9/11 era that may now be outdated 
in the age of austerity, when NSA will struggle to find the resources to 
meet technological challenges."


(I'm hesitant to appeal to authority here, but I do want to point out 
the "About the Author" section as being important for understanding 
Marshall's qualifications to hold forth on these matters.)


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863