Re: Intent to ship: Autodiscovery of WebExtension search engines

2020-02-19 Thread Ehsan Akhgari
Hi Dale,

It seems that this feature is not being developed with the hope of it
turning into a cross-browser autodiscovery of search extensions in the
future.  With that in mind, I'd like to ask if it is really necessary to
expose a new meta tag to the web platform, given the unclear path to the
adoption of this feature for another browser engine (e.g. Servo, Blink, or
WebKit).  Per our API exposure guidelines <
https://wiki.mozilla.org/ExposureGuidelines#When_is_a_feature_ready_to_ship.3F>
it doesn't appear that this feature meets the criteria for a new platform
feature that we should ship.

Were there other alternatives considered which do not require modifying web
pages with a new meta tag?  For example, one simple idea that comes to mind
is allowing extra metadata in the definition of search engine extensions
which indicates the URL for the equivalent OpenSearch service, and matching
that metadata against the OpenSearch annotations found in the pages that
users browse.  (This idea itself is probably not useful since I don't know
your exact requirements; I'm suggesting it mostly as a way to open a
discussion.)

I realize that getting this kind of feedback at the time of an intent to
ship is at best extremely unsettling, because you've probably done a lot of
work on this, and for that I apologize.

Thanks for your consideration,
Ehsan

On Fri, Feb 14, 2020 at 2:50 PM Dale Harvey  wrote:

> Summary: Since Firefox 57, users have been able to install additional
> search engines in the shape of a WebExtension[1] from addons.mozilla.org
> (AMO), whereas this used to only be possible using the OpenSearch XML
> format[2]. Since Firefox 68, all the search engines we distribute are
> WebExtensions[3].
>
> Websites are currently able to advertise their search engine to the
> browser[4], but this is not yet possible for WebExtension search engines.
>
> We’re proposing a new mime-type to be supported for <link rel="search">
> tags, other than “application/opensearchdescription+xml”: “x-xpinstall” for
> WebExtension search engines. Example: <link rel="search"
> type="application/x-xpinstall" href="https://addons.mozilla.org//firefox/addon/"/>.
>
> NB: we are aware that the (pseudo-)mime-type “x-xpinstall” is not
> particularly friendly to cross-browser adoption, so we’re very much open to
> suggestions! Possible alternatives might be “x-webext” or “x-webextension”.
> Please reply in this thread with your ideas.
>
> NB2: We are collaborating with the addons team at Mozilla to consider
> autodiscovery of addons in general, using this mechanism, but haven’t
> reached a point where we can announce something concrete.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1562657
>
> Standard: The WebExtension API documentation can be found at [4]. There is
> no standard known to us for this feature and we are not collaborating with
> other browsers at this point (see ‘Other browsers’ below).
>
> Platform coverage: This feature will be available on Firefox Desktop, on
> all supported platforms.
>
> Preference: We intend to ship this feature pref'd on by default, but it may
> be disabled by setting the “browser.search.webext.discovery.enabled" pref
> to ‘false’.
>
> DevTools bug: N/A.
>
> Other browsers: We haven’t been in active discussion with other browser
> vendors regarding this feature, since our move to WebExtensions
> wholesale is primarily motivated by our fight against malware and
> search hijackers. We are open to any type of collaboration with other
> browsers regarding this feature and WebExtension search engines in the
> broadest sense.
>
> web-platform-tests: (web-)Search has never been considered an official part
> of the web-platform and the OpenSearch specification hasn’t seen any major
> updates for over a decade. Test coverage is provided by XPCShell and
> Mochitest-browser tests written specifically for the ‘search’ toolkit
> component.
>
> Secure contexts: Yes.
>
> Is this feature enabled by default in sandboxed iframes? Yes, this feature
> does not have any impact on sandboxed iframes.
>
> As of Firefox 75, the intent is to turn WebExtension search engine
> discovery on by default on all platforms. It has been developed behind the
> browser.search.webext.discovery.enabled preference. See above for the
> shipping status in other browsers.
>
> Product: Mike Connor.
>
> Bug to implement and turn on by default:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1562657.
>
> [1]
>
> https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/manifest.json/chrome_settings_overrides
>
> [2] https://developer.mozilla.org/en-US/docs/Web/OpenSearch
>
> [3] See https://bugzilla.mozilla.org/show_bug.cgi?id=1486820 where we
> converted all our own, built-in engines to WebExtensions.
>
> [4]
>
> https://developer.mozilla.org/en-US/docs/Web/OpenSearch#Autodiscovery_of_search_plugins
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/d

Intent to ship: restrict access to request notification permissions from cross-origin iframes

2019-08-09 Thread Ehsan Akhgari
Hi everyone,

We currently allow cross-origin iframes to request notification
permissions.  This is problematic because we'd like to move to a model
where permissions are only requested for the top-level document’s origin,
so that the user is shown origins other than the address-bar origin as
rarely as possible.
Therefore, in Firefox 70 I plan to land a change in bug 1560741[1] to deny
such permission requests without showing a prompt.

Chrome announced this change over two years ago[2] but has yet to ship
it.  Our telemetry for beta 68 shows very low cross-origin use of this
feature[3], at around 0.02% of notification requests.

We’ll also log a warning in the web console when denying a permission
request for this reason.

Please let me know if you have any questions or concerns.

Thanks,
Ehsan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1560741
[2]
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/n37ij1E_1aY
[3] https://mzl.la/2Mafa6q


Re: precedence of Mozilla, Google, and clang-format styles for C++ code

2019-07-23 Thread Ehsan Akhgari
On Tue, Jul 23, 2019 at 2:50 PM Bryce Seager van Dyk 
wrote:

> On Sunday, July 21, 2019 at 10:18:02 PM UTC-7, Karl Tomlinson wrote:
> > Near the top of
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> > there is
> > > New code should try to conform to these standards, so it is as
> > > easy to maintain as existing code. [...]
> > >
> > > This article is particularly for those new to the Mozilla
> > > codebase [...]"  Before requesting a review, please read over
> > > this document, making sure that your code conforms to
> > > recommendations."
> > >
> > > Firefox code base uses the Google Coding style for C++ code [2]
> >
> > On reading over the document, the reader finds a number of
> > additional guidelines, some of which are complementary or
> > orthogonal to Google style and some of which are contradictory.
> >
> > Can we clarify which documents have precedence so that we can
> > spell this out clearly on this page, please?
> >
> > "we would like to convert entirely to this [Google C++] coding
> > style -- perhaps with some exceptions for Mozilla-specific
> > constraints". [1]
> > Would the Mozilla coding style be the appropriate place to list
> > such exceptions?
> > The existing recording of a recent resolution [3] on this page
> > seems helpful to me.
> >
> > The reason to think this page might not be the appropriate place
> > is because [1]
> > > we will retire the Mozilla C/C++ coding style.  The intention
> > > behind retiring the current Mozilla style is to just stop
> > > maintaining it as an active coding style going forward, but the
> > > documentation for this coding style will be kept intact.
> >
> > What is the motivation for keeping intact the legacy coding style?
> > Is it to guide those contributing in code that has not yet moved
> > to Google style?  I wonder how useful that would be given "we now
> > have significant inconsistencies in the style of our codebase"
> > [1]?
> >
> > If it is useful to keep intact the documentation of legacy style,
> > can we distinguish which are the guidelines that should apply only
> > to older code?
> >
> > "we are not aiming to enforce any naming conventions at this time"
> > [1] but both the Google and Mozilla style guides have
> > (conflicting) naming conventions.  Can we clarify which, if any,
> > naming guidelines apply to new and older code?
> >
> > The other piece of the equation here is [1]
> > > our precise coding style will be the result of formatting our
> > > tree with clang-format, so by definition our code, when
> > > formatted with clang-format, will be following our new coding
> > > style.
> >
> > clang-format doesn't provide compliance with all guidelines in the
> > Google style, but it does have precedence, because the only option
> > to override is `// clang-format off`.  clang-format has its own
> > preferences for formatting, many of which are not documented in
> > the Google style guide.
> >
> > Would it be appropriate for the Mozilla coding style page to
> > indicate that `./mach clang-format` defines the formatting of C++
> > code and the Google style guide provides other guidelines, which
> > should be followed in new code unless there is an exception
> > listed?
> >
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code
> > is another page with more guidelines.  I assume we would expect
> > these to be consistent with those on the Mozilla coding style
> > page.  I assume "Using C++ in Mozilla code" would take precedence
> > over Google style?
> >
> > [1]
> >
> https://groups.google.com/forum/#!msg/mozilla.dev.platform/VCLB1Lz4yXc/dNoNVzmlCQAJ
> > [2]
> > https://google.github.io/styleguide/cppguide.html
> > [3]
> > https://groups.google.com/forum/#!topic/mozilla.dev.platform/0NFBXe6VnfM
>
> My opinions (C++ specific), having revised a small amount of the style
> guide recently: the fewer documents that need to be consulted, the better.
> Non-exhaustive reasons I'd prefer we keep docs lean:
> - People may not be aware of a doc.
> - It can be hard to determine which docs take precedence. Need docs for
> our docs.
> - Minimize places to bitrot.
>
> Following on, I would like our style page ([1] above) to, ideally, offer
> clarifications and, rarely and in clearly stated cases, supersede the
> Google style. I wouldn't mind if other pages with C++ style rules were
> trimmed and merged into this one. I think the guide needs some more work to
> get to this point because:
> - Old historical content that can now be cut or revised has not yet been.
> - C++ style is sprinkled throughout the guide, not just in an isolated C++
> section.
> - There are general styling rules that apply to C++ and other areas that
> are hard to revise.
> - C++ style is also splintered across other pages.
>

A lot (most?) of the cruft you mention above is just remnants of the old
coding style...


> Does `./mach clang-format` ever perform formatting inconsistent with
> Google style? If so, does it 

Re: precedence of Mozilla, Google, and clang-format styles for C++ code

2019-07-23 Thread Ehsan Akhgari
On Mon, Jul 22, 2019 at 1:20 AM Karl Tomlinson  wrote:

> Near the top of
>
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> there is
> > New code should try to conform to these standards, so it is as
> > easy to maintain as existing code. [...]
> >
> > This article is particularly for those new to the Mozilla
> > codebase [...]"  Before requesting a review, please read over
> > this document, making sure that your code conforms to
> > recommendations."
> >
> > Firefox code base uses the Google Coding style for C++ code [2]
>
> On reading over the document, the reader finds a number of
> additional guidelines, some of which are complementary or
> orthogonal to Google style and some of which are contradictory.
>

Yes, that is indeed the case.  Sorry about the confusing state of affairs.


> Can we clarify which documents have precedence so that we can
> spell this out clearly on this page, please?
>
> "we would like to convert entirely to this [Google C++] coding
> style -- perhaps with some exceptions for Mozilla-specific
> constraints". [1]
> Would the Mozilla coding style be the appropriate place to list
> such exceptions?
> The existing recording of a recent resolution [3] on this page
> seems helpful to me.
>

I think the best course of action is probably to repurpose this page to
describe our current coding style, and move out the documentation for the
historical "Mozilla Coding Style" to another page to avoid confusion here.
I have a busy schedule in the next couple of days, but I will do my best
to get to this reorganization soon.


> The reason to think this page might not be the appropriate place
> is because [1]
> > we will retire the Mozilla C/C++ coding style.  The intention
> > behind retiring the current Mozilla style is to just stop
> > maintaining it as an active coding style going forward, but the
> > documentation for this coding style will be kept intact.
>
> What is the motivation for keeping intact the legacy coding style?
> Is it to guide those contributing in code that has not yet moved
> to Google style?  I wonder how useful that would be given "we now
> have significant inconsistencies in the style of our codebase"
> [1]?
>
> If it is useful to keep intact the documentation of legacy style,
> can we distinguish which are the guidelines that should apply only
> to older code?
>

The motivation for keeping intact the legacy coding style was to preserve
historical documentation in this area (which is still useful when changing
existing code), but as I said above I agree that keeping both in the same
place isn't ideal.  In hindsight we should have split them up into two
pages immediately after switching coding styles.


> "we are not aiming to enforce any naming conventions at this time"
> [1] but both the Google and Mozilla style guides have
> (conflicting) naming conventions.  Can we clarify which, if any,
> naming guidelines apply to new and older code?
>

FWIW, I suspect that just separating the documentation may not help answer
all questions like this.  In case of any further confusion,
let's have more discussion!  If there are areas where we find that what
the Google C++ Style says doesn't match our needs I suspect we will end up
making small modifications here and there.  There is precedent for doing
this in large projects that adopt the Google C++ Style (e.g. the Chromium
C++ Style
https://chromium.googlesource.com/chromium/src/+/master/styleguide/c++/c++.md
).


> The other piece of the equation here is [1]
> > our precise coding style will be the result of formatting our
> > tree with clang-format, so by definition our code, when
> > formatted with clang-format, will be following our new coding
> > style.
>
> clang-format doesn't provide compliance with all guidelines in the
> Google style, but it does have precedence, because the only option
> to override is `// clang-format off`.  clang-format has its own
> preferences for formatting, many of which are not documented in
> the Google style guide.
>

Perhaps what that paragraph is trying to say isn't coming across clearly
(in which case please suggest a clearer phrasing).  The intention behind
that paragraph is to tell the reader that if their reason for reading the
style guidelines documentation is to figure out how to format whitespace in
a specific context, the way we answer that is through running clang-format,
rather than following any specific text written somewhere.


> Would it be appropriate for the Mozilla coding style page to
> indicate that `./mach clang-format` defines the formatting of C++
> code and the Google style guide provides other guidelines, which
> should be followed in new code unless there is an exception
> listed?
>

Hmm, I don't think so?  To me, whitespace formatting is a subset of coding
style matters.  For that, we use clang-format.  But clang-format doesn't
answer all questions regarding our coding style (e.g. how to name things,
what t

Re: Intent to ship: Add wildcard to Access-Control-Expose-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Headers

2019-07-19 Thread Ehsan Akhgari
Thanks, Kershaw, for fixing this.  Out of curiosity, why do you expect
this to ship in Firefox 71 rather than 70?

On Tue, Jul 16, 2019 at 1:56 PM Kershaw Jang  wrote:

>  Hi everyone,
>
> *Summary*:
> Allow more wildcards in CORS headers when used without credentials.
> So, the new syntax for these http headers is:
> Access-Control-Expose-Headers = #field-name / wildcard
> Access-Control-Allow-Methods  = #method / wildcard
> Access-Control-Allow-Headers  = #field-name / wildcard
>
> *Bug*:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1309358
>
> *Link to standard*:
> https://fetch.spec.whatwg.org/#http-new-header-syntax
>
> *Platform coverage*:
> all platforms.
>
> *Estimated or target release*:
> Firefox 71.
>
> *Preference behind which this will be implemented*:
> No preference.
>
> *DevTools bug*:
> No.
>
> *Do other browser engines implement this?*
> Yes, both Blink and WebKit support this feature.
>
> *web-platform-tests*:
> We already have existing wpt tests for this.
>
> *Is this feature restricted to secure contexts?*
> No.
>
> Please note that I already landed this patch in Nightly since I was not
> aware that I should send this email first. If anyone has any concerns,
> feel free to file another bug to back this out.
>
> Thanks,
> Kershaw


-- 
Ehsan


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-09 Thread Ehsan Akhgari
On Mon, Jul 8, 2019 at 11:00 PM Gerald Squelart 
wrote:

> Thank you all for some very interesting discussions so far.
>
> Even if we don't take blanket steps to avoid unsigned types in
> non-bitfield/modulo cases (as suggested by our newly-adopted Google style),
> at least hopefully we're now aware of their subtleties, and we can be more
> careful and deliberate in our choice of integer types in our respective
> domains.
>
> Coming back to my original questions, I think the first part has not been
> categorically answered yet:
>
> Do we have style rules (or folklore) against naked `int`s/`unsigned`s, in
> favor of explicitly-sized `(u)intXX_t` everywhere?
>

For new code, the style guide for this question can be found here:
https://google.github.io/styleguide/cppguide.html#Integer_Types.  For
existing code, consistency with surrounding code should take precedence for
now.  I hope this answers your question.
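As a concrete illustration of the guidance in that section (prefer a signed type plus an assertion over an unsigned type for values that merely happen to be non-negative), here is a minimal sketch; the function and its names are hypothetical, not taken from the style guide or from Gecko:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sum the last aCount elements of aValues.  The count can never be
// negative, but rather than encoding that in the type (unsigned), we use
// a signed type and enforce the invariant with assertions, per the
// Google style guidance discussed above.
int64_t SumLastN(const std::vector<int>& aValues, int64_t aCount) {
  assert(aCount >= 0);  // invariant checked, not baked into the type
  assert(aCount <= static_cast<int64_t>(aValues.size()));
  int64_t sum = 0;
  for (int64_t i = static_cast<int64_t>(aValues.size()) - aCount;
       i < static_cast<int64_t>(aValues.size()); ++i) {
    sum += aValues[i];
  }
  return sum;
}
```

With an unsigned count, an accidental negative argument would silently wrap to a huge value; with the signed type the assertion catches it at the call site.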

Cheers,
Ehsan


>
> On Thursday, July 4, 2019 at 3:11:27 PM UTC+10, Gerald Squelart wrote:
> > Recently I coded something with a not-very-important slow-changing
> rarely-used positive number: `unsigned mGeneration;`
> > My reviewer commented: "Please use a type with an explicit size, such as
> uint32_t. (General mozilla style; you don't see a bare "unsigned" around
> much)"
> >
> > I had never heard of this (in 4+ years), so I did a bit of research:
> >
> > - I found plenty of `unsigned`s around, more than `uint32_t`s.
> >
> > - I can't see anything about that in our coding style guides [1, 2].
> >
> > - Our latest coding style [1] points at Google's, which has a section
> about Integer Types [3], and the basic gist is: Use plain `int` for
> "not-too-big" numbers, int64_t for "big" numbers, intXX_t if you need a
> precise size; never use any unsigned type unless you work with bitfields or
> need 2^N overflow (in particular, don't use unsigned for always-positive
> numbers, use signed and assertions instead).
> >
> > So, questions:
> > 1. Do we have a written style I missed somewhere?
> > 2. Do we have an unwritten style? (In which case I think we should write
> it down.)
> > 3. What do we think of the Google style, especially the aversion to
> unsigned?
> >
> > Cheers,
> > Gerald
> >
> >
> > [1]
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> > [2]
> https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code
> > [3] https://google.github.io/styleguide/cppguide.html#Integer_Types


-- 
Ehsan


Re: Intent to Implement: CSS backdrop-filter

2019-07-09 Thread Ehsan Akhgari
On Mon, Jul 8, 2019 at 6:22 PM Connor Brewster 
wrote:

> Hi Ehsan,
>
> Currently, the plan is to develop this feature behind a pref flag that
> will be off by default. We haven't decided the best way forward for
> enabling this feature by default yet. We have two possible options:
>  1. Add backdrop filter support to the other render backends.
>  2. Rely on an @supports feature query to determine if backdrop-filter is
> supported (this would check for the backdrop-filter pref flag and check if
> WebRender is enabled). If we choose this route, we need to remove any
> dynamic fallbacks to the software renderer that may occur when using
> backdrop-filter. Currently WebRender does not support all SVG filters, so
> these would need to be fully implemented for this approach to work.
>

Thanks, this all sounds good!


> >  * Do we have other web-exposed features that are only supported when
> WebRender is enabled?
>
> I don't believe we have any other web-exposed features only supported by
> WebRender at the moment.
>
> > * How do we plan to communicate to web developers (who may not
> necessarily know/care what WebRender is) where this feature is usable?
>
> This is a good thing to keep in mind when determining the right approach.
> If we decide to implement backdrop-filter on all backends, this won't be an
> issue; otherwise, if we go with approach 2, we would need to tell web
> developers to rely on the @supports feature query.
>
> > * What happens to the DevTools integration a) when WebRender is disabled
> on the target Firefox instance, and b) when WebRender is disabled on the
> target Firefox instance but enabled on the host instance (e.g. when
> debugging Fennec on Windows with the right NVIDIA GPU driver).
>
> Currently, DevTools only checks if the backdrop-filter pref is enabled and
> doesn't check if WebRender is enabled. If we decide on approach 2, we would
> need to check for the cases that you mentioned.
>
> Thanks,
> Connor
>
>
> On Fri, Jul 5, 2019 at 7:23 AM Ehsan Akhgari 
> wrote:
>
>> On Mon, Jul 1, 2019 at 8:54 PM Connor Brewster 
>> wrote:
>>
>>> Platform coverage: All platforms using the Gecko rendering engine
>>> (WebRender enabled only)
>>>
>>> Target release: Firefox 71 (WebRender enabled only)
>>>
>>> DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1561060
>>>
>>
>> Hi Connor,
>>
>> Since this feature is only enabled based on WebRender being enabled, I
>> have a few questions for you:
>>
>> * Do we have other web-exposed features that are only supported when
>> WebRender is enabled?
>>
>> * How do we plan to communicate to web developers (who may not
>> necessarily know/care what WebRender is) where this feature is usable?
>>
>> * What happens to the DevTools integration a) when WebRender is disabled
>> on the target Firefox instance, and b) when WebRender is disabled on the
>> target Firefox instance but enabled on the host instance (e.g. when
>> debugging Fennec on Windows with the right NVIDIA GPU driver).
>>
>> Thanks,
>> --
>> Ehsan
>>
>

-- 
Ehsan


Re: Changes to “ExposureGuidelines” (Intent to *)

2019-07-08 Thread Ehsan Akhgari
On Sat, Jul 6, 2019 at 1:56 AM Anne van Kesteren  wrote:

> On Sat, Jul 6, 2019 at 2:44 AM Ehsan Akhgari 
> wrote:
> > Does the page need to include some of this description?
>
> It has this:
>
> # It's acceptable to merge the "intent to prototype" into the
> # "intent to ship" email as long as all the relevant
> # requirements are met.
>
> I suppose it could be a little clearer on whether to send two emails
> or one though. Is that what you meant?
>

Yes, exactly.  Sorry for not being more clear before!

-- 
Ehsan


Re: Changes to “ExposureGuidelines” (Intent to *)

2019-07-05 Thread Ehsan Akhgari
On Wed, Jul 3, 2019 at 5:48 AM Anne van Kesteren  wrote:

> Hi all,
>
> In consultation with others I have made some adjustments to
> https://wiki.mozilla.org/ExposureGuidelines.
>
> “Intent to implement” has been renamed to “Intent to prototype” to signify
> more clearly that the change does not affect the shipping product and has
> no direct relation to standardization processes (though continues to be an
> important input for them).
>
> “Intent to ship” encompasses “Intent to implement and ship”. Include the
> fields from “Intent to prototype” if the change does not warrant going
> through two stages.
>

Does the page need to include some of this description?


> “Intent to unship” remains as-is.
>
> (Despite some recent emails, “Intent to experiment” is not part of the
> process. Fortunately “Intent to prototype” works well for them.)
>
> I also took this opportunity to revise the page so that it now leads with
> the more significant information. Please do feel free to edit this further
> or message me suggestions.
>
> Kind regards,
>
> Anne


-- 
Ehsan


Intent to ship: Document.clear/captureEvents/releaseEvents no-op methods, and Document.all property

2019-07-05 Thread Ehsan Akhgari
 Hi everyone,

*Summary*: I'm planning to move the clear/captureEvents/releaseEvents no-op
methods as well as the all property from HTMLDocument to Document to bring
our implementation of these methods and property on par with the HTML spec.
*Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1558570,
https://bugzilla.mozilla.org/show_bug.cgi?id=1558571
*Link to standard*:
https://html.spec.whatwg.org/#other-elements,-attributes-and-apis:dom-document-clear,
https://html.spec.whatwg.org/#other-elements,-attributes-and-apis:dom-document-all
*Platform coverage*: all platforms.
*Estimated or target release*: Firefox 70.
*Preference behind which this will be implemented*: No preference.
*Is this feature enabled by default in sandboxed iframes?* The methods are
available to all frames.
*DevTools bug*: No specific devtools support needed.
*Do other browser engines implement this?* Yes, both Blink and WebKit
support these APIs on Document.prototype.

*web-platform-tests*: This fix makes us pass the existing wpt tests that
test for the availability of these methods on the Document prototype.

*Is this feature restricted to secure contexts?* No.  This change is
motivated by improving our web compatibility story, so we'd like to make
these APIs available where other web engines already make them available.

Thanks,
-- 
Ehsan


Re: Using Google styling of #define guards

2019-07-05 Thread Ehsan Akhgari
On Wed, Jul 3, 2019 at 8:45 PM Bryce Seager van Dyk 
wrote:

> On Wednesday, July 3, 2019 at 2:27:30 PM UTC-7, Chris Peterson wrote:
> > On 7/3/2019 11:37 AM, Bryce Seager van Dyk wrote:
> > > I wanted to clarify, and discuss if needed, our styling of #define
> guards. My understanding is that we are now using Google's style in regards
> to #define guards (
> https://google.github.io/styleguide/cppguide.html#The__define_Guard). I
> believe this would mean for `dom/media/foo/bar.h` I would use a `#define
> DOM_MEDIA_FOO_BAR_H_` style guard.
> >
> > The Google style guide says:
> >
> > "The format of the symbol name should be <PROJECT>_<PATH>_<FILE>_H_. ...
> > For example, the file foo/src/bar/baz.h in project foo should have the
> > following guard: #ifndef FOO_BAR_BAZ_H_"
> >
> > Would our project prefix be "MOZ_"? The #define guard for
> > dom/media/foo/bar.h would then be MOZ_DOM_MEDIA_FOO_BAR_H.
>
> Good question. Chromium's code doesn't seem to observe the project
> convention in the media code I'm familiar with.


FWIW Chromium's style is heavily modified:
https://chromium.googlesource.com/chromium/src/+/master/styleguide/c++/c++.md
although in this case they follow the Google style.


> E.g.
> https://cs.chromium.org/chromium/src/media/formats/mp4/track_run_iterator.h?l=36&rcl=fb277091ba0ccbf8b2869022910573386e33c6de
> doesn't have a CHROMIUM prefix on its guard. Other directories rooted at
> chromium/src/ also appear to not have a CHROMIUM prefix used in their
> guards. Trawling some of their other projects on github I see that some do
> observe the usage of the prefix and some do not.
>
> I thought the non-project prefix version seemed the more intuitive mapping
> onto the mozilla-central structure. However, that's entirely my personal
> view, and is almost certainly influenced by my reading of Chromium code.
>
> I feel much more strongly about consistency than about either particular
> style. So I appreciate hearing thoughts about what seems the
> intuitive/appropriate mapping of the guidelines onto the codebase.
>

I think the idea behind including the project name is to make it easier to
include various projects' headers inside each other.  Since that may not be
a realistic concern for us, we may as well avoid the needless verbosity and
use DOM_MEDIA_FOO_BAR_H_, I think.
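Under that convention, a guard for a hypothetical dom/media/foo/bar.h might look like the following sketch (the class, namespace, and value are illustrative only, not from the actual tree):

```cpp
// Hypothetical header dom/media/foo/bar.h, using the path-based guard
// style discussed above: no project prefix, trailing underscore as in
// the Google style guide.
#ifndef DOM_MEDIA_FOO_BAR_H_
#define DOM_MEDIA_FOO_BAR_H_

namespace mozilla {

class Bar {
 public:
  int Value() const { return 42; }
};

}  // namespace mozilla

#endif  // DOM_MEDIA_FOO_BAR_H_
```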

-- 
Ehsan


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-05 Thread Ehsan Akhgari
On Thu, Jul 4, 2019 at 8:05 AM Gerald Squelart 
wrote:

> > > - Our latest coding style [1] points at Google's, which has a section
> about Integer Types [3], and the basic gist is: Use plain `int` for
> "not-too-big" numbers
> >
> > If you can 100% guarantee that they will not be too big, right?
> >
> > (In particular, for generation counters I would be somewhat worried
> > about making such a guarantee.)
>
> They did add "use int64_t for "big" numbers".
>
> In my own selfish case, it will be once per profiler start/stop. I'm
> fairly confident a user won't start and stop the profiler a few billion
> times in a session. :-D
>

FWIW once in a while I have come across bugs caused by truncation of
integers where someone picked a specific size that was too small also, e.g.
storing an offset into a text node in a 16-bit integer.  I think that's
maybe something that's hiding between the lines there, being careful with
that direction also if you pick a type with a specific size to make sure
your type is large enough.
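A minimal sketch of that failure mode (the helper name and the choice of a 16-bit field are hypothetical, for illustration):

```cpp
#include <cstdint>

// The kind of bug described above: an offset that fits comfortably in
// the source type is silently reduced mod 2^16 when stored in a field
// someone sized as uint16_t to save space.
uint16_t StoreOffset(int32_t aOffset) {
  return static_cast<uint16_t>(aOffset);  // truncates for values >= 65536
}
```

For example, an offset of 70000 comes back as 70000 - 65536 = 4464, with no warning at the point of storage.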

-- 
Ehsan


Re: Coding style 🙄 : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-05 Thread Ehsan Akhgari
On Fri, Jul 5, 2019 at 2:48 AM Jeff Gilbert  wrote:

> Yes I intend to write precisely that, if we ban unsigned types.
> However I'm not really convinced that throwing out unsigned types is
> the right move.
>

Note that such a class, if it performs assertions, is actually completely
different than C++'s built-in unsigned types.  For example:

  unsigned i = -1; // compiles and runs just fine with -Wall
  Unsigned j = -1; // would (fatally?) assert at runtime

The assumption that unsigned types can't take negative values is generally
false, unless the values being assigned to them are all asserted to be
non-negative...


> For instance, one of the optimizations mentioned in the linked video
> seems to not mention that using (unsigned!) size_t instead of uint32_t
> (like you're sort of supposed to) should also improve code generation
> in the same way as using int32_t.
>

It does; it talks about the problem being caused by the mismatch between
the size of the pointers (64 bits) and the indices (32 bits), and it
mentions that using 32-bit indices is common in real-world code where, for
example, the index is stored in a data structure and the programmer chooses
32 bits to represent it to save space, having decided that the index can
never be large enough to warrant 64 bits.


> It is, however, super poignant to me that uint32_t-indexing-on-x64 is
> pessimal, as that's precisely what our ns* containers (nsTArray) use
> for size, /unlike/ their std::vector counterparts, which will be using
> the more-optimal size_t.
>
> I'm not saying there's nothing wrong with unsigned types, but rather
> than I feel their faults are being overblown, and the arguments
> against them feel cherry-picked, which in a language like C++, is easy
> to do.
>

As far as I understand, these faults are the inherent semantics of unsigned
types whether we like them or not (their modular arithmetic, as well as the
semantics of arithmetic on mixtures of signed and unsigned types), and the
problem is that these types do something that programmers may not expect
(e.g. "unsigned can't accept negative values", "unsigned values will remain
(correct) positive values during arithmetic operations", etc.).  The
existing solutions that we have to mitigate some of these problems
(CheckedInt, for example) are usually helpful when the code author/reviewer
is consciously thinking about these issues; otherwise we end up using them
in response to bug reports.

The arguments being overblown or not, IMO, is completely subjective and I
doubt it is something that we will ever agree on in a discussion forum with
many participants.  :-)  If you disagree with the semantic differences
about them I'd be interested to know why.


>
> On Thu, Jul 4, 2019 at 4:45 PM Gerald Squelart 
> wrote:
> >
> > (Glad I started this discussion; thank you Nathan for the enlightening
> links, I need to review all my code now!)
> >
> > Jeff, maybe what we need is a new value type that advertises that it's
> unsigned, but doesn't have the unwanted 2^N wrapping (and its effects on
> bug-finding tools and compiler optimizations).
> > `class Unsigned { int mValue; /* magic API here */ }` -- feels like
> unsigned, but underneath it's all `int` arithmetics, with optional >=0
> assertions.
> > Would that help?
> >
> > Gerald
> >
> >
> > On Friday, July 5, 2019 at 5:35:30 AM UTC+10, Jeff Gilbert wrote:
> > > That's what CheckedInt is for, and that's what we use.
> > >
> > > The problems webgl deals with aren't arithmetic. Arithmetic is easy.
> > > (CheckedInt!) Reasoning about constraints is hard.
> > >
> > > We have some entrypoints where negative values are valid, and many
> > > where they are not. It's really nice to have a natural way to document
> > > which we expect /at compile time/. Saying "no unsigned types" really
> > > throws out the baby with the bathwater for me.
> > >
> > > On Thu, Jul 4, 2019 at 11:46 AM Botond Ballo 
> wrote:
> > > >
> > > > On Thu, Jul 4, 2019 at 2:03 PM Jeff Gilbert 
> wrote:
> > > > > It's a huge
> > > > > help to have a compile-time constraint that values can't be
> negative.
> > > >
> > > > The question is, how useful is that guarantee. Suppose you have some
> > > > code that decrements an integer too far, past zero. Instead of having
> > > > a -1 you'll have a 4294967295. Is that an improvement? Will it give
> > > > the code saner behaviour than the -1?
> > > >
> > > > Cheers,
> > > > Botond
> >


-- 
Ehsan

Re: Intent to Implement: CSS backdrop-filter

2019-07-05 Thread Ehsan Akhgari
On Mon, Jul 1, 2019 at 8:54 PM Connor Brewster 
wrote:

> Platform coverage: All platforms using the Gecko rendering engine
> (WebRender enabled only)
>
> Target release: Firefox 71 (WebRender enabled only)
>
> DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1561060
>

Hi Connor,

Since this feature is only enabled based on WebRender being enabled, I have
a few questions for you:

* Do we have other web-exposed features that are only supported when
WebRender is enabled?

* How do we plan to communicate to web developers (who may not necessarily
know/care what WebRender is) where this feature is usable?

* What happens to the DevTools integration a) when WebRender is disabled on
the target Firefox instance, and b) when WebRender is disabled on the
target Firefox instance but enabled on the host instance (e.g. when
debugging Fennec on Windows with the right NVIDIA GPU driver).

Thanks,
-- 
Ehsan


Intent to implement & ship: Document colour properties

2019-06-04 Thread Ehsan Akhgari
Hi everyone,

*Summary*: I'm planning to move the fgColor, linkColor, alinkColor,
vlinkColor and bgColor properties from HTMLDocument to Document to bring
our implementation of these properties on par with the HTML spec.
*Bug*:
*Link to standard*:
https://html.spec.whatwg.org/#other-elements,-attributes-and-apis:dom-document-fgcolor
*Platform coverage*: all platforms.
*Estimated or target release*: Firefox 69.
*Preference behind which this will be implemented*: No preference.
*Is this feature enabled by default in sandboxed iframes?* The methods are
available to all frames.
*DevTools bug*: No specific devtools support needed.
*Do other browser engines implement this?* Yes, both Blink and WebKit
support these APIs on Document.prototype.

*web-platform-tests*: This fix makes us pass the existing wpt tests that
test for the availability of these methods on the Document prototype.

*Is this feature restricted to secure contexts?* No.  This is motivated by
improving our web compatibility story, so we'd like to make this attribute
available where other web engines already make it available.

Thanks,
-- 
Ehsan


Intent to implement & ship: Editing APIs on Document

2019-05-30 Thread Ehsan Akhgari
Hi everyone,

*Summary*: I'm planning to move the designMode getter/setter as well as
execCommand() and the queryCommand*() methods from HTMLDocument to Document
to bring our implementation of these APIs on par with the HTML spec.
*Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1415270
*Link to standard*: https://html.spec.whatwg.org/multipage/dom.html#the-
document-object
*Platform coverage*: all platforms.
*Estimated or target release*: Firefox 69.
*Preference behind which this will be implemented*: No preference.
*Is this feature enabled by default in sandboxed iframes?* The methods are
available to all frames.
*DevTools bug*: No specific devtools support needed.
*Do other browser engines implement this?* Yes, both Blink and WebKit
support these APIs on Document.prototype.

*web-platform-tests*: This fix makes us pass the existing wpt tests that
test for the availability of these methods on the Document prototype.  I'm
also adding a few new tests.

*Is this feature restricted to secure contexts?* No.  This is motivated by
improving our web compatibility story, so we'd like to make this attribute
available where other web engines already make it available.

Please note that while I'm moving where these APIs are exposed, this does
not mean that we're going to expose the ability to edit non-HTML documents.

Thanks,
-- 
Ehsan


Intent to implement & ship: Document.open/close/write/writeln

2019-05-23 Thread Ehsan Akhgari
 Hi everyone,

*Summary*: I'm planning to move the open(), close(), write() and writeln()
methods from HTMLDocument to Document to bring our implementation of these
methods on par with the HTML spec.
*Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1549560
*Link to standard*: https://html.spec.whatwg.org/multipage/dom.html#the-
document-object
*Platform coverage*: all platforms.
*Estimated or target release*: Firefox 69.
*Preference behind which this will be implemented*: No preference.
*Is this feature enabled by default in sandboxed iframes?* The methods are
available to all frames.
*DevTools bug*: No specific devtools support needed.
*Do other browser engines implement this?* Yes, both Blink and WebKit
support these methods on Document.prototype.

*web-platform-tests*: This fix makes us pass the existing wpt tests that
test for the availability of these methods on the Document prototype.

*Is this feature restricted to secure contexts?* No.  This is motivated by
improving our web compatibility story, so we'd like to make this attribute
available where other web engines already make it available.

Thanks,
-- 
Ehsan


Intent to implement & ship: Document.domain

2019-05-23 Thread Ehsan Akhgari
 Hi everyone,

*Summary*: I'm planning to move the domain attribute from HTMLDocument to
Document to bring our implementation of this attribute on par with the HTML
spec.
*Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1467625
*Link to standard*: https://html.spec.whatwg.org/multipage/dom.html#the-
document-object
*Platform coverage*: all platforms.
*Estimated or target release*: Firefox 69.
*Preference behind which this will be implemented*: No preference.
*Is this feature enabled by default in sandboxed iframes?* The attribute is
available to all frames.
*DevTools bug*: No specific devtools support needed.
*Do other browser engines implement this?* Yes, both Blink and WebKit
support Document.domain.

*web-platform-tests*: This fix makes us pass the existing wpt tests that
test for the availability of the domain attribute on various non-HTML
documents and also adds a new WPT test around it.

*Is this feature restricted to secure contexts?* No.  This is motivated by
improving our web compatibility story, so we'd like to make this attribute
available where other web engines already make it available.

Thanks,
-- 
Ehsan


Re: Implementing a new tracking blocking mechanism using a separate nsIPrincipal for cookie jar access: StoragePrincipal

2019-04-29 Thread Ehsan Akhgari
You're right about both issues.  We reused the firstPartyDomain field in
OriginAttributes mostly for simplicity, but long term, if this feature
proves to be viable, we should probably introduce a new OA field
specifically for it to avoid these types of problems.  That being said,
making it actually compatible with the existing FPI pref would be more work
than that.

On Thu, Apr 25, 2019, 1:08 PM Tom Ritter,  wrote:

> If you have FPI enabled; it will override the StoragePrincipal switch
> and always return the partitioned jar; correct?
>
> Also, I don't think this is a big problem; but users who have enabled
> FPI in the past and then disabled it will have pre-populated
> sub-cookie jars for the trackers. This will link a prior 'identity' to
> the current one.
>
> -tom
>
> On Mon, Apr 15, 2019 at 3:23 PM Andrea Marchesini
>  wrote:
> >
> > Since Firefox 66, the anti-tracking project has been working on
> strategies
> > to protect users against trackers. One of the solutions is the ‘cookie
> > blocking for 3rd party trackers’. This is currently done denying 3rd
> party
> > trackers access to their cookie jar (no cookies sent/received, no
> > localStorage, IndexedDB, BroadcastChannel, etc).
> >
> > As experience has shown, blocking is not always the best path forward,
> > because blocking legitimate requests breaks the current web architecture.
> > Hence, instead of blocking, we would like our tracking algorithm to be
> > smarter, and so we would like to partition the cookie jar.
> >
> > To that end bug 1536411 decouples the nsIPrincipal to be used for the
> > cookie jar and the nsIPrincipal to be used for anything else (networking,
> > window and workers communications, and so on).
> >
> > The nsIPrincipal for the cookie jar is now called ‘StoragePrincipal’ and
> it
> > can be obtained from the Document, from the WorkerPrivate, or from the
> > nsIScriptObjectPrincipal and it can be created from an nsIChannel by
> > nsIScriptSecurityManager::getChannelResultStoragePrincipal().
> >
> > Normally, these getters return the regular nsIPrincipal of the resource.
> If
> > the  nsIChannel is classified by the URL-Classifier as a 3rd party
> tracking
> > resource then its StoragePrincipal will be a modified nsIPrincipal -
> which
> > is the ‘regular’ nsIPrincipal plus “First Party Isolation” (FPI) - See
> > below the differences between FPI and StoragePrincipal. More technical,
> its
> > OriginAttributes.firstPartyDomain attribute within the nsIPrincipal will
> be
> > set to the top-level domain. When a document is loaded, it takes the
> > StoragePrincipal from its nsIChannel and propagates it to any storage
> API,
> > worker, and so on. Because of this, tracking resources will use a
> > partitioned cookie jar, unique for their origins, in that first-party
> > domain.
> >
> > Also SharedWorkers and BroadcastChannels use the StoragePrincipal to
> > generate their ‘sharing’ keys in order to avoid the communication between
> > partitioned and non-partitioned contexts.
> >
> > Everyone interested in reading code comments, I refer the reader to:
> >
> >
> https://searchfox.org/mozilla-central/rev/0376cbf447efa16922c550da3bfd783b916e35d3/toolkit/components/antitracking/StoragePrincipalHelper.h#10
> >
> > When storage access permission is granted (see StorageAccess API [1] or
> the
> > anti-tracking heuristic [2]), the StoragePrincipal getters will start
> > returning the ‘regular’ nsIPrincipal and any storage components exposed
> to
> > content, will be reset/recreated: they will behave like the first-party
> > resource’s one. This behavior is observable to content because, for
> > instance, the localStorage data before and after the permission will be
> > different. But, also the current approach [3] is observable to content:
> > localStorage throws exceptions before, and it works after.
> >
> > The new storage partitioning mechanism is controlled by the pref:
> > privacy.storagePrincipal.enabledForTrackers, which is set to false by
> > default.
> >
> > Difference between FPI and StoragePrincipal:
> >
> > a. With storagePrincipal we have partitioned cookie jars just for
> tracking
> > resources. FPI has partitioned cookie jars everywhere, double-keying any
> > origin.
> >
> > b. StoragePrincipal doesn’t break window-to-window communications because
> > the partitioning happens at the cookie jar level and it’s
> > not spread to the ‘regular’ nsIPrincipal.
> >
> > c. We can ‘revert’ this partitioning when needed (StorageAccess API).
> >
> > [1] https://developer.mozilla.org/en-US/docs/Web/API/Storage_Access_API
> >
> > [2]
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Privacy/Storage_access_policy#Storage_access_grants
> >
> > [3]
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Privacy/Storage_access_policy#What_does_the_storage_access_policy_block

Intent to implement and ship: Support "noreferrer" feature for window.open()

2019-04-22 Thread Ehsan Akhgari
*Summary*: This feature adds a token to the window.open() feature argument
to allow callers to specify that the window should be opened without a
referrer (and an opener).
*Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=1527287
*Link to standard*: https://github.com/whatwg/html/pull/4331
*Platform coverage*: All platforms.
*Estimated or target release*: Firefox 68.
*Preference behind which this will be implemented*:
dom.window.open.noreferrer.enabled.
*Is this feature enabled by default in sandboxed iframes?* Yes, it is.
*DevTools bug*: No devtools specific support needed.
*Do other browser engines implement this?*

Blink: Implemented on trunk:
https://groups.google.com/a/chromium.org/d/msg/blink-dev/K8_-u7mK688/rK7FrIOLBQAJ

WebKit: Implemented in the latest Safari Technology Preview:
https://webkit.org/blog/8825/release-notes-for-safari-technology-preview-80/

*web-platform-tests*: https://github.com/web-platform-tests/wpt/pull/15352

*Is this feature restricted to secure contexts?* No, since window.open()
itself is exposed to non-secure contexts as well.
Please note that similar to our implementation of the noopener feature
token for window.open(), our noreferrer implementation will diverge from
the spec in that we will remove the parsed tokens from the feature string
before passing it down to lower layers, see
https://bugzilla.mozilla.org/show_bug.cgi?id=1419960.  This is now being
tracked to be merged into the spec in
https://github.com/whatwg/html/pull/3297.  The impact of this (currently
Firefox specific) behaviour is that if you pass a feature argument
containing only "noopener" (or "noreferrer") to window.open(), you will get
a usable window with the default UI elements whose opener (or referrer) is
stripped out, as opposed to a barebones window that is also missing all of
the UI elements (e.g. various toolbars).

Please let me know if you have any questions or concerns.

Thanks,
-- 
Ehsan


Intent to implement & ship: Document.cookie

2019-04-17 Thread Ehsan Akhgari
Hi everyone,

*Summary*: I'm planning to move the cookie attribute from HTMLDocument to
Document to bring our implementation of this attribute on par with the HTML
spec.
*Bug*: https://bugzilla.mozilla.org/show_bug.cgi?id=144795
*Link to standard*:
https://html.spec.whatwg.org/multipage/dom.html#the-document-object
*Platform coverage*: all platforms.
*Estimated or target release*: Firefox 68.
*Preference behind which this will be implemented*: No preference.
*Is this feature enabled by default in sandboxed iframes?* The attribute is
available to all frames, abiding by the existing sandbox restrictions
guarding the visibility of cookies.
*DevTools bug*: No specific devtools support needed.
*Do other browser engines implement this?* Yes, both Blink and WebKit
have supported Document.cookie for a long time.  In fact this is motivated
by eliminating a wpt test that fails only in Firefox.

*web-platform-tests*: This fix makes us pass the existing wpt tests that
test for the availability of the cookie attribute on various non-HTML
documents.

*Is this feature restricted to secure contexts?* No.  This is motivated by
improving our web compatibility story, so we'd like to make this attribute
available where other web engines already make it available.

Thanks,
-- 
Ehsan


Intent to unship: XMLDocument.load and XMLDocument.async APIs

2019-04-16 Thread Ehsan Akhgari
Hi everyone,

As of Firefox 68 I'm planning to turn off support for non-standard
XMLDocument.load and XMLDocument.async APIs.  These are really old APIs
that allow developers to load an XML document over the network
synchronously or asynchronously, but were never standardized or implemented
by other web engines, and have long been replaced by XMLHttpRequest.

On Beta65, XMLDocument.load() is being used on ~0.003% of top level page
loads (see UseOfDOM3LoadMethod in
https://georgf.github.io/usecounters/index.html#kind=page&group=DEPRECATED&channel=beta&version=65)
and the setter of XMLDocument.async is used on ~0.009% (
https://georgf.github.io/usecounters/index.html#kind=page&group=XMLDOCUMENT&channel=beta&version=65).
These numbers are a bit surprising, since it looks like about three times as
many pages set the async attribute as call load(), even though setting async
without calling load() has no side effects...

We believe the usage is low enough to try to unship, but we're not yet
removing the code.  The unshipping is done by hiding both APIs behind a
pref in https://bugzilla.mozilla.org/show_bug.cgi?id=332175 and
https://bugzilla.mozilla.org/show_bug.cgi?id=1328138.

Please let me know if you have any questions or concerns.

Thanks,
-- 
Ehsan


Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-04-08 Thread Ehsan Akhgari
On Wed, Apr 3, 2019 at 7:15 PM  wrote:

> Hi, is there an expected date for the cookie change to go to production?
>

No, this is only an experiment turned on for Firefox Nightly at this
point.  You can watch this mailing list for future updates as we make
future plans about it going forward.




-- 
Ehsan


Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-04-02 Thread Ehsan Akhgari
On Tue, Apr 2, 2019 at 6:25 AM  wrote:

> Hi,
>
> I know Google Analytics very well; only the _ga cookie is used to
> recognize a web browser across visits.
>

Great, thanks for confirming this.  So based on the data you cited before
it looks like this change hasn't made any significant impact on GA output
for Safari, so we should expect the same for Firefox.  That is good news.


> If this is the right place to discuss this change, I'm surprised there is
> so few people participating regarding the impact of such a change on the
> internet ad ecosystem.
>

This is the right place.


> You don't take the point about how ITP has been circumvented so far by the
> large ad networks.
> Google Ads, Facebook Ads and Bing Ads have deployed huge human
> resources, have updated their ad delivery and tracking systems to be in a
> situation where they keep tracking ad conversion but some have lost
> capabilities in remarketing / targeting the proper audience.
>

Yes, I am aware of some of those efforts you're alluding to.  I won't get
into the details since that involves discussing the business of other
companies.


> Give 3 names in the smaller ad networks that have done similar changes to
> adapt their technologies and platforms !
> Even Criteo didn't update its tracking.
>
> Here is an article in French that explores that point with more facts
>
> https://www.journaldunet.com/ebusiness/publicite/1422902-apple-est-il-en-train-de-tuer-la-pub-web-sur-mobile/
>
> Of course today there are abusive usage of cookies, cross domain cookie
> tracking, cookie sync operations, obscur third party data sharing with
> partners who are not identified, that don't get user consent, nor
> advertiser consent.
>
> I don't see a chance of stopping these without:
> - better legal rules where ad networks are responsible, not just
> advertisers or inventory providers
> - a common standard implemented in every modern web browser that allows to
> control which data is exposed to which tag / resource
>

The second avenue of efforts is what we are working towards here.  I agree
with you that a complete solution probably would include other
non-technical measures but that's beyond my expertise.


> The patch you propose sounds to me like a try and error change.
> But it forgets completely to study the impact on user experience and on
> the ad ecosystem.
> I'm not a fan of advertising, but it fuels lots of money into internet
> content, services and technologies. Without ads, many sides of internet as
> we know it will collapse and disappear.
>

In the interest of having a productive conversation, I will ignore the
unfounded assertions you're making about our efforts here.

Can you please clarify what exact impacts on user experience and on the ad
ecosystem you are worried about as a result of this change?

>
> Better governance is needed. Experiments without clear problem statement
> and solution design can be disastrous.
>
> Another good read, in English
>
> https://www.fastcompany.com/90308068/how-the-tragic-death-of-do-not-track-ruined-the-web-for-everyone
>

Another topic which has absolutely nothing to do with the topic under
discussion.  :-)  No matter how many off topic issues you link to, the only
issue that I am interested to discuss in this thread is changing the
maximum lifetime cap of cookies set through document.cookie to seven days
in Firefox Nightly.


> PS: I see no evidence that the Safari 12.1 that has been released
> eventually incorporates ITP 2.1 as described in beta
>

If you're looking for evidence, may I suggest testing the lifetime of
cookies in that version of Safari?

Cheers,
-- 
Ehsan


Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-04-01 Thread Ehsan Akhgari
On Fri, Mar 22, 2019 at 1:55 PM  wrote:

> Hi Ehsan,
>
> Thanks for the follow up.
> I don't have access to a macOS computer with that 12.0.3 version of Safari
>
> On the other hand I have access to Google Analytics data for multiple
> sites, not the top 100 Alexa, but I don't see any evidence of a shift
> introduced by Safari 12.0.3 , for example through an increase in the share
> of new visitors with that version
>
> See that example https://photos.app.goo.gl/FTJoDLsYiJ17SPY46
>
> The Mozilla organization probably has access to similar data.
>
> Instrumentation could also be a good option to assess the impact of such
> type of change.
>

See
https://developers.google.com/analytics/devguides/collection/analyticsjs/cookie-usage#analyticsjs.
Google Analytics relies on two user identifier cookies, one expiring in 24
hours and one expiring in 2 years.  I'm not sure how the two identifiers
are linked together but it seems that they already treat repeat visitors
who come back to the site in less than 24 hours in a special way.  That may
explain why you haven't seen any impact as a result of this change.


> What I don't get is what is the assessment process in place to understand
> how such a change will be circumvented. Because it will. As many of the
> previous changes put in place by Safari ITP initiative.
>
> And the impact on the digital advertising ecosystem. The biggest ad
> networks might suffer. But will provide solutions, estimates,
> alternatives... And most of the smaller networks, with less agility, less
> money and resources, less skills, may break, get sold to the biggest, may
> close.
>

I'm not sure why you think this change will affect companies of different
sizes in a different way.  That is a strange proposition.

If you have any concrete worries please do share and I will be happy to
discuss.  Vague concerns like this usually read like the fear of the
unknown and are usually not helpful for having a productive discussion.


> Is the goal of this change really to enforce the role of the largest ad
> networks ? Why not. They tend to honor laws more likely than others...
>

It is not.  The goal of this change is to ensure that third-party scripts,
when running in the context of a top-level site, cannot abuse its cookie
jar to store cookies that can uniquely identify visitors and can be used to
keep a record of their browsing history across websites.  This change
achieves the goal by ensuring that such identifiers, once stored in the
top-level site's cookie jar, will expire in one week maximum, and therefore
the risk of the user being tracked by the companies that serve those
third-party scripts will be restricted to that window of time.


> Questioning anyway...
>
> This no just a technical decision about a web browser dropping an internet
> standard.
>
> What do you think ?
> What does the community thinks ?
>

If you have more feedback to share, this is the right place.  I would
recommend to keep the discussion focused on concrete technical issues
please and avoid FUD (
https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt) through
referencing claims such as dropping an internet standard or enforcing the
role of certain companies.  I would love to continue to have a discussion
around the technical merits of this feature.

Thanks,
-- 
Ehsan


Re: Intent to implement: cryptomining and fingerprinting resource blocking

2019-04-01 Thread Ehsan Akhgari
Hi Rik,

Sorry for the late reply.

On Fri, Mar 22, 2019 at 10:10 PM Rik Cabanier  wrote:

>
>
> On Fri, Mar 22, 2019 at 6:07 AM Ehsan Akhgari 
> wrote:
>
>> On Thu, Mar 21, 2019, 9:39 PM Rik Cabanier,  wrote:
>>
>>> Why are these sites not included in the "safe browsing" service that is
>>> used by most browsers?
>>> That way, everyone would be protected.
>>>
>>
>> Because the relevant part of safe browsing service covers a different set
>> of criteria: https://www.google.com/about/unwanted-software-policy.html.
>>
>
> I think this page has the 3 criteria:
> https://safebrowsing.google.com/#policies
> It seems origins that try to fingerprint users or do cryptomining fall
> under category 1 and 3
>

Hmm, do you mean malware or social engineering?  I read
https://support.google.com/webmasters/answer/3258249 and
https://support.google.com/webmasters/answer/6350487/ and it is absolutely
not clear to me whether origins that fingerprint the user or engage in
(covert) cryptomining fall within those definitions.  The examples
mentioned on those two pages are specific to Chrome and other Google
software/policies and that's also unhelpful in determining whether those
categories include such software.

Note that I'm not saying that these categories are _not_ possibly a subset
of the existing SafeBrowsing database, but it's not obvious to us if they
are.

Another advantage that working on these categories with Disconnect will
have is that we will be able to document the technical details leading to
the decisions, for example see
https://github.com/disconnectme/disconnect-tracking-protection/blob/master/descriptions.md.
In the cases where I have (in my personal capacity) submitted a URL to
Google Safe Browsing, my experience has been that I have never heard back
about whether my submissions have been added to the database or not, and
what the reasons have been.

Cheers,
Ehsan


>
>
>> But more importantly, Google's safe browsing isn't by far the only block
>> list of bad URLs based on various criteria that various browsers and
>> extension use to improve the user's browsing experience. To answer your
>> actual question here, the block lists we're working with Disconnect to
>> create here are available for everyone to use under a permissive license at
>> https://github.com/disconnectme/disconnect-tracking-protection. We
>> actually ingest the list using the safe browsing protocol so other browsers
>> that have implemented that protocol could do the same today.
>>
>
> Good to know. Thanks for that link!
>
>
>>
>>> On Thu, Mar 21, 2019 at 2:59 PM Steven Englehardt <
>>> sengleha...@mozilla.com>
>>> wrote:
>>>
>>> > Summary:
>>> > We are expanding the set of resources blocked by Content Blocking to
>>> > include domains found to participate in cryptomining and
>>> fingerprinting.
>>> > Cryptomining has a significant impact on a device’s resources [0], and
>>> the
>>> > scripts are almost exclusively deployed without notice to the user [1].
>>> > Fingerprinting has long been used to track users, and is in violation
>>> of our
>>> > anti-tracking policy [2].
>>> >
>>> > In support of this, we’ve worked with Disconnect to introduce two new
>>> > categories of resources to their list: cryptominers [3] and
>>> fingerprinters
>>> > [4]. As of Firefox 67, we have exposed options to block these
>>> categories of
>>> > domains under the “Custom” section of the Content Blocking in
>>> > about:preferences#privacy. We are actively working with Disconnect to
>>> > discover new domains that participate in these practices, and expect
>>> the
>>> > lists to grow over time. A full description of the lists is given here
>>> [5].
>>> >
>>> > Bugs:
>>> > Implementation: https://bugzilla.mozilla.org/show_bug.cgi?id=1513159
>>> > Breakage:
>>> > Cryptomining: https://bugzilla.mozilla.org/show_bug.cgi?id=1527015
>>> > Fingerprinting: https://bugzilla.mozilla.org/show_bug.cgi?id=1527013
>>> >
>>> > We plan to test the impact of blocking these categories during the
>>> Firefox
>>> > 67 release cycle [6][7]. We are currently targeting Firefox 69 to block
>>> > both categories by default, however this may change depending on the
>>> > results of our user studies.
>>> >
>>> > To further field test the new lists, we expect to enable the blocking
>>> of
>>> > both categories by default in Nightly within the coming month.

Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-03-22 Thread Ehsan Akhgari
On Tue, Mar 19, 2019, 6:15 PM ,  wrote:

> Hi
>
> Sorry but I don't read the webkit.org blog post in the same way
> https://webkit.org/blog/8613/intelligent-tracking-prevention-2-1/
>
> "The **beta** releases of iOS 12.2 and Safari 12.1 on macOS High Sierra
> and Mojave include an updated version of Intelligent Tracking Prevention
> (ITP). For purposes of developer communication, we’ll refer to it as ITP
> 2.1."
>
> => this is being shipped in **beta** version of Safari 12.1 and iOS 12.2,
> this is not announced as part of Safari 12.0.3 which looks to be a security
> update only https://support.apple.com/en-us/HT209449
>
>
> https://developer.apple.com/documentation/safari_release_notes/safari_12_1_beta_3_release_notes
>
>
> https://developer.apple.com/documentation/ios_release_notes/ios_12_2_beta_6_release_notes
>
> then this is not live yet in a fully released product
>

Hi Olivier,

What I said in my email was based on manually testing that version of
Safari on a macOS machine. You're welcome to give it a test yourself. Not
sure about the reason behind the discrepancy with the blog post.

Cheers,
Ehsan

___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: cryptomining and fingerprinting resource blocking

2019-03-22 Thread Ehsan Akhgari
On Thu, Mar 21, 2019, 9:39 PM Rik Cabanier,  wrote:

> Why are these sites not included in the "safe browsing" service that is
> used by most browsers?
> That way, everyone would be protected.
>

Because the relevant part of safe browsing service covers a different set
of criteria: https://www.google.com/about/unwanted-software-policy.html.

But more importantly, Google's safe browsing is far from the only block
list of bad URLs based on various criteria that various browsers and
extensions use to improve the user's browsing experience. To answer your
actual question here, the block lists we're working with Disconnect to
create here are available for everyone to use under a permissive license at
https://github.com/disconnectme/disconnect-tracking-protection. We actually
ingest the list using the safe browsing protocol so other browsers that
have implemented that protocol could do the same today.
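To make the shape of a consumer of such a list concrete, here is a hedged sketch. The JSON layout and entity names below are invented for illustration and are not the actual disconnect-tracking-protection format; the point is only the suffix matching a browser or extension ingesting a category list might perform.

```javascript
// Illustrative sketch only: the real Disconnect list uses its own JSON
// schema. This made-up layout just demonstrates category lookup with
// suffix matching on hostnames.
const blocklist = {
  cryptomining: ["coin-miner.example", "hashpower.example"],
  fingerprinting: ["fp-vendor.example"],
};

// Return the category a hostname falls under, or null if it is unlisted.
// A hostname matches a listed domain if it equals it or is a subdomain.
function categorize(hostname, list) {
  for (const [category, domains] of Object.entries(list)) {
    for (const domain of domains) {
      if (hostname === domain || hostname.endsWith("." + domain)) {
        return category;
      }
    }
  }
  return null;
}
```

For example, `categorize("cdn.coin-miner.example", blocklist)` maps a subdomain of a listed cryptomining host to its category, while an unrelated hostname yields `null`.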

Cheers,
Ehsan


> On Thu, Mar 21, 2019 at 2:59 PM Steven Englehardt  >
> wrote:
>
> > Summary:
> > We are expanding the set of resources blocked by Content Blocking to
> > include domains found to participate in cryptomining and fingerprinting.
> > Cryptomining has a significant impact on a device’s resources [0], and
> the
> > scripts are almost exclusively deployed without notice to the user [1].
> > Fingerprinting has long been used to track users, and is in violation of our
> > anti-tracking policy [2].
> >
> > In support of this, we’ve worked with Disconnect to introduce two new
> > categories of resources to their list: cryptominers [3] and
> fingerprinters
> > [4]. As of Firefox 67, we have exposed options to block these categories
> of
> > domains under the “Custom” section of the Content Blocking in
> > about:preferences#privacy. We are actively working with Disconnect to
> > discover new domains that participate in these practices, and expect the
> > lists to grow over time. A full description of the lists is given here
> [5].
> >
> > Bugs:
> > Implementation: https://bugzilla.mozilla.org/show_bug.cgi?id=1513159
> > Breakage:
> > Cryptomining: https://bugzilla.mozilla.org/show_bug.cgi?id=1527015
> > Fingerprinting: https://bugzilla.mozilla.org/show_bug.cgi?id=1527013
> >
> > We plan to test the impact of blocking these categories during the
> Firefox
> > 67 release cycle [6][7]. We are currently targeting Firefox 69 to block
> > both categories by default, however this may change depending on the
> > results of our user studies.
> >
> > To further field test the new lists, we expect to enable the blocking of
> > both categories by default in Nightly within the coming month. If you do
> > discover breakage related to this feature, we ask that you report it in
> one
> > of the cryptomining or fingerprinting blocking breakage bugs above.
> >
> > Link to standard: These are additions to Content Blocking/Tracking
> > Protection which is not a feature we've standardized.
> >
> > Platform coverage:
> > Desktop for now. It is being considered for geckoview: (
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1530789) but is on hold
> until
> > the feature is more thoroughly tested.
> >
> > Estimated release:
> > Disabled by default and available for testing in Firefox 67. We expect to
> > ship this on by default in a future release, pending user testing
> results.
> > An intent to ship will be sent later.
> >
> > Preferences:
> > * privacy.trackingprotection.fingerprinting.enabled - controls whether
> > fingerprinting blocking is enabled
> > * privacy.trackingprotection.cryptomining.enabled - controls whether
> > cryptomining blocking is enabled
> >
> > These can also be enabled using the checkboxes under the Custom section
> of
> > Content Blocking in about:preferences#privacy for Firefox 67+.
> >
> > Is this feature enabled by default in sandboxed iframes?: Blocking
> applies
> > to all resources, regardless of their source.
> >
> > DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1537627
> > When blocking of either category is enabled, any blocked resources will
> be
> > logged to the console with the following message: `The resource at “
> > example.com” was blocked because content blocking is enabled.`
> >
> > Do other browser engines implement this?
> > Opera and Brave block cryptominers using the no-coin cryptomining list
> > [8][9]. The cryptomining list supplied by Disconnect is, in part, created
> > by matching web crawl data against no-coin and other crowdsourced lists.
> > No other browsers currently block the fingerprinting list, as we are
> > working with Disconnect to build it for this feature. However, many of
> the
> > domains on the fingerprinting list are likely to appear on other
> > crowdsourced adblocking lists.
> >
> > Web-platform-tests: Since content blocking is not a standardized feature,
> > there are no wpts.
> >
> > Is this feature restricted to secure contexts? No. Users benefit from
> > blocking in all contexts.
> >
> > [0] https://arxiv.org/pdf/1806.01994.pdf
> > [1] https:/

Re: Intent to implement and ship: Gamepad Extensions `multi touch` and `light indicator`

2019-03-15 Thread Ehsan Akhgari
On Fri, Mar 15, 2019 at 11:35 AM Tom Ritter  wrote:

> Thanks for more details on the use case.
>
> On Wed, Mar 6, 2019 at 1:35 AM  wrote:
> >
> > On Monday, February 25, 2019 at 4:17:29 PM UTC-8, Martin Thomson wrote:
> > > To add to Dan's comments here...
> > >
> > > Assuming that I'm reading this correctly [1], the fingerprinting risks
> are
> > > pretty extreme here.  In the touch spec, we have a monotonically
> increasing
> > > counter that doesn't appear to be origin-bound in any way.  What is the
> > > purpose of this identifier?  In the light spec, we have full RGB
> control
> > > over the light.  Does the light change back to a default state when the
> > > origin is no longer the primary input focus?
> > >
> > > Implementing specs of a private GitHub account is fine for the
> purposes of
> > > getting feedback, but I think that we want a clearer signal that this
> is an
> > > accepted approach before we ship something like this.  When you
> consider
> > > the potential for security and privacy implications, this is
> particularly
> > > important.
> > >
> > >
> >
> > Hi Martin,
> >
> > Sorry for reply late.
> > We will restrict these APIs to secure contexts to help them be more
> secure. Regarding the touchId, the reason we want to make it monotonically
> increasing is to recognize whether fingers have been released after the
> last touch. Let me give you two examples.
> >
> > Example 1: Let’s say touchId is currently set to 0 and no fingers are
> touching the touchpad.  When a finger touches the touchpad, touchId of this
> event would be 1.  As that finger moves around the touchpad, new touch
> events are added with updated coordinates, however, the touchId is still 1
> to denote that the finger has not been lifted from the touchpad.  If the
> finger is released and touches again, the touchId would then be 2.
> >
> > Example 2: In the case of multi touch, the first finger that touches the
> touchpad would have a touchId of 1.  The next finger that touches the
> touchpad before the first finger is released would have a touchId of 2.  If
> the first touch finger is released and touches again, that touchId would be
> 3.  This way, the application can distinguish between different touches
> that have or haven’t been removed from the touchpad.
>
>
> In this situation, it seems like the actual value of the field doesn't
> matter, only that it is increasing relative to the last value. So it
> should be possible to have separate values based on the origin.
>

I assume you mean the origin of the top-level page here.

As far as I can tell from the current spec, we can implement this
restriction based on the current spec but since we are the first engine to
ship this it seems prudent to change the spec as well in order to ensure
all future implementations would implement this in a privacy-preserving
manner.
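One way such an origin-bound restriction could work — purely a sketch of the idea, not what the spec or any engine actually does — is to remap the device-global counter to a per-origin counter, so each top-level origin only ever sees a 1-based sequence of its own:

```javascript
// Sketch (assumption, not the spec): remap a device-global, monotonically
// increasing touchId to per-origin ids. Each origin sees its own 1-based
// sequence, so the global counter cannot serve as a cross-origin identifier.
class TouchIdRemapper {
  constructor() {
    // origin -> { next: number, seen: Map<globalId, localId> }
    this.perOrigin = new Map();
  }

  remap(origin, globalId) {
    let state = this.perOrigin.get(origin);
    if (!state) {
      state = { next: 1, seen: new Map() };
      this.perOrigin.set(origin, state);
    }
    if (!state.seen.has(globalId)) {
      state.seen.set(globalId, state.next++);
    }
    return state.seen.get(globalId);
  }
}
```

The relative-ordering property the reply above describes (ids only need to increase between touches) is preserved, while two origins observing the same physical touch see unrelated values.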


> Not doing so creates a cross-origin tracking and fingerprinting vector.
>
>
> > In terms of lightColor, we would set the default color to [0, 0, 0] if
> no one has set it yet or when the controller is just plugged in. Then, the
> application is allowed to set the controller's lightbar color whenever it
> wants. I have reached out to the author and asked him to add this
> description to his proposal.
>
> It appears that one can set but cannot read the lightColor, so that's good.
>
> GamepadPose gives me a lot of concern as well. If I have a gamepad
> resting on my desk, I don't want every website to get a persistent
> identifier about me because of the pose it's resting in.  I think/hope
> that there's something in the main gamepad spec where you can't
> enumerate gamepads until the user has interacted with them, but I
> don't recall for sure.
>

There is: https://w3c.github.io/gamepad/#dom-navigator-getgamepads.  But
note that with resist fingerprinting mode we always return an empty array
from navigator.getGamepads().


> I am very opposed to shipping this spec without addressing these concerns.
>
> -tom

Thanks,
-- 
Ehsan


Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-03-08 Thread Ehsan Akhgari
On Thu, Mar 7, 2019 at 7:31 PM Ehsan Akhgari 
wrote:

> *Estimated or target release*: in which version do you want to/plan to
> release this? Getting enabled in Nightly in 67, staying Nightly only for
> now in order to collect some early feedback.  No estimated target release
> available yet since this is an early exploration.
>
Minor correction: in order to avoid conflicting with the soft freeze, I will
hold off on enabling this until the beginning of the next cycle.

Thanks,
-- 
Ehsan


Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-03-08 Thread Ehsan Akhgari
On Fri, Mar 8, 2019 at 11:54 AM James Graham  wrote:

> On 08/03/2019 15:06, Boris Zbarsky wrote:
> > On 3/7/19 7:31 PM, Ehsan Akhgari wrote:
> >> *web-platform-tests*: This is an intervention which different engines do
> >> not agree on yet.  Creating a web-platform-test for it would be very
> >> simple
> >> but it will be failing in the engines that do not agree with the
> >> intervention.  I'm not sure what the recommendation for testing these
> >> types
> >> of changes is, would be happy to submit a test if there is a path for
> >> getting them accepted into wpt.
> >
> > Other vendors have been landing tests with ".tentative" as the last part
> > of the filename before the suffixes the test harness expects (so e.g.
> > "web-locks/mode-shared.tentative.https.any.js").
> >
> > I think doing that here is fine; we may want the tests or the commit
> > message involved to point to an explainer or something tracking the need
> > for a spec change or something like that...
>
> Yes, this seems correct to me too; a .tentative. test is the right way
> to land a test for something that isn't yet standardised, and it should
> somehow link to the relevant discussion but there isn't an explicit
> convention for how that should happen (commit message vs comment vs link
> element, for example). See the end of [1] for the documentation.
>

Thanks for the suggestion, sounds good.  I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1533827 to track this.

-- 
Ehsan


Intent to Implement: Trim the Referer header sent to third-party tracking resources to origin by default through modifying the default referrer policy

2019-03-08 Thread Ehsan Akhgari
*Summary*: With Enhanced Tracking Protection gradually rolling out to our
users on the release channel, we are limiting the ability of third-party
trackers to store unique client identifiers on the user's device in order
to capture a history of their browsing across different sites.  However we
are still leaking important browsing history data to these trackers in the
Referer header, which may still be tied to the user through identifiers
such as the IP address which may be stable for some users.  It is well
known that the Referer header can contain a lot of sensitive information
such as the user's search query, their user ID on various sites, and so on.

In 2017 we did a study <
https://blog.mozilla.org/data/2018/01/26/improving-privacy-without-breaking-the-web/>
on measuring the breakage impact of a number of privacy features including
limiting the Referer data sent to third-parties to origin only, and
discovered very low breakage impact from doing so.  Since then, we have
modified the default referrer policy for all third-parties in private
browsing mode with great results in terms of web compatibility.

This intent is sent to experiment with applying the same idea to
third-party tracking resources when Enhanced Tracking Protection is turned
on, through setting a default referrer policy of
strict-origin-when-cross-origin on them.  We'd like to get some early
testing by enabling this feature on the Nightly channel in the next cycle.
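For reference, the behavior of strict-origin-when-cross-origin can be sketched as a small function. This is a simplified model of the Referrer Policy algorithm, not Gecko's implementation; it ignores details such as stripping credentials and non-HTTP schemes:

```javascript
// Simplified model of strict-origin-when-cross-origin:
//  - same-origin request: send the full referrer URL (minus any fragment)
//  - cross-origin, no HTTPS->HTTP downgrade: send only the origin
//  - HTTPS->HTTP downgrade: send no Referer header at all
function refererFor(fromUrl, toUrl) {
  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  if (from.protocol === "https:" && to.protocol === "http:") {
    return null; // downgrade: omit the header entirely
  }
  if (from.origin === to.origin) {
    from.hash = ""; // fragments are never sent in Referer
    return from.href;
  }
  return from.origin + "/"; // origins are serialized with a trailing slash
}
```

Under this policy a tracker embedded on `https://shop.example/search?q=secret` would see only `https://shop.example/` — the sensitive path and query never leave the site.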

*Bug*: This feature landed in
https://bugzilla.mozilla.org/show_bug.cgi?id=1530076, and
https://bugzilla.mozilla.org/show_bug.cgi?id=1533763 has been filed to
enable it on Nightly.  The intention is to enable it on normal windows,
since in private windows all third-party content already has this
restriction applied to it.
*Link to standard*: https://github.com/whatwg/fetch/issues/879
*Platform coverage*: Everywhere.
*Estimated or target release*: It is unclear as of yet; we are still in the
early exploration phase for this feature.
*Preference behind which this will be implemented*:
network.http.referer.defaultPolicy.trackers and
network.http.referer.defaultPolicy.trackers.pbmode, available for testing
in Nightly right now.  I'm proposing to set the value of the former pref to
2 on Nightly.
*Is this feature enabled by default in sandboxed iframes?* If not, is there
a proposed sandbox flag to enable it? If allowed, does it preserve the
current invariants in terms of what sandboxed iframes can do? I'm proposing
to enable this feature in sandboxed iframes by default.  It will preserve
the current invariants.
*DevTools bug*: No specific bug, since devtools already supports showing
the referrer policy applied to each network resource in the network
monitor, as well as the individual Referer headers sent alongside each
request, of course.
*Do other browser engines implement this?* Safari shipped this for origins
with tracking capabilities since some point last year with their ITP 2.0
release (https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/).
Firefox has shipped this for third-party origins in private windows since
Firefox 59.
*web-platform-tests*: https://bugzilla.mozilla.org/show_bug.cgi?id=1533825
is filed to track adding them.

*Is this feature restricted to secure contexts?* It's not. This
intervention is performed for protecting the user's privacy and the same
argument applies for non-secure contexts as well. Also, Referer headers
sent over non-secure contexts would be visible to network MITM attackers so
arguably protecting them is even more valuable than protecting the secure
context ones.
Please let me know if you have any questions or concerns.

Cheers,
-- 
Ehsan


Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-03-07 Thread Ehsan Akhgari
*Summary*: As part of the anti-tracking project, we'd like to experiment
with a new feature in order to mitigate the impact of third-party tracking
scripts running in top-level documents.  As we are planning to start
removing the storage capabilities of third-party trackers, which removes
their ability to store unique identifiers for each client in cookies which
would be transmitted to servers across websites, a number of cross-site
trackers have started to switch to using first-party cookies to store their
unique identifiers.  The way that this works typically is through
annotating outbound links to the target website with a unique query string
parameter, and then reading and storing that unique parameter in the
first-party cookie storage through the third-party scripts that they
control which are running in the context of the top-level target site when
the user follows such a link.

This is bad because it would allow cross-site trackers to keep tracking the
user through a unique identifier, but it would also expose the unique
identifier used by tracker1 to all of the other trackers that have active
code running on the page, which would allow them to sync up each others'
tracking identifiers without resorting to practices such as cookie syncing <
https://clearcode.cc/blog/cookie-syncing/>.
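The link-decoration flow described above can be illustrated with a toy sketch. The parameter name `tr_id` is made up for illustration; real trackers use their own parameter names:

```javascript
// Toy illustration of link decoration: a tracker's script on the referring
// site appends a unique id to outbound links, and the same tracker's script
// on the destination site reads it back out of the landing URL.
function decorateLink(href, uniqueId) {
  const url = new URL(href);
  url.searchParams.set("tr_id", uniqueId); // "tr_id" is a made-up name
  return url.toString();
}

function readDecoration(landingUrl) {
  return new URL(landingUrl).searchParams.get("tr_id");
}
```

The destination-side script could then persist the recovered value with `document.cookie` in first-party storage, which is exactly the behavior the proposed life-time cap targets.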

We'd like to experiment with imposing a short maximum life-time on cookies
set through document.cookie in order to ensure that these unique
identifiers would expire relatively quickly once they are established, and
wouldn't allow cross-site trackers to establish a pervasive permanent
unique identifier.  We're picking seven days to be compatible with Safari
which has been shipping this for a while now <
https://webkit.org/blog/8613/intelligent-tracking-prevention-2-1/>.

*Bug*: This was implemented originally in (disabled by default)
https://bugzilla.mozilla.org/show_bug.cgi?id=1529836.  The bug to enable
this on Nightly is https://bugzilla.mozilla.org/show_bug.cgi?id=1533584.
*Link to standard*: No public standard change has been made for this
feature yet.  We should modify the cookie RFC when we know more about
whether we would like to ship this change or not given that we would be the
second web engine doing so.
*Platform coverage*: where will this be available? Everywhere.
*Estimated or target release*: in which version do you want to/plan to
release this? Getting enabled in Nightly in 67, staying Nightly only for
now in order to collect some early feedback.  No estimated target release
available yet since this is an early exploration.
*Preference behind which this will be implemented*: if applicable, how can
interested parties test this before it ships pref'd on by default? The
preference for this feature, available in Nightly right now, is
privacy.documentCookies.maxage.  The value of this pref is the maximum
life-time cap in seconds.  A value of 0 would disable enforcing a cap (0 is
the current default.)
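The pref's semantics can be sketched as a tiny clamping function — an illustration of the behavior described above, not Gecko's actual implementation:

```javascript
const SEVEN_DAYS = 7 * 24 * 60 * 60; // the Safari-compatible cap, in seconds

// Clamp a document.cookie expiry (ms since epoch) to at most `capSeconds`
// from now; a cap of 0 disables clamping, matching the pref's default.
function capCookieExpiry(requestedExpiryMs, nowMs, capSeconds) {
  if (capSeconds === 0) return requestedExpiryMs;
  return Math.min(requestedExpiryMs, nowMs + capSeconds * 1000);
}
```

So a tracker requesting a one-year expiry through document.cookie would end up with a cookie that expires seven days from when it was set, while cookies already shorter-lived than the cap are untouched.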
*Is this feature enabled by default in sandboxed iframes?* If not, is there
a proposed sandbox flag to enable it? If allowed, does it preserve the
current invariants in terms of what sandboxed iframes can do? I'm proposing
to enable this feature in sandboxed iframes by default.  It will preserve
the current invariants.
*DevTools bug*: No specific bug, since devtools already supports showing
the expiry date of cookies and once this feature gets enabled it will
correctly display the expiry date for those cookies where Firefox would
enforce this cap on.
*Do other browser engines implement this?* Safari has shipped this (since
version 12.0.x, tested in 12.0.3, announced as part of ITP 2.1 recently:
https://webkit.org/blog/8613/intelligent-tracking-prevention-2-1/).

*web-platform-tests*: This is an intervention which different engines do
not agree on yet.  Creating a web-platform-test for it would be very simple
but it will be failing in the engines that do not agree with the
intervention.  I'm not sure what the recommendation for testing these types
of changes is, would be happy to submit a test if there is a path for
getting them accepted into wpt.

*Is this feature restricted to secure contexts?* It is not.  This is an
intervention which is trying to protect the user's privacy, and we would
like to ensure the protections would be applicable to non-secure contexts
as well.  Arguably these protections are even more important for non-secure
contexts given the fact that cookies set through document.cookie would be
visible to network MITM attackers after that point.

Here are the answers to a few other questions which you may have on your
mind regarding this proposed change.

*Would this change cause people to get logged out of services?* We believe
the answer to that should be no, at least for services with secure logins.
Authentication cookies should be marked as Secure and HttpOnly, and if
they're set through document.cookie they can't be HttpOnly by default and
are therefore vulnerable to XSS attacks.  There

Re: Cookie policy/permission in live documents - proposal

2019-02-07 Thread Ehsan Akhgari
I think Johann is right.  Thanks for the design, Eric, this looks great to
me.  I think reloading all tabs is a much better idea than restarting the
browser.  I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1526075 for
this.

On Mon, Feb 4, 2019 at 8:16 AM Johann Hofmann  wrote:

> In my experience having to restart applications to make settings apply is
> the worst thing ever, hence I really like your mock. We should make sure to
> include it in the bug (or a follow-up) for this proposed change.
>
> On Thu, Jan 31, 2019 at 8:13 PM  wrote:
>
> > On Monday, January 28, 2019 at 11:08:32 AM UTC-5, Ehsan Akhgari wrote:
> > > On Mon, Jan 28, 2019 at 10:51 AM Daniel Veditz 
> > wrote:
> > >
> > > > On Mon, Jan 28, 2019 at 12:57 AM Andrea Marchesini <
> > > > amarches...@mozilla.com> wrote:
> > > >
> > > >> If we try to apply the new cookie policy immediately, 3rd party
> > trackers
> > > >> in opened tabs should switch to a first-party-isolation storage, but
> > they
> > > >> could also have already data in memory (user-tokens), and populate
> > the new
> > > >> cookie jar consequentially. This would break the isolation. The
> > solution in
> > > >> this case, is to apply the change only after the reloading.
> > > >>
> > > >
> > > > That's a great point in favor of your proposal. I'm still concerned
> > about
> > > > "infinite-page" sites (facebook/twitter/etc) where a user typically
> > rarely
> > > > reloads. Would it be too ugly to apply an infobar to each active tab
> > that
> > > > says "The cookie policy has changed. Reload to apply the new policy
> > > > [Reload]"? Or maybe has a [Reload this tab][Reload All] set of
> > buttons. I
> > > > have serious misgivings about my UX suggestion here, but maybe it
> will
> > > > spark better ideas on how to communicate to users. An
> alert/doorhanger
> > in
> > > > the pref page where the setting is changed that warns the user it
> only
> > > > applies to new pages and offers to reload all active tabs?
> > > >
> > >
> > > One option that we have for handling this change is to modify the way
> we
> > > apply the change in the Preferences UI instead of asking people to
> reload
> > > their pages.  For example, we can ask the user to restart their browser
> > > when they make changes to the cookie policy/permissions (similar to how
> > > turning permanent private browsing on/off works), or add a notice in
> the
> > > Preferences saying that the changes made will only affect pages loaded
> > from
> > > now on, etc.
> > >
> > > I don't think showing a message on every open tab to ask the user to
> > reload
> > > it is the only UX that is possible for solving this problem, it's only
> > one
> > > rough idea (AFAIK nobody has talked to the UX team about it yet!)...
> > >
> > > Cheers,
> > > --
> > > Ehsan
> >
> >
> > From a UX perspective I think your proposal makes sense, Baku.
> >
> > I feel that having a user manually reload each individual tab they have
> > open is too much to ask.
> >
> > I spoke with Bryan Bell and we share Ehsan's thinking.
> > If a user changes preferences that affect the cookie policy they get an
> > extra box that appears and explains they need to reload tabs in order for
> > the new policy to apply.
> >
> > Did a quick mock up to show what this might look like (note the mock
> isn't
> > final and the copy hasn't been reviewed)
> >
> > Mock can be found here: https://cl.ly/7b6cc1e85e36
> >
> > Also, instead of reloading the tabs we can restart their browser as Ehsan
> > mentioned. We'll just have to be careful and explain that all their tabs
> > will be reopened. Is one way more performant than the other?
> >
> > Regards,
> > Eric


-- 
Ehsan


Re: C++ method definition comments

2019-01-30 Thread Ehsan Akhgari
On Wed, Jan 30, 2019 at 1:35 AM Karl Tomlinson  wrote:

> Ehsan Akhgari writes:
>
> > On Mon, Jan 28, 2019 at 2:58 PM Jeff Gilbert 
> wrote:
> >
> >> I would much rather revert to:
> >> /*static*/ void
> >> Foo::Bar()
> >>
> >> The Foo::Bar is the most relevant part of that whole expression, which
> >> makes it nice to keep up against the start of the line.
> >>
> >
> > The clang-format option which allows formatting the way you are
> suggesting,
> > AlwaysBreakAfterDefinitionReturnType, is deprecated, and is likely to be
> > removed from a future version of clang-format, so there is no sustainable
> > way for us to adopt this suggestion.
>
> Where there's a will there's often a way. e.g.
>
> /*static*/ void  //(clang-format line-break)
> Foo::Bar() {
>
> I do like being able to see function names in diff output, but I'm
> not so keen on having to put //cflb at the beginning of every
> function.  This feels too much like working against the decision
> to follow Google style.
>
> With so much code using Google style, I guess there must be tools
> to show useful information in diff output, or at least there will
> be soon...
>

What tool do you use which has difficulty showing function names in diffs
right now?  It seems to work fine for me both in git and hgweb...

--
Ehsan


Re: C++ method definition comments

2019-01-30 Thread Ehsan Akhgari
On Wed, Jan 30, 2019 at 1:30 AM Karl Tomlinson  wrote:

> Ehsan Akhgari writes:
>
> > On Mon, Jan 28, 2019 at 6:27 PM Ryan Hunt  wrote:
> >
> >> [...]
> >>
> >> So for converting from C-style to C++-style, that would be:
> >>
> >> /* static */ void Foo::Bar() {
> >>  ...
> >> }
> >>
> >> // static
> >> void Foo::Bar() {
> >>  ...
> >> }
> >>
> >> [...]
> >>
> >> My one concern would be the presence of other C++-style
> >> comments before the method definition. For example [1].
> >
> > [...]  How about detecting those cases and inserting a newline
> > between the comments on the line before, for extra clarity?
> >
> >> [...]
> >>
> >> [1]
> >>
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1023
> >>
> >> ‐‐‐ Original Message ‐‐‐
> >> On Monday, January 28, 2019 12:51 PM, Ehsan Akhgari <
> >> ehsan.akhg...@gmail.com> wrote:
> >>
> >> [...]
> >>
> >> The path to least resistance for addressing this problem may be to
> convert
> >> these into C++-style comments and therefore moving them into their own
> >> lines.  Would you be OK with that?
>
> I haven't noticed clang-format enforcing its own opinions on
> comments when they already follow Google style.
>
> In my experiments clang-format is accepting this:
>
> // Make this Foo Bar.
> /* static */
> void Foo::Bar() {
>  ...
> }
>
> The /* */ style comment provides a clear separation from any other
> comment on the previous line, without the need for an extra
> blank-line.  "don't use blank lines when you don't have to."
>

It depends on where you start from.  If you start from the code sample
above, clang-format won't touch the lines with comments.  However, if you
start from the code sample below, it will:

// Make this Foo Bar.
/* static */ void
Foo::Bar()
{
  // ...
}

It will get reformatted to:

// Make this Foo Bar.
/* static */ void Foo::Bar() {
  // ...
}

Cheers,
-- 
Ehsan


Re: C++ method definition comments

2019-01-30 Thread Ehsan Akhgari
On Tue, Jan 29, 2019 at 6:40 PM  wrote:

> On Wednesday, January 30, 2019 at 9:57:02 AM UTC+11, Ehsan Akhgari wrote:
> > On Mon, Jan 28, 2019 at 7:10 PM  wrote:
> >
> > > Just a thought: Would it be worth considering a blank macro, e.g.:
> > > static void foo();
> > > DECLARED_STATIC void foo() {...}
> > >
> > > On top of not being confused with other comments around, it could be
> > > clang-checked so it's never wrong. (And maybe eventually enforced, like
> > > MOZ_IMPLICIT is.)
> > >
> >
> > Yeah, that could be a future alternative, but it would require someone to
> > do the hard work of implementing the required static analysis and landing
> > it.  I think Ryan's proposal will probably simplify that process somewhat
> > by making it possible to mass-replace a bunch of "// static" comments
> with
> > "DECLARED_STATIC" or some such, but I don't want to hold the good
> solution
> > here for a perfect solution in the future.  I would personally be OK for
> > these two to happen incrementally.
> >
> > Would you mind filing a bug for this please?
> >
> > Thanks,
> > Ehsan
>
> Thank you Ehsan. Yes, a phased approach would definitely be the way to go.
> https://bugzilla.mozilla.org/show_bug.cgi?id=1523793
>
> I realize now that this doesn't help at all with the issue of valuable
> horizontal real-estate! Sorry for the tangent.
>

Oh but I think it will actually.  :-)  clang-format has a heuristic that
makes it put upper-case macros before function definition return types on
their own lines (e.g. see how our NS_IMETHODIMP macro is treated.)


> One (partly tongue-in-cheek) solution to make the important function name
> more prominent is to use trailing return types:
> `auto Foo::Bar() -> void {`
> Note that this is more easily greppable when searching for function names.
> And it looks more like Rust.
>
> To help with spacing, the `DECLARED_...` macros could be placed at the end
> (if possible?) :
> `void Foo::Bar() DECLARED_STATIC {`
> `auto Foo::Bar() -> void DECLARED_STATIC {`
> But this feels uglier to me.
>

Interesting idea!  I wonder if we are at a point in our adoption curve of
Rust where we should worry yet about how weird our C++ code looks to the
majority of developers...  I suppose doing this would mean reserving
only five characters at the beginning of function names for the "return
type".


> Cheers,
> Gerald
>
> > > On Tuesday, January 29, 2019 at 10:27:17 AM UTC+11, Ryan Hunt wrote:
> > > > Yeah, personally I have found them be useful and don't have an issue
> > > with keeping
> > > > them. I just wasn't sure if that was a common experience.
> > > >
> > > > So for converting from C-style to C++-style, that would be:
> > > >
> > > > /* static */ void Foo::Bar() {
> > > >  ...
> > > > }
> > > >
> > > > // static
> > > > void Foo::Bar() {
> > > >  ...
> > > > }
> > > >
> > > > I think that would be good. My one concern would be the presence of
> > > other C++-style
> > > > comments before the method definition. For example [1].
> > > >
> > > > Ideally documentation like that should go in the header by the method
> > > declaration, but I
> > > > have no idea if we consistently do that.
> > > >
> > > > [1]
> > >
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1023
> > > >
> > > > Thanks,
> > > > Ryan
> > > >
> > > > ‐‐‐ Original Message ‐‐‐
> > > > On Monday, January 28, 2019 12:51 PM, Ehsan Akhgari <
> ehsan...@gmail.com>
> > > wrote:
> > > >
> > > > > This is indeed one of the cases where the reformat has made things
> > > worse.  I think as a couple of people have already said, we'll find
> that
> > > some people do find these annotations useful, even if they're not
> always
> > > consistently present.
> > > > >
> > > > > The path to least resistance for addressing this problem may be to
> > > convert these into C++-style comments and therefore moving them into
> their
> > > own lines.  Would you be OK with that?
> > > > >
> > > > > Thanks,
> > > > > Ehsan
> > > > >
> > > > > On Fri, Jan 25, 2019 at 11:49 PM Ryan Hunt 
> wrote:
> > > > >
> > > > >

Re: C++ method definition comments

2019-01-29 Thread Ehsan Akhgari
On Mon, Jan 28, 2019 at 7:10 PM  wrote:

> Just a thought: Would it be worth considering a blank macro, e.g.:
> static void foo();
> DECLARED_STATIC void foo() {...}
>
> On top of not being confused with other comments around, it could be
> clang-checked so it's never wrong. (And maybe eventually enforced, like
> MOZ_IMPLICIT is.)
>

Yeah, that could be a future alternative, but it would require someone to
do the hard work of implementing the required static analysis and landing
it.  I think Ryan's proposal will probably simplify that process somewhat
by making it possible to mass-replace a bunch of "// static" comments with
"DECLARED_STATIC" or some such, but I don't want to hold the good solution
here for a perfect solution in the future.  I would personally be OK for
these two to happen incrementally.

Would you mind filing a bug for this please?

Thanks,
Ehsan


> Cheers,
> Gerald
>
> On Tuesday, January 29, 2019 at 10:27:17 AM UTC+11, Ryan Hunt wrote:
> > Yeah, personally I have found them be useful and don't have an issue
> with keeping
> > them. I just wasn't sure if that was a common experience.
> >
> > So for converting from C-style to C++-style, that would be:
> >
> > /* static */ void Foo::Bar() {
> >  ...
> > }
> >
> > // static
> > void Foo::Bar() {
> >  ...
> > }
> >
> > I think that would be good. My one concern would be the presence of
> other C++-style
> > comments before the method definition. For example [1].
> >
> > Ideally documentation like that should go in the header by the method
> declaration, but I
> > have no idea if we consistently do that.
> >
> > [1]
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1023
> >
> > Thanks,
> > Ryan
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Monday, January 28, 2019 12:51 PM, Ehsan Akhgari 
> wrote:
> >
> > > This is indeed one of the cases where the reformat has made things
> worse.  I think as a couple of people have already said, we'll find that
> some people do find these annotations useful, even if they're not always
> consistently present.
> > >
> > > The path to least resistance for addressing this problem may be to
> convert these into C++-style comments and therefore moving them into their
> own lines.  Would you be OK with that?
> > >
> > > Thanks,
> > > Ehsan
> > >
> > > On Fri, Jan 25, 2019 at 11:49 PM Ryan Hunt  wrote:
> > >
> > >> Hi all,
> > >>
> > >> Quick C++ style question.
> > >>
> > >> A common pattern in Gecko is for method definitions to have a comment
> with the
> > >> 'static' or 'virtual' qualification.
> > >>
> > >> Before the reformat, the comment would be on it's own separate line
> [1]. Now
> > >> it's on the main line of the definition [2].
> > >>
> > >> For example:
> > >>
> > >> /* static */ void
> > >> Foo::Bar() {
> > >>   ...
> > >> }
> > >>
> > >> vs.
> > >>
> > >> /* static */ void Foo::Bar() {
> > >>   ...
> > >> }
> > >>
> > >> Personally I think this now takes too much horizontal space from the
> main
> > >> definition, and would prefer it to be either on its own line or just
> removed.
> > >>
> > >> Does anyone have an opinion on whether we still want these comments?
> And if so
> > >> whether it makes sense to move them back to their own line.
> > >>
> > >> (My ulterior motive is that sublime text's indexer started failing to
> find
> > >>  these definitions after the reformat, but that should be fixed
> regardless)
> > >>
> > >> If you're interested in what removing these would entail, I wrote a
> regex to
> > >> make the change [3].
> > >>
> > >> Thanks,
> > >> Ryan
> > >>
> > >> [1]
> https://hg.mozilla.org/mozilla-central/file/0348d472115d/layout/generic/nsFrame.cpp#l1759
> > >> [2]
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1756
> > >> [3]
> https://hg.mozilla.org/try/rev/31ab3e466b6f15dcdbb1aee47edabc7c358b86f2
> > >>
> > >
> > > --
> > > Ehsan
>


-- 
Ehsan


Re: C++ method definition comments

2019-01-29 Thread Ehsan Akhgari
On Mon, Jan 28, 2019 at 2:58 PM Jeff Gilbert  wrote:

> I would much rather revert to:
> /*static*/ void
> Foo::Bar()
>
> The Foo::Bar is the most relevant part of that whole expression, which
> makes it nice to keep up against the start of the line.
>

The clang-format option which allows formatting the way you are suggesting,
AlwaysBreakAfterDefinitionReturnType, is deprecated, and is likely to be
removed from a future version of clang-format, so there is no sustainable
way for us to adopt this suggestion.


> In a clang-format world, we should feel more free to make such
> deviations from Google Style, since it's all handled for us.
>
> On Mon, Jan 28, 2019 at 10:52 AM Ehsan Akhgari 
> wrote:
>
>
> > This is indeed one of the cases where the reformat has made things worse.
> > I think as a couple of people have already said, we'll find that some
> > people do find these annotations useful, even if they're not always
> > consistently present.
> >
> > The path to least resistance for addressing this problem may be to
> convert
> > these into C++-style comments and therefore moving them into their own
> > lines.  Would you be OK with that?
> >
> > Thanks,
> > Ehsan
> >
> > On Fri, Jan 25, 2019 at 11:49 PM Ryan Hunt  wrote:
> >
> > > Hi all,
> > >
> > > Quick C++ style question.
> > >
> > > A common pattern in Gecko is for method definitions to have a comment
> with
> > > the
> > > 'static' or 'virtual' qualification.
> > >
> > > Before the reformat, the comment would be on it's own separate line
> [1].
> > > Now
> > > it's on the main line of the definition [2].
> > >
> > > For example:
> > >
> > > /* static */ void
> > > Foo::Bar() {
> > >   ...
> > > }
> > >
> > > vs.
> > >
> > > /* static */ void Foo::Bar() {
> > >   ...
> > > }
> > >
> > > Personally I think this now takes too much horizontal space from the
> main
> > > definition, and would prefer it to be either on its own line or just
> > > removed.
> > >
> > > Does anyone have an opinion on whether we still want these comments?
> And
> > > if so
> > > whether it makes sense to move them back to their own line.
> > >
> > > (My ulterior motive is that sublime text's indexer started failing to
> find
> > >  these definitions after the reformat, but that should be fixed
> regardless)
> > >
> > > If you're interested in what removing these would entail, I wrote a
> regex
> > > to
> > > make the change [3].
> > >
> > > Thanks,
> > > Ryan
> > >
> > > [1]
> > >
> https://hg.mozilla.org/mozilla-central/file/0348d472115d/layout/generic/nsFrame.cpp#l1759
> > > [2]
> > >
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1756
> > > [3]
> > >
> https://hg.mozilla.org/try/rev/31ab3e466b6f15dcdbb1aee47edabc7c358b86f2
> > >
> > >
> >
> >
> > --
> > Ehsan
>


-- 
Ehsan


Re: C++ method definition comments

2019-01-29 Thread Ehsan Akhgari
On Mon, Jan 28, 2019 at 6:27 PM Ryan Hunt  wrote:

> Yeah, personally I have found them be useful and don't have an issue with
> keeping
> them. I just wasn't sure if that was a common experience.
>
> So for converting from C-style to C++-style, that would be:
>
> /* static */ void Foo::Bar() {
>  ...
> }
>
> // static
> void Foo::Bar() {
>  ...
> }
>
>
>
> I think that would be good.
>

Great!


> My one concern would be the presence of other C++-style
> comments before the method definition. For example [1].
>

That's nothing that a bit of regex wizardry can't take care of.  :-)  How
about detecting those cases and inserting a newline between the comments on
the line before, for extra clarity?


> Ideally documentation like that should go in the header by the method
> declaration, but I
> have no idea if we consistently do that.
>
> [1]
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1023
>
> Thanks,
> Ryan
>
> ‐‐‐ Original Message ‐‐‐
> On Monday, January 28, 2019 12:51 PM, Ehsan Akhgari <
> ehsan.akhg...@gmail.com> wrote:
>
> This is indeed one of the cases where the reformat has made things worse.
> I think as a couple of people have already said, we'll find that some
> people do find these annotations useful, even if they're not always
> consistently present.
>
> The path to least resistance for addressing this problem may be to convert
> these into C++-style comments and therefore moving them into their own
> lines.  Would you be OK with that?
>
> Thanks,
> Ehsan
>
> On Fri, Jan 25, 2019 at 11:49 PM Ryan Hunt  wrote:
>
>> Hi all,
>>
>> Quick C++ style question.
>>
>> A common pattern in Gecko is for method definitions to have a comment
>> with the
>> 'static' or 'virtual' qualification.
>>
>> Before the reformat, the comment would be on it's own separate line [1].
>> Now
>> it's on the main line of the definition [2].
>>
>> For example:
>>
>> /* static */ void
>> Foo::Bar() {
>>   ...
>> }
>>
>> vs.
>>
>> /* static */ void Foo::Bar() {
>>   ...
>> }
>>
>> Personally I think this now takes too much horizontal space from the main
>> definition, and would prefer it to be either on its own line or just
>> removed.
>>
>> Does anyone have an opinion on whether we still want these comments? And
>> if so
>> whether it makes sense to move them back to their own line.
>>
>> (My ulterior motive is that sublime text's indexer started failing to find
>>  these definitions after the reformat, but that should be fixed
>> regardless)
>>
>> If you're interested in what removing these would entail, I wrote a regex
>> to
>> make the change [3].
>>
>> Thanks,
>> Ryan
>>
>> [1]
>> https://hg.mozilla.org/mozilla-central/file/0348d472115d/layout/generic/nsFrame.cpp#l1759
>> [2]
>> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1756
>> [3]
>> https://hg.mozilla.org/try/rev/31ab3e466b6f15dcdbb1aee47edabc7c358b86f2
>>
>>
>
>
> --
> Ehsan
>
>
>

-- 
Ehsan


Re: Intent to ship: implicit rel=noopener for target=_blank on anchor and area elements

2019-01-28 Thread Ehsan Akhgari
On Thu, Jan 24, 2019 at 6:52 AM Andrea Marchesini 
wrote:

> I intend to turn "implicit ref=noopener for anchor and area elements for
> target=_blank" on by default in 67. It has been developed behind the
> "dom.targetBlankNoOpener.enabled" preference and enabled in nightly for ~2
> cycles (it landed in FF65).
>
> Safari has already shipped this feature:
>
> https://webkit.org/blog/8475/release-notes-for-safari-technology-preview-68/
>

Just to be clear, this hasn't shipped in a release version of Safari yet;
it will probably be part of the next release of Safari.


> Bug to turn on by default:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1522083
>
> This feature was previously discussed in this "intent to implement"
> thread:
>
> https://groups.google.com/forum/#!searchin/mozilla.dev.platform/intent$20to$20implement$20rel$3D%7Csort:date/mozilla.dev.platform/d4R7WIHSOMY/iHBAH0koCgAJ
>


-- 
Ehsan


Re: C++ method definition comments

2019-01-28 Thread Ehsan Akhgari
This is indeed one of the cases where the reformat has made things worse.
I think as a couple of people have already said, we'll find that some
people do find these annotations useful, even if they're not always
consistently present.

The path to least resistance for addressing this problem may be to convert
these into C++-style comments and therefore moving them into their own
lines.  Would you be OK with that?

Thanks,
Ehsan

On Fri, Jan 25, 2019 at 11:49 PM Ryan Hunt  wrote:

> Hi all,
>
> Quick C++ style question.
>
> A common pattern in Gecko is for method definitions to have a comment with
> the
> 'static' or 'virtual' qualification.
>
> Before the reformat, the comment would be on it's own separate line [1].
> Now
> it's on the main line of the definition [2].
>
> For example:
>
> /* static */ void
> Foo::Bar() {
>   ...
> }
>
> vs.
>
> /* static */ void Foo::Bar() {
>   ...
> }
>
> Personally I think this now takes too much horizontal space from the main
> definition, and would prefer it to be either on its own line or just
> removed.
>
> Does anyone have an opinion on whether we still want these comments? And
> if so
> whether it makes sense to move them back to their own line.
>
> (My ulterior motive is that sublime text's indexer started failing to find
>  these definitions after the reformat, but that should be fixed regardless)
>
> If you're interested in what removing these would entail, I wrote a regex
> to
> make the change [3].
>
> Thanks,
> Ryan
>
> [1]
> https://hg.mozilla.org/mozilla-central/file/0348d472115d/layout/generic/nsFrame.cpp#l1759
> [2]
> https://hg.mozilla.org/mozilla-central/file/e4b9b1084292/layout/generic/nsFrame.cpp#l1756
> [3]
> https://hg.mozilla.org/try/rev/31ab3e466b6f15dcdbb1aee47edabc7c358b86f2
>
>


-- 
Ehsan


Re: Cookie policy/permission in live documents - proposal

2019-01-28 Thread Ehsan Akhgari
On Mon, Jan 28, 2019 at 10:51 AM Daniel Veditz  wrote:

> On Mon, Jan 28, 2019 at 12:57 AM Andrea Marchesini <
> amarches...@mozilla.com> wrote:
>
>> If we try to apply the new cookie policy immediately, 3rd party trackers
>> in opened tabs should switch to a first-party-isolation storage, but they
>> could also have already data in memory (user-tokens), and populate the new
>> cookie jar consequentially. This would break the isolation. The solution in
>> this case, is to apply the change only after the reloading.
>>
>
> That's a great point in favor of your proposal. I'm still concerned about
> "infinite-page" sites (facebook/twitter/etc) where a user typically rarely
> reloads. Would it be too ugly to apply an infobar to each active tab that
> says "The cookie policy has changed. Reload to apply the new policy
> [Reload]"? Or maybe has a [Reload this tab][Reload All] set of buttons. I
> have serious misgivings about my UX suggestion here, but maybe it will
> spark better ideas on how to communicate to users. An alert/doorhanger in
> the pref page where the setting is changed that warns the user it only
> applies to new pages and offers to reload all active tabs?
>

One option that we have for handling this change is to modify the way we
apply the change in the Preferences UI instead of asking people to reload
their pages.  For example, we can ask the user to restart their browser
when they make changes to the cookie policy/permissions (similar to how
turning permanent private browsing on/off works), or add a notice in the
Preferences saying that the changes made will only affect pages loaded from
now on, etc.

I don't think showing a message on every open tab asking the user to reload
it is the only possible UX for solving this problem; it's only one
rough idea (AFAIK nobody has talked to the UX team about it yet!)...

Cheers,
-- 
Ehsan


Re: Cookie policy/permission in live documents - proposal

2019-01-25 Thread Ehsan Akhgari
On Fri, Jan 25, 2019 at 2:51 PM Daniel Veditz  wrote:

>
> Your description equating cookies and storage within a document lifetime
> makes sense. Is this intended to also apply to network requests? The
> first-party document already has no access to 3rd party cookies so it
> shouldn't matter at that level if Necko's rules change "live". If I'm on
> twitter/facebook (which make constant background requests) and I clear my
> entire cookie jar those documents are going to break. If I just tossed all
> my cookies that's what I want! Discovering that I'm still logged into those
> sites would be disturbing. Similarly, if I flip the "block 3rd-party
> cookies" pref I'm going to react negatively if I still see tracker cookies
> showing up just because I've left an active page open somewhere.
>

Cookies have been dynamic and racy since the dawn of time, both at the
HTTP layer and in their reflection in DOM (document.cookie).  Clearing your
cookies isn't something that is changed by this proposal.  I'm not too sure
how Andrea was planning to handle cookie policy at the Necko layer but we
have a lot of flexibility here and pages also can probably tolerate dynamic
changes to document.cookie.  I *think* handling the cookie policy globally
at the Necko layer is probably easier but I'm curious to know Andrea's
thoughts.

-- 
Ehsan


Re: Cookie policy/permission in live documents - proposal

2019-01-25 Thread Ehsan Akhgari
On Thu, Jan 24, 2019 at 5:34 PM Martin Thomson  wrote:

> There are two reasons I can see why we might consider access to storage to
> require "hiding" of some sort.
>
> The first is that we want to make sure that pages that were built with an
> expectation of access to persistent storage, continue to work more or less
> as they did before.  We don't want them to be able to track people, but we
> don't want them to suddenly catch fire.  In that light, we're not really
> hiding the fact that we are compartmentalizing persisted data, we're just
> ensuring that those pages aren't surprised by errors.
>
> The second possibility is that we might want to be able to present a fake
> environment to trackers that mimics a real one, but is limited.  That is,
> they might think that they are operating in an unconstrained iframe, but
> they really aren't.
>
> I think that the fact that you can query with document.hasStorageAccess
> puts us firmly in the first bucket.  While we might present a somewhat
> functional API, we're only doing that to avoid bustage for old documents,
> and we're not trying to hide the fact that we're blocking their ability to
> track.  From that perspective, having this modelled as a permission (and
> even allowing permission.query() to work) is probably OK.
>

Note that hasStorageAccess() doesn't reveal whether you have "storage"
access; rather, it reveals whether you have "storage access".  The
difference is very subtle, but here "storage access" means the thing that
you gain if you'd called requestStorageAccess() before, or if the browser
determines that you're eligible to receive the kind of access that function
is capable of granting you.  That is effectively a subversion of our
soon-to-be-default cookie policy for third-party trackers for web
compatibility, not "storage" access in the sense you're referring to here.

But anyways, there is navigator.cookieEnabled which reveals the kind of
storage access I think you're alluding to here, so the rest of your point
is still valid.  :-)


> So this is really about the dynamism aspects of this.  Clearly, building a
> system where you can switch out the fake storage with the real one
> dynamically is hard.  The surface area is surprisingly big and the changes
> are pervasive.  That's infeasible.  But I think that rather than saying
> that permissions are a poor fit, I think that it would pay to think more
> about how maybe we might fix this in permissions.  Because I think that
> this is something that works with permissions in many other respects - it's
> something that we might want to bake into feature policy for instance.
>
> When we talked about permissions recently, we concluded that there were
> some aspects of the .query() API that didn't properly reflect the range of
> policies that might apply.  For instance, permission for some actions is
> conditional on an engagement gesture (a click or key press).  Here I think
> that we might have a new condition: the permission is conditional on the
> page being refreshed.  Now, I concede that perhaps this is too ambitious
> and in the absence of other cases that share this constraint this might be
> overgeneralizing, but I think that this fits reasonably well.
>

I think the reason why Andrea said permissions are a poor fit here is the
dynamism aspect.  For example, web pages aren't typically written with the
expectation that localStorage/indexedDB methods will throw until a
permission is granted out of band, or they start throwing when a permission
is revoked.  Our current behaviour which is exactly that will cause pages
to break when the user does something to change permissions while there are
live documents around.  I don't think the problem is really about whether
Permissions.query() is a good fit to query this type of permission.

-- 
Ehsan


Re: Proposal to adjust our clang-format rules to include spaces after the hash for nested preprocessor directives

2019-01-17 Thread Ehsan Akhgari
I've received some on- and off-list responses to this, all in favour.  I've
filed https://bugzilla.mozilla.org/show_bug.cgi?id=1521000 to make the
change.

Thanks to everyone who provided feedback.

On Thu, Jan 10, 2019 at 8:01 PM Ehsan Akhgari 
wrote:

> Hi everyone,
>
> I'd like to propose the first modification to our clang-format rules based
> on the feedback I've received since the switch to our new coding style from
> the SpiderMonkey team.
>
> The problem we're trying to solve is that in code that has nested
> preprocessor directives, when the directives are far apart from each other,
> it may be difficult to spot that a given directive is nested under another
> one, and this may hide important logic in cases such as a chain of
> #if/#ifdef directives.  Here is a practical example: <
> https://github.com/sylvestre/gecko-dev/blob/master/js/src/wasm/WasmBaselineCompile.cpp#L4054>.
> When looking at code like this, it can be quite difficult to know where in
> the (potential) preprocessor nesting levels you are without scrolling
> around quite a bit and keeping a lot of context in mind.  In the example
> above, I would say that's not reasonably possible to do.
>
> The common way to deal with this problem is to indent nested preprocessor
> directives, very much similarly to how we indent normal code, for example:
>
> #if foo
> #  if bar
> #define x 1
> #  else
> #define x 2
> #  endif
> #endif
>
> This is allowed, but not required, by our coding style <
> https://google.github.io/styleguide/cppguide.html#Preprocessor_Directives>,
> and is supported by clang-format by using the IndentPPDirectives: AfterHash
> option <https://clang.llvm.org/docs/ClangFormatStyleOptions.html>.  So in
> order to accommodate easier reading of code like this, I propose to adopt
> that clang-format option.
>
> You can look at how the above code looks like after reformatting the tree
> with that option here: <
> https://github.com/ehsan/gecko-dev/blob/45df04c211f4dd875c58125bca5fbca381ed8fca/js/src/wasm/WasmBaselineCompile.cpp#L4824>.
> If you are curious to look at the entire tree or the resulting changes,
> they're available for viewing here respectively: <
> https://github.com/ehsan/gecko-dev/tree/afterhash> and <
> https://github.com/ehsan/gecko-dev/commit/45df04c211f4dd875c58125bca5fbca381ed8fca
> >.
>
> Please let me know if you have any suggestions or concerns.  Ideas that
> have come up before include doing nothing about this problem (which I think
> isn't a reasonable answer since I think the problem proposed is valid and
> has a viable solution covered in our coding style which we can achieve
> easily with tooling) and doing this in a small scope, for example only for
> SpiderMonkey (which the SpiderMonkey team or myself prefer not to do since
> we don't have any evidence to suggest the problem is localized to that part
> of the code base and we prefer to not have special cases in our coding
> style where we can avoid it, also this change isn't nearly as invasive as
> the tree-wide reformat so we should be able to do it with minimal
> disruption to anyone's work).
>
> I'm planning to file a bug on this by the end of next week if no serious
> concerns are raised by then.
>
> Thanks,
> --
> Ehsan
>
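For concreteness, the proposal amounts to a one-line addition to the tree's .clang-format configuration.  This is a sketch: IndentPPDirectives is the clang-format option named above, and any other options in the real .clang-format file are omitted here:

```yaml
# Indent nested preprocessor directives after the hash, e.g.
#   #if foo
#   #  define x 1
#   #endif
IndentPPDirectives: AfterHash
```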


-- 
Ehsan


Re: Intent to implement: report-to header as part of Reporting API

2019-01-10 Thread Ehsan Akhgari
On Thu, Jan 10, 2019 at 7:27 AM Andrea Marchesini 
wrote:

> Summary: Reporting API offers 2 ways to obtain reports: ReportingObserver
> and Report-to Header. I implemented ReportingObserver months ago and I sent
> a separate intent-to-implement email about it. This email is about
> "report-to" header, which allows a server to specify a set of endpoints to
> retrieve reports. This is similar to "report-uri" directive for CSP.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1508310
>
> Link to standard: https://w3c.github.io/reporting/
>
> Platform coverage: All.
>
> Estimated or target release: The code is landed in 65, disabled by pref. I
> would enable it in 67/68 if there are no objections.
>
> Preference behind which this will be implemented:
> dom.reporting.header.enabled
>
> Is this feature enabled by default in sandboxed iframes? yes.
>

Sorry for my laziness in not having scanned through the links below to find
the answer to this question, but how does this interact with the
same-origin policy, if at all?  And if it does, is enabling it in sandboxed
iframes without the allow-same-origin token the right thing to do?


> DevTools bug: Not yet filed. Would be nice to expose the reporting queue
> per domain somehow.
>
> Do other browser engines implement this? Chromium is experimenting this
> feature:
>
> https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/V38Q47CxTIY
> https://developers.google.com/web/updates/2018/09/reportingapi
>
> web-platform-tests: just a little support. I wrote several mochitests which
> can be converted to WPTs with a bit of effort.
>
> Is this feature restricted to secure contexts? yes.
>
> A bit of context: as you probably remember, I implemented ReportingObserver
> because it was needed in order to run feature-policy WPTs.
> ReportingObserver is still disabled by default in release and I haven't
> sent a intent-to-ship email yet because I wanted to have a full
> implementation of the reporting API before doing it. Now we have it and we
> can discuss the next steps.
>
> ReportingObserver is definitely useful, in particular for
> DeprecationReportBody. It could also be interesting to use
> InterventionReportBody for the anti-tracking project to inform trackers
> that their cookieJar has been blocked/partitioned, or to inform when
> autoplay is disabled for media elements.
>
> Report-to is nice to have because it's similar to CSP "report-uri"
> directive, which is already implemented and released (btw, CSP has
> "report-to" directive to use the "report-to" header endpoints directly.
> This is not implemented yet). A nice goal would be to unify reporting API
> and CSP violation delivering in one single component. This could be a good
> way to improve both features, for instance, using the URL-Classifier to
> avoid the sending of reports to trackers.
>

I assume it is possible for foo.example to use this API to send a report to
thirdparty.example (let's imagine thirdparty.example isn't on the
Disconnect tracking protection list.)  What data is leaked to
thirdparty.example as part of those reports?  Do we send
credentials/referrer?

Experience on the web has shown that surveillance companies create dual-use
widgets for web developers to do useful things such as provide
analytics/reporting and at the same time collect vast amounts of user data
such as browser history without informed user consent (which arguably in
cases such as this API is probably impossible to obtain).  It is important
to avoid opening up any new sources of such data leaks while we are working
hard to close all of the existing ones, since once such
leaks are opened up, they will be abused sooner or later.

Thanks,
-- 
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Proposal to adjust our clang-format rules to include spaces after the hash for nested preprocessor directives

2019-01-10 Thread Ehsan Akhgari
Hi everyone,

I'd like to propose the first modification to our clang-format rules based
on the feedback I've received since the switch to our new coding style from
the SpiderMonkey team.

The problem we're trying to solve is that in code that has nested
preprocessor directives, when the directives are far apart from each other,
it may be difficult to spot that a given directive is nested under another
one, and this may hide important logic in cases such as a chain of
#if/#ifdef directives.  Here is a practical example: <
https://github.com/sylvestre/gecko-dev/blob/master/js/src/wasm/WasmBaselineCompile.cpp#L4054>.
When looking at code like this, it can be quite difficult to know where in
the (potential) preprocessor nesting levels you are without scrolling
around quite a bit and keeping a lot of context in mind.  In the example
above, I would say that's not reasonably possible to do.

The common way to deal with this problem is to indent nested preprocessor
directives, very much similarly to how we indent normal code, for example:

#if foo
#  if bar
#    define x 1
#  else
#    define x 2
#  endif
#endif

This is allowed, but not required, by our coding style <
https://google.github.io/styleguide/cppguide.html#Preprocessor_Directives>,
and is supported by clang-format by using the IndentPPDirectives: AfterHash
option.  So in
order to accommodate easier reading of code like this, I propose to adopt
that clang-format option.
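Concretely, adopting the option would amount to one extra key in the
tree's .clang-format file; a sketch (the BasedOnStyle line is illustrative
context, not the full file):

```yaml
# .clang-format (excerpt)
BasedOnStyle: Google
IndentPPDirectives: AfterHash
```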

You can see how the above code looks after reformatting the tree
with that option here: <
https://github.com/ehsan/gecko-dev/blob/45df04c211f4dd875c58125bca5fbca381ed8fca/js/src/wasm/WasmBaselineCompile.cpp#L4824>.
If you are curious to look at the entire tree or the resulting changes,
they're available for viewing here respectively: <
https://github.com/ehsan/gecko-dev/tree/afterhash> and <
https://github.com/ehsan/gecko-dev/commit/45df04c211f4dd875c58125bca5fbca381ed8fca
>.

Please let me know if you have any suggestions or concerns.  Ideas that
have come up before include doing nothing about this problem (which I think
isn't a reasonable answer, since the problem described is real and
has a viable solution covered in our coding style which we can achieve
easily with tooling) and doing this in a smaller scope, for example only for
SpiderMonkey (which neither the SpiderMonkey team nor I prefer, since
we don't have any evidence to suggest the problem is localized to that part
of the code base and we prefer not to have special cases in our coding
style where we can avoid it; besides, this change isn't nearly as invasive as
the tree-wide reformat so we should be able to do it with minimal
disruption to anyone's work).

I'm planning to file a bug on this by the end of next week if no serious
concerns are raised by then.

Thanks,
-- 
Ehsan


Re: PSA: Compartments will soon contain multiple global objects

2018-12-18 Thread Ehsan Akhgari
This project has taken a tremendous amount of effort.  It started some time
around April last year, I think.  This should help fix one of the major
cases where Gecko's performance currently can suffer quite a lot compared
to other engines on certain pages.

Thanks a lot to everyone involved for pushing things through to this point,
especially Jason and Jan!

On Tue, Dec 18, 2018, 2:52 PM Jan de Mooij wrote:

> The JS engine now supports allocating multiple global objects (also known
> as "realms") in a single compartment [0]. We are working on using this
> mechanism for our chrome code (one compartment for most of our
> system-principal sandboxes [1] and windows [2]). I'm hoping that part will
> land relatively soon. After that we want to enable this for same-origin
> content globals (like iframes) as well. This is the main goal of this work,
> but it depends on some architectural changes that are still in flight.
>
> We are doing this to reduce cross-compartment wrapper overhead: globals
> within a compartment can access each other directly, without any wrapper
> overhead, and they can share wrappers into other compartments. Initial
> results are promising; we are seeing performance improvements on various
> talos tests and some memory usage improvements as well.
>
> What this means is that globals are no longer guaranteed to have their own
> compartment. Instead of entering a compartment (JSAutoCompartment), code
> should now enter a global/realm (JSAutoRealm). Note that JSAutoRealm cannot
> be used with cross-compartment wrappers (there's a diagnostic assert for
> this, because cross-compartment wrappers are now shared by all realms in
> the compartment, so entering a wrapper's realm is kind of meaningless).
>
> Please keep this in mind when writing (test) code. Let me know if there are
> any questions or concerns.
>
> Thanks,
> Jan
>
> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1357862
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1512029
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1514210
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Intent to ship: Storage Access API

2018-12-18 Thread Ehsan Akhgari
Hi everyone,

As of Firefox 65 I intend to turn the Storage Access API on by default on
all desktop platforms. It has been developed behind the
dom.storage_access.enabled preference. Other UAs shipping this or intending
to ship it are Safari 12.

*Bug to turn on by default*:
https://bugzilla.mozilla.org/show_bug.cgi?id=1513021

This feature was previously discussed in this "intent to implement" thread:
<
https://groups.google.com/d/topic/mozilla.dev.platform/l8bV4RFgAc4/discussion
>.

It's worth stating again that this API for now only does something useful
for third-party trackers when our new cookie policy (being developed as
part of our anti-tracking project <
https://blog.mozilla.org/security/2018/10/23/firefox-63-lets-users-block-tracking-cookies/>)
is active.  As such, even though this will be shipped in 65, until the new
cookie policy is turned on for users as an opt-out setting, this API will
be a no-op for all callers except for those users who have manually turned
on our new cookie policy.  The decision on when the new cookie policy will
be turned on as an opt-out setting will be made independently of this
intent to ship, based on a number of tests and user studies which we will
be running in the next few weeks.  The intention behind shipping this API in
65 is to lay the groundwork for shipping our new cookie policy when we're
ready to do so based on the result of the aforementioned tests.

The notable changes since the discussion on that thread are the
introduction of automatic heuristics that control whether Firefox will
grant third-party trackers access to storage upon being requested which
have been documented here: <
https://developer.mozilla.org/en-US/docs/Web/API/Document/requestStorageAccess
>.
Also, the issue of having web-platform-tests for this feature was discussed
previously.  We are planning to follow up with adding wpt tests in
collaboration with WebKit to ensure our implementations are interoperable.
Due to the small surface of this API, and the fact that we wanted to make
sure it is enabled on the release channel in preparation for the upcoming
plans on shipping our new anti-tracking protections for all users in an
upcoming release, for now we've ensured interoperability through careful
manual testing.

Thanks,
-- 
Ehsan


Re: Automatic changes during the dev workflow [was: Re: Upcoming changes to our C++ Coding Style]

2018-12-17 Thread Ehsan Akhgari
On Mon, Dec 17, 2018 at 1:22 PM Steve Fink  wrote:

> In theory, I would definitely prefer if all the code were auto-formatted
> during regular development(*), but in practice the performance means
> it's not as transparent as it needs to be -- I have noticed that since
> enabling the format-source extension, rebasing my patch stacks is
> significantly slower.
>

If you see performance issues, please file them with some steps to
reproduce and CC Andi; he may be able to help.  We should definitely not
have to avoid running our tools due to performance issues.  :-)

Cheers,
-- 
Ehsan


Re: Automatic changes during the dev workflow [was: Re: Upcoming changes to our C++ Coding Style]

2018-12-17 Thread Ehsan Akhgari
On Fri, Dec 14, 2018 at 4:55 PM Kartikaya Gupta  wrote:

> On Fri, Dec 14, 2018 at 1:58 PM Sylvestre Ledru 
> wrote:
> > We have more and more tools at review phase (clang-format, flake8,
> eslint, clang-tidy, codespell, etc) which propose some auto-fixes.
>
> Honestly I find it quite annoying when I'm trying to review a patch
> and the phabricator diff is filled up with bot comments about
> formatting. As a reviewer, I shouldn't have to think about formatting,
> and it shouldn't fill up my screen and prevent me from looking at the
> actual code.
>
> > Currently, the turn around time of the tools is 14m on average which is
> usually faster than the reviewer looking at the patch.
>
> Usually, but not always. And 14 minutes is still long enough that the
> person writing the patch has context-switched away and is in the
> middle of doing something else when the bot comes a-knocking.
>

What if we *rejected* misformatted patches in Phabricator as opposed to
just commenting on them?  That is, what if the bot marked the reviews as
"Request Changes" when it found any problems, IOW it would review the patch
alongside normal reviewers.  As Sylvestre mentioned, the diffs on
individual lines are about to go away, but this way we'd also make sure that
misformatting caught at review time would actually block the landing of the
patch.


> > If Phabricator provides the capability, we could have the bot
> automatically proposing a new patchset on top on the normal one.
> > The reviewer would look at the updated version.
>
> This would be fine for the reviewer, but...
>
> > The main drawback would be that developer would have to retrieve the
> updated patch
> > before updating it.
>
> That's a chore too. For the developer writing the patch I feel like
> there should be a way to just say "please format this patch in-place
> for me" (basically what I expected `./mach clang-format` to do, except
> it doesn't quite - see bug 1514279) and even that should be opt-in.
> IMO the default behaviour should just be to automatically apply
> formatting prior to submission, without anybody having to think about
> it except in cases where the developer needs to apply "//clang-format
> off" blocks.
>

Right.  I think the ideal state would be for everyone to use editor
integration and local VCS hooks.  The VCS hooks we can enable in ./mach
bootstrap but editor integration is something people will need to opt into
turning on.  Once we get there, then I think formatting will be mostly
taken care of transparently for you without needing to think about it
consciously.

Doing things like rejecting misformatted code at review time may
incentivize people to update their local configuration.  I'm trying to
think of reasons why that may break people's workflows but can't come up
with any right now; curious to know if anyone else can?

-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-12-17 Thread Ehsan Akhgari
On Sun, Dec 16, 2018 at 9:08 PM Makoto Kato  wrote:

> Hi,
>
> Is Objective-C++ (*.mm) still old format?  Or Should I file a bug for it?
>

We didn't include Objective-C++ in the initial reformatting of the tree
mostly to avoid scope creep, but clang-format is well capable of formatting
it.  Please file a bug for it now, it should be a very simple change
(Adding ".mm" here: <
https://searchfox.org/mozilla-central/rev/1a4a7745fd4a14402a0de0e9abc040724d4fcf8f/python/mozbuild/mozbuild/mach_commands.py#1650>)
to include it.

Thanks,
Ehsan


>
> On Thu, Nov 29, 2018 at 9:38 PM Sylvestre Ledru 
> wrote:
>
> > Hello,
> >
> > As Ehsan announced recently, we are going to reformat Friday 30 November.
> > In order to mitigate the impact on the uplifts process, we choose this
> > date before the merge day.
> >
> > In terms of execution, we will close the trees Friday around 8am
> > Paris/Berlin time (11pm PST).
> > We will then merge autoland and inbound into mozilla-central.
> > We are planning to land the big patch around 12am (3am PST).
> >
> > The experimentation with dom/media highlighted that we need an efficient
> > way to automatically rebase patches before the merge.
> > To achieve this, we updated the bootstrap process to include an extension
> > called hg formatsource.
> > This extension will automatically rebase the local changes to avoid
> > conflicts.
> > Please note that it is important that the extension is installed before
> > pulling a revision that includes the reformatted sources.
> >
> > More information on:
> >
> >
> https://docs.google.com/document/d/13AwAsvKMhH0mflDlfatBqn6LmZHiQih76oxM4zfrPl4/edit
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1507711
> >
> > Sylvestre, Ehsan and Andi
> >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


-- 
Ehsan


Re: Upcoming changes to our C++ Coding Style

2018-12-17 Thread Ehsan Akhgari
On Mon, Dec 17, 2018 at 9:33 AM Kartikaya Gupta  wrote:

> Local commit hooks are even better, thanks! Are there instructions
> somewhere on how to set up the hooks and make sure they run properly?
>

We've yet to document these hooks but enabling them is very similar to
enabling the lint hooks which are documented here: <
https://firefox-source-docs.mozilla.org/tools/lint/usage.html#using-a-vcs-hook>.
The only difference is that you should use hooks_clang_format.py as the
file name.
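Assuming the hooks are wired up the same way as the lint hooks documented
at that link, the Mercurial configuration would look roughly like the
following (a sketch only; the hook point and the path to
hooks_clang_format.py are assumptions to verify against the docs and your
checkout):

```ini
; ~/.hgrc or .hg/hgrc (excerpt), modeled on the lint-hook documentation
[hooks]
pre-commit.clang-format = python:/path/to/mozilla-central/tools/lint/hooks_clang_format.py:hook
```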


> On Mon, Dec 17, 2018 at 9:22 AM Ehsan Akhgari 
> wrote:
> >
> > On Fri, Dec 14, 2018 at 7:20 AM Kartikaya Gupta 
> wrote:
> >>
> >> Personally I would prefer if we rewrote the commits locally to be
> >> formatted prior to submitting it into Phabricator. That way everything
> >> stays in sync. Also usually clang-formatting a patch is the last thing
> >> I want to do before submission anyway. And doing it now is kind of
> >> annoying if you have a multi-patch stack because by default it only
> >> operates on uncommitted changes so formatting a patch that's not the
> >> topmost means doing some patch juggling/rebasing.
> >
> >
> > Right.  I think that is the thinking behind the plan of enabling local
> commit hooks (provided in bug 1507007) if I understand your point
> correctly.  That will ensure that anything that is committed will be
> formatted properly, and as such it should ensure that code will be
> formatted before being submitted for review, as well as making sure that
> scenarios such as rebasing will also not mess up the formatting of the
> commits in your local queue.
> >
> >>
> >>
> >> On Thu, Dec 13, 2018 at 5:54 PM Ehsan Akhgari 
> wrote:
> >> >
> >> > Previously I had shied away from ideas similar to (a) or (b) since I
> wasn't sure if that would be what developers would expect and accept, given
> that it would cause the change the reviewer sees to no longer match their
> local change, which could cause hiccups e.g. when trying to find the code
> that a reviewer has commented on locally, or when rebasing on top of a
> newer version of mozilla-central after Lando has landed your patch (which
> may cause conflicts with your local patch due to formatting differences).
> >> >
> >> > Do we think these types of issues aren't something we should be
> concerned about?
> >> >
> >> > Thanks,
> >> > Ehsan
> >> >
> >> > On Fri, Dec 7, 2018 at 3:09 PM Andrew Halberstadt 
> wrote:
> >> >>
> >> >> I think we should implement a) and do the formatting prior to
> submission. This prevents us from wasting reviewer time on format issues,
> and also makes sure that "what you see in phab, is what gets landed".
> >> >>
> >> >> On Fri, Dec 7, 2018, 2:04 PM Gregory Szorc,  wrote:
> >> >>>
> >> >>> On Fri, Dec 7, 2018 at 1:57 PM Botond Ballo 
> wrote:
> >> >>>
> >> >>> > On Fri, Dec 7, 2018 at 11:36 AM Sylvestre Ledru <
> sle...@mozilla.com>
> >> >>> > wrote:
> >> >>> > > In the meantime, we will be running a bot weekly to reformat the
> >> >>> > > mistakes and add the changeset into the ignore lists.
> >> >>> > > But in the long run this won’t be sustainable, so once we gain
> >> >>> > > confidence that a good number of developers have successfully
> integrated
> >> >>> > > clang-format into their local workflow, we will look into
> enabling a
> >> >>> > > Mercurial hook on hg.mozilla.org to reject misformatted code
> upon push
> >> >>> > > time.  That will be the ultimate solution to help ensure that
> our code
> >> >>> > > will be properly formatted at all times.
> >> >>> >
> >> >>> > Have you considered something like running clang-format
> automatically
> >> >>> > during landing (i.e. as part of what Lando does to the patch)?
> That
> >> >>> > seems like it would achieve the best of both worlds, by not
> placing
> >> >>> > any requirements on what developers do locally, while also
> ensuring
> >> >>> > the tree is always properly formatted without cleanup commits.
> >> >>> >
> >> >>>
> >> >>> I chatted with Sylvestre earlier today. While I don't want to speak
> for
> >> >>>

Re: Upcoming changes to our C++ Coding Style

2018-12-17 Thread Ehsan Akhgari
On Fri, Dec 14, 2018 at 7:20 AM Kartikaya Gupta  wrote:

> Personally I would prefer if we rewrote the commits locally to be
> formatted prior to submitting it into Phabricator. That way everything
> stays in sync. Also usually clang-formatting a patch is the last thing
> I want to do before submission anyway. And doing it now is kind of
> annoying if you have a multi-patch stack because by default it only
> operates on uncommitted changes so formatting a patch that's not the
> topmost means doing some patch juggling/rebasing.
>

Right.  I think that is the thinking behind the plan of enabling local
commit hooks (provided in bug 1507007) if I understand your point
correctly.  That will ensure that anything that is committed will be
formatted properly, and as such it should ensure that code will be
formatted before being submitted for review, as well as making sure that
scenarios such as rebasing will also not mess up the formatting of the
commits in your local queue.


>
> On Thu, Dec 13, 2018 at 5:54 PM Ehsan Akhgari 
> wrote:
> >
> > Previously I had shied away from ideas similar to (a) or (b) since I
> wasn't sure if that would be what developers would expect and accept, given
> that it would cause the change the reviewer sees to no longer match their
> local change, which could cause hiccups e.g. when trying to find the code
> that a reviewer has commented on locally, or when rebasing on top of a
> newer version of mozilla-central after Lando has landed your patch (which
> may cause conflicts with your local patch due to formatting differences).
> >
> > Do we think these types of issues aren't something we should be
> concerned about?
> >
> > Thanks,
> > Ehsan
> >
> > On Fri, Dec 7, 2018 at 3:09 PM Andrew Halberstadt 
> wrote:
> >>
> >> I think we should implement a) and do the formatting prior to
> submission. This prevents us from wasting reviewer time on format issues,
> and also makes sure that "what you see in phab, is what gets landed".
> >>
> >> On Fri, Dec 7, 2018, 2:04 PM Gregory Szorc,  wrote:
> >>>
> >>> On Fri, Dec 7, 2018 at 1:57 PM Botond Ballo 
> wrote:
> >>>
> >>> > On Fri, Dec 7, 2018 at 11:36 AM Sylvestre Ledru 
> >>> > wrote:
> >>> > > In the meantime, we will be running a bot weekly to reformat the
> >>> > > mistakes and add the changeset into the ignore lists.
> >>> > > But in the long run this won’t be sustainable, so once we gain
> >>> > > confidence that a good number of developers have successfully
> integrated
> >>> > > clang-format into their local workflow, we will look into enabling
> a
> >>> > > Mercurial hook on hg.mozilla.org to reject misformatted code upon
> push
> >>> > > time.  That will be the ultimate solution to help ensure that our
> code
> >>> > > will be properly formatted at all times.
> >>> >
> >>> > Have you considered something like running clang-format automatically
> >>> > during landing (i.e. as part of what Lando does to the patch)? That
> >>> > seems like it would achieve the best of both worlds, by not placing
> >>> > any requirements on what developers do locally, while also ensuring
> >>> > the tree is always properly formatted without cleanup commits.
> >>> >
> >>>
> >>> I chatted with Sylvestre earlier today. While I don't want to speak for
> >>> him, I believe we both generally agree that the formatting should
> happen
> >>> "automagically" as part of the patch review and landing lifecycle,
> even if
> >>> the client doesn't have their machine configured for formatting on
> save.
> >>> This would mean that patches are either:
> >>>
> >>> a) auto-formatted on clients as part of being submitted to Phabricator
> >>> b) updated automatically by bots after submission to Phabricator
> >>> c) auto-formatted by Lando as part of landing
> >>>
> >>> Lando rewrites/rebases commits as part of landing, so commit hashes
> already
> >>> change. So if auto-formatting magically occurs and "just works" as
> part of
> >>> the review/landing process, there should be little to no developer
> >>> inconvenience compared to what happens today. i.e. developers don't
> need to
> >>> do anything special to have their patches land with proper formatting.
> >>> ___
> >>> dev-platform mailing list
> >>> dev-platform@lists.mozilla.org
> >>> https://lists.mozilla.org/listinfo/dev-platform
> >
> >
> >
> > --
> > Ehsan
>


-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-12-13 Thread Ehsan Akhgari
On Thu, Dec 13, 2018 at 6:05 PM Randell Jesup  wrote:

> >> tl;dr: I need to figure out how I'm going to dig out of the rebasing
> hole
> >> I'm now in, and could use advice/help, and I think the above sequence
> >> doesn't work for queues of dependent patches.
> >
> >If I'm understanding your situation correctly, I believe you can use
> rebase
> >to update a whole queue of dependent patches, provided you have the
> >format-source extension enabled.
>
> [snip]
>
> Thanks - that was the sequence I thought might work, after hearing from
> pehrsons that the rebase would reformat all the pushed patches.  hg qfin -a
> is critical I think.
>

Indeed it is.  It converts the "mq" patches to "real changesets", allowing
the Mercurial rebase to correctly reformat the patches through the
formatsource extension.


> If I wanted to optimize this, perhaps for each independent sequence:
> hg update -r ; hg qpush (*N);
>

You probably need `hg qfin -a` here.  I don't think you can rebase mq
patches as long as they're still mq patches (but I could be wrong about
that.)


> hg rebase -d PRE_TREEWIDE_CLANG_FORMAT (assuming
> this does what I expect); resolve any bitrots;
>

If by what you expect, you mean rebase the patches on top of the changeset
before the tree-wide reformat, then yes.  This should simplify resolving
conflicts somewhat, I think, by splitting the conflict resolution to two
phases (before and after the reformat) which is I think your goal here.


> hg rebase -d ;
> (hg qimport -r tip; hg qpop) (*N)
>
> Almost automatable (probably automatable; may not be worth it).
>
> Thanks!  I'll let people know if it works
>

Good luck!
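Pulling the pieces of this exchange together, the per-sequence recipe being
discussed looks roughly like this (a sketch only; `<base>` and
`<tip-of-central>` are placeholders for your own revisions, and the
`hg qfin -a` step is what makes the rebases possible):

```shell
# For each independent sequence of mq patches:
hg update -r <base>                      # start from the sequence's base revision
hg qpush                                 # repeat N times, once per patch
hg qfin -a                               # convert applied mq patches to real changesets
hg rebase -d PRE_TREEWIDE_CLANG_FORMAT   # resolve ordinary bitrot here
hg rebase -d <tip-of-central>            # formatsource reformats across the big commit
hg qimport -r tip; hg qpop               # repeat N times to turn them back into mq patches
```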

-- 
Ehsan


Re: Upcoming changes to our C++ Coding Style

2018-12-13 Thread Ehsan Akhgari
Previously I had shied away from ideas similar to (a) or (b) since I wasn't
sure if that would be what developers would expect and accept, given that
it would cause the change the reviewer sees to no longer match their local
change, which could cause hiccups e.g. when trying to find the code that a
reviewer has commented on locally, or when rebasing on top of a newer
version of mozilla-central after Lando has landed your patch (which may
cause conflicts with your local patch due to formatting differences).

Do we think these types of issues aren't something we should be concerned
about?

Thanks,
Ehsan

On Fri, Dec 7, 2018 at 3:09 PM Andrew Halberstadt  wrote:

> I think we should implement a) and do the formatting prior to submission.
> This prevents us from wasting reviewer time on format issues, and also
> makes sure that "what you see in phab, is what gets landed".
>
> On Fri, Dec 7, 2018, 2:04 PM Gregory Szorc,  wrote:
>
>> On Fri, Dec 7, 2018 at 1:57 PM Botond Ballo  wrote:
>>
>> > On Fri, Dec 7, 2018 at 11:36 AM Sylvestre Ledru 
>> > wrote:
>> > > In the meantime, we will be running a bot weekly to reformat the
>> > > mistakes and add the changeset into the ignore lists.
>> > > But in the long run this won’t be sustainable, so once we gain
>> > > confidence that a good number of developers have successfully
>> integrated
>> > > clang-format into their local workflow, we will look into enabling a
>> > > Mercurial hook on hg.mozilla.org to reject misformatted code upon
>> push
>> > > time.  That will be the ultimate solution to help ensure that our code
>> > > will be properly formatted at all times.
>> >
>> > Have you considered something like running clang-format automatically
>> > during landing (i.e. as part of what Lando does to the patch)? That
>> > seems like it would achieve the best of both worlds, by not placing
>> > any requirements on what developers do locally, while also ensuring
>> > the tree is always properly formatted without cleanup commits.
>> >
>>
>> I chatted with Sylvestre earlier today. While I don't want to speak for
>> him, I believe we both generally agree that the formatting should happen
>> "automagically" as part of the patch review and landing lifecycle, even if
>> the client doesn't have their machine configured for formatting on save.
>> This would mean that patches are either:
>>
>> a) auto-formatted on clients as part of being submitted to Phabricator
>> b) updated automatically by bots after submission to Phabricator
>> c) auto-formatted by Lando as part of landing
>>
>> Lando rewrites/rebases commits as part of landing, so commit hashes
>> already
>> change. So if auto-formatting magically occurs and "just works" as part of
>> the review/landing process, there should be little to no developer
>> inconvenience compared to what happens today. i.e. developers don't need
>> to
>> do anything special to have their patches land with proper formatting.
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>

-- 
Ehsan


Re: What is future of editor mode-lines after the switch to clang-format?

2018-12-05 Thread Ehsan Akhgari
On Fri, Nov 30, 2018 at 8:23 PM Ehsan Akhgari 
wrote:

> On Fri, Nov 30, 2018 at 4:08 PM Gregory Szorc  wrote:
>
>> On Fri, Nov 30, 2018 at 10:00 AM  wrote:
>>
> > Now that all of mozilla-central has been migrated to use clang-format
>> > automated code formatting, the question of what should happen with
>> editor
>> > modelines at the top of files should be considered.
>> >
>> > https://bugzilla.mozilla.org/show_bug.cgi?id=clang-format
>> >
>> > Here are some options and some arguments I've heard. Please reply with
>> > further ideas or rationale. I've not classified points as pro/con and
>> leave
>> > that up to the reader's interpretation.
>> >
>> > Option 1: Remove mode lines
>> >   - Devs are expected to run clang-format anyways (hopefully automated
>> > with a hook of sorts)
>> >   - Devs are free to set their modeline configuration elsewhere
>> >   - If they aren't providing value, they deserve to be removed.
>> >   - Many of these were already inconsistent/wrong, so this might be an
>> > opportunity to phase out
> >   - Not all devs use vim/emacs, so we should think about workflow help
> > that doesn't need stuff in every single source file.
>> >   - The editorconfig project (https://editorconfig.org/) aims to solve
>> > this for a variety of editors without marking each file
>> >
>> > Option 2: Fix mode lines
>> >   - A correct text-width mode-line gives a closer first approximation
>> for
>> > devs
> >   - Certain files (eg. moz.build, obj-C headers) use non-standard file
>> > types.
>> >
>> > A hybrid of these is also very possible, such as removing certain
>> > attributes or only using when file type is non-standard.
>> >
>> > I had originally intended this discussion for js/ components, but it
>> turns
>> > out to be a question across the whole tree (even if the solution chosen
>> is
>> > per-module).
>> >
>>
>> https://editorconfig.org/ has been gaining popularity and I think we
>> should
>> adopt it. Unlike mode lines, you can control behavior for multiple files
>> with a single source file, making it much more convenient to use.
>>
>> Unfortunately, it doesn't look like you can set the file type with
>> .editorconfig files. I think we should lobby for that to be added. Then in
>> a year's time we can ditch mode lines for the remaining non-standard
>> filenames in the repo.
>>
>
> Great idea!  A future without modelines sounds really nice.
>

By the way, it seems like the emacs editorconfig plugin already has
experimental support for a filetype -> mode mapping configuration (see
file_type_ext and file_type_emacs here:
https://github.com/editorconfig/editorconfig-emacs#supported-properties).
Is this sufficient for our needs as far as replacing the file type
information in the Emacs modelines go?
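
To make the discussion concrete, here is a hedged sketch of what a root
.editorconfig could look like for this use case. Note that file_type_emacs is
the experimental, Emacs-plugin-only property mentioned above (not part of the
core EditorConfig spec), and all values here are illustrative assumptions, not
a proposal for mozilla-central:

```
# Sketch only; values are assumptions.
root = true

[*.{cpp,h}]
indent_style = space
indent_size = 2
max_line_length = 80

# Experimental; currently honored only by the Emacs plugin:
[moz.build]
file_type_emacs = python
```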

Thanks,
-- 
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What is the future of editor mode-lines after the switch to clang-format?

2018-11-30 Thread Ehsan Akhgari
On Fri, Nov 30, 2018 at 4:08 PM Gregory Szorc  wrote:

> On Fri, Nov 30, 2018 at 10:00 AM  wrote:
>
> > Now that all of mozilla-central has been migrated to use clang-format
> > automated code formatting, the question of what should happen with editor
> > modelines at the top of files should be considered.
> >
> > https://bugzilla.mozilla.org/show_bug.cgi?id=clang-format
> >
> > Here are some options and some arguments I've heard. Please reply with
> > further ideas or rationale. I've not classified points as pro/con and
> leave
> > that up to the reader's interpretation.
> >
> > Option 1: Remove mode lines
> >   - Devs are expected to run clang-format anyways (hopefully automated
> > with a hook of sorts)
> >   - Devs are free to set their modeline configuration elsewhere
> >   - If they aren't providing value, they deserve to be removed.
> >   - Many of these were already inconsistent/wrong, so this might be an
> > opportunity to phase out
> >   - Not all devs use vim/emacs, so we should think about workflow support
> > that doesn't require stuff in every single source file.
> >   - The editorconfig project (https://editorconfig.org/) aims to solve
> > this for a variety of editors without marking each file
> >
> > Option 2: Fix mode lines
> >   - A correct text-width mode-line gives a closer first approximation for
> > devs
> >   - Certain files (e.g. moz.build, Obj-C headers) use non-standard file
> > types.
> >
> > A hybrid of these is also very possible, such as removing certain
> > attributes or only using when file type is non-standard.
> >
> > I had originally intended this discussion for js/ components, but it
> turns
> > out to be a question across the whole tree (even if the solution chosen
> is
> > per-module).
> >
>
> https://editorconfig.org/ has been gaining popularity and I think we
> should
> adopt it. Unlike mode lines, you can control behavior for multiple files
> with a single source file, making it much more convenient to use.
>
> Unfortunately, it doesn't look like you can set the file type with
> .editorconfig files. I think we should lobby for that to be added. Then in
> a year's time we can ditch mode lines for the remaining non-standard
> filenames in the repo.
>

Great idea!  A future without modelines sounds really nice.

Thanks,
-- 
Ehsan


Re: What is the future of editor mode-lines after the switch to clang-format?

2018-11-30 Thread Ehsan Akhgari
On Fri, Nov 30, 2018 at 2:06 PM Nathan Froyd  wrote:

> On Fri, Nov 30, 2018 at 1:51 PM Ehsan Akhgari 
> wrote:
> > I think these are all great points.  It seems like for Emacs, it is not
> > actually necessary to sprinkle modelines across all of the files in your
> > repository (per https://bugzilla.mozilla.org/show_bug.cgi?id=1023839#c7
> ).
> > For Vim, Benjamin Bouvier just landed a patch in
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1511383 to update the
> existing
> > modelines to have proper line width and tab width.
> >
> > It seems like for Emacs, we should probably do something similar also
> > relatively soon merely to address the newly introduced inconsistencies
> due
> > to the reformat.  But I'd like to hear from Emacs users what they think,
> > and if they have a preference on updating existing modelines vs using a
> > .dir-locals.el file instead...
>
> Using .dir-locals.el sounds great, at least for things like
> indent-tabs-mode and c-basic-offset.  Emacs 23 is older than what
> comes with Ubuntu 14.04 (LTS), so I think we're in the clear using it
> as far as Emacs versions go.
>
> Google's style guide comes with a builtin style for emacs's cc-mode:
>
>
> https://raw.githubusercontent.com/google/styleguide/gh-pages/google-c-style.el
>
> which we could just import into .dir-locals.el.
>

That sounds great, it should improve things a lot for Emacs users.

BTW in the mean time, arai kindly fixed the indentation in the Emacs
modelines: https://bugzilla.mozilla.org/show_bug.cgi?id=1511393 (thanks a
lot!).
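
For the curious, a minimal .dir-locals.el along those lines might look like
the following; the values are assumptions mirroring the reformatted style,
not an actual import of google-c-style.el:

```
;; Sketch of a top-level .dir-locals.el (values are illustrative assumptions).
((c-mode . ((indent-tabs-mode . nil)
            (c-basic-offset . 2)
            (fill-column . 80)))
 (c++-mode . ((indent-tabs-mode . nil)
              (c-basic-offset . 2)
              (fill-column . 80))))
```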

-- 
Ehsan


Re: What is the future of editor mode-lines after the switch to clang-format?

2018-11-30 Thread Ehsan Akhgari
On Fri, Nov 30, 2018 at 1:00 PM  wrote:

> Now that all of mozilla-central has been migrated to use clang-format
> automated code formatting, the question of what should happen with editor
> modelines at the top of files should be considered.
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=clang-format
>
> Here are some options and some arguments I've heard. Please reply with
> further ideas or rationale. I've not classified points as pro/con and leave
> that up to the reader's interpretation.
>
> Option 1: Remove mode lines
>   - Devs are expected to run clang-format anyways (hopefully automated
> with a hook of sorts)
>

Hopefully with the editor integrations that are available (
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style/Formatting_C++_Code_With_clang-format#Editor_Integrations).
But this would still not give consistent results while typing in all
editors, and this may bother the users of those editors.  The text line
width setting especially seems important here.
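
As a rough illustration of the workflow those integrations enable (command
names are taken from the linked page; the exact flags and the file path below
are illustrative, and may differ between revisions of the tooling):

```
$ ./mach clang-format                              # reformat your outstanding changes
$ ./mach clang-format -p dom/base/nsDocument.cpp   # reformat one specific path
```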


>   - Devs are free to set their modeline configuration elsewhere
>   - If they aren't providing value, they deserve to be removed.
>   - Many of these were already inconsistent/wrong, so this might be an
> opportunity to phase out
>

FWIW after a cursory conversation with kmag on IRC the other day, we
realized that it seems that Vim modelines tend to be largely consistent,
whereas the same doesn't seem to be the case for Emacs modelines.  So the
experience of individual developers probably depends on editor choice here.


>   - Not all devs use vim/emacs, so we should think about workflow support
> that doesn't require stuff in every single source file.
>   - The editorconfig project (https://editorconfig.org/) aims to solve
> this for a variety of editors without marking each file
>
> Option 2: Fix mode lines
>   - A correct text-width mode-line gives a closer first approximation for
> devs
>   - Certain files (e.g. moz.build, Obj-C headers) use non-standard file
> types.
>
> A hybrid of these is also very possible, such as removing certain
> attributes or only using when file type is non-standard.
>
> I had originally intended this discussion for js/ components, but it turns
> out to be a question across the whole tree (even if the solution chosen is
> per-module).
>

I think these are all great points.  It seems like for Emacs, it is not
actually necessary to sprinkle modelines across all of the files in your
repository (per https://bugzilla.mozilla.org/show_bug.cgi?id=1023839#c7).
For Vim, Benjamin Bouvier just landed a patch in
https://bugzilla.mozilla.org/show_bug.cgi?id=1511383 to update the existing
modelines to have proper line width and tab width.
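
For reference, the fixed-up pair of modelines looks roughly like this (a
sketch of the post-reformat values: 2-space indentation, 80-column lines):

```
/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
```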

It seems like for Emacs, we should probably do something similar also
relatively soon merely to address the newly introduced inconsistencies due
to the reformat.  But I'd like to hear from Emacs users what they think,
and if they have a preference on updating existing modelines vs using a
.dir-locals.el file instead...

And of course, further improvements, such as supporting the editorconfig
format, are also interesting to discuss; users of other editors, please do
share feedback!

Thanks,
-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 10:47 PM Emilio Cobos Álvarez 
wrote:

> On 11/30/18 2:06 AM, Ehsan Akhgari wrote:
> > On Thu, Nov 29, 2018 at 9:43 AM Emilio Cobos Álvarez 
> > wrote:
> >
> >> On 11/29/18 1:38 PM, Sylvestre Ledru wrote:
> >>> This extension will automatically rebase the local changes to avoid
> >>> conflicts.
> >>
> >> Is there a way to do the same for cinnabar users?
> >>
> >
> > Yes!  Sorry for the delay...
>
> NP!
>
> > Please check out this script:
> > https://github.com/ehsan/clang-format-reformat-branch.  This does
> something
> > similar to the format-source extension for Mercurial but done as a
> one-time
> > tool, borrowing from the tool that the MongoDB project developed for the
> > same use case.  It takes a local branch based on a clone of
> mozilla-central
> > that doesn't yet have the reformat commit and rebases it on top of the
> > reformat commit, reformatting your local modifications in the process.  I
> > hope it proves to be helpful for the git users out there!
>
> Nice! I haven't tried it yet (actually was going to report back when I
> found the reply).
>
> I hacked up something today as well while looking into this. It's not
> really sophisticated, and you need to tweak the git repo config, so your
> script probably works best for most people.
>
> Just in case it's useful for somebody, while looking into a way to do
> this (I basically followed[1]), I wrote a little merge driver which
> seems to work fine (with a caveat, see below). I just uploaded it here:
>
>https://github.com/emilio/clang-format-merge
>

This actually looks quite decent, and it's a cleaner approach.  It also taught
me about merge drivers.  :-)  I suspect this would work equally well for
everyone (but I haven't tested it myself, just based on reading the source.)
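
For anyone wanting to wire up such a driver, git's merge-driver hookup looks
roughly like this (the driver name, command, and patterns below are
placeholders sketching the general mechanism, not the actual
clang-format-merge configuration):

```
# .git/config (or ~/.gitconfig)
[merge "clang-format"]
    name = clang-format aware merge driver
    driver = clang-format-merge %O %A %B %P

# .gitattributes
*.cpp merge=clang-format
*.h   merge=clang-format
```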

Thanks for making it happen.

-- 
Ehsan


Re: Intent to implement and ship: Unprefix -moz-user-select, unship mozilla-specific values.

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 6:27 AM Emilio Cobos Álvarez 
wrote:

> Sorry for the lag replying to this.
>
> On 11/13/18 4:35 PM, James Graham wrote:
> > On 11/11/2018 17:57, Emilio Cobos Álvarez wrote:
> >
> >> web-platform-tests: Test coverage for all the values is pre-existing.
> >> There's unfortunately little coverage in WPT, but a lot in our
> >> selection and contenteditable tests.
> >
> > Can we upstream some of these tests to wpt? I don't know if there
> > are/were technical barriers that would prevent us doing that, but if
> > user gestures are required, the new testdriver APIs might fill the gap,
> > and if there is some other piece of missing functionality I would be
> > interested to know what that is.
> Part of the difficulty is that we want these tests to show the caret,
> which is something normal reftests don't. Right now all these reftests
> are here:
>
>
>
> https://searchfox.org/mozilla-central/source/layout/base/tests/test_reftests_with_caret.html
>
> I'm not quite sure why they couldn't be normal reftests with the
> ui.caretBlinkTime pref set to -1. Maybe David, Ehsan or Mats know.
>

The reason for that is that these tests typically require input of some
sort (mouse, keyboard, for example), and as far as I know at least the
reftest framework didn't support synthesizing events, at least at the
time.  So we ended up creating this small mini-reftest framework in
mochitest so that we can use the EventUtils facilities.  (Also as far as I
remember setting prefs per reftest wasn't possible at the time either.)

Cheers,
-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 9:43 AM Emilio Cobos Álvarez 
wrote:

> On 11/29/18 1:38 PM, Sylvestre Ledru wrote:
> > This extension will automatically rebase the local changes to avoid
> > conflicts.
>
> Is there a way to do the same for cinnabar users?
>

Yes!  Sorry for the delay...

Please check out this script:
https://github.com/ehsan/clang-format-reformat-branch.  This does something
similar to the format-source extension for Mercurial but done as a one-time
tool, borrowing from the tool that the MongoDB project developed for the
same use case.  It takes a local branch based on a clone of mozilla-central
that doesn't yet have the reformat commit and rebases it on top of the
reformat commit, reformatting your local modifications in the process.  I
hope it proves to be helpful for the git users out there!

The repository has full instructions on how to use it, but please let me
know if you have any questions or hit any issues.

Thanks,
-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 12:20 PM Boris Zbarsky  wrote:

> On 11/29/18 11:40 AM, Ehsan Akhgari wrote:
> > Yes, that is true sadly.  But to be fair here, old mq patches that do not
> > apply due to age are already beyond saving with any kind of automated
> > tooling, and they require manual work to get them applied first.  :-/
>
> Sure.
>
> > That's not true.  clang-format can happily format diffs.  When formatting
> > diffs though it tries to format your changes, not the context around the
> > diffs
>
> Ah, OK.  That makes sense.
>
> >* Update your working tree to the parent of the commit that did the
> > reformat.
>
> This also makes sense.
>
> Have we considered adding an easy-to-remember tag for that commit to
> make that easier in the future?
>

Yes, please check out https://bugzilla.mozilla.org/show_bug.cgi?id=1508324.
(For git users, its sha1 will be recorded in .git-blame-ignore-revs for the
benefit of the git hyper-blame command[1].)

[1]
https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/git-hyper-blame.html

-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
This short guide should be helpful for Mercurial users:
https://docs.google.com/document/d/13AwAsvKMhH0mflDlfatBqn6LmZHiQih76oxM4zfrPl4

On Thu, Nov 29, 2018 at 11:49 AM Ehsan Akhgari 
wrote:

> On Thu, Nov 29, 2018 at 11:44 AM Honza Bambas  wrote:
>
>> On 2018-11-29 17:12, Ehsan Akhgari wrote:
>> > On Thu, Nov 29, 2018 at 8:00 AM Boris Zbarsky  wrote:
>> >
>> >> On 11/29/18 7:38 AM, Sylvestre Ledru wrote:
>> >>> This extension will automatically rebase the local changes to avoid
>> >>> conflicts.
>> >> Sylvestre,
>> >>
>> >> Will it also update mq patches, or just in-repo commits?
>> >>
>> > I don't think this extension is aware of mq patches (since they don't
>> > really get rebased), but that being said from my foggy memory of using
>> mq
>> > ages ago, I seem to remember that it used to be possible to use the
>> qfinish
>> > command to convert them into regular hg commits and then later on to use
>> > qimport -r (IIRC) to convert them back into mq patches again.  If that
>> is
>> > still a workflow that works, then it should be possible to:
>> >
>> >* Pull from a pre-reformat of the tree, run ./mach bootstrap to
>> install
>> > hg-formatsource.
>> >* Do a one-time translation of your mq queue to a set of commits.
>> >* Pull from hg.mozilla.org to pick up the reformat commit.
>>
>> And you forget that a merge will be needed here, because the formatting
>> changes will likely collide with the code one's patches are touching.
>>
>
> No, I didn't.  :-)  That's exactly what the hg-formatsource extension does
> for you, it reformats your local changes on the fly as the rebase is in
> progress, so you will get no collisions.
>
>
>> When we were mass-converting from PRUint32 to uint32_t etc, there was a
>> tool capable of converting your patches based on the pre-formated code
>> to be apply-able on the formatted code.
>>
>> This is what we are missing.  So some of us may expect a huge merge pain
>> w/o something like that.
>>
>
> No, those are old days and long gone, my friend.  We are living in a new
> world with better tools these days (for Mercurial users).
>
>
>> OTOH, if the changes are only whitespace changes, maybe there is some
>> way `patch --ignore-whitespace --fuzz N` could apply the patches.  Then
>> just re-format and your patches are OK.
>>
>>
>> >* Do a one-time translation of your mq queue back to a series of
>> patches.
>>
>> That doesn't make much sense, because the commit history will look
>> something like (newest to oldest):
>> - merge of my patches with the formatted changes, having two parents
>> (formatted code default + my mq tip)
>> - formatted code `tip` (or `default`)
>> - my mq committed [ ]
>> - pre-formated parent
>> - ...
>>
>> You can't just recreate your mq from such changeset tree and you also
>> can't avoid the likely quite complicated merge anyway.
>>
>
> Again, no merge commits.  Please do note that I was suggesting there that
> you should use the rebase command, not merge.  I think that would be |hg
> pull --rebase|.
>
> --
> Ehsan
>


-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 11:44 AM Honza Bambas  wrote:

> On 2018-11-29 17:12, Ehsan Akhgari wrote:
> > On Thu, Nov 29, 2018 at 8:00 AM Boris Zbarsky  wrote:
> >
> >> On 11/29/18 7:38 AM, Sylvestre Ledru wrote:
> >>> This extension will automatically rebase the local changes to avoid
> >>> conflicts.
> >> Sylvestre,
> >>
> >> Will it also update mq patches, or just in-repo commits?
> >>
> > I don't think this extension is aware of mq patches (since they don't
> > really get rebased), but that being said from my foggy memory of using mq
> > ages ago, I seem to remember that it used to be possible to use the
> qfinish
> > command to convert them into regular hg commits and then later on to use
> > qimport -r (IIRC) to convert them back into mq patches again.  If that is
> > still a workflow that works, then it should be possible to:
> >
> >* Pull from a pre-reformat of the tree, run ./mach bootstrap to
> install
> > hg-formatsource.
> >* Do a one-time translation of your mq queue to a set of commits.
> >* Pull from hg.mozilla.org to pick up the reformat commit.
>
> And you forget that a merge will be needed here, because the formatting
> changes will likely collide with the code one's patches are touching.
>

No, I didn't.  :-)  That's exactly what the hg-formatsource extension does
for you, it reformats your local changes on the fly as the rebase is in
progress, so you will get no collisions.
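
For reference, enabling the extension by hand amounts to an .hgrc entry along
these lines (the path is an assumption about where ./mach bootstrap clones
version-control-tools; bootstrap normally writes this for you):

```
[extensions]
format-source = ~/.mozbuild/version-control-tools/hgext/format-source
```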


> When we were mass-converting from PRUint32 to uint32_t etc, there was a
> tool capable of converting your patches based on the pre-formated code
> to be apply-able on the formatted code.
>
> This is what we are missing.  So some of us may expect a huge merge pain
> w/o something like that.
>

No, those are old days and long gone, my friend.  We are living in a new
world with better tools these days (for Mercurial users).


> OTOH, if the changes are only whitespace changes, maybe there is some
> way `patch --ignore-whitespace --fuzz N` could apply the patches.  Then
> just re-format and your patches are OK.
>
>
> >* Do a one-time translation of your mq queue back to a series of
> patches.
>
> That doesn't make much sense, because the commit history will look
> something like (newest to oldest):
> - merge of my patches with the formatted changes, having two parents
> (formatted code default + my mq tip)
> - formatted code `tip` (or `default`)
> - my mq committed [ ]
> - pre-formated parent
> - ...
>
> You can't just recreate your mq from such changeset tree and you also
> can't avoid the likely quite complicated merge anyway.
>

Again, no merge commits.  Please do note that I was suggesting there that
you should use the rebase command, not merge.  I think that would be |hg
pull --rebase|.

-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 11:30 AM Boris Zbarsky  wrote:

> On 11/29/18 11:12 AM, Ehsan Akhgari wrote:
> >* Pull from a pre-reformat of the tree, run ./mach bootstrap to
> install
> > hg-formatsource.
> >* Do a one-time translation of your mq queue to a set of commits.
> >* Pull from hg.mozilla.org to pick up the reformat commit.
> >* Do a one-time translation of your mq queue back to a series of
> patches.
>
> So just to be clear, I have a dozen or so queues with hundreds of
> patches in them, not all of which will immediately apply due to age.  So
> this is not a simple operation, unfortunately...
>

Yes, that is true sadly.  But to be fair here, old mq patches that do not
apply due to age are already beyond saving with any kind of automated
tooling, and they require manual work to get them applied first.  :-/


> I guess there's no real way to clang-format diffs, so maybe the answer
> is manual merging when I actually need those patches.
>

That's not true.  clang-format can happily format diffs.  When formatting
diffs though it tries to format your changes, not the context around the
diffs, as the use case it has been designed for is for example quickly
reformatting your changes in a pre-commit hook.

But still all is not lost here.  When you do decide to do the manual
merging when you needed those patches, you would need to:

  * Update your working tree to the parent of the commit that did the
reformat.
  * Apply your patch to that tree and reformat the tree.
  * Diff the reformat commit and your current working directory.  That
would give the reformatted diff.
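
Spelled out for a git-cinnabar checkout, those three steps might look like
this (a sketch; REFORMAT_COMMIT and the patch file name are placeholders):

```
git checkout REFORMAT_COMMIT^                 # 1. parent of the reformat commit
git apply old-work.diff                       # 2. apply the stale patch...
./mach clang-format                           #    ...and reformat the tree
git diff REFORMAT_COMMIT > reformatted.diff   # 3. diff against the reformat commit
```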

-- 
Ehsan


Re: The upcoming C/C++ coding style change

2018-11-29 Thread Ehsan Akhgari
On Thu, Nov 29, 2018 at 8:00 AM Boris Zbarsky  wrote:

> On 11/29/18 7:38 AM, Sylvestre Ledru wrote:
> > This extension will automatically rebase the local changes to avoid
> > conflicts.
>
> Sylvestre,
>
> Will it also update mq patches, or just in-repo commits?
>

I don't think this extension is aware of mq patches (since they don't
really get rebased), but that being said from my foggy memory of using mq
ages ago, I seem to remember that it used to be possible to use the qfinish
command to convert them into regular hg commits and then later on to use
qimport -r (IIRC) to convert them back into mq patches again.  If that is
still a workflow that works, then it should be possible to:

  * Pull from a pre-reformat of the tree, run ./mach bootstrap to install
hg-formatsource.
  * Do a one-time translation of your mq queue to a set of commits.
  * Pull from hg.mozilla.org to pick up the reformat commit.
  * Do a one-time translation of your mq queue back to a series of patches.

Cheers,
-- 
Ehsan


Re: Upcoming changes to our C++ Coding Style

2018-11-22 Thread Ehsan Akhgari
On Thu, Nov 22, 2018 at 4:07 AM Jean-Yves Avenard 
wrote:

> Hi
>
> > On 21 Nov 2018, at 3:54 am, Ehsan Akhgari 
> wrote:
> >
> >
> > You will break the blame on VCS with this change
> >
> > Yes and no. Of course, just like many tree-wide mass changes in the past
> > (e.g. the MPL2 header update), this will remain in the log.
> >
> > Mercurial and Git both support a -w argument to ignore whitespace with
> > annotate/blame.
> >
> > In addition, modern versions of Mercurial have `hg annotate --skip
> > ` which allows you to specify a revset used to select revisions
> to
> > skip over when annotating.
> >
> > Last but not least, we will tag the changeset’s commit message with
> > “skip-blame” so that Mercurial would automatically ignore the reformat
> > changeset for blame operations.
>
> I’ve found the Google’s depot_tools hyper-blame particularly useful here.
>
> It takes a .git-blame-ignore-revs file containing the list of commits to
> ignore.
>
> $ cat .git-blame-ignore-revs
> abd6d77c618998827e5ffc3dab12f1a34d6ed03d
>
> That’s with Sylvestre’s single commit changing dom/media (hg SHA1:
> 0ceae9db9ec0be18daa1a279511ad305723185d4)
>
> $ git clone
> https://chromium.googlesource.com/chromium/tools/depot_tools.git
> $ export PATH=$PATH:$PWD/depot_tools
>
> now git hyper-blame will behave in the same fashion as git blame, but
> ignore that particular commit.
>

Indeed.  Kats was asking me about how we can possibly support skipping over
these commits in searchfox.org, and I pointed this tool to him as well.
I'm hoping that he would be able to find a way to integrate hyper-blame
with searchfox so that more people can benefit from this.  (I am not sure
how frequently people run blame tools locally...)

For Mercurial, "# skip-blame" is just a text token that Mercurial's revset
syntax can parse out of commit messages.  The revset syntax also supports
reading external files.  Felipe, gps and I talked a bit about adding a
similar .hg-blame-ignore-revs file in the tree which can contain Mercurial
sha1 changeset IDs for the rewrite commits to be skipped over, and people
would be able to use the file through an alias that can be configured in
~/.hgrc (possible to set it up via ./mach bootstrap).  Not sure if Felipe's
investigations have led to results yet.


> I’m guessing we could make this .git-blame-ignore-revs part of the tree,
> assuming though everyone must use git-cinnabar.
>

Indeed we can.  I would be happy to review a patch.  I think the file can
contain both the cinnabar and the gecko-dev sha1 variants of the rewrite
commit(s), since sha1s that are non-existent in a repo will just be ignored.
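
As a small aside for git users: modern git (2.23+) can also skip such commits
natively via the blame.ignoreRevsFile setting, without needing depot_tools.
A self-contained sketch in a throwaway repo (file names and commit messages
are made up for illustration):

```shell
# Demo: a whitespace-only "reformat" commit is ignored by blame.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name "You"
printf 'helo world\n' > f.cpp
git add f.cpp
git commit -qm 'initial'
printf 'hello world\n' > f.cpp
git commit -qam 'real fix'
printf 'hello  world\n' > f.cpp
git commit -qam 'whitespace-only reformat'
# Record the reformat commit in the ignore file, then point blame at it:
git rev-parse HEAD > .git-blame-ignore-revs
git config blame.ignoreRevsFile .git-blame-ignore-revs
git blame -l f.cpp   # the line is attributed to 'real fix', not the reformat
```

With this in place, plain `git blame` skips the reformat commit, which could
serve the same purpose as hyper-blame for local use.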

Cheers,
-- 
Ehsan


Re: Intent to implement: implicit rel=noopener for target=_blank on anchor and area elements

2018-11-22 Thread Ehsan Akhgari
On Thu, Nov 22, 2018 at 10:55 AM Boris Zbarsky  wrote:

> On 11/22/18 1:06 AM, Ehsan Akhgari wrote:
> >> Can one do noreferrer with window.open()?
> >>
> >
> > Yes, by passing 'noopener' in the features argument:
> >
> https://html.spec.whatwg.org/multipage/window-object.html#apis-for-creating-and-navigating-browsing-contexts-by-name:disowned-its-opener
>
> But we're trying to _have_ an opener; that's why we're here to start
> with.  We're talking about noreferrer.
>

My apologies, I misread your previous email as saying "Can one do noopener"
:-/

Indeed, specifying noreferrer isn't currently possible.  I think that is a
desirable feature to add to window.open().


> > Which reminds me, it's impossible to block opener reference creation upon
> > form submission right now as far as I can tell.
>
> Yes.  Right now <form> doesn't have a thing like "rel" to pass along
> directives like "noopener".  Maybe we should get that fixed in the spec
> somehow.
>

Do you mean adding a rel attribute to <form>?  Not sure if all of the other
link type values for rel make sense for <form> but having some way of
passing "noopener" (and "opener" for that matter) directives for <form> is
indeed something that we should probably look into doing, especially when
it comes to changing the default behavior of <form target="_blank">.  I have
no good ideas for how to do it other than inventing yet a new attribute...


> > I wonder if it makes sense to make a similar change here, to make <form
> > target="_blank"> imply noopener behaviour and then if that proves to be Web
> > compatible, propose to change the spec to pass false there?
>
> It might indeed.
>

Talked to baku and annevk a bit also, and I guess there is one way to know
the web compat impact, so I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1509346.

Cheers,
-- 
Ehsan


Re: Intent to implement: implicit rel=noopener for target=_blank on anchor and area elements

2018-11-21 Thread Ehsan Akhgari
On Wed, Nov 21, 2018 at 11:55 PM Boris Zbarsky  wrote:

> On 11/21/18 11:50 PM, Ehsan Akhgari wrote:
> > Would it be OK if the answer to that question be "use window.open()"?
>
> Can one do noreferrer with window.open()?
>

Yes, by passing 'noopener' in the features argument:
https://html.spec.whatwg.org/multipage/window-object.html#apis-for-creating-and-navigating-browsing-contexts-by-name:disowned-its-opener
.


> Also, if your thing doing the navigation is a <form>, not <a>, then
> window.open is pretty hard to use for that.  Then again, <form
> target="_blank"> is not that common...
>

That's true, this wouldn't cover the form submission use case.

Which reminds me, it's impossible to block opener reference creation upon
form submission right now as far as I can tell.  This is actually a bug in
the spec.  <
https://html.spec.whatwg.org/multipage/form-control-infrastructure.html#form-submission-algorithm:the-rules-for-choosing-a-browsing-context-given-a-browsing-context-name>
calls into "rules for choosing a browsing context" passing only two
arguments, omitting the third one (noopener) <
https://html.spec.whatwg.org/multipage/browsers.html#the-rules-for-choosing-a-browsing-context-given-a-browsing-context-name
>.

I wonder if it makes sense to make a similar change here, to make <form
target="_blank"> imply noopener behaviour and then if that proves to be Web
compatible, propose to change the spec to pass false there?

Cheers,
Ehsan
-- 
Ehsan


Re: Intent to implement: implicit rel=noopener for target=_blank on anchor and area elements

2018-11-21 Thread Ehsan Akhgari
On Wed, Nov 21, 2018 at 3:55 PM Boris Zbarsky  wrote:

> On 11/21/18 2:22 PM, Daniel Veditz wrote:
> > "opener" doesn't exist
>
> It does in WebKit's proposed changes and in our implementation of them.
>
> > You'd specify a target
> > name other than "_blank" to indicate it's a context you care about
>
> This seems backwards.  What matters is whether the context should care
> about _you_, not whether you care about it.  If you want to open a
> guaranteed-new context that can then communicate with you, how should
> you do that, exactly?
>

Would it be OK if the answer to that question be "use window.open()"?

It would really be nice if browsers could converge on a definition of what
these conditions are going to be, and the more restrictive the cases in
which we hand out window.opener references, the happier I would be.  Having
a programmatic way (i.e. an API call) as the only access point to this
functionality has the nice side benefit that if the browser wants to
provide interventions in the creation of opener references, we would have
only a single API call to worry about, rather than having to keep track of
things like tabs opened from links.  In fact, one motivation behind this
intent thread is a future anti-tracking intervention that we've been
discussing.

Cheers,
-- 
Ehsan


Upcoming changes to our C++ Coding Style

2018-11-21 Thread Ehsan Akhgari
(In case the formatting doesn't survive transit, a copy of this note can be
found here:
https://docs.google.com/document/d/1CTaWucldHxEri5BUB1kL4faF4prwPlBa31yHFd-l8uc
.)


TL;DR

To improve developer productivity, we are planning to a) start
automatically formatting all Gecko C++ code with clang-format, and b)
switch to the Google C++ style and retire the Mozilla C/C++ coding style.
These changes are being tested and verified with a small part of the
codebase before rolling out to the entire tree.
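
As a sketch, a tree-wide switch like this is typically driven by a `.clang-format` file at the repository root; the options below are illustrative, not necessarily what Mozilla will ship:

```yaml
# Hypothetical .clang-format sketch: inherit the Google style and
# override individual options only where needed.
BasedOnStyle: Google
Standard: Cpp11
ColumnLimit: 80
```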

Background

Mozilla has been moving towards using more automated tools for checking and
rewriting code. For example, eslint (JS), checkstyle (Java), and flake8
(Python) are now Tier-1 in the CI. We also deployed C/C++ static analysis
as part of the development process.

In parallel, tooling to reformat C & C++ is now used in a wide variety of
projects like Chromium, LLVM and MongoDB. Mozilla has historically had a
coding style but it was never really enforced through tooling. In addition,
the Mozilla style is incomplete and used only in our projects; by contrast
the Google style is very complete, well supported in automated tooling, and
widely used by projects inside and outside of Google such as Chromium,
Tensorflow, and MySQL.

Because of the lack of checks and automation, we now have significant
inconsistencies in the style of our codebase.  On top of that, we have been
spending a lot of time talking about and adjusting whitespace. This wastes
human time when writing, reviewing, and discussing code.

Last but not least, being able to automatically format whitespace in our
C/C++ code is helpful for other types of automated rewrites so we can fix
wide-scale problems in our codebase using static analysis tools.  Right now
such fixes are difficult or impossible because of inconsistencies in the
Mozilla C++ coding style and lack of support in clang-format and other
tools. Addressing this issue is the first step in unblocking those future
improvements.

Goals

During patch authoring, code reviews, and mailing list discussions, Mozilla
developers have traditionally spent a lot of time on minor issues that
could easily be fixed through automated processes. Our Engineering
Operations and Product Integrity groups have helped to improve developer
productivity in this area through tools such as static analysis, linting
via Phabricator reviewbot, and codebase reformats using clang-format. We
will build on this foundation by applying coding style checks and
automated rewrites throughout the authoring process (pre-commit, commit,
review, landing).

Details

We’re announcing the following changes to our coding style and its
enforcement through our commit policy:


   1.

   We will check conformance with coding style rules upon commit, review,
   and landing so that issues can be easily addressed or fixed through
   automation. The preference will be to enforce style issues as early as
   possible (before landing) to avoid late surprises.

   2.

   We will migrate to the Google C++ coding style to encourage more
   consistent code and re-use a wider array of existing tools.  As part of
   this migration, we will retire the existing Mozilla C/C++ Coding Style. The
   intention behind retiring the current Mozilla style is to just stop
   maintaining it as an active coding style going forward, but the
   documentation for this coding style will be kept intact.



   1.

   We will automatically enforce restrictions on formatting of whitespace
   (such as indentation and braces). For stylistic features other than that
   (such as naming of functions and #include order), Google C++ style will be
   permitted but not initially enforced, and consistency with surrounding code
   should take precedence.


These changes have been approved both by Firefox senior engineering
leadership and the Firefox Technical Leadership Module.  This work is
being tracked in bug 1188202.

When?

On Monday this week, we pushed a patch to an integration branch to rewrite
a section of the tree (dom/media) in the Google C++ style using
clang-format.  This first step is intended as a smoke test, designed to
test the process of doing the rewrite and test the waters in a small
controlled environment with a small team of developers who have
volunteered to help us test this change. We haven’t found any major
surprises yet.

Assuming that ev

Re: Intent to implement: Reporting API

2018-11-16 Thread Ehsan Akhgari
I think there is value in supporting this API, both for web developers and
for us (e.g. in order to help us deprecate and remove APIs more easily).  I
go back and forth on the question of the privacy impact of the API, since
the fact that the reports are exposed both to JS and HTTP layers means that
protecting only the HTTP layer against e.g. cross-origin access wouldn't be
sufficient since JS code can just as easily be used to send the same data
cross-origin, at least for reports coming from the same global.  What's not quite
clear to me is what kind of data this API would allow the page to access
which is currently hidden from it...



On Thu, Nov 15, 2018 at 4:17 PM Andrea Marchesini 
wrote:

> There is a proposal to support "report-only" violations for feature policy:
>
> https://github.com/WICG/feature-policy/blob/824de86f89599240c24b5ae3cd58d25984446af5/reporting.md
>
> I think we should implement this API for these reasons:
>
> a. it unifies the reporting of violations and interventions. At the moment
> we have FeaturePolicy, Interventions, crashes and deprecated APIs, but it's
> easy to imagine reports coming from CSP violations, CORB, Origin-policy
> etc.
>
> b. Often, when an intervention is made, the only thing a browser does is to
> write a message in the console. autoplay heuristic and tracking protection
> are just 2 examples. Here we can do something more: we can communicate
> something to the page, using ReportingObserver.
>
> c. it's a nice diagnostic tool for websites. In the 'report-to' HTTP
> header, entry-points can be used as communication channel between the
> browser and the server.
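>
> For reference, the 'report-to' HTTP response header mentioned above takes
> roughly this shape (group name and endpoint URL are hypothetical):
>
>   Report-To: { "group": "default", "max_age": 86400,
>                "endpoints": [{ "url": "https://reports.example.com/upload" }] }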
>
>
> On Thu, Nov 15, 2018 at 5:25 PM Boris Zbarsky  wrote:
>
> > On 11/15/18 9:52 AM, Ehsan Akhgari wrote:
> > > The idea is to use Feature Policy in report-only mode
> >
> > There is no report-only mode in the Feature Policy spec, nor in our
> > implementation.  See the note at the end of
> > https://wicg.github.io/feature-policy/#reporting
> >
> > So I'm back to my question: is this an API we actually want to
> > implement?  It seems like a fair amount of complexity and overhead and
> > the one use case so far is for sites to have telemetry for when they're
> > ending up with feature policy violations, right?
> >
> > -Boris


-- 
Ehsan


Re: Intent to implement: Reporting API

2018-11-15 Thread Ehsan Akhgari
On Wed, Nov 14, 2018 at 11:20 AM Boris Zbarsky  wrote:

> On 11/13/18 4:33 AM, Andrea Marchesini wrote:
> > I decided to implement this API, because it is required in the
> > web-platform-tests for FeaturePolicy.
>
> Is it needed for any other reason?  If not, this seems like a bug in the
> tests: they should not be coupling the two specs together.
>

It is.  The Reporting API and Feature Policy are meant to be used
together.  For example, developers may want to reduce the usage of sync XHR
on their website, but on a very complex site it may not be easy to figure
out where sync XHR is being used.  The idea is to use Feature Policy in
report-only mode to report the usages of Sync XHR for example so that they
can fix them gradually without disallowing Sync XHR using Feature Policy
right off the bat.

-- 
Ehsan


Re: Intent to implement: Reporting API

2018-11-14 Thread Ehsan Akhgari
On Wed, Nov 14, 2018 at 10:58 AM Tom Ritter  wrote:

> On Wed, Nov 14, 2018 at 3:17 PM Ehsan Akhgari 
> wrote:
> > What are your plans with regards to implementing the second part?  Can
> > these reports be sent cross-origin?  (From the spec, it seems like the
> > answer is yes.)  If so, how are you planning to handle issues such as
> > sending these reports to third-parties, including third-party trackers?
> > I'm worried about: a) sending these reports to a third-party not on the
> TP
> > list, b) sending these reports to a third-party on the TP list, and c)
> what
> > options we have to mitigate the tracking impact of these reports for both
> > of the previous cases, but especially for (b).
>
> Is this a different situation than CSP, which seems to have all these
> same issues? Do we do anything special there?
>

The CSP report-uri mechanism is deprecated AFAIK (
https://w3c.github.io/webappsec-csp/#directive-report-uri) and is supposed
to be replaced with this new API, so I think it is important to get the new
API right from the privacy perspective even if we didn't get CSP reporting
right (which we didn't -- AFAIK we happily send the CSP violation reports
to wherever the site points us to.)

-- 
Ehsan


Re: Intent to implement: Reporting API

2018-11-14 Thread Ehsan Akhgari
On Tue, Nov 13, 2018 at 4:33 AM Andrea Marchesini 
wrote:

> Reporting API covers 2 different features:
> a. reporting to the current page, via ReportingObserver
> b. reporting to a remote server, known via 'report-to' HTTP header.
> My implementation covers only the first aspect. However I also have patches
> for the second part, not in review yet.
>

What are your plans with regards to implementing the second part?  Can
these reports be sent cross-origin?  (From the spec, it seems like the
answer is yes.)  If so, how are you planning to handle issues such as
sending these reports to third-parties, including third-party trackers?
I'm worried about: a) sending these reports to a third-party not on the TP
list, b) sending these reports to a third-party on the TP list, and c) what
options we have to mitigate the tracking impact of these reports for both
of the previous cases, but especially for (b).

Also, do I understand correctly that ReportingObserver objects only receive
reports from the same origin as the context they're created in?

Thanks,
-- 
Ehsan


Re: Intent to implement and ship: CSS environment variables.

2018-11-04 Thread Ehsan Akhgari
On Fri, Nov 2, 2018 at 5:26 PM Emilio Cobos Álvarez 
wrote:

> This is mostly to prevent compat headache in mobile, hopefully prevent
> other vendors from shipping new un-spec'd env variables, and encourage
> people to use the fallback value if new environment variables are added
> in the future.
>
> Thoughts?
>

I think the rationale for shipping this, given our past experience with how
developers have used WebKit/Chrome specific features on mobile is
reasonable.

Thanks for thinking ahead of the compat issues.
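
The fallback pattern being encouraged here looks like this (selector and values are illustrative):

```css
/* Reserve space for a device notch, falling back to 0px in engines or
   environments that don't define the variable. */
.toolbar {
  padding-bottom: env(safe-area-inset-bottom, 0px);
}
```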

-- 
Ehsan


Re: Announcing new test platform "Android 7.0 x86"

2018-11-02 Thread Ehsan Akhgari
Thanks a lot, this is great news!

What's the process model configuration for this testing platform?  Do these
tests run in single process mode or are they running in some e10s like
environment?  Is there some documentation that explains what runs in which
process?

Thanks,
Ehsan

On Thu, Nov 1, 2018 at 5:44 PM Geoffrey Brown  wrote:

> This week some familiar tier 1 test suites began running on a new test
> platform labelled "Android 7.0 x86" on treeherder. Only a few test suites
> are running so far; more are planned.
>
> Like the existing "Android 4.2" and "Android 4.3" test platforms, these
> tests run in an Android emulator running in a docker container (the same
> Ubuntu-based image used for linux64 tests).  The new platform runs an x86
> emulator using kvm acceleration, enabling tests to run much, much faster
> than on the older platforms. As a bonus, the new platform uses Android 7.0
> ("Nougat", API 24) - more modern, more relevant.
>
> This test platform was added to support geckoview testing. Tests run in the
> geckoview-based TestRunnerActivity (not Firefox for Android).
>
> To reproduce the main elements of this test environment locally:
>  - build for Android x86 (mozconfig with --target=i686-linux-android)
>  - 'mach android-emulator' or explicitly 'mach android-emulator --version
> x86-7.0'
>  - install the geckoview androidTest apk
>  - run your test command using --app to specify the geckoview test app,
> something like 'mach mochitest ... --app=org.mozilla.geckoview.test'
>
> Great thanks to the many people who have helped enable this test platform,
> especially :wcosta for help with taskcluster and :jchen for investigating
> test failures.


-- 
Ehsan


Re: Test helper ok() will now fail if used with more than 2 arguments

2018-11-01 Thread Ehsan Akhgari
Thank you for doing this!  I cannot overstate how many times this has
tripped me and others over in the past.

Cheers,
Ehsan

On Thu, Nov 1, 2018 at 8:02 AM Julian Descottes 
wrote:

> We are about to land https://bugzilla.mozilla.org/show_bug.cgi?id=1467712
> which changes the behavior of `ok()` test helpers provided by Assert.jsm,
> SimpleTest.js and browser-test.js (as well as some test helpers based on
> them).
>
> `ok()` used to accept up to four arguments: condition, name,
> exception/diagnostic and stack.
> After bug 1467712 lands, `ok()` will only accept the first 2 arguments and
> will fail the test if more arguments are provided.
>
> The reason behind this change is that `ok()` was regularly used by mistake
> instead of `is()`:
>   ok(value1, value2, "Check value1 == value2");
>   // passes as long as value1 is truthy and won't detect regressions if
> value1 != value2
>
> We recently fixed all the incorrect uses of ok() in the code base in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1499096. `is()` is called
> with
> 3 arguments, so restricting the number of arguments for ok() will prevent
> developers from making similar mistakes in the future.
>
> If you need the old `ok()` with 4 arguments, it has been moved to the
> `record()` method and you can use it instead.
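
A minimal sketch (not the actual SimpleTest.js implementation) of why the three-argument mistake described above passes silently:

```javascript
// ok() only checks truthiness of its first argument; extra arguments
// used to be silently ignored, hiding is()-style comparison mistakes.
function ok(condition, name) {
  if (!condition) throw new Error("FAIL: " + name);
}

// is() actually compares the two values.
function is(actual, expected, name) {
  if (actual !== expected) {
    throw new Error(
      "FAIL: " + name + " (got " + actual + ", expected " + expected + ")");
  }
}

ok(1, 2, "check value1 == value2"); // passes: 1 is truthy, 2 is ignored

let caught = false;
try {
  is(1, 2, "check value1 == value2"); // correctly fails: 1 !== 2
} catch (e) {
  caught = true;
}
console.log(caught); // true
```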


-- 
Ehsan


Re: Coming in Firefox 65: Dedicated Profiles Per Install and Profile Downgrade Protection

2018-10-18 Thread Ehsan Akhgari
These are fantastic changes, I'm very excited to see we're finally tackling
these hard problems.  Thanks a lot to everyone who helped make them happen.

On Thu, Oct 18, 2018 at 3:32 PM Dave Townsend  wrote:

> In Firefox 65 we intend to ship two new features to help prevent user
> frustration caused by using profiles created by newer versions of Firefox.
>
> Why
>
> Firefox stores all of its settings in the user’s profile and unless
> certain command line arguments are used Firefox always launches with the
> same profile. Periodically as Firefox upgrades it makes changes to the
> settings that can render the profile unusable by earlier versions of
> Firefox. This can cause anything from certain features of Firefox being
> broken after a downgrade to Firefox crashing on startup.
>
> To protect against this two new features will be landing in Nightly soon.
>
> Dedicated Profiles Per Install
>
> A common cause of users switching between different versions of Firefox is
> having Firefox installed multiple times. This most often happens when users
> have different channels installed at the same time like ESR and Release. It
> is a common request to be able to run the different installs at the same
> time. In Firefox 35 this was made possible for the developer edition by
> giving it a dedicated profile. The new dedicated profiles per install
> feature extends this and will give every install of Firefox on the computer
> a dedicated profile by default.
>
> Users will be able to run different installs of Firefox simultaneously.
> Each will use a different profile for its settings. Upgrading an install of
> Firefox will keep it using the same settings.
>
> We’re tracking work on this feature in bug 1474285.
>
> Profile Downgrade Protection
>
> For cases where users manually downgrade an install of Firefox or attempt
> to forcefully use an older version of Firefox with a newer profile the
> profile downgrade protection feature will now tell the user that the
> profile is too new to use with this Firefox giving them the option to
> create a new profile to use or to quit.
>
> We’re tracking work on this feature in bug 1455707.
>
> Developer Implications
>
> As developers it is quite common to switch between different builds for
> example when testing different built versions of Firefox and doing
> regression testing. To support these use-cases a new command line argument
> “--allow-downgrade” will allow skipping the downgrade protection.
>
> Conclusions
>
> While some users may be impacted by this change we believe that most users
> will be unaffected and those that are will see less problems caused by
> downgrades than previously. This will give us the ability to ignore the
> possibility of downgrades when implementing new features and fixing bugs
> going forwards. Being able to make the assumption that this code works as
> designed will allow us flexibility in other changes downstream.
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>


-- 
Ehsan


Re: Intent to implement and ship: New cookie jar policy to block storage access from tracking resources

2018-10-17 Thread Ehsan Akhgari
Just a quick update: This new policy has now been made the new default in
Nightly in https://bugzilla.mozilla.org/show_bug.cgi?id=1492563.

On Fri, Sep 21, 2018 at 3:15 PM Steven Englehardt 
wrote:

> Technical documentation for this is now available on MDN:
> https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Privacy/Storage_access_policy
>
> On Wed, Sep 19, 2018 at 10:24 PM Ehsan Akhgari 
> wrote:
>
>> Hi everyone,
>>
>> This is a (belated) intent to implement, as well as an intent to ship, a
>> new cookie jar policy to block storage access to tracking resources.  This
>> work has been under development for several months now and has been
>> tracked
>> in https://bugzilla.mozilla.org/show_bug.cgi?id=cookierestrictions.
>>
>> As of Firefox 65, I intend to turn on our new cookie jar policy to block
>> storage access from tracking resources by default on all desktop platforms
>> (assuming our testing goes well).  This feature has been developed behind
>> the network.cookie.cookieBehavior preference (when set to 4). No other UA
>> is shipping this feature, although Safari 12 ships a somewhat similar
>> feature (
>> https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/).
>>
>> Bug to turn on by default:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1492549
>>
>> Please note that this feature uses the Disconnect list in order to
>> identify
>> tracking resources, so it is not going to have any effect if you have
>> Tracking Protection turned on, or if you have installed a privacy
>> extension
>> and/or an ad blocker (examples include Privacy Badger, uBlock Origin and
>> Ghostery).  If you are a Nightly tester using such a feature, it would be
>> hugely helpful to us in the next few months if you would kindly consider
>> disabling such features and just use the default configuration of Nightly,
>> as this is what we are intending to ship to all our users soon. If you
>> encounter any web page breakage as a result of testing this feature,
>> please
>> consider filing a bug and making it block
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1480137.
>>
>> Since this isn’t a typical web feature, the standard “intent to implement”
>> template isn’t really helpful for it, but here is some of the information
>> surfaced from that template that apply to this feature:
>>
>> Platform coverage: the Gecko part is cross-platform, but the UI for the
>> feature has been developed on desktop for now, so we’re planning to enable
>> it on desktop at the moment.
>>
>> Estimated or target release: Firefox 65.  Please note that this feature is
>> currently undergoing a Shield Study on Beta 63, and the estimated target
>> release is assuming the successful outcome of that study and the continued
>> ongoing testing of the feature.
>>
>> DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1462372
>>
>> Is this feature restricted to secure contexts? No, it doesn’t distinguish
>> secure vs. non-secure contexts.  This isn’t a new web-facing API, rather
>> it
>> is intended to be a new privacy protection for our users. As such, we
>> intend to impose the new restrictions for tracking resources on both
>> secure
>> and non-secure contexts.  It should be noted that some non-secure tracking
>> vectors, e.g. HTTP cookies, can also be used for pervasive tracking by
>> passively monitoring the user’s connection, and while cracking down on
>> passive tracking isn’t a direct goal of this feature, it is a good
>> justification for not limiting these restrictions to secure contexts.
>>
>> Last but not least, in preparation for this intent to ship, we’ll be
>> gradually exposing more users to the feature.  The first part of this has
>> already been done in the form of the Shield Study mentioned above. As the
>> second part of this process, I intend to turn this feature on by default
>> on
>> all desktop platforms for Nightly only in
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1492563.
>>
>> Thanks,
>>
>> Ehsan
>

-- 
Ehsan


Re: Windows launcher process enabled by default on Nightly

2018-09-28 Thread Ehsan Akhgari
On Thu, Sep 27, 2018 at 11:19 AM Aaron Klotz  wrote:

> 1. Automation
> -
>
> If you are doing some kind of automation that waits for the Firefox
> process to finish, you'll find that the initial Firefox process exits
> sooner than the browser, appearing to your script that Firefox is
> already done when in fact it is still running. This is because, by
> default, the launcher process only lives long enough to stand up the
> browser process and transfer foreground to the new browser's window.
>
> In order to make the launcher wait for the browser to complete, pass the
> --wait-for-browser flag on the command line. The launcher will wait for
> the entire life of the browser process, and will also propagate the
> browser process's exit code as its own. The launcher also accepts the
> presence of the MOZ_AUTOMATION environment variable as an implicit
> --wait-for-browser request.
>
> I have already fixed up our automation to supply this parameter when
> necessary, so I expect everything in Mozilla's CI system to already Just
> Work.
>

Is the --wait-for-browser flag the default in headless mode, since that
mode is mostly for automation?

Thanks,
-- 
Ehsan


Re: Input Delay Metric proposal

2018-09-20 Thread Ehsan Akhgari
Hi Randell,

The last part of your email where you suggested this metric could be
proposed for the Performance API WG confused me a bit about its goals.  It
seemed to me from the description earlier that the goal behind MID is to
provide a metric useful for browser engineers who would like to measure how
the performance of the browser varies over time in an automated lab
environment, not as a metric useful for measuring the performance of live
web applications used by real users in the wild.  Is my understanding
correct?

I'm asking since it seems to me that FID actually achieves a pretty good
result in measuring things "at the right time" if the goal is to measure
the performance of a web application used by real users in the wild.  That,
of course, makes it fairly hard for browser engineers to use as a metric
to evaluate the performance of the browser as we make code changes, due
to the problem of not knowing when to capture it as you mentioned, and MID
sounds like a good way to address that problem.  However I have a hard time
seeing why web developers would be interested in MID over FID, since FID
correctly captures the pain of the user when they first try to interact
with their pages upon loading, and it seems unnecessary to then try to
guess when would be a good time to measure how bad the pain would be _if_
the user had interacted with the page at that time...

Thanks,
Ehsan

On Wed, Sep 19, 2018 at 2:45 PM Randell Jesup  wrote:

> Problem:
> Various measures have been tried to capture user frustration with having
> to wait to interact with a site they're loading (or to see the site
> data).  This includes:
>
> FID - First Input Delay --
> https://developers.google.com/web/updates/2018/05/first-input-delay
> TTI - Time To Interactive --
>
> https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics#time_to_interactive
> related to: FCP - First Contentful Paint and FMP - First Meaningful Paint
> --
>
> https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics#first_paint_and_first_contentful_paint
> TTVC (Time To Visually Complete), etc.
>
> None of these do a great job capturing the reality around pageload and
> interactivity.  FID is the latest suggestion, but it's very much based
> on watching user actions and reporting on them, and thus depends on how
> much they think the page is ready to interact with, and dozens of other
> things. It's only good for field measurements in bulk of a specific
> site, by the site author.  In particular, FID cannot reasonably be used
> in automation (or before wide deployment).
>
> Proposal:
>
> We should define a new measure based on FID, named MID, for Median Input
> Delay, which is measurable in automation and captures the expected delay
> a user experiences during a load.  We can run this in automation against
> a set of captured pages, while also measuring related values like FCP
> and TTI, and dump this into a set of per-page graphs (perhaps on
> "areweinteractiveyet.com" :-) ).
>
> While FID depends on measuring the delay when the user *happens* to
> click, MID would measure the median (etc) delay that would be
> experienced at any point between (suggestion) FCP and TTI.  I.e. it
> would be based on "if a user input event were generated this
> millisecond, how long would it be before it ran?"  This would measure
> delay in the input event queue (probably 0 for this case) plus the time
> remaining until the current-running event for the main thread finishes.
>
> This inherently assumes we measure TTI and FCP (or something
> approximating it).  This is somewhat problematic, as TTI is very noisy.
> I have a first cut at TTI measurement (fed into profiler markers) in
> bug 1299118 (without the "no more than 2 connections in flight" part).
>
> Value calculation:
> Median seems to be the best measure, but once we have data we can look
> at the distributions on real sites and our test harness and decide what
> has the most correlation to user experience.  We could also measure the
> 95% point, for example.  In automation, there might be some advantage to
> recording/reporting more data, like median and 95%, or median, average,
> and 95%, and max.
>
> Another issue with the calculation is that it won't capture burstiness
> in the results well (a distribution would).
>
> Range measured over:
> We could modify the starting point to be when the first object that
> could be interacted with is rendered (input object, link, adding a key
> event handler, etc).  This would be a more-accurate measure for web
> developers, and would matter only a little for our use.  Note that
> getting content on the screen earlier might in some cases hurt you by
> starting the measurement "early" when the MainThread is presumably busy.
>
> Likewise, there might very well be alternatives to TTI for the end-point
> (and on some pages, you never get to TTI, or it's a Long Time).  Using
> TTI does imply we must collect data u

Re: Enabling (many) assertions in opt builds locally and eventually Nightly

2018-09-20 Thread Ehsan Akhgari
On Wed, Sep 19, 2018 at 8:38 PM Bobby Holley  wrote:

> I spoke with Georg about it recently, and got the impression that his team
> could get it finished if we had a current use-case. Hooking it up to
> MOZ_ASSERTs on nightly builds seems like a pretty good one.
>

That would indeed be wonderful to have!


> On Wed, Sep 19, 2018 at 5:05 PM Randell Jesup 
> wrote:
>
> > >On Thu, Sep 20, 2018 at 09:22:12AM +1000, Cameron McCormack wrote:
> > >> On Thu, Sep 20, 2018, at 1:52 AM, Ehsan Akhgari wrote:
> > >> > While it may be the case that we may need to be more stable for
> > >> > MOZ_RELEASE_ASSERTs, I very much doubt that every single MOZ_ASSERT
> > in our
> > >> > codebase is actually a guaranteed to never fail, so promoting them
> > all to
> > >> > be enabled in something like Nightly is extremely dangerous in terms
> > of the
> > >> > stability of Nightly for users who are trying to use the browser to
> > get
> > >> > their normal work done.
> > >>
> > >> If it's truly useful to know whether Nightly users are failing these
> > >> MOZ_ASSERT assertions, we could use Telemetry to report their failure
> > >> instead of crashing.>
> > >> (I wonder if we could collect all the same data, and use the same
> crash
> > reporting infrastructure, for non-crashing crash reports like this.)
> > >
> > >On Linux and Mac, we could make MOZ_ASSERT fork and crash in the child,
> > >and continue in the parent process.
> >
> > Oooh, tricky.  (though threads aren't preserved in forks, but the
> > current thread's stack is, so if we care about all threads, we might
> > need to grab pointers to their stacks/etc and store pointers to the
> > stacks before fork()).  But even without that it would be good.
> >
> > And yes, we don't have to have it crash the browser (though if it later
> > crashes we'd want to know about assertions we'd hit).  Telemetry isn't
> > quite appropriate for reporting it; it could however save a crash report
> > at ASSERT(), then (probably) disable MOZ_ASSERT reports and let the
> > browser continue (and it may well now really crash).  This disabling
> > would avoid the cascade effect, but still get us crashreports.  This
> > would be a bunch(?) of work in crashreporter - ted?  It might be easy
> > invoke a crashreport sampling, and continue, but I suspect it's not, and
> > might be quite hard.
> >
> > --
> > Randell Jesup, Mozilla Corp
> > remove "news" for personal email


-- 
Ehsan


Intent to implement and ship: New cookie jar policy to block storage access from tracking resources

2018-09-19 Thread Ehsan Akhgari
Hi everyone,

This is a (belated) intent to implement, as well as an intent to ship, a
new cookie jar policy to block storage access to tracking resources.  This
work has been under development for several months now and has been tracked
in https://bugzilla.mozilla.org/show_bug.cgi?id=cookierestrictions.

As of Firefox 65, I intend to turn on our new cookie jar policy to block
storage access from tracking resources by default on all desktop platforms
(assuming our testing goes well).  This feature has been developed behind
the network.cookie.cookieBehavior preference (when set to 4). No other UA
is shipping this feature, although Safari 12 ships a somewhat similar
feature (https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/).
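For Nightly testers who want to flip this manually, the preference named above can be set in about:config or via a user.js fragment in the profile directory (a sketch based on the pref name and value given in this post; 4 is the "block storage access from trackers" behavior):

```javascript
// user.js fragment (place in your Firefox profile directory).
// cookieBehavior value 4 enables the cookie jar policy described
// above: block storage access from resources classified as trackers.
user_pref("network.cookie.cookieBehavior", 4);
```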

Bug to turn on by default:
https://bugzilla.mozilla.org/show_bug.cgi?id=1492549

Please note that this feature uses the Disconnect list in order to identify
tracking resources, so it is not going to have any effect if you have
Tracking Protection turned on, or if you have installed a privacy extension
and/or an ad blocker (examples include Privacy Badger, uBlock Origin and
Ghostery).  If you are a Nightly tester using such a feature, it would be
hugely helpful to us in the next few months if you would kindly consider
disabling such features and just use the default configuration of Nightly,
as this is what we are intending to ship to all our users soon. If you
encounter any web page breakage as a result of testing this feature, please
consider filing a bug and making it block
https://bugzilla.mozilla.org/show_bug.cgi?id=1480137.

Since this isn’t a typical web feature, the standard “intent to implement”
template isn’t really helpful for it, but here is some of the information
surfaced from that template that applies to this feature:

Platform coverage: the Gecko part is cross-platform, but the UI for the
feature has been developed on desktop for now, so we’re planning to enable
it only on desktop at the moment.

Estimated or target release: Firefox 65.  Please note that this feature is
currently undergoing a Shield Study on Beta 63, and the estimated target
release is assuming the successful outcome of that study and the continued
ongoing testing of the feature.

DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1462372

Is this feature restricted to secure contexts? No, it doesn’t distinguish
secure vs. non-secure contexts.  This isn’t a new web-facing API, rather it
is intended to be a new privacy protection for our users. As such, we
intend to impose the new restrictions for tracking resources on both secure
and non-secure contexts.  It should be noted that some non-secure tracking
vectors, e.g. HTTP cookies, can also be used for pervasive tracking by
passively monitoring the user’s connection, and while cracking down on
passive tracking isn’t a direct goal of this feature, it is a good
justification for not limiting these restrictions to secure contexts.

Last but not least, in preparation for this intent to ship, we’ll be
gradually exposing more users to the feature.  The first part of this has
already been done in the form of the Shield Study mentioned above. As the
second part of this process, I intend to turn this feature on by default on
all desktop platforms for Nightly only in
https://bugzilla.mozilla.org/show_bug.cgi?id=1492563.

Thanks,

Ehsan


Re: Enabling (many) assertions in opt builds locally and eventually Nightly

2018-09-19 Thread Ehsan Akhgari
On Tue, Sep 18, 2018 at 1:30 AM Randell Jesup  wrote:

> >On 9/17/2018 1:31 PM, Botond Ballo wrote:
> >> One potential issue here is that an assertion failure in Nightly
> >> builds might be simultaneously very annoying (because it crashes the
> >> browser of Nightly users every time it occurs) and not very actionable
> >> (because no reliable STR can be found for it).
> >
> >I would also be concerned about data loss and profile corruption when an
> >assertion crashes the browser. I don't think we'll have a very easy time
> >convincing people to do anything non-trivial with Nightly if there's a
> >pretty good chance that they completely lose the fruits of their labour.
> >
> >I'd hope that this wouldn't be too frequent a problem, but it probably
> >depends on the assertion, and the state of the durability of our user
> >profiles.
>
> We already *need* to be stable in that case, given MOZ_RELEASE_ASSERT
> (and normal just-because crashes). And as best I can tell, we are stable
> (with regards to user profiles).  Once upon a time we weren't (5(?)
> years ago?)
>

I do come across MOZ_ASSERTs that trigger while browsing every once in a
while (using a debug build).  I almost never file bugs for these, because
the issues are rarely reproducible, I often have little information that
would be helpful as to how exactly the assertion was triggered, and I often
am very actively working on something else and the assertion failure is
just getting in the way of me getting my work done.  I sometimes have to
comment out a broken MOZ_ASSERT in my local build to be able to proceed.

While it may be the case that we may need to be more stable for
MOZ_RELEASE_ASSERTs, I very much doubt that every single MOZ_ASSERT in our
codebase is actually guaranteed to never fail, so promoting them all to
be enabled in something like Nightly is extremely dangerous in terms of the
stability of Nightly for users who are trying to use the browser to get
their normal work done.  This is, I believe, the way that we advertise
Nightly these days: we advertise it as a mostly stable experimental version
of Firefox and we encourage people to download it to try out the latest and
greatest.  Exposing that population to crashes as a result of promoting all
MOZ_ASSERTs to become MOZ_RELEASE_ASSERTs seems like a proposition that
needs to be backed by some data demonstrating that such a build would match
the current stability requirements of Nightly.

Note that with MOZ_DIAGNOSTIC_ASSERT, we have had tons of cases where
people have put in assertions that were *surely* correct and would never
ever fail, only to find out that the assertions turned out to be crashers
and in some cases top crashers in Nightly.  It is true that sometimes that
is the only tool in our toolbox for hunting down bugs, but using a macro
intentionally to opt in to such a scenario seems entirely different than
promoting all of our existing assertions to Nightly without any way to
assess their quality beforehand.

FWIW, I think your original proposal, having a way to opt into assertions
without a slow build locally, is extremely valuable.  I could see myself
using that option intentionally even with the pain points I described
above, but for dogfooding rather than while working on a patch and such.

Cheers,
-- 
Ehsan


Re: Intent to implement: Feature Policy

2018-09-14 Thread Ehsan Akhgari
On Fri, Sep 14, 2018 at 2:25 PM Boris Zbarsky  wrote:

> On 9/14/18 1:58 PM, Andrea Marchesini wrote:
> > DevTools bug: No devtools support.
>
> Seems like devtools might be useful for answering questions like "what
> is the feature policy for this page and why?" given the complexity of
> feature policy determination (headers, inheritance from parents, etc).
>

Agreed, seems like at least it's worth having a bug on file and reaching
out to the devtools team to see if they can help with this.

We also have https://bugzilla.mozilla.org/show_bug.cgi?id=1449501 open to
display the CSP policy, perhaps it might make sense to expose both in
similar ways (or at least for similar contexts, e.g. iframes).

-- 
Ehsan


Re: [Fetch API] Headers.get("WWW-Authenticate") does not return anything, if server sends more than one such header

2018-09-13 Thread Ehsan Akhgari
On Thu, Sep 13, 2018 at 9:40 AM john.bieling--- via dev-platform <
dev-platform@lists.mozilla.org> wrote:

> I encountered something strange. I am on TB60 trying to use the Fetch API.
>
> I create a GET or POST request without authentication and get back a 401
> from the server and use
>
>response.headers.get("WWW-Authenticate")
>
> to eval the allowed authentication methods. This works fine, if the server
> sends only one such header. If, however, it sends two or more,
> Headers.get("WWW-Authenticate") returns null.
>
> I only observe this for the WWW-Authenticate header, other headers - if
> sent multiple times - will simply be returned as a list (joined by comma).
>
> Is this a bug? Am I doing something wrong?
>
> This is an example set of headers that is not accessible:
>
> WWW-Authenticate: Negotiate
> WWW-Authenticate: Basic realm="Contact and Calendar data"
>

Hmm, this may be a bug.  When there are multiple WWW-Authenticate headers,
we have a special case for merging them (with a newline character instead
of a comma):
https://searchfox.org/mozilla-central/rev/a23c3959b62cd3f616435e02810988ef5bac9031/netwerk/protocol/http/nsHttpHeaderArray.h#268
so it's possible something in the fetch code fails to handle this.  If you
have a test case, I suggest filing a bug so that someone can debug this.


> Also: Why was Headers.getAll() dropped after Gecko 52? The digest auth
> header includes colons itself, and if it is returned joined into a list
> together with some other auth method, it will be very difficult to parse
> that.
>

See https://bugzilla.mozilla.org/show_bug.cgi?id=1278275.  That was a
non-standard method which was removed.

An interesting question is what we return from Headers.get() for
WWW-Authenticate, given that the fetch spec calls for pasting the multiple
header values together with ", " but Gecko internally pastes them together
with "\n"...
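A standalone sketch of the mismatch (plain string manipulation only; the challenge values come from the example above, and the joining rules mimic the spec and Gecko behaviors just described rather than calling into either implementation):

```javascript
// The two WWW-Authenticate challenges from the example above.
const values = ["Negotiate", 'Basic realm="Contact and Calendar data"'];

// Per the Fetch spec, Headers.get() joins multiple values with ", ".
const specMerged = values.join(", ");

// Gecko's nsHttpHeaderArray special-cases WWW-Authenticate and joins
// with a newline instead, keeping each challenge on its own line.
const geckoMerged = values.join("\n");

// The comma-joined form is hard to split safely: challenge parameters
// can themselves contain commas, as in a Digest challenge.
const digest = 'Digest realm="x", nonce="abc", qop="auth"';
const naiveSplit = (digest + ", " + values[0]).split(", ");
// naiveSplit has 4 entries, not 2 — the Digest parameters were torn apart.

console.log(specMerged);              // 'Negotiate, Basic realm="Contact and Calendar data"'
console.log(geckoMerged.split("\n")); // the two challenges, cleanly separated
console.log(naiveSplit.length);       // 4
```

This is why a newline-joined (or getAll-style) representation is much easier for clients to parse than the spec's comma join.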

-- 
Ehsan


Re: Changes to BMO-Phabricator integration

2018-09-12 Thread Ehsan Akhgari
Hi everyone,

As someone who cares a lot about code review (both as a developer and as a
reviewer), I care a lot about the tools I work with and the workflows I’ve
built around them over the years.  I’m sure I’m not the only one there.

What Mark wrote about below brings changes to some parts of our code review
workflows and I think it’s fair to assume that some people will find the
change unpleasant.  I think it’s important to know that decisions like this
aren’t made lightly though, and the team working on Phabricator has looked
into making this integration work as much as possible, before coming to
this final conclusion.

What I’ve learned from the experience of the folks who worked closely on
this project over the past 3-4 years is that integrating an existing code
review system with an existing issue management system is an extremely
difficult task.  There are certainly huge benefits to be reaped if a tight
integration is achieved, but there are great costs to also consider, and
the trade-off is tricky to get right.

As much as deep down in my heart, I secretly wish nothing would ever change
so that I could hold on to my old habits until who-knows-when, I realize
that sometimes it’s prudent to do what’s hard -- shaking off old habits and
workflows and pick up new ones, in the interest of the greater good.

In this case, even with my reviewer hat on, I think it would certainly be a
mistake for us to have tried as hard as we did for MozReview to integrate
BMO and Phabricator together at the expense of holding back the deployment
of Phabricator and new features in it.  If our smart engineers working on
this hard problem have looked at this integration issue, have tried to make
it work, and have decided down the line that it’s best to correct course,
my hunch is to trust them and consider maybe it’s time to change some of my
old habits on how I interact with code reviews on Bugzilla.

Thanks,
Ehsan

On Wed, Sep 12, 2018 at 10:37 AM Mark Côté  wrote:

> To reduce confusion and a growing maintenance burden, the Engineering
> Workflow team plans to remove two pieces of Phabricator-Bugzilla
> integration:
>
>
>1.
>
>The setting of r+ flags on the stub attachments that link to
>Phabricator revisions (
>https://bugzilla.mozilla.org/show_bug.cgi?id=1490687).
>2.
>
>The Phabricator table in Bugzilla’s “My Requests” page (
>https://bugzilla.mozilla.org/show_bug.cgi?id=1487422).
>
>
> We plan on adding one more piece of integration: a panel on bug views
> (show_bug.cgi) that shows the status of Phabricator revisions in
> Phabricator’s terms, similar to the old MozReview table (
> https://bugzilla.mozilla.org/show_bug.cgi?id=1489706).
>
> The “stub attachments” will remain for the time being in order to
> facilitate tracking non-review attachment flags (checkin-needed, etc).
>
> # Rationale and background
>
> There have been a lot of questions about our decisions surrounding
> Bugzilla–Phabricator integration. We’ve expounded on those in various
> threads over the last year and a half, but I will try to go into more
> specifics.
>
> At the start of the Phabricator project, having learned a lot from
> MozReview, we consciously decided to limit the amount of integration to
> Bugzilla. This not only reduces upfront costs and maintenance burden but
> also avoids the complexity and ambiguity inherent in mixing two different
> systems together. Aside from necessary linkages like authentication,
> accounts, and security groups, the only other integration we implemented
> was the setting of r+ statuses on stub attachments and, later, adding
> Phabricator requests to BMO’s “My Dashboard”.
>
> Unfortunately both of these have had bugs and caused confusion. Since
> comments aren’t mirrored, the plain r+s were sometimes misleading if the
> revisions (or Phabricator’s email notifications) weren’t also consulted
> before landing. The requests view on “My Dashboard” suffered from bugs that
> resulted in missing requests and was further impacted by our experiments
> with reviewer groups that have no real analog in Bugzilla.
>
> # Differing models
>
> The central problem is that the models behind the two systems—Bugzilla’s
> attachments and flags, Phabricator’s revisions and reviews—are very
> different. Phabricator’s revisions have a much richer and more complex set
> of metadata than Bugzilla attachments. It is essentially impossible to
> represent Phabricator states as Bugzilla flags while preserving anything
> close to the expected semantics of the latter; consulting Phabricator would
> often be needed to make sense of any translated representation in Bugzilla.
> Further, Phabricator’s model is also subject to changes and additions (for
> example, the draft revision state, which is still being modified), which
> creates additional fragility and maintenance burden. Finally, not all the
> details we need are even exposed through APIs, in part because they are in
> flux.
>
> # Adding revision statuses

Re: Intent to Implement: Storage Access API

2018-09-12 Thread Ehsan Akhgari
On Wed, Sep 12, 2018 at 3:47 AM Anne van Kesteren  wrote:

> On Tue, Sep 11, 2018 at 9:06 PM Ehsan Akhgari 
> wrote:
> > Please note that Firefox will grant storage access permissions
> > automatically under certain circumstances for web compatibility reasons,
> so
> > even when the iframe has never called this API it may still obtain
> storage
> > access.  In order to prevent that from happening, the usual approaches
> > against embedded content gaining storage access (through sandboxing the
> > iframe to give it a unique origin) could be used.
>
> Unfortunately, that will still share cookies. Adding a feature policy
> or some such for that might be worthwhile.
>

Yes indeed.  But that's beyond the scope of the current intent thread.

-- 
Ehsan


Re: Intent to Implement: Storage Access API

2018-09-11 Thread Ehsan Akhgari
On Mon, Sep 10, 2018 at 11:19 AM James Graham 
wrote:

> On 07/09/2018 21:27, Ehsan Akhgari wrote:
>
> > Very cool, I did not know this!  It seems like test_driver.bless() is
> > what we need here for simulating a user activation gesture.
> >
> > However it sounds like in this case you may need to add test-only
> APIs
> > for manipulating internal browser state. There are two possible
> > approaches here:
> >
> > * Add a feature to WebDriver for manipulating this data. This can be
> > specified in the StorageManager spec and is appropriate if we want to
> > add something that can be used by authors as part of the automated
> > testing for their website.
> >
> > * Add test-only DOM APIs in the StorageManager spec that we can
> enable
> > when running the browser in test mode. Each browser would be
> > expected to
> > have some implementation-specific means to enable these APIs when
> under
> > test (e.g. a pref). WebUSB has an example of this approach.
> >
> >
> > This is something that I should get some feedback on from at least
> > WebKit before deciding on a path forward, but from a Gecko perspective,
> > we basically need to call one function at the beginning and one function
> > at the end of each test, so looks like the first option would be
> > sufficient.  Do you happen to have an example of that handy?  I've never
> > done something like this before, and I do appreciate some pointers to
> > get started.
>
> You want an example of a spec adding a WebDriver-based test API? (I'm
> not 100% sure I interpreted your message correctly). The permissions
> spec has one [1].
>
> [1] https://w3c.github.io/permissions/#automation
>

Thanks, this is exactly what I wanted to find!

-- 
Ehsan


Re: Intent to Implement: Storage Access API

2018-09-11 Thread Ehsan Akhgari
On Sun, Sep 9, 2018 at 5:10 AM Mike O'Neill  wrote:

> This is great but I have a couple queries.
> >
> >In our implementation, once Storage Access API grants storage access,
> >all existing third-party iframes on the same first party will receive
> that
> >storage access, whereas in WebKit’s implementation they each would
> require
> >calling requestStorageAccess() separately.
> >-
> >
> Presumably this is restricted to iframes *of the same origin* on the same
> first party, i.e. if there are 2 iframes on different origins they would
> each still have to request storage access. Can you confirm this?
>

Yes, of course.


> >
> >
> > We don’t necessarily believe that a model where the user is asked whether
> > they consent to sharing their data with third-party trackers is ideal,
> > because explaining the implications of the data sharing is very hard, and
> > there are many problems associated with asking for permission from the
> > user.  But we are looking at this API as a programmatic hook into the
> point
> > in time when a third-party context would like to obtain full storage
> access
> > rights, which would allow the browser to perform various forms of
> > security/privacy checks at that time. Prompting the user is only one of
> the
> > options we’ve thought about so far.  Note that the API limits granting
> > access only to callers coming at times when processing a user gesture.
> >
> The legal requirement in Europe is that storage can only be accessed if
> the user has unambiguously given their "freely given, specific & informed"
> consent. How will a European website top-level context (first-party) ensure
> that embedded third-parties will not be granted storage access without the
> user first being prompted?
>

In general in order to comply with regulations such as GDPR, websites need
to do a lot more than just look at the Storage Access API, so this is a
very partial answer to your question.  The API provides the ability right
now for the embedder to control the ability of the embedded third-party to
request storage access using an iframe sandbox flag.  In the future we may
consider adding further controls in this regard, for example, allowing the
top-level embedder to control whether the embedded content can call the API
using feature policy.

Please note that Firefox will grant storage access permissions
automatically under certain circumstances for web compatibility reasons, so
even when the iframe has never called this API it may still obtain storage
access.  In order to prevent that from happening, the usual approaches
against embedded content gaining storage access (through sandboxing the
iframe to give it a unique origin) could be used.

Cheers,
-- 
Ehsan


Re: Intent to Implement: Storage Access API

2018-09-07 Thread Ehsan Akhgari
On Fri, Sep 7, 2018 at 2:49 PM James Graham  wrote:

> > web-platform-tests: Our implementation unfortunately doesn’t come with
> > web-platform-tests, for two reasons.  One is that there is currently no
> way
> > to mock user gestures in web platform tests [4], and the second reason is
> > that furthermore, our implementation also depends on being able to
> > manipulate the URL Classifier backend for testing purposes.
>
> So this isn't true, however the present state of affairs may not yet be
> useful to you.
>
> It is currently possible to make basic user actions in wpt using the
> testdriver API [1]. I am helping extend this to allow more general
> guestures via WebDriver-like actions [2].
>

Very cool, I did not know this!  It seems like test_driver.bless() is what
we need here for simulating a user activation gesture.


> However it sounds like in this case you may need to add test-only APIs
> for manipulating internal browser state. There are two possible
> approaches here:
>
> * Add a feature to WebDriver for manipulating this data. This can be
> specified in the StorageManager spec and is appropriate if we want to
> add something that can be used by authors as part of the automated
> testing for their website.
>
> * Add test-only DOM APIs in the StorageManager spec that we can enable
> when running the browser in test mode. Each browser would be expected to
> have some implementation-specific means to enable these APIs when under
> test (e.g. a pref). WebUSB has an example of this approach.
>

This is something that I should get some feedback on from at least WebKit
before deciding on a path forward, but from a Gecko perspective, we
basically need to call one function at the beginning and one function at
the end of each test, so it looks like the first option would be sufficient.
Do you happen to have an example of that handy?  I've never done something
like this before, and I do appreciate some pointers to get started.

I'm very happy to learn that it's possible to contribute wpt tests for this
feature!

Thanks,
-- 
Ehsan


Re: Intent to Implement: Storage Access API

2018-09-07 Thread Ehsan Akhgari
On Fri, Sep 7, 2018 at 2:32 PM Tom Ritter  wrote:

> On Fri, Sep 7, 2018 at 12:54 PM, Ehsan Akhgari 
> wrote:
>
>>In our implementation, once the Storage Access API grants storage
>>access, all newly created third-party iframes of the same origin will
>> have
>>storage access for a period of time (currently defined at 30 days)
>> without
>>calling requestStorageAccess() again, whereas in WebKit’s
>> implementation
>>they’d need to reaffirm their access by calling the API again.
>>
>> ...
>>
>>In our implementation, we phase out the granted storage access
>>permissions after a number of days passed, whereas in WebKit’s
>>implementation they phase out the permissions after a number of days of
>>browser usage passed.
>>
>
>
> These two were confusing to me.  AIUI:
>

First thing to note, the expiry mechanism is completely independent of when
and how we choose to grant storage access, in case it helps clarify things.


> The first one is solely about having to call the requestStorageAccess()
> API again. It sounds like you're saying we give a 30 day grace window while
> safari expires things immediately; but that's not what you mean. It's
> solely about whether the javascript code needs to call
> requestStorageAccess() (even if it will be granted without a prompt) or if
> it doesn't need to call it.
>

That's right.  It's also about the initial navigation of the document
having storage access (before the document has a chance to call any JS
APIs) which matters for HTTP layer cookies.


> In Safari, you have to (even though it won't prompt), but in FF you won't
> have to. In your pseudocode, I guess it works either way, but... what was
> the reasoning for doing it this way instead of a pure Promise-based
> approach?
>

What do you mean by "this way instead of a pure Promise-based approach"?
If you mean why we chose to grant this access when it has previously
granted before the page calls the API again, the short answer is for more
web compatibility.

One thing that I should explain here is that we also have a couple of
heuristics that try to detect when the user interacts with a third-party
widget in certain ways and grant this storage access automatically.  These
are designed to catch scenarios such as logins using things like social
widgets, commenting widgets and the like.  We'd like to treat the storage
access permissions the same, no matter whether they've been granted
automatically or programmatically through calling this API.  The web
compatibility concerns are mostly about the former scenario.


> In the second one:
> Day 0: I do the thing, grant the access
> Day 15: I sleep all day and don't use my browser
> Day 30: Firefox expires permission
> Day 31: Safari expires permission
>
> It's solely about the difference between counting Day 15 towards 30 days.
>

Right.  I should also add that this isn't something that we particularly
like in the current implementation, and may consider improving on it in the
future.  It's more a by-product of how our permission manager backend works.
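The day-counting difference can be sketched as a toy model (hypothetical code under the assumption of a 30-day lifetime; not the actual permission manager logic):

```javascript
// Toy model of the two expiry policies discussed above.
const LIFETIME_DAYS = 30;

// Firefox-style: expire once LIFETIME_DAYS calendar days have elapsed
// since the grant, whether or not the browser was used.
function expiredByCalendar(grantDay, today) {
  return today - grantDay >= LIFETIME_DAYS;
}

// Safari-style: expire only after the browser has been used on
// LIFETIME_DAYS distinct days since the grant.
function expiredByUsage(grantDay, usageDays) {
  return usageDays.filter((d) => d > grantDay).length >= LIFETIME_DAYS;
}

// Tom's example: grant on day 0, browser idle on day 15.
const usageThroughDay30 = [];
for (let d = 1; d <= 30; d++) {
  if (d !== 15) usageThroughDay30.push(d); // 29 days of actual usage
}

console.log(expiredByCalendar(0, 30));             // true: calendar policy expires on day 30
console.log(expiredByUsage(0, usageThroughDay30)); // false: usage policy waits one more day
```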

Cheers,
-- 
Ehsan


Intent to Implement: Storage Access API

2018-09-07 Thread Ehsan Akhgari
Hi everyone,

As we recently announced [0], we are working on a number of new
anti-tracking features which are intended to change the current approach in
Firefox with regards to how user’s data is shared with third-party trackers
as the user browses the web.

In the old world (the status quo for now), the data sharing is a
non-consensual and invisible part of browsing the web.  We would like to
change this model by flipping the defaults, that is, make changes to
Firefox to avoid sharing user’s data with third-party trackers by default,
and also make the data sharing, where it happens, more visible and
controllable by the user. As part of these efforts we are experimenting
with a new cookie policy [1] that will block trackers from accessing
cookies and other storage while in a third-party context.

One of the new mechanisms that we would like to expose to web developers is
to ask for consent in order to gain access to the user’s data in a
third-party context.  We are going to do this by adopting the Storage
Access API [2] implemented by Safari for a very similar purpose. At a
high-level, the Storage Access API allows content in third-party frames to
request access to storage on a specific first party that would otherwise be
blocked by our new policy. Safari uses this API as part of their
Intelligent Tracking Prevention [3] feature, and there are some differences
in the edge cases of how that feature works compared to our new cookie
policy. We discuss this in more detail below, but the main differences
include:

   -

   In our implementation, once the Storage Access API grants storage access,
   all existing third-party iframes on the same first party will receive that
   storage access, whereas in WebKit’s implementation they each would require
   calling requestStorageAccess() separately.
   -

   In our implementation, all future resource loads from a third party that
   has been granted storage access will also be granted access on the same
   first party. This includes access to third-party cookie headers for content
   embedded into the main context of the page.
   -

   In our implementation, once the Storage Access API grants storage
   access, all newly created third-party iframes of the same origin will have
   storage access for a period of time (currently defined at 30 days) without
   calling requestStorageAccess() again, whereas in WebKit’s implementation
   they’d need to reaffirm their access by calling the API again.
   -

   In our implementation, when the promise returned from
   requestStorageAccess() is resolved, the third-party context will have
   access to its full first-party cookie jar.  That includes things like
   cookies, localStorage, IndexedDB, DOM Cache, and so on. In WebKit’s current
   implementation, only access to cookies will be granted.
   -

   In our implementation, we phase out the granted storage access
   permissions after a number of days passed, whereas in WebKit’s
   implementation they phase out the permissions after a number of days of
   browser usage passed.
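The scoping described in the list above can be sketched as a toy model (hypothetical code, not Gecko's implementation): a grant is keyed on the (first party, third-party origin) pair, so every iframe of that origin on that first party shares the access.

```javascript
// Toy model of the grant scoping described above.
const grants = new Set();

function grantStorageAccess(firstParty, origin) {
  // Assume the user-gesture and browser-side checks have passed.
  grants.add(`${firstParty}|${origin}`);
}

function hasStorageAccess(firstParty, origin) {
  return grants.has(`${firstParty}|${origin}`);
}

grantStorageAccess("news.example", "social.example");

// All frames of social.example embedded on news.example now have
// access, including ones that never called the API themselves...
console.log(hasStorageAccess("news.example", "social.example")); // true

// ...but the grant is scoped: a different third party, or the same
// third party on a different first party, still has to ask.
console.log(hasStorageAccess("news.example", "ads.example"));    // false
console.log(hasStorageAccess("blog.example", "social.example")); // false
```

(The domain names are illustrative; in the real implementation the grant also carries the time-limited lifetime discussed above.)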


We don’t necessarily believe that a model where the user is asked whether
they consent to sharing their data with third-party trackers is ideal,
because explaining the implications of the data sharing is very hard, and
there are many problems associated with asking for permission from the
user.  But we are looking at this API as a programmatic hook into the point
in time when a third-party context would like to obtain full storage access
rights, which would allow the browser to perform various forms of
security/privacy checks at that time. Prompting the user is only one of the
options we’ve thought about so far.  Note that the API limits granting
access only to callers coming at times when processing a user gesture.

Summary: Storage Access API is used to grant first-party storage access to
third-party embedded content under some browser controlled conditions. [2]

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1469714

Link to standard: No standard exists yet, this is being discussed in
https://github.com/whatwg/html/issues/3338.

Platform coverage: All platforms.

Estimated or target release: Unclear as of yet (an intent to ship will be
sent later when a decision to ship has been made).

Preference behind which this will be implemented: dom.storage_access.enabled

Is this feature enabled by default in sandboxed iframes? No, the
allow-storage-access-by-user-activation sandbox flag enables it.

DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1462372.  We
will be filing more specific bugs as we work through the scenarios where
devtools can assist web developers in order to use this functionality.

Do other browser engines implement this? Shipped in Safari 11.1, no public
signals from Chrome or Edge.

web-platform-tests: Our implementation unfortunately doesn’t come with
web-platform-tests, for two reasons.  One is that there is currently no way
to mock user gestures in web platform tests [4], 

Re: mozilla::Hash{Map,Set}

2018-08-15 Thread Ehsan Akhgari
This is great work, thanks a lot, Nick!

On Wed, Aug 15, 2018 at 5:39 AM Nicholas Nethercote 
wrote:

> Bug 1477627 converted a hot hash table from PLDHashTable to
> mozilla::HashSet and appears to have sped up cycle collection in some cases
> by 7%. If you know of another PLDHashTable that is hot, it might be worth
> converting it to mozilla::HashTable.
>

Do you have any good suggestions of how to find such candidates?  One thing
that came to my mind was that the BHR data may be a useful source of
insight for this...  <
https://arewesmoothyet.com/?category=all&durationSpec=512_2048&invertCallstack&mode=explore&payloadID=c8e925752fc94f78af9349665aad14c7&search=PLDHashTable%3A%3ASearchTable&thread=0>
(for content processes, for example) suggests, based on a very quick
skimming, a few hashtables which aren't surprising to me (things like
nsHTMLDocument::mIdentifierMap, some XPCOM component manager hashtables,
memory reporter hashtables, some font hashtables, the preferences
hashtable, sEventListenerManagersHash, mJSHolderMap, etc.).
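As an illustrative sketch of confirming that a candidate table really is hot, one could wrap it and tally lookups (hypothetical helper — std::unordered_set stands in here for the Gecko-internal PLDHashTable/mozilla::HashSet):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_set>

// Hypothetical instrumentation wrapper: count every search against the
// table so that "hot" tables show up as large lookup counts, much like
// PLDHashTable::SearchTable showing up in the BHR profiles mentioned above.
template <typename T>
class CountingHashSet {
 public:
  void Put(const T& aValue) { mSet.insert(aValue); }

  bool Has(const T& aValue) {
    ++mLookups;  // tally each search
    return mSet.count(aValue) != 0;
  }

  size_t Lookups() const { return mLookups; }

 private:
  std::unordered_set<T> mSet;
  size_t mLookups = 0;
};
```

A table that accumulates millions of lookups over a session would be a strong candidate for conversion.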

Cheers,
-- 
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Ehsan Akhgari
On Fri, Jul 20, 2018 at 5:00 AM, Jonathan Kew  wrote:

> On 20/07/2018 03:17, Jeff Gilbert wrote:
>
>> Using a classic read/write exclusive lock, we would only ever contend
>> on read+write or write+write, which are /rare/.
>>
>
> Indeed, that's what I envisage we'd want. The -vast- majority of prefs
> access only involves reading values. We should be able to do that from any
> thread without a second thought about either safety or contention.
>
>
>> It's really, really nice when we can have dead-simple threadsafe APIs,
>> instead of requiring people to jump through hoops or roll their own
>> (fragile) dispatch code. IIRC most new APIs added to the web are
>> supposed to be available in Workers, so the need for reading prefs
>> off-main-thread is only set to grow.
>>
>> I don't see how this can mutate into a foot-gun in ways that aren't
>> already the case today without off-main-thread access.
>>
>> Anyway, I appreciate the work that's been done and is ongoing here. As
>> you burn down the pref accesses in start-up, please consider
>> unblocking this feature request. (Personally I'd just eat the 400us in
>> exchange for this simplifying architectural win)
>>
>
> +1 to that. Our need for OMT access to prefs is only likely to grow, IMO,
> and we should just fix it once, so that any code (regardless of which
> thread(s) it may eventually run on) can trivially read prefs.
>
> Even if that means we can't adopt Robin Hood hashing, I think the
> trade-off would be well worthwhile.
>

While it's true that the need for reading more prefs from workers will
continue to grow, given the number of prefs we have I think it's safe to
say that the set of prefs that we need to access from threads besides the
main thread will be a minority of the entire set of prefs.  So one way to
have our cake and eat it too would be to separate out the prefs that are
meant to be accessible through a worker thread and expose them through an
alternate thread-safe API which may be a bit more costly to call on the
main thread, and keep the rest of the main-thread-only prefs on the existing
non-thread-safe APIs.  This won't be as elegant as having one set of APIs
to work with, of course.

(FWIW pref accesses sometimes also show up in profiles when code ends up
reading prefs in a loop, e.g. when invoked from web page JS, so the startup
scenario discussed so far is only one case to think about here; there are
many others.)
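A minimal sketch of the split described above might look like this (hypothetical API, using std::mutex rather than Gecko's own primitives, and ints standing in for pref values):

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <unordered_map>

// Hypothetical two-tier pref store: the small set of prefs that workers
// need lives behind a mutex and is callable from any thread, while
// everything else stays in a cheap main-thread-only map with no locking.
class PrefStore {
 public:
  // Main-thread only: no locking, mirrors the existing fast path.
  void SetMainThreadPref(const std::string& aName, int aValue) {
    mMainThreadPrefs[aName] = aValue;
  }
  int GetMainThreadPref(const std::string& aName) const {
    auto it = mMainThreadPrefs.find(aName);
    return it != mMainThreadPrefs.end() ? it->second : 0;
  }

  // Thread-safe: slightly more costly, callable from any thread.
  void SetSharedPref(const std::string& aName, int aValue) {
    std::lock_guard<std::mutex> lock(mLock);
    mSharedPrefs[aName] = aValue;
  }
  int GetSharedPref(const std::string& aName) {
    std::lock_guard<std::mutex> lock(mLock);
    auto it = mSharedPrefs.find(aName);
    return it != mSharedPrefs.end() ? it->second : 0;
  }

 private:
  std::unordered_map<std::string, int> mMainThreadPrefs;
  std::mutex mLock;
  std::unordered_map<std::string, int> mSharedPrefs;
};
```

The trade-off is exactly the one noted above: two APIs instead of one, but only the shared tier ever pays for synchronization.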


>
> JK
>
>
>
>> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
>> wrote:
>>
>>> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>>

 We should totally be able to afford the very low cost of a
 rarely-contended lock. What's going on that causes uncached pref reads
 to show up so hot in profiles? Do we have a list of problematic pref
 keys?

>>>
>>>
>>> So, at the moment, we read about 10,000 preferences at startup in debug
>>> builds. That number is probably slightly lower in non-debug builds, but
>>> we
>>> don't collect stats there. We're working on reducing that number (which
>>> is
>>> why we collect statistics in the first place), but for now, it's still
>>> quite
>>> high.
>>>
>>>
>>> As for the cost of locks... On my machine, in a tight loop, the cost of
>>> entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
>>> ideal circumstances, on a single core of a very fast CPU, with very fast
>>> RAM, everything cached, and no contention. If we could extrapolate that
>>> to
>>> normal usage, it would be about a third of a ms of additional overhead
>>> for
>>> startup. I've fought hard enough for 1ms startup time improvements, but
>>> *shrug*, if it were that simple, it might be acceptable.
>>>
>>> But I have no reason to think the lock would be rarely contended. We read
>>> preferences *a lot*, and if we allowed access from background threads, I
>>> have no doubt that we would start reading them a lot from background
>>> threads
>>> in addition to reading them a lot from the main thread.
>>>
>>> And that would mean, in addition to lock contention, cache contention and
>>> potentially even NUMA issues. Those last two apply to atomic var caches
>>> too,
>>> but at least they generally apply only to the specific var caches being
>>> accessed off-thread, rather than pref look-ups in general.
>>>
>>>
>>> Maybe we could get away with it at first, as long as off-thread usage
>>> remains low. But long term, I think it would be a performance foot-gun.
>>> And,
>>> paradoxically, the less foot-gunny it is, the less useful it probably is,
>>> too. If we're only using it off-thread in a few places, and don't have to
>>> worry about contention, why are we bothering with locking and off-thread
>>> access in the first place?
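For reference, a rough stand-in for the micro-measurement described above, using std::mutex in place of Mozilla's MutexAutoLock (absolute numbers will of course vary by machine):

```cpp
#include <chrono>
#include <mutex>

// Measure the average cost, in nanoseconds, of an uncontended
// lock/unlock pair in a tight loop -- the "ideal circumstances"
// scenario discussed above. Not Gecko code; std::mutex is an analogue.
double AverageLockCostNs(int aIterations) {
  std::mutex mutex;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < aIterations; ++i) {
    std::lock_guard<std::mutex> lock(mutex);  // enter + exit, no contention
  }
  auto end = std::chrono::steady_clock::now();
  auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start);
  return static_cast<double>(ns.count()) / aIterations;
}
```

Multiplying the per-pair cost by the ~10,000 startup pref reads mentioned above is how one gets to the "third of a ms" estimate.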

Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-07-06 Thread Ehsan Akhgari
On Fri, Jun 29, 2018 at 2:36 PM, smaug  wrote:

> On 06/29/2018 05:58 PM, Boris Zbarsky wrote:
>
>> On 6/29/18 10:30 AM, Nathan Froyd wrote:
>>
>>> Given the language-required qualification for
>>> `enum class` and a more Rust-alike syntax, I would feel comfortable
>>> with saying CamelCase `enum class` is the way to go.
>>>
>>
>> For what it's worth, I agree.  The point of the "e" prefix is to
>> highlight that you have an enumeration and add some poor-man's namespacing
>> for a potentially large number of common-looking names, and the
>> language-required qualification already handles both of those.
>>
>>
> That works if we're consistently naming static variables and such using
> s-prefix etc, so that
> class Foo1
> {
>   static int sBar;
> };
> Foo1::sBar
>
> is clearly different to
> enum class Foo2
> {
>   Bar
> };
>
> Foo2::Bar
>
>
> (personally I'd still prefer using e-prefix always with enums, class or
> not. Foo1::sBar vs Foo2::eBar is easier to distinguish than Foo1::sBar vs
> Foo2::Bar)
>

Looking at other popular style guidelines, the Google C++ style requires
the 'k' prefix on both enum and enum class <
https://google.github.io/styleguide/cppguide.html#Enumerator_Names>
presumably because it doesn't require any special prefix for static
members.  But given that our style does prefix static members, it seems
that in the case of enum classes, not using a prefix should be acceptable.
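To make the distinction concrete, here is a small sketch (all names made up): unscoped enums keep the e-prefix because their enumerators leak into the enclosing scope, while enum class enumerators are already qualified by the language and can safely use CamelCase alongside s-prefixed statics.

```cpp
#include <cassert>
#include <string>

// Unscoped enum: eRed is visible in the enclosing scope, so the
// e-prefix provides the poor-man's namespacing Boris describes.
enum OldStyleColor { eRed, eGreen, eBlue };

// enum class: enumerators must be written as Color::Red, so the
// qualification already does the e-prefix's job.
enum class Color { Red, Green, Blue };

class Widget {
 public:
  static int sBar;                 // s-prefix marks the static member...
  enum class State { Bar, Baz };   // ...so Widget::State::Bar stays distinct
};

int Widget::sBar = 0;

std::string Describe(Color aColor) {
  switch (aColor) {
    case Color::Red:   return "red";
    case Color::Green: return "green";
    case Color::Blue:  return "blue";
  }
  return "unknown";
}

int DescribeWidget() { return Widget::sBar; }
```

This is the Foo1::sBar vs Foo2::Bar case from the quoted message: the two spellings remain distinguishable without an eBar.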

Based on Nathan's analysis (thanks, Nathan!) and the previous thread, it
seems like Emilio's edit should be reverted and a note should be added
about the usage of CamelCase for enum classes.  Emilio, do you mind doing
that, please?

Thanks,
-- 
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

