Re: Intent to ship: Return pixel deltas in wheel event if deltaMode is not checked by authors

2021-03-09 Thread James Graham

On 09/03/2021 18:21, Emilio Cobos Álvarez wrote:

On 3/9/21 17:10, Anne van Kesteren wrote:


On Tue, Mar 9, 2021 at 4:45 PM Emilio Cobos Álvarez 
 wrote:

Let me know if you have any concerns with this or whatnot.


So if I understand it correctly we'll have a getter with side effects.
Is the expectation that we can eventually remove this? Are other
browsers on board with this model?


Well, the side effect is not _quite_ that bad IMO, in the sense that the 
page can't really tell whether the event is returning pixels because the 
scroll was actually pixel-based (e.g., trackpads do this IIRC) or 
because they didn't check deltaMode.


The only alternative to this that I can think of is to basically never 
return DOM_DELTA_LINE (which is what other browsers do in practice), but 
that seemed both more risky for us, and also a potential regression 
(knowing lines can be useful).
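The author-side pattern under discussion can be sketched as follows. This is an illustrative snippet, not code from the thread: the helper name and the line/page sizes are assumptions, since real pages would derive them from their own layout.

```javascript
// Hedged sketch: how an author might normalize wheel deltas to pixels
// by checking deltaMode. The DOM_DELTA_* constants match the UI Events
// spec; the line/page sizes are assumed placeholder values.
const DOM_DELTA_PIXEL = 0;
const DOM_DELTA_LINE = 1;
const DOM_DELTA_PAGE = 2;

const ASSUMED_LINE_HEIGHT_PX = 16;  // placeholder, not a browser value
const ASSUMED_PAGE_HEIGHT_PX = 800; // placeholder, not a browser value

function wheelDeltaToPixels(deltaMode, delta) {
  switch (deltaMode) {
    case DOM_DELTA_LINE:
      return delta * ASSUMED_LINE_HEIGHT_PX;
    case DOM_DELTA_PAGE:
      return delta * ASSUMED_PAGE_HEIGHT_PX;
    default: // DOM_DELTA_PIXEL
      return delta;
  }
}
```

Pages that skip this check and treat delta values as pixels unconditionally are the ones the proposed behaviour targets: for them the deltaMode getter is never read, so pixel deltas can be returned directly.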


It seems like having all browsers have the same behaviour is going to be 
better in the long run than having some weird heuristic in gecko that 
acts as a footgun for authors going forward.


Do we have any way to assess the compat impact of switching to match 
Blink and WebKit always?

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


web-platform-tests tracing/debugging improvements

2021-02-06 Thread James Graham

A couple of wpt changes recently landed that might be of interest to people:

* testharness.js tests now trace the individual asserts that run (for 
performance reasons this only happens when output is enabled e.g. when 
only a single test is run locally).


* A --debug-test command line option has been added which provides 
type-specific debugging facilities:


  - For testharness tests DevTools are started in the test page, and 
some console logging is added for test lifecycle events.


  - For reftests the reftest analyzer is loaded for failed tests with 
the relevant data.


This is the initial cut at these features and I'd like to make further 
improvements in this area; if there are specific ideas you have that 
will make them work better for you, or additional debugging features 
you'd like to see, please let me know.


It's also a known issue that the testharness features don't always get 
enabled for tests that run in workers; the fix requires some 
coordination so it's not quite trivial but it is on the radar.



Re: Intent to ship: remote-protocol (CDP)

2021-01-12 Thread James Graham

On 12/01/2021 16:56, Anne van Kesteren wrote:

On Tue, Jan 12, 2021 at 5:43 PM James Graham  wrote:

CDP is not part of any web standard, and is not exposed to content.


Seems unfortunate we cannot standardize on this format if we have to
support it anyway, but I guess that was already considered and found
not feasible?



Yes, that was a definite point of consideration, but I don't think it 
would work out to do it directly:


* There's no support from other vendors for standardizing CDP as-is.

* CDP exposes some blink internals that aren't reasonable to bake into a 
cross-browser standard.


* Parts of CDP are awkward to use for automation use cases (this is 
probably the weakest reason).


Having said that it's clear that the automation client ecosystem needs a 
transition path away from CDP, and this means that the standard has to 
be pretty close in terms of the semantics and feature set. The intent is 
to work with clients through the standards process to ensure they're 
able to support WebDriver-BiDi as a backend; if that becomes too 
difficult it will be a red flag that we've done something that has a low 
chance of succeeding.



Intent to ship: remote-protocol (CDP)

2021-01-12 Thread James Graham
Summary: remote-protocol provides a Firefox implementation of a subset 
of the Chrome DevTools Protocol (CDP) [1], specifically targeted at 
testing and automation use cases.


remote-protocol isn't a web-exposed feature so it doesn't fit into the 
standard exposure guidelines. However we're mentioning it here to ensure 
everyone is aware of our plans.


Libraries such as Puppeteer and Selenium 4 depend on CDP for 
implementing browser automation with an API and featureset that isn't 
possible using standardised WebDriver. These libraries have either been 
Chrome-only or shipped a reduced featureset with Firefox, making it 
harder for web authors to test their sites against Gecko.


In order to address this issue we have been shipping support for a 
subset of CDP in Nightly Firefox for several releases under the name 
"remote-protocol". It is the basis for the current Firefox support in 
Puppeteer, and some Selenium 4 clients use it for preliminary Firefox 
support.


We now want the remote-protocol support to ride the trains so that users 
may test against stable Firefox. This is important, despite the fact the 
implementation remains incomplete, because for web-app testing Nightly 
is not a viable replacement for beta and stable due to the different 
features enabled on each channel.


Note that activating the remote-protocol server requires a specific 
command line flag, so for the majority of Firefox users this has no 
impact (this is identical to the way the existing marionette server works).


CDP is not part of any web standard, and is not exposed to content. In 
the long term we believe that the testing and automation use case should 
be fully served by cross-browser standard features; for this reason we 
have been working on webdriver-bidi [2], which will provide a 
standardised replacement for this feature. However with automated test 
tools increasingly using CDP instead of, or in addition to, WebDriver, 
we need to meet users where they are while the standard matures.


More details of the current state of the test automation landscape, and 
details of how to use the CDP support are in our hacks posts [3], [4].


Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1606604
Standard: N/A but will eventually be replaced by [2]
Platform coverage: All
Preference: Behind a command line flag: --remote-debugging-port
DevTools bug: N/A
Other browsers: Blink-only. Other vendors are participating in the 
WebDriver-BiDi standard.

web-platform-tests: N/A

[1] https://chromedevtools.github.io/devtools-protocol/
[2] https://w3c.github.io/webdriver-bidi/
[3] 
https://hacks.mozilla.org/2020/12/cross-browser-testing-part-1-web-app-testing-today/
[4] 
https://hacks.mozilla.org/2021/01/improving-cross-browser-testing-part-2-new-automation-features-in-firefox-nightly/



This year in web-platform-tests - 2020 Edition

2020-12-23 Thread James Graham

It's been a long year hasn't it? Maybe the next one will seem shorter.
Although, at home in London today, it feels that if it does, it will
only be for want of a leap day.

Given all that's passed since 2019, you would be forgiven for having
entirely forgotten about "This Year in Web-Platform-Tests"; very much
the most infrequent regular-cadence project update email at
Mozilla. Nevertheless, some things have happened in web-platform-tests
this year, and knowing about them might even make it easier for you to
write cross-browser tests in the next year.


== Testability ==

* Added print reftests to test paginated layout using the actual
  printing codepath.

* Added support for using SpecialPowers in gecko-only
  web-platform-tests.

* Added support for using testdriver functions across multiple
  browsing contexts and origins.

* Added support for inferring test names from the content of
  arrow-function tests. This allows simple one-liner tests to be
  written with minimal boilerplate.

* Introduced new testdriver commands for deleting cookies and storage
  access.

* Added the .www. filename flag to enable directly writing tests which
  run on a subdomain.

* New testharness.js functions step_wait, step_wait_func and
  step_wait_done. These allow polling for a condition before
  continuing the test rather than using a fixed timeout.

* Added assert_implements and assert_implements_optional to
  testharness.js. The former is intended to allow quickly checking an
  API is implemented before trying to use it in a way that may cause
  timeouts. The latter is for spec-optional features.

* Added a second HTTPS port to the server config for testing origin
  isolation.

* Added support for tests using QUIC and WebSockets-over-HTTP2.
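The step_wait helpers mentioned above can be approximated by a small promise-based sketch. The name, defaults, and rejection behaviour here are illustrative, not the exact testharness.js signatures:

```javascript
// Hedged sketch of step_wait-style polling: resolve once cond() is
// true, reject if the timeout elapses first. Defaults are assumed
// values, not the testharness.js ones.
function stepWait(cond, timeoutMs = 1000, intervalMs = 10) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    (function poll() {
      if (cond()) {
        resolve();
      } else if (Date.now() - start >= timeoutMs) {
        reject(new Error("stepWait: condition not met before timeout"));
      } else {
        setTimeout(poll, intervalMs);
      }
    })();
  });
}
```

In a real test this style of polling would typically sit behind an assert_implements-style guard: check up front that the API exists, then wait for the observable side effect instead of sleeping for a fixed interval.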

== Gecko CI ==

* Moved all the Mozilla-created CSS reftests under vendor-import
  directory to run exclusively in the wpt harness, to save CI load,
  and make them work just like other tests. These are being slowly
  moved to the main testsuite which will improve their visibility and
  maintainability.

* Made it possible to disable LSAN checks entirely for directories
  with acute leaks.

* Added support for running tests in predefined groups, as part of the
  manifest-based-scheduling work for TaskCluster.

* Configured the CI to run multiple concurrent tests in multiple
  Firefox instances in order to reduce the total test time.

* Added support for preloading Firefox instances during test runs, to
  reduce the time spent waiting for Firefox to start.

* Added -backlog jobs containing tests for features that gecko doesn't
  have current plans to implement, so they are run less frequently.

== Sync ==

* Exported over 600 wpt changes from mozilla-central to GitHub.

* Started filing bugs when changed wpt tests have untriaged
  failures. This is currently enabled for some DOM and CSS components;
  if you want this enabled for another component please let me know.

== Python 3 ==

Perhaps the biggest single project of the year:

* Converted all the wpt support code including the harness, server,
  and associated tooling, to work with Python 2 and 3.

* Converted the custom request handler API to work with Python 3
  whilst still allowing tests to put arbitrary bytes on the wire.

* Converted all existing handlers to work with both Python 2 and 3
  without changing any test behaviour.

* Switched the upstream CI to use Python 3 for most test runs.

* Switched Gecko CI to use Python 3 for wpt test jobs (I believe this
  is the first test type that uses mozharness to make the switch).

* Switched the wpt sync bot to Python 3 (random technical note: this
  was done by first providing type annotations on all the functions
  and then making mypy pass for both Py 2 and 3. It wasn't great but
  I'm pretty sure it was the best thing we could have done).

Python 2 support in wpt is expected to persist until February
2021. Let's hope we don't have to do such a tedious migration again.

== Meta ==

* Finally adopted a Code of Conduct for the web-platform-tests
  organisation.

As always, thanks to all the people who have worked on
web-platform-tests this year, at Mozilla and elsewhere. All the
improvements here are the result of a huge amount of collaboration
from the whole web-platform-tests community. And of course apologies
for all the things I've no doubt forgotten to mention.


Re: web-platform-tests now runs under Python 3

2020-12-16 Thread James Graham

On 15/12/2020 20:00, James Graham wrote:
With bug 1678663 [1] now on central, web-platform-tests uses Python 3, 
both locally under mach, and on Gecko CI. This matches upstream, where 
the CI switched over a few weeks ago [2] and Google's CI which switched 
at the end of last week.


Apparently there is breakage if you're using Python 3.9; at the moment 
we only have up to 3.8 in the upstream CI.


I'll investigate the issue and propose adding 3.9 to the unit test 
configuration, but in the meantime if you encounter problems with Python 
3.9 please switch to 3.6, 3.7 or 3.8.



web-platform-tests now runs under Python 3

2020-12-15 Thread James Graham
With bug 1678663 [1] now on central, web-platform-tests uses Python 3, 
both locally under mach, and on Gecko CI. This matches upstream, where 
the CI switched over a few weeks ago [2] and Google's CI which switched 
at the end of last week.


For most users this change should be entirely transparent. It is however 
relevant to anyone writing a custom Python handler as part of 
their tests:


* All python handlers must run under Python 3.6+ as well as Python 2.7 *

The requirement to maintain 2.7 compatibility is temporary to give all 
wpt consumers a chance to migrate. The detailed timetable for the next 
steps is as follows [3]:


* Jan 1st 2021, wpt consumers are expected to be using Python 3
* Feb 1st 2021, Python 2 no longer supported

Thanks to all the people who worked on this transition, and particular 
thanks to Google/Igalia who took on the Sisyphean task of migrating all 
the existing handler functions through the bytes/string changes without 
altering the test behaviour. Unsurprisingly we have a lot of tests which 
are very sensitive to the exact bytes sent over the wire, so this was a 
big deal to get right.


As always, please file bugs, or contact me on Matrix [4], if you run into 
any problems as a result of these changes.


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1678663
[2] https://github.com/web-platform-tests/wpt/pull/26252
[3] https://github.com/web-platform-tests/rfcs/blob/master/rfcs/py_3.md
[4] https://chat.mozilla.org/#/room/#interop:mozilla.org


Re: HTTP/3 ready for testing

2020-11-20 Thread James Graham

On 20/11/2020 16:09, Dragana Damjanovic wrote:


Our implementation is ready for testing.


What's the interop testing story for HTTP/3? How confident are we that 
we won't run into implementation differences that look like web-compat 
issues?


Also, what's the priority on enabling other test types over HTTP/3? wpt 
has some QUIC support for WebTransport [1]; is that something that we 
should be looking to enable in our test infra?


[1] https://github.com/web-platform-tests/rfcs/blob/master/rfcs/quic.md



Accessibility review process

2020-11-11 Thread James Teh
(Also cross-posting to firefox-dev.)

Hi all,

Tl;dr: The Mozilla Accessibility Release Guidelines outline what is needed
to make user interfaces accessible to people with disabilities. If you
would like help from the accessibility team to determine whether your
change is accessible, you can now set the a11y-review flag on a bug in
Bugzilla and fill in the comment template.

While the accessibility team has always performed accessibility reviews,
the process around this has hitherto been very ad hoc and unclear. To
address this, we have now formalised the process for requesting an
accessibility review. You can find it documented here:

https://firefox-source-docs.mozilla.org/bug-mgmt/processes/accessibility-review.html

For convenience, I've included the text at the bottom of this email.

This process references the Mozilla Accessibility Release Guidelines, which
have existed for a while but haven't been widely publicised. Among other
things, you can use these as a checklist to guide you in what to consider
to ensure an accessible user interface, whether that be during design,
development or determining shipping readiness. That document is here:

https://wiki.mozilla.org/Accessibility/Guidelines

If you have any questions or feedback, please feel free to reach out to me
directly.

Thanks!

jamie
--
Accessibility Review Introduction

At Mozilla, accessibility is a fundamental part of our mission to ensure
the internet is "open and accessible to all," helping to empower people,
regardless of their abilities, to contribute to the common good.
Accessibility Review is a service provided by the Mozilla Accessibility
team to review features and changes to ensure they are accessible to and
inclusive of people with disabilities.
Do I Need Accessibility Review?

You should consider requesting accessibility review if you aren't certain
whether your change is accessible to people with disabilities.
Accessibility review is optional, but it is strongly encouraged if you are
introducing new user interface or are significantly redesigning existing
user interface.
When Should I Request Accessibility Review?

Generally, it's best to request accessibility review as early as possible,
even during the product requirements or UI design stage. Particularly for
more complex user interfaces, accessibility is much easier when
incorporated into the design, rather than attempting to retro-fit
accessibility after the implementation is well underway.

The accessibility team has developed the Mozilla Accessibility Release
Guidelines  which
outline what is needed to make user interfaces accessible. To make
accessibility review faster, you may wish to try to verify and implement
these guidelines prior to requesting accessibility review.

The deadline for accessibility review requests is Friday of the first week
of nightly builds for the release in which the feature/change is expected
to ship. This is the same date as the PI Request deadline.
How Do I Request Accessibility Review?

You request accessibility review by setting the a11y-review flag to
"requested" on a bug in Bugzilla and filling in the template that appears
in the comment field. For features spanning several bugs, you may wish to
file a new, dedicated bug for the accessibility review. Otherwise,
particularly for smaller changes, you may do this on an existing bug. Note
that if you file a new bug, you will need to submit the bug and then edit
it to set the flag.
Questions?

If you have any questions, please don't hesitate to contact the
Accessibility team:

   - #accessibility on Matrix
   

   or Slack
   - Email: accessibil...@mozilla.com


Re: Status of Ubuntu 20.04 as a development platform

2020-11-10 Thread James Graham

On 10/11/2020 14:17, Kyle Huey wrote:

On Tue, Nov 10, 2020 at 3:48 AM Henri Sivonen  wrote:


Does Ubuntu 20.04 work properly as a platform for Firefox development?
That is, does rr work with the provided kernel and do our tools work
with the provided Python versions?


rr works. I use 20.04 personally.


I've also been using 20.04 and all the Python bits have worked fine.


Re: Writing web-platform-tests using non-web-exposed APIs is now easier

2020-10-26 Thread James Graham

On 26/10/2020 14:11, Mirko Brodesser wrote:


Supporting synthesizing drag-and-drop events [1]


That seems like the kind of thing that ought to be covered by 
testdriver. Would something like

new test_driver.Actions()
  .pointerMove(0,0,{origin:elem1})
  .pointerDown()
  .pointerMove(0,0,{origin:elem2})
  .pointerUp()
  .send();

work?

Obviously if the concern is verbosity we could add helper functions for 
common patterns. Looking at the other functions there, I know wheel 
events are missing; that's a recent addition to the webdriver spec that 
we haven't implemented in marionette, but we can prioritise that work if 
it's needed for tests.
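A helper of the kind suggested above might look like the following sketch. The dragAndDrop name is hypothetical, not an existing testdriver API; the injectable constructor parameter is there purely so the chain-building logic can be exercised outside a browser:

```javascript
// Hedged sketch: wrap the Actions chain from this thread in a reusable
// drag-and-drop helper. In an actual wpt test you would pass (or
// default to) the global test_driver.Actions; ActionsCtor is
// parameterised here only so the call sequence is testable with a stub.
function dragAndDrop(source, target, ActionsCtor = test_driver.Actions) {
  return new ActionsCtor()
    .pointerMove(0, 0, {origin: source})
    .pointerDown()
    .pointerMove(0, 0, {origin: target})
    .pointerUp()
    .send();
}
```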



and convenience functions for checking the clipboard [2] would be helpful. In 
general, it's presumably worth checking what kind of convenience functions are 
offered [3] and used by mochitests.


Clipboard is a good idea.

Looking at SpecialPowers usage [1] reveals a lot that's just poking at 
other internal APIs, so it's hard to see what the underlying 
requirements are.


EventUtils usage [2] looks like it's mostly stuff that should be covered 
in testdriver, with the exception of IME stuff which might be 
interesting to investigate.


[1] https://paste.mozilla.org/stbR88is
[2] https://paste.mozilla.org/g4qMtnMx


Writing web-platform-tests using non-web-exposed APIs is now easier

2020-10-21 Thread James Graham
Two changes that recently landed in web-platform-tests make it easier to 
write tests that require access to non-web-exposed features:


* testdriver APIs now work in many situations involving multiple 
browsing contexts and origins


* SpecialPowers is now available in gecko-only web-platform-tests

testdriver
==

testdriver [1] is a cross-browser API provided by wpt for performing 
privileged actions, notably those involving trusted user gestures. 
Because it's cross-browser this is the preferred way to write tests 
requiring such interactions, where possible.


The default implementation of testdriver uses WebDriver on the backend, 
and where specs provide WebDriver APIs for automation (e.g. [2]) they 
can be exposed to wpt via testdriver.


Previously testdriver APIs only worked in the browsing context 
containing the test harness. Now it's possible to use them in any 
browsing context that is reachable from the context containing the 
harness (i.e. any context that can postMessage to the test window). Full 
documentation is given in [1].


Unfortunately, handling cases where the contexts can't communicate (e.g. 
rel=noopener) remains challenging because of limitations in WebDriver and 
the way we communicate between the harness and the browser. It isn't 
impossible to fix these cases, but it will require additional work. Please 
let me know when you're blocked writing cross-browser tests because of 
such limitations, since this will help prioritise the work.


SpecialPowers
=

The SpecialPowers API that's commonly used in gecko-specific tests in 
other harnesses is now available for use in gecko-specific 
web-platform-tests i.e. those that live in 
testing/web-platform/mozilla/tests. These tests obviously can't be run 
cross-browser and aren't upstreamed. Therefore it's preferable to write 
a shareable test using testdriver or similar where possible. However 
there are some cases where that simply doesn't provide the required 
features and rather than forcing authors to write mochitest-plain tests 
for those cases it's now (hopefully) possible to do everything required 
for content tests in wpt.


Currently this is only enabled for js (i.e. testharness) tests. If there 
are use cases that require SpecialPowers in other test types (e.g. wpt 
reftests) please let me know.


To be really clear here, this isn't suitable for anything that would 
have previously used e.g. a browser-chrome test. The intent is to make 
it possible to test web apis in a way that requires occasional access to 
internals, not to allow testing the internals themselves.


Future Work
===

We want to keep making this kind of improvement to wpt, especially where 
doing so allows us to drive compatibility improvements by writing 
cross-browser tests where it was previously impossible.


We are proposing and making an initial implementation of the Browser 
Testing API spec [3], which is designed to provide a place to specify 
test-only features that don't make sense in WebDriver; the initial scope 
is a function to invoke a garbage collection; this has been a 
longstanding feature request. If there are additional APIs that people 
think would make sense in such a spec, please let me know.


In general if there are places where wpt doesn't meet our requirements 
for testing, or there are chances to improve the test writing ergonomics 
please let me know and we can make sure that work gets appropriate 
priority. Shared tests remains a key technique for ensuring 
interoperability between different browser engines.


[1] http://web-platform-tests.org/writing-tests/testdriver.html
[2] https://w3c.github.io/permissions/#automation
[3] https://jgraham.github.io/browser-test/


Re: Adopting the black Python code style

2020-10-20 Thread James Graham

On 19/10/2020 22:01, Jeff Gilbert wrote:

I'm disappointed by that.


FWIW, last time I looked at black, I found that the compromises it makes 
in order to be fully automatic with minimal configuration meant it was 
liable to produce ugly or difficult-to-read code in some situations.


I understand that we've decided that people will get used to reading any 
code style over time, and therefore eliminating formatting concerns from 
the code writing process is a net win for productivity. So I'm not 
taking a position in opposition to this proposal, but it is not 
something I would have advocated personally.



[action maybe required] Verify recent wpt changes are still on autoland

2020-09-21 Thread James Graham
A bug in the wptsync when switching over to Python 3 caused it to drop 
some upstream changes when doing recent landings. In order to get things 
back on track I've made a patch which copies over the files from the 
current sync revision from GitHub to m-c and reapplies changes made to 
mozilla-central which hadn't yet landed in that GitHub revision. The 
commit containing that patch is now on autoland [1].


To verify I haven't missed anything it would be very helpful if anyone 
who's landed a change under testing/web-platform/tests in the last ~3 
weeks would confirm that their changes are still on autoland. If they 
are not please let me know and I'll ensure they are relanded.


Apologies for the inconvenience; the sync is getting some more sanity 
checks to ensure that this kind of thing doesn't happen again.


[1] 
https://hg.mozilla.org/integration/autoland/rev/1bf583a361709eca05877399cd19ec1c5022d4d7



Re: Intent to ship in Nightly channel and early Beta: `beforeinput` event and `InputEvent.getTargetRanges()`

2020-09-17 Thread James Graham

On 17/09/2020 17:14, Masayuki Nakano wrote:

web-platform-tests:
none for `beforeinput` (because it requires user input, and testdriver 
support was not available when other browsers implemented it), but there 
are a lot of tests for `getTargetRanges()` which I added (200+).
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/input-events 


(FYI: We have a lot of `beforeinput` event tests in mochitests:
https://searchfox.org/mozilla-central/search?q=%22beforeinput%22&path=%2Ftests%2F&case=true&regexp=false) 


This seems unfortunate. How confident are we that our implementation 
works like other browsers? Are there blockers other than the time needed 
to port the mochitests to wpt?




Re: Searchfox now gives info about tests at the top of test files!

2020-07-29 Thread James Graham

This is awesome!

On 28/07/2020 18:40, Andrew Sutherland wrote:

- How long your tests take to run (on average)!
- How many times your tests were run in the preceding 7 days[1]!
- How many times your tests were skipped in the preceding 7 days!
- What skip-patterns govern the skips from your test's 
mochitest.ini-style files!
- What WPT disabled patterns govern the skips from your WPT test's meta 
files!

- How many WPT subtests defy expectations!
- The wpt.fyi URL of the test you're looking at... by clicking on the 
"Web Platform Tests Dashboard" link in the "Navigation" panel on the 
right side of the searchfox page!


As an aside/reminder for people, for web-platform-tests in particular 
the dashboard at https://jgraham.github.io/wptdash/ can give you 
information about all tests in a component and is designed to answer 
questions like "which tests are failing in Firefox but passing in both 
Chrome and Safari", since knowing that can help prioritise issues that 
are likely web-compat hazards.



This information is brought to you by:

- The "source-test-file-metadata-test-info-all" taskcluster job defined 
at 
https://searchfox.org/mozilla-central/source/taskcluster/ci/source-test/file-metadata.yml 
that provides statistics on runs, skipped runs, and (non-WPT) skip 
conditions.
- The "source-test-wpt-metadata-summary" taskcluster job defined at 
https://searchfox.org/mozilla-central/source/taskcluster/ci/source-test/wpt-metadata.yml 
that derives its data from `mach wpt-metadata-summary`.  (Thanks :jgraham!)


And, as less of an aside, if there's more information people want, we 
can definitely add it to the summary task. The current featureset was 
based on the requirements of wptdash, but if there's more things that 
would be useful we should go ahead and add them.



Print reftest support in wpt

2020-07-29 Thread James Graham
It is now* possible to write reftests for print layout in wpt. These 
work like normal reftests except instead of comparing the rendering of 
two documents in a browser window, they compare all the pages of a 
paginated rendering of the documents.


wpt print reftests go through the full printing machinery, generating a 
PDF, so they exercise parts of the code that aren't covered by paginated 
reftests in the reftest harness (at some cost to performance).


Documentation for writing print reftests is at [1]

The tests are currently supported in Firefox and Chrome, but not yet in 
WebKit, pending support for automated PDF generation.


Please let me know if you have questions or concerns.

Thanks to hiro for all the help and feedback, and bdahl for help with 
pdf.js. Also to the wider wpt community, notably including Google's 
Ecosystem-Infra team and gsnedders, who gave feedback on the design 
document and did most of the code review for upstream changes.


[1] http://web-platform-tests.org/writing-tests/print-reftests.html

* As of a month or two ago; I should have written this email sooner :)


Re: Intent to change default try selector from `syntax` to `auto` (ACTION NEEDED for try syntax users)

2020-07-06 Thread James Graham

On 06/07/2020 17:50, Tom Ritter wrote:

Thank you for continuing to keep try syntax working. I know I'm
holding back progress by not spending the time to figure out how to
convert `./mach try -b do -p win32-mingwclang,win64-mingwclang -u all
-t none` to fuzzy  (maybe it's something like `./mach try fuzzy
"'mingwclang -talos"` ?).


AFAICT `mach try fuzzy --full -q mingwclang`

One of the nice things about `mach try fuzzy` is that without a `-q` 
argument you get a try-before-you-buy interface to select tasks. And you 
can save the query as a preset to use as e.g. `mach try --preset 
mingwclang`.



Re: Intent to Ship: HTML5 <dialog> element (Nightly Only)

2020-06-24 Thread James Teh
While this doesn't need to block shipping in Nightly, I think we should
consider advocating for the focus behaviour to be changed (and changing it
in Firefox if we can get it into the spec) before we ship to release.

The currently specified behaviour (and what both Firefox and Chrome
implement) is to focus the first focusable element in the dialog. However,
there are a few major problems with this, including:
1. This means a tabindex="-1" element could get focus, which is focusable
but not in the tab order. That's particularly strange if this gets focus in
preference to something which *does* participate in the tab order.
2. The first focusable element could be a long way down the page; e.g. a
dialog with a lot of preamble text and a form field or button after that
text. That is particularly problematic for screen reader users because that
will direct their reading cursor to the focused control, so they will
potentially not realise they're missing content above it. It might even
cause the page to scroll visually.

What I (and others in the accessibility community) am proposing is that the
dialog element itself should get focus, unless something within the dialog
has autofocus set, in which case we should focus that. There's a spec issue
for this, but it stalled:
https://github.com/whatwg/html/issues/1929

Another concern is that when the dialog is dismissed, focus gets thrown
back to the document. Instead, I think it should be returned to the element
which had focus before the dialog was shown, which is the recommended
pattern for good accessibility of dialogs. I don't think there is a spec
issue for this yet.

Jamie

On Thu, Jun 25, 2020 at 6:04 AM Sean Feng  wrote:

> Intent to Ship (Nightly Only) : Dialog Element
> ​
> Hi All,
> ​
> In bug 1645046, I
> intend to turn the HTML5 <dialog> element on by default in Nightly. It has
> been developed behind the dom.dialog_element.enabled preference.
> ​
> Meta Tracking Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=840640
> ​
> This is Nightly only because two things needs adjustment in our
> implementation.
> ​
> 1. The inert element isn't supported (bug 921504 -
> https://bugzilla.mozilla.org/show_bug.cgi?id=921504).
>
>   - For a modal dialog, elements that are not part of the dialog should be
> marked as inert, so these elements gain the inert-ness and can't be
> focused. Since we don't have inert supported yet, users could use tab to
> move focus out of the dialog, which is not expected.
>
>   - Next Step: The implementation of the inert element is going to be
> started soon (I think), and we can also discuss supporting the
> inert-ness without support for the inert element
> ​
> 2. We currently use a temporary solution for the layout of modal dialog.
> (bug 1637310 - https://bugzilla.mozilla.org/show_bug.cgi?id=1637310).
>
>- Currently the spec defines modal dialog as an absolute element,
> along with some weird calculation requirement to make the element
> centered. This modal dialog layout felt like a hack to us, so we didn't
> follow it, and instead, we used a temporary solution
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1642364) to make the modal
> dialog as a centered fixed element.
>
>    - Next Step: The CSSWG agreed to switch modal dialog to be a centered
> fixed element
> (https://github.com/w3c/csswg-drafts/issues/4645#issuecomment-642130060),
> which is the same as the temporary solution we applied in bug 1642364.
> So the temporary solution may become a permanent solution after things
> have been finalized in spec.
> ​
> *Status in other browsers*
> Chrome has it enabled by default since Release 37
> https://www.chromestatus.com/feature/5770237022568448
>
> *web-platform-tests*:
>
> https://github.com/web-platform-tests/wpt/tree/master/html/semantics/interactive-elements/the-dialog-element,
>
> We have them all enabled and passing except for those layout and inert
> related ones.
>
> *Spec* - https://html.spec.whatwg.org/multipage/#the-dialog-element
>
> This feature was previously discussed in
>
> https://groups.google.com/d/msg/mozilla.dev.platform/vTPGW1aJq24/JnEnoH3BEAAJ
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Proposal: remove support for running desktop Firefox in single-process mode (e10s disabled) anywhere but in tests

2020-06-10 Thread James Teh
In general, this obviously makes a lot of sense. However, because there is
so much extra complication for accessibility when e10s is enabled, I find
myself disabling e10s in local opt/debug builds to isolate problems to the
core a11y engine (vs the a11y e10s stuff). The ability to do this was
instrumental in some of the perf work I've been doing lately. For example,
it allowed me to determine that some of the perf problems I had originally
attributed to the a11y e10s layer were actually problems in the core a11y
engine. I'm sure there's some way I could have achieved that with e10s
enabled, but it probably would have taken me weeks longer.

All of that said, I realise this is an obscure case and I don't want to
stand in the way of progress; I'm well aware legacy has to die eventually.
Nevertheless, I thought I'd at least flag this concern.

Btw, the need to isolate the core a11y engine from the a11y e10s stuff is
also why some of our a11y tests still run with e10s disabled in automation.

Jamie

On Thu, Jun 11, 2020 at 4:56 AM David Major  wrote:

> I agree that it's a bad idea for users to be running permanently with this
> setting on their daily driver browsers.
>
> But the environment variable has been a huge productivity enhancer to
> reduce my mental load when setting up an extra-hairy debug session or
> taking system traces.
>
> I wish we could have a way to allow this for one-off cases but not
> long-term usage. Unfortunately I can't settle for a proposal like "allow it
> only in debug or only in nightlies" because I often need to debug actual
> user-facing builds. Is there any way we could build some auto-expiration
> into this setting, like maybe you'd have to set the env var equal to the
> build ID or today's date?
>
>
>
> On Wed, Jun 10, 2020 at 2:44 PM Dave Townsend 
> wrote:
>
> > Non-e10s is such a different environment that I don't think we have any
> > hope of keeping it working without running the full test suite in that
> mode
> > and I don't think anyone wants to do that. Now that this has started
> > breaking I think it is actively harmful to our users for us to allow them
> > to disable e10s.
> >
> > On Wed, Jun 10, 2020 at 11:30 AM Gijs Kruitbosch <
> gijskruitbo...@gmail.com
> > >
> > wrote:
> >
> > > (Copied to fx-dev; Replies to dev-platform please.)
> > >
> > > Hello,
> > >
> > > Just over a year ago, I started a discussion[0] about our support for
> > > disabling e10s. The outcome of that was that we removed support for
> > > disabling e10s with a pref on Desktop Firefox with version 68, except
> > > for use from automation. We kept support for using the environment
> > > variable. [1]
> > >
> > > Last week, we released Firefox 77, which turned out to break all
> > > webpages sent using compression (like gzip) if you had disabled e10s
> > > using this environment variable. [2]
> > >
> > > So here we are again. I'd like to propose we also stop honouring the
> > > environment variable unless we're running tests in automation. We
> > > clearly do not have sufficient test coverage to guarantee basic things
> > > like "the browser works", it lacks security sandboxing, and a number of
> > > other projects require it (fission, gpu process, socket process, ...),
> > > so I think it's time to stop supporting this configuration at all.
> > >
> > > I hope to make this change for the 79 cycle. I'm open to arguments
> > > either way about what to do for 78 esr (assuming the patch for 79 turns
> > > out to be simple; the work to remove the pref had a number of annoying
> > > corner-cases at the time).
> > >
> > > Please speak up if you think that this plan needs adjusting.
> > >
> > > ~ Gijs
> > >
> > >
> > > [0]
> > >
> > >
> >
> https://groups.google.com/d/msg/mozilla.dev.platform/cJMzxi7_PmI/Pi1IOg_wCQAJ
> > > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1548941
> > > [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1638652
> > > ___
> > > firefox-dev mailing list
> > > firefox-...@mozilla.org
> > > https://mail.mozilla.org/listinfo/firefox-dev
> > >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Intent to ship: ParentNode#replaceChildren

2020-05-26 Thread James Graham

On 25/05/2020 16:45, Alex Vincent wrote:

Effective 2020-05-27, I intend to land the ParentNode.replaceChildren feature
in mozilla-central, on by default. This has not been developed behind a
preference, as both I and the reviewers believe this is a small change.
Status in other browsers is unimplemented.
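For readers unfamiliar with the method: `replaceChildren(...nodes)` removes every existing child and then inserts the given nodes in order. A rough Python model of the semantics (illustrative only; it ignores the spec's validity checks and string-to-Text conversion):

```python
class Node:
    """Minimal stand-in for a DOM node, for illustration only."""

    def __init__(self, name, *children):
        self.name = name
        self.children = list(children)

    def replace_children(self, *nodes):
        # ParentNode.replaceChildren(...nodes): drop all existing
        # children, then insert the given nodes in order.
        self.children = list(nodes)


parent = Node("ul", Node("li"), Node("li"))
parent.replace_children(Node("li"))
print(len(parent.children))  # → 1
```

Called with no arguments it simply empties the node, which is the common "clear container" idiom the method is expected to replace (repeated `removeChild` calls or `textContent = ""`).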

*Product*: N/A
*Bug to turn on by default*:
https://bugzilla.mozilla.org/show_bug.cgi?id=1626015
Please set the *dev-doc-needed* keyword.

*Spec: *https://dom.spec.whatwg.org/#dom-parentnode-replacechildren

Other browsers' bugs to implement:

- https://bugs.webkit.org/show_bug.cgi?id=198578
-

https://bugs.chromium.org/p/chromium/issues/detail?id=1067384&q=replaceChildren


Does this have appropriate cross-browser (i.e. wpt) test coverage?



Re: Testing Rust code in tree

2020-05-12 Thread James Graham

On 11/05/2020 23:54, Mike Hommey wrote:

On Mon, May 11, 2020 at 03:37:07PM -0700, Dave Townsend wrote:

Do we have any standard way to test in-tree Rust code?

Context: We're building a standalone binary in Rust that in the future will
be distributed with Firefox and of course we want to test it. It lives
in-tree and while we could use something like xpcshell to drive the
produced executable and verify its effects it would be much nicer to be
able to use Rust tests themselves. But I don't see a standard way to do
that right now.

Is there something, should we build something?


https://searchfox.org/mozilla-central/rev/446160560bf32ebf4cb7c4e25d7386ee22667255/python/mozbuild/mozbuild/frontend/context.py#1393


If it helps to have an example, Geckodriver is using RUST_TESTS

https://searchfox.org/mozilla-central/source/testing/geckodriver/moz.build#9-20

It's not depending on gecko, so it's a simpler case than the one Lina 
mentioned.
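For anyone copying the pattern, the moz.build side is just a list of Cargo test target names; a sketch with a hypothetical crate name ("mybinary" is not real; see the searchfox link above for geckodriver's actual entries):

```python
# moz.build fragment (sketch). "mybinary" is a hypothetical crate name;
# each entry names a Cargo target whose #[test] functions should be run.
RUST_TESTS = [
    "mybinary",
]
```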



Intent to prototype and ship: ARIA annotations

2020-02-27 Thread James Teh
In Firefox 75, we intend to enable ARIA annotations by default.

Summary: This adds two new ARIA roles, a new aria-description attribute and
expands aria-details. These changes are needed to support screen reader
accessibility of comments, suggestions and other annotations in published
documents and online word processing scenarios such as Google Docs, iCloud
Pages, and MS Office Online. This is not currently possible without
resorting to live region hacks, which are not as reliable as semantics and
do not work well with Braille displays. As a result, screen reader users
have non-optimal support for collaboration features of online word
processors.
Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1608975
Standard: Incorporated into ARIA 1.3 editor's draft. Explainer with links
to pull requests: https://github.com/aleventhal/aria-annotations
Platform coverage: desktop
Preference: None. Because this simply exposes new values to accessibility
clients using existing accessibility APIs, we do not anticipate any
problems.
DevTools bug: None. Both the DOM inspector and accessibility inspector will
reflect this new information without any additional work.
Other browsers: Chrome shipping enabled by default in version 82:
https://chromestatus.com/feature/4666935918723072
web-platform-tests: None. ARIA WPT is still somewhat of a work in progress
and hasn't been updated for ARIA 1.2 yet, let alone 1.3. We will have Gecko
tests, though.
Secure contexts: Not restricted to secure contexts, consistent with the
rest of ARIA.
Is this feature enabled by default in sandboxed iframes?: Yes; has no
impact on sandboxed iframes.
Link to standards-positions discussion:
https://github.com/mozilla/standards-positions/issues/253
How stable is the spec: The spec is straightforward and has been reviewed
and approved by stakeholders across the accessibility industry. We do not
anticipate any fundamental changes.

Jamie


Re: Intent to prototype and ship: CSS comparison functions: min() / max() / clamp()

2020-02-21 Thread James Graham

On 21/02/2020 01:42, Emilio Cobos Álvarez wrote:
web-platform-tests: There's a variety of tests in: 
https://wpt.fyi/results/css/css-values?label=master&label=experimental&aligned&q=minmax%7Cclamp 


Do we have any sense of how good the test coverage is?

Also, not having enabled causes some confusing (but technically 
correct!) behavior for developers[1][2], which is IMO worth addressing, 
and also kinda likely to show up as compat bugs (specially on mobile 
where env() is used the most).


I assume the word "env()" is missing between "having enabled" and "causes"?


Re: Intent to implement: CSS conic-gradient

2020-02-17 Thread James Graham

On 16/02/2020 10:46, Tim Nguyen wrote:


web-platform-tests:
https://searchfox.org/mozilla-central/search?q=conic-gradient&case=false&regexp=false&path=testing%2Fweb-platform


That looks like some tests for the parsing, but afaict not much for 
rendering. Do we have a sense of how good the test coverage here is?



Re: Intent to Deploy: ThreadSanitizer

2020-02-10 Thread James Graham

On 04/02/2020 09:41, Christian Holler wrote:

One of the problems with deploying ThreadSanitizer in CI is that we have a
fair amount of existing data races that orange pretty much every test we
have. In order to solve this situation, we are currently working on the
following strategy:


1. Add a Linux TSan build as Tier1 to avoid build regressions (done in
   bug 1590162)

2. Run a set of tests and generate a runtime suppression list for all of
   the existing issues.


Which set of tests are we planning to run? If it includes 
web-platform-tests (which I imagine it should in order to get good 
coverage), we also need a strategy to handle importing tests which show 
previously unknown races, without turning jobs orange (in practice this 
means having some annotation in the relevant ini file and code to update 
the annotation based on wptreport logs).



Re: Intent to implement: AVIF (AV1 Image Format) support

2020-01-16 Thread James Graham

On 13/01/2020 21:48, Jon Bauman wrote:

AVIF is an image format based on the AV1 video codec [1] from the Alliance
for Open Media [2]. AV1 support shipped in release 55 [3] and is currently
supported in Chrome, but not Safari. There is an open issue for AVIF
support in Chrome [4].

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=avif

Standard: https://aomediacodec.github.io/av1-avif/

Platform coverage: All

Restricted to secure contexts: No. There's currently no mechanism to
enforce this for image formats, but we can revisit this before enabling
this by default. The same goes for CORS.

Target Release: 76

Preference behind which this will be implemented: image.avif.enabled,
turned off by default.
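For context on what decoders will be sniffing: AVIF is an ISOBMFF container whose leading `ftyp` box carries a major brand of `avif` (or `avis` for image sequences). A deliberately simplified check, for illustration only (real type sniffing also walks the compatible-brands list):

```python
def looks_like_avif(data: bytes) -> bool:
    """Very simplified AVIF sniff: an ISOBMFF 'ftyp' box whose major
    brand is avif/avis. Illustrative, not a spec-complete check."""
    if len(data) < 12 or data[4:8] != b"ftyp":
        return False
    return data[8:12] in (b"avif", b"avis")


print(looks_like_avif(b"\x00\x00\x00\x1cftypavif"))  # → True
print(looks_like_avif(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # → False
```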


Is there some kind of cross-implementation testsuite for this format? Or 
how are we confident that we won't end up in a scenario where Chrome and 
Gecko have observable differences in their handling of AVIF?



Re: Visibility of disabled tests

2020-01-11 Thread James Graham

On 09/01/2020 10:46, David Burns wrote:
I think a lot of the problem is not necessarily a technical issue, 
meaning I am not sure that tooling will solve the problem, but it is 
more of a social problem.


To expand a little on this; we've had various attempts at making
disabled tests more visible. ahal had "Test Informant" which for a while 
was giving weekly reports on how many tests were being newly disabled 
and links to full details on all disabled tests. For wpt the interop 
dashboard currently shows the number of disabled tests per component 
(e.g. [1]). So far I don't think we've seen great success from any of 
these efforts; I don't have precise data but the general pattern is that 
almost all tests that are disabled remain disabled indefinitely.


Adding data to searchfox is an interesting alternative that I hadn't 
previously considered; it would make the data ambiently available to 
people looking at the tests/code rather than requiring specific action 
to look at a dashboard or read a recurring email. So it definitely seems 
like it could be worth experimenting with that. But as David says, a lot 
of the problem is in the disconnect between knowing that an issue exists 
and giving priority to actually fixing the issue.


[1] 
https://jgraham.github.io/wptdash/?tab=Gecko+Data&bugComponent=core%3A%3Adom%3A+core+%26+html



Re: Intent to Prototype: beforeinput event (disabled by default even in Nightly channel)

2020-01-11 Thread James Graham



On 08/01/2020 09:54, Masayuki Nakano wrote:
Summary: "beforeinput" event is useful for web apps which manage input 
data into `<input>`, `<textarea>` and/or `contenteditable`. This event 
is fired before our editor modifies the value or DOM tree, and some types are 
cancelable. Therefore, if some user inputs are not acceptable for the 
web app, they can cancel it and/or modify the value or the DOM tree 
as they want.


Bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=970802
Standard: https://w3c.github.io/uievents/#event-type-beforeinput
Platform coverage: all
Preference: dom.input_events.beforeinput.enabled (false by default for now)
DevTools bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1607686


Are there wpt tests for this feature? What's the implementation status 
in other browsers?



Re: Visibility of disabled tests

2020-01-11 Thread James Graham




On 07/01/2020 13:29, Johann Hofmann wrote:

/For disabling tests, review from the test author, triage owner or a 
component peer is required. If they do not respond within 2? business 
days or if the frequency is higher than x, the test may be disabled 
without their consent, but the triage owner *must* be needinfo'd on such 
a bug in this case./


This seems like a specific case of a more general problem.

Sometimes additional information comes up which means that a bug needs 
to be retriaged. For example a bug that's now observed to affect many 
users, or one that has a previously unknown web-compat impact. An 
intermittent becoming problematically frequent seems to clearly fit into 
this general category. So the process should be whatever the normal 
process is for the case where there's additional information that needs 
to be assessed by the triage owner. It's possible that "needinfo the 
triage owner" is indeed what one is supposed to do in such a case, but I 
can't find where that's documented; afaict the triage document at [1] 
doesn't mention the possibility of bugs returning to triage.


So in addition to the specific changes for intermittent handling, can we 
document how one nominates a bug for retriage in general (or point me at 
those docs if they already exist) and document some of the cases where 
retriage is appropriate.


[1] https://firefox-bug-handling.mozilla.org/triage-bugzilla



This year in web-platform-tests - 2019 edition

2019-12-20 Thread James Graham

Welcome to "This year in web-platform-tests"; possibly the lowest
frequency regular project update at Mozilla. In a break from tradition
the MoCo All Hands moving to January means that this is the first 
edition to come from a normal office and not a hotel corridor somewhere 
in North America.


This year has seen big improvements in our infrastructure, the addition
of some new features to improve parity with Gecko-specific test
harnesses, and a lot of improvements to our sync. We have also started
to see the overall project focus shift a little away from ensuring that
we have great infrastructure for writing cross-browser tests, to
ensuring that engineers are seeing interop problems and acting on them.
I expect to see more work along those lines in the forthcoming year.

== Gecko CI ==

* Added wpt support for GeckoView and enabled it as T-1 across all CI
  platforms

* Added fission support and enabled it on Linux64-qr and Win10-qr
  builds

* Added support for multiple test statuses so that we can mark
  unstable tests directly rather than disabling them (special shoutout
  to Outreachy intern Nikki Sharpley who did this work, as well as
  adding support for annotating test expectations with multiple statuses
  on import).

* Fixed the metadata update to produce more minimal if conditions and to
  remove stale metadata.

* Started the work to remove duplicate test coverage across test
  types, and so reduce CI load.
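The multiple-statuses work listed above boils down to a small change in how a logged result is judged against its expectation. Roughly (function and parameter names here are illustrative, not mozlog's actual API):

```python
def result_is_expected(status, expected, known_intermittent=()):
    # A result now counts as expected if it matches either the primary
    # expectation or any annotated intermittent status.
    return status == expected or status in known_intermittent


print(result_is_expected("FAIL", "PASS", known_intermittent=("FAIL",)))  # → True
print(result_is_expected("TIMEOUT", "PASS", known_intermittent=("FAIL",)))  # → False
```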

== Documentation ==

* Launched a revamped https://web-platform-tests.org with improved
  documentation and full text search.

== Testability ==

* Added support for `TestRendered` event in reftests; analogous to
  `MozReftestInvalidate` in Gecko reftests

* Implemented a crash test type, after Emilio found that running crash
  tests cross-browser produced a large number of previously unknown
  bugs.

* Turned on the http/2 support developed last year, now that all the
  CI systems are using a new enough Python. Added a `.h2` flag for
  test names to indicate that they are using http/2.

* Added support for fuzzy annotations in reftests, so that we are able
  to ignore differences up to a specified tolerance in terms of pixel
  values.

* Added a `PRECONDITION_FAILED​` status for tests and associated
  `assert_precondition` function for cases where the tests depend on a
  specific feature being implemented to work as intended.

* Added a `promise_setup` function to `testharness.js` to allow
  sequencing asynchronous test setup steps before tests defined with
  `promise_test`.

* Changed the reftest logic so that in any test all mismatch
  conditions must be satisfied and at least one match condition must
  be satisfied. This corresponds to the behaviour that many existing
  tests were expecting.

* Required an opt-in for "single page" tests to prevent tests that
  error early being incorrectly documented as single-page.

* Required Ahem to be supplied as a webfont rather than as a system
  font, avoiding various problems with system fonts.
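Two of the items above, fuzzy annotations and the new match/mismatch rule, can be sketched together. Under an annotation like `fuzzy: maxDifference=0-2;totalPixels=0-100`, two screenshots are considered equal if the largest per-channel pixel difference and the count of differing pixels both fall within the allowed (inclusive) ranges; a test then passes when every mismatch (`!=`) reference differs and at least one match (`==`) reference matches. This is an illustrative sketch, not the harness's actual code:

```python
def fuzzy_match(pixels_a, pixels_b, max_difference, total_pixels):
    """pixels_*: equal-length sequences of (r, g, b, a) tuples;
    max_difference / total_pixels: inclusive (lo, hi) ranges."""
    differing, max_seen = 0, 0
    for a, b in zip(pixels_a, pixels_b):
        diff = max(abs(ca - cb) for ca, cb in zip(a, b))
        if diff:
            differing += 1
            max_seen = max(max_seen, diff)
    lo_d, hi_d = max_difference
    lo_t, hi_t = total_pixels
    return lo_d <= max_seen <= hi_d and lo_t <= differing <= hi_t


def reftest_passes(match_ok, mismatch_ok):
    # match_ok: for each == reference, did the pages match?
    # mismatch_ok: for each != reference, did the pages differ?
    # All mismatch conditions must hold, and at least one match
    # condition must hold (vacuously true with no == references).
    return all(mismatch_ok) and (not match_ok or any(match_ok))


a = [(0, 0, 0, 255)] * 4
b = [(2, 0, 0, 255)] + [(0, 0, 0, 255)] * 3
print(fuzzy_match(a, b, (0, 2), (0, 1)))  # → True: one pixel, off by 2
print(reftest_passes(match_ok=[True, False], mismatch_ok=[True]))  # → True
```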

== Results Notification ==

* Created a repository
  (https://github.com/web-platform-tests/wpt-metadata) holding links
  between test results and browser bugs, and the ability to query it

* Implemented https://jgraham.github.io/wptdash/ a dashboard that by
  default shows Gecko-only failures organised by bug component, and
  allows filtering according to whether the result is linked to a
  Gecko bug.

* Initial implementation of auto-filing triagable bugs for test changes
  that show various kinds of more serious problems like Gecko-only
  failures or regressions (e.g. changing from PASS to FAIL). We will be
  rolling this out across components in the new year; please get in
  touch if you'd like to be an early adopter.

* Added support for storing screenshots for failing reftests on
  wpt.fyi and a reftest-analyzer-like UI for viewing them.

* Lots of new search operators in wpt.fyi, including the ability to
  query by result status

* Launched https://wpt.live as a replacement for https://w3c-test.org

== Sync ==

* About 700 changes made by Gecko engineers were upstreamed from
  mozilla-central to GitHub, and over 2100 changes were upstreamed
  from Chromium, together making up a little over half of the total
  ~5000 PRs merged this year.

* Over half of Chromium CLs merged in Q3 that contained test changes
  modified web-platform-tests (we don't yet have this data for Gecko).

* Lots of work to make the Gecko sync more reliable and performant;
  sync latency is now tracked at https://arewewptyet.com/sync.html
  Since the middle of the year this has remained under 1 week apart
  from a period in October/November when both taskcluster and autoland
  migrations caused disruption.

* Changed the integration branch for the sync from mozilla-inbound to
  autoland

* Started work on Phabricator integration for earlier notification
  when Gecko test changes are going to break upstream CI.

== GitHub Infrastructure ==

* Added daily runs of webkitgtk_minibro

Re: Intent to prototype: CSS property `text-underline-position`

2019-12-06 Thread James Graham

On 03/12/2019 17:50, Jonathan Kew wrote:

web-platform-tests: 
https://wpt.fyi/results/css/css-text-decor/parsing?label=master&q=text-underline-position 


That looks like it's just testing the property computation. Do we also 
have tests for the layout effect, or is that difficult to do in this case?



Re: Reminder: Planned Taskcluster migration this weekend (Nov 9)

2019-11-08 Thread James Graham

On 04/11/2019 22:00, Chris Cooper wrote:

tl;dr:

Taskcluster, the platform supporting Firefox CI, will be moving to a new 
hosting environment during the tree closing window (TCW) this coming Saturday, 
Nov 9. Trees will be closed from 14:00 UTC to 23:00 UTC. CI services will be 
available as soon as possible thereafter, pending verification of the new setup.


I'd just like to call out how great the TaskCluster team have been at 
helping projects moving to the community instance.


We have just finished moving the upstream wpt testing to the new setup, 
and the TaskCluster team proactively made sure we were aware the change 
was happening, tried to identify likely issues in our setup, filed PRs 
for the most obvious changes, and made sure we were on track to finish 
by the deadline. They also helped us work through teething issues once 
we were running on the new instance.


Overall the experience has been as good as you could wish for with a 
large infrastructure change. Thanks to everyone involved; I hope the 
Gecko migration goes just as smoothly!



Re: Intent to ship: CSS subgrid

2019-10-22 Thread James Graham

On 22/10/2019 00:07, L. David Baron wrote:

On Monday 2019-10-21 16:01 -0500, Mike Taylor wrote:

Hi David,

On 10/21/19 7:22 AM, L. David Baron wrote:

(That we haven't applied the policy that much because we've granted
exceptions because other browsers have shipped the features reduces
the effectiveness of the policy and its ability to meet its goals.
This is the sort of policy that is most effective if it applies to
the largest number of thngs, both because it has larger effects and
because it sets much clearer expectations about what will be limited
to secure contexts.  I think it's worth considering reducing that
exception to the existence of actual web compat problems from the
secure contexts limitation.)


Can you unpack this a little here?

Are we saying we would ship features in non-secure contexts because sites
theoretically rely on that behavior due to another browser shipping as
non-secure before we did? (This sounds like the current rationale for
exceptions, I think).

Or are we saying we would ship a feature by default as secure and be willing
(compelled?) to move to non-secure if we discover sites rely on other
significant market share browsers not requiring a secure context for said
feature -- once our users reported the bugs (or we did some kind of analysis
beforehand)?


I'm saying that we've been doing what you describe in the first
paragraph but maybe we need to shift to what you describe in the
second paragraph in order for the policy on secure contexts to be
effective.


Shipping a Gecko-first feature limited to secure contexts, when we don't 
have evidence that other browsers will follow suite, runs the risk of 
sites breaking only in Gecko once the feature is widely deployed. 
Although we can always change the configuration after breakage is 
observed, the time taken to receive a bug report, diagnose the issue, 
and ship the fix to users, can be significant. This is a window during 
which we're likely to lose users due to the — avoidable — compatibility 
problems.


I would argue that in the case where:

* There is no compelling security or privacy reason for limiting a 
feature to secure contexts


* There is reason to suspect that other browsers will ship a feature in 
both secure and insecure contexts (e.g. because limiting to secure 
contexts would be significant extra work in their engine, or because 
their past behaviour suggests that the feature will be available everywhere)


the trade-off between nudging authors toward https usage, and avoiding 
web-compat hazards should fall on the side of minimising the 
compatibility risk, and so we should ship such features without limiting 
to secure contexts.


Alternatively we could have a policy that allows us to initially ship 
Gecko-first features meeting the above criteria as secure-context only, 
but that requires us to remove the limit if other browsers start 
shipping to their development channels without a secure-context limit. 
That minimises the compatibility risk — assuming we follow through on 
the process — but adds extra bureaucracy and has more steps to go wrong. 
I doubt the incremental effect on https adoption of this policy variant 
is worth the additional complexity, and suggest we should use this 
approach only if we misjudge the intentions of other vendors.



Intent to remove: Fennec

2019-09-19 Thread James Willcox
Folks,

As you may be aware, Fennec has been frozen on 68 ESR with the expectation
that Fenix will become the new Firefox for Android in 2020. For reasons of
hygiene and simplification, I propose that we begin removing Fennec from
mozilla-central as soon as feasible. There are a few known blockers
currently being tracked under bug 1582218. If you know of any other issues,
please let me know and/or file blockers.

Obviously, we will not be removing anything related to GeckoView. This
means that mobile/android/geckoview/, MOZ_WIDGET_ANDROID, etc. will all be
sticking around. Only the Fennec frontend and any platform code that needed
to disambiguate Fennec from GeckoView at runtime[0] will be targeted.

Thanks,
James

[0] https://searchfox.org/mozilla-central/search?q=jni%3A%3AIsFennec&path=


Re: PSA: Improvements to infrastructure underpinning `firefox-source-docs`

2019-08-27 Thread James Graham

On 27/08/2019 16:24, Andrew Halberstadt wrote:
RST is foreign to most 
It turns out* that firefox-source-docs also supports markdown format, so 
if you have existing docs in that format, or simply feel like needing to 
learn ReST is a significant impediment to writing docs, that option is 
available.


The tradeoff is that the featureset of ReST is much larger than that of 
markdown, so you have to give up some expressiveness and docs/source 
integration features of Sphinx.


* After some discussion with ahal about the merits of enabling markdown, 
it turned out that maja_zf already had and the marionette docs are using it.



Re: Support for annotating intermittent tests with multiple statuses

2019-07-25 Thread James Graham

On 25/07/2019 20:39, Andrew Halberstadt wrote:
Excellent work Nikki! This will give us more tools to help with the war 
on orange. Are there plans (or at least bugs on file) to support other 
harnesses?


I'll file some bugs.

On Thu, Jul 25, 2019 at 2:47 PM James Graham <ja...@hoppipolla.co.uk> wrote:


In general the order of priority is:
* First try to fix the underlying issue
* If that isn’t possible then mark as intermittent
* If marking as intermittent is insufficient e.g. because the test is
affecting others in the job, then it should be disabled.


 From what you said earlier, shouldn't the second bullet be:
* If the intermittent is infrequent enough, leave it alone and let 
sheriffs star it


Yes, agreed.



Support for annotating intermittent tests with multiple statuses

2019-07-25 Thread James Graham
Thanks to the amazing work of Outreachy participant Nikki Sharpley, our 
test result logging infrastructure now supports marking tests as having 
multiple expected statuses. This can be used instead of disabling tests 
where the results are intermittent. Harness-level support is currently 
available in the web-platform-tests harness, but the feature could be 
added to any of our mozlog-using test harnesses.


== Bugzilla ==

This work is tracked in bug 1552879 [1]; if you notice any problems with 
the changes, please file bugs that block that one.


== Technical Details ==

This change adds a new field, `known_intermittent`, to the mozlog 
`test_end` and `test_status` actions. This field contains a list of 
statuses that represent known intermittent results of the test. The 
`expected` field remains unaffected, so this is a backwards-compatible 
change, and known mozlog consumers have been updated, but any other 
consumers may require patches to correctly handle known intermittent 
statuses.


The in-tree logger formatters, including the tbpl formatter that’s used 
in CI, have been updated so that statuses matching a known intermittent 
are called out explicitly with e.g. `TEST-KNOWN-INTERMITTENT`. The 
mozharness code has been updated so that known intermittents don’t turn 
jobs orange.
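As a sketch, the consumer-side handling described above might look like the following (the action layout uses the structured-log fields named in this email; the `classify` helper is illustrative, not actual mozlog code):

```python
# Illustrative classification of a mozlog `test_status` action carrying
# the new `known_intermittent` field. A status matching a known
# intermittent is called out explicitly but doesn't turn the job orange.

def classify(action):
    status = action["status"]
    # In structured logs, `expected` is typically only present when the
    # result differs from the expectation.
    expected = action.get("expected", status)
    if status == expected:
        return "EXPECTED"
    if status in action.get("known_intermittent", []):
        return "TEST-KNOWN-INTERMITTENT"
    return "TEST-UNEXPECTED-%s" % status

action = {
    "action": "test_status",
    "test": "/example/test.html",
    "subtest": "subtest 1",
    "status": "FAIL",
    "expected": "PASS",
    "known_intermittent": ["FAIL"],
}
print(classify(action))  # TEST-KNOWN-INTERMITTENT
```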


== Integration with wpt ==

wpt tests can be annotated with multiple statuses in the metadata files, 
by replacing the single `expected` value with a list e.g.


```
[test.html]
  expected:
if os == “linux”: [PASS, FAIL]
[PASS, TIMEOUT]
```

In this case the expected status is PASS everywhere; FAIL is a known 
intermittent on Linux, and TIMEOUT is a known intermittent on all other 
platforms. This is documented on MDN [2].
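The way such a conditional expectation resolves can be illustrated with a small helper (hypothetical code mirroring the semantics just described, not wptrunner itself):

```python
# Resolve the metadata snippet above for a given platform: the first
# matching branch supplies the status list, whose first entry is the
# primary expectation and the rest are known intermittents.

def resolve(os):
    statuses = ["PASS", "FAIL"] if os == "linux" else ["PASS", "TIMEOUT"]
    return {"expected": statuses[0], "known_intermittent": statuses[1:]}

print(resolve("linux"))  # {'expected': 'PASS', 'known_intermittent': ['FAIL']}
print(resolve("win"))    # {'expected': 'PASS', 'known_intermittent': ['TIMEOUT']}
```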


The sync bot has been updated to prefer marking imported tests that are 
intermittent with multiple statuses rather than disabling them.


== Isn’t this a bad idea? ==

Obviously marking tests with multiple statuses represents a loss of 
coverage compared to having a single status for each test. In general 
it’s always preferable to fix a test or fix gecko so that tests reliably 
return a single result. However, marking tests with multiple statuses is 
a clear improvement over disabling tests:


* Continuing to run the test allows us to detect more serious 
regressions like crashes
* It is possible to detect cases where the annotation is inaccurate 
because some of the statuses are no longer recorded (e.g. a [PASS, FAIL] 
that starts passing all the time or failing all the time).


The fact that marked tests will no longer turn treeherder orange does 
mean that, initially at least, we won’t have the sheriffing data we 
currently have to know when an intermittent becomes more frequent or 
turns into a permafail. For this reason it makes sense to only mark 
tests with known intermittent statuses in cases they would otherwise be 
disabled and not for tests that only fail very infrequently.


== So I should never disable a wpt? ==

In general the order of priority is:
* First try to fix the underlying issue
* If that isn’t possible then mark as intermittent
* If marking as intermittent is insufficient e.g. because the test is 
affecting others in the job, then it should be disabled.


== Future Work ==

Nikki will spend the remainder of her internship starting work on the 
tooling to detect cases where the range of allowed statuses doesn’t 
match the range of observed statuses for a specific test. This will 
unlock the possibility to auto-remove superfluous expectations, or flag 
tests that show intermittent statuses more frequently as likely 
regressions in the future.


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1552879
[2] 
https://developer.mozilla.org/en-US/docs/Mozilla/QA/web-platform-tests#Metadata



Re: Intent to ship: Visual Viewport API on Android

2019-05-13 Thread James Graham



On 10/05/2019 21:49, David Burns wrote:

Not yet as we are stabilising tests for gecko view but hopefully soon!


That won't automatically work; we'd need to start uploading Android 
results to wpt.fyi. That's possible in one of two ways:


* Add Geckoview on Android to the TC configuration for wpt
* Start uploading results from m-c

The former has the advantage that all the upstream pushes will run just 
like for desktop, so the results will be directly comparable with other 
browsers. The problem is that there may be implementation difficulties 
connecting up taskcluster-github with the packet infrastructure for 
running in the Android emulator. However if we do this it seems likely 
we can also make Chrome Android work as a point of comparison.


The alternative where we just publish the results from m-c will end up 
creating an entirely separate set of Firefox-only data corresponding to 
m-c pushes. That data won't be directly comparable against other 
browsers, so although it's easier to do it's less useful.


I think getting android running upstream would be a good use of time and 
resources, but it's likely to be a reasonable amount of work.



Re: Lack of browser mochitests in non-e10s configuration and support for turning off e10s on desktop going forward

2019-04-25 Thread James Graham

On 25/04/2019 17:12, Bobby Holley wrote:

On Thu, Apr 25, 2019 at 3:36 AM Joel Maher  wrote:




On Wed, Apr 24, 2019 at 1:39 PM Bobby Holley 
wrote:




Thanks Mike!

So Fennec is the last remaining non-e10s configuration we ship to

users.

Given that Fennec test coverage is somewhat incomplete, we probably

want to

keep running desktop 1proc tests until Fennec EOL.





Fennec runs the full set of tests, there is no need to run non-e10s tests
on desktop to support Fennec.



I had the impression that we had a fair number of tests disabled under
Fennec, but maybe not. In any case - insofar as non-e10s is a supported
platform, it's useful to be able to hit those failures directly on Desktop
tests rather than armv7 emulators - so there's still value in keeping 1proc
enabled until Fennec EOL.


Note that Android testing has been moving to x86[_64]. I don't know if 
moving more of the tests to that platform would make a meaningful 
difference to this decision (or indeed how practical it is to do so), 
but I think the performance improvement is substantial, which might 
solve some of the issue with local debugging.



Re: PSA: web-platform-tests (dashboard | fuzzy-reftests | reftest comparisons)

2019-04-16 Thread James Graham

On 16/04/2019 12:17, Anne van Kesteren wrote:

On Mon, Apr 15, 2019 at 7:16 PM Jonathan Watt  wrote:

These are all really great. Thanks, James!


Indeed, thanks a lot for working on this! This will help a lot with
prioritizing work and also with standards development.

Some feedback for the Interop Dashboard:

* It would help me a bit if it clarified which versions of Chrome,
Firefox, Safari are represented to make local comparisons more easily.


In general we are always using the latest experimental versions 
available for the platforms we have (Firefox Nightly / Linux, Chrome dev 
/ Linux and Safari TP / macOS). The wpt.fyi link should have more 
details about the specific versions used for the currently loaded run; I 
could add that somewhere in the dashboard if it's helpful.



* It would also be nice to be able to give a wpt path rather than have
to find the corresponding bug component.


It's not the best UI but you can select "Any" as the bug component and 
then filter by path. That does end up loading all the data though. 
Allowing to select by either component or path at the top level would be 
a larger change, but it's certainly possible if people would find it 
useful. The idea of making component the top-level selector was to 
enable workflows analogous to bug triage where the work is divided up by 
component.



* I don't see all the bug components, e.g., DOM::Core: Networking is missing.


The mapping from component to test directories is in 
testing/web-platform/moz.build Per that file there aren't any tests 
covering DOM::Core: Networking. More likely the component-to-path mapping 
contains some errors that should be fixed.




PSA: web-platform-tests (dashboard | fuzzy-reftests | reftest comparisons)

2019-04-12 Thread James Graham
There have been a few recent changes related to wpt that may be of 
interest to a wider audience; for brevity I'm coalescing them into a 
single email:


* New wpt dashboard focused on interop problems
* Support for fuzzy annotations in reftests
* Better support for debugging failing reftests on wpt.fyi

== Interop Dashboard ==

https://jgraham.github.io/wptdash/ (url may change) is a dashboard 
intended to allow identifying web-platform-test failures that present a 
web-compat risk.


By default it shows tests that are failing in Gecko but passing in both 
Blink and WebKit, based on wpt.fyi runs of Firefox, Chrome and Safari. 
Tests are divided by bug component.


In addition it presents a summary of the in-tree metadata in expectation 
ini files to provide an overview of where tests are disabled in gecko, 
and various test problems that aren't visible in wpt.fyi data (e.g. 
leaks, or debug-only crashes).


There is upstream work planned to add a system for annotating test 
results from wpt.fyi; this will allow us to associate each gecko-only 
failure with a bug and eventually to ensure that we triage all failures 
that look like interop issues.


Please let me know about any changes that would make the dashboard more 
useful to you.


== Fuzzy annotations available in wpt reftests ==

wpt has adopted a system to mark reftests as fuzzy. This is semantically 
identical to the system used by gecko reftests; the fuzziness annotation 
consists of two ranges: one for the number of pixels that may contain 
differences and one for the maximum difference in any colour channel.


In the case where the test is known to (possibly) be an inexact match in 
any configuration, the annotation may be put directly in the test file 
as a <meta> element e.g.

<meta name=fuzzy content="maxDifference=15;totalPixels=300">

(note that as with the recent changes to reftests, this requires exactly 
300 pixels different i.e. the implied range is 300-300 not 0-300).


More usually, however, the fuzziness will be browser and configuration 
specific; in this case the annotation must be put in the wpt expectation 
ini file (i.e. the one under testing/web-platform/meta). In this case 
the basic syntax is:


[test.html]
  fuzzy: maxDifference=8;totalPixels=0-1

The "if" syntax used for configuration-specific expectations also works 
in this case.


In reftests involving multiple possible references or long chains of 
references it may be necessary to specify exactly which comparison 
requires the annotation or have multiple comparisons with different 
annotations. This can be done as in the example below:


[test.html]
  fuzzy: [ref1.html:maxDifference=8;totalPixels=0-1,
  ref2.html==ref3.html:maxDifference=4;totalPixels=0-2]

The first annotation in the list applies to any comparison involving 
ref1.html as the reference, the second only applies to the specific 
comparison ref2.html == ref3.html (all paths are resolved relative to 
the test).
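The comparison these annotations describe can be sketched as follows (a minimal illustration of the stated semantics, with inclusive ranges for both the pixel count and the channel difference; not the wptrunner implementation):

```python
# Minimal sketch of fuzzy reftest matching: a comparison passes when both
# the count of differing pixels and the maximum per-channel difference
# fall inside their allowed (inclusive) ranges.

def fuzzy_match(pixels_a, pixels_b, max_diff_range, total_pixels_range):
    differing = 0
    max_channel_diff = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(pixels_a, pixels_b):
        # Per-pixel difference is the largest difference in any channel.
        diff = max(abs(r1 - r2), abs(g1 - g2), abs(b1 - b2))
        if diff > 0:
            differing += 1
            max_channel_diff = max(max_channel_diff, diff)
    return (total_pixels_range[0] <= differing <= total_pixels_range[1]
            and max_diff_range[0] <= max_channel_diff <= max_diff_range[1])

# fuzzy: maxDifference=8;totalPixels=0-1  ->  ranges (8, 8) and (0, 1)
a = [(0, 0, 0), (10, 10, 10)]
b = [(0, 0, 0), (10, 18, 10)]  # one pixel differs by 8 in one channel
print(fuzzy_match(a, b, (8, 8), (0, 1)))  # True
```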


More documentation is available at [1] and [2], the latter should 
obviously move somewhere more useful.


== Reftest Comparisons ==

Debugging wpt reftest failures should now be easier since, thanks to 
Chrome's ecosystem-infra team, wpt.fyi has gained reftest-analyzer like 
functionality. See [3] for an example. wpt.fyi shows links to the 
analyzer for all failing reftests [4]. Again, feedback about 
improvements to make this more useful is very much encouraged.



[1] 
https://web-platform-tests.org/writing-tests/reftests.html#fuzzy-matching
[2] 
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/tools/wptrunner/wptrunner/manifestexpected.py#102
[3] 
https://wpt.fyi/analyzer?screenshot=sha1%3A78ed15d1532e4134d8e3560c060538fd0b0a80d9&screenshot=sha1%3A5b8f16d25cb907619183b551c6b3e4d670991268
[4] 
https://wpt.fyi/results/css/css-backgrounds/background-image-first-line.html?label=master&label=experimental&product=chrome&product=firefox&product=safari



Re: Creating a new mach command for adding tests

2019-04-11 Thread James Graham

On 11/04/2019 18:22, Brian Grinstead wrote:

This has now landed (with initial support for xpcshell, mochitests, and web 
platform tests). Thanks to Andrew Halberstadt and James Graham for improving 
upon the initial prototype and making it easier to extend to new suites.


Eager users should note that, as I write, the wpt support is in a commit 
that is on autoland but not yet merged to central.


Once that has merged this new tool will replace the `mach wpt-create` 
command.



Re: Proposal to remove unnecessary [type] attributes on script tags in mozilla-central

2019-04-09 Thread James Graham



On 09/04/2019 10:31, Anne van Kesteren wrote:

On Tue, Apr 9, 2019 at 5:56 AM Cameron McCormack  wrote:

On Tue, Apr 9, 2019, at 1:39 PM, Brian Grinstead wrote:

I'd like to rewrite markup in the tree to avoid using the [type]
attribute on <script> tags 

Re: Creating a new mach command for adding tests

2019-04-03 Thread James Graham

On 02/04/2019 19:11, Brian Grinstead wrote:

I don't think that having papercuts in the workflow for writing one type of 
test is the right way to nudge developers into writing another type. It also 
doesn't seem effective - otherwise people would be using the wpt-create tool to 
avoid jumping through hoops to add a mochitest.


To be clear, my intent was not to add papercuts to any workflow; I'd 
really like all our developer workflows to be as ergonomic as possible, 
and adding better tooling to help people create tests seems like a great 
idea.


That said, there's a pattern that we've often fallen into where we make 
broadly applicable improvements to only a single testsuite — often 
mochitest — and then consider it to be job done. Although each change 
individually is small, over time it adds up and reinforces an implied 
hierarchy of testsuites that doesn't match current best practices.



That said, given there’s already a convention for this perhaps the tool as-is 
would be better named `./mach mochitest-create`. Based on Steve’s suggestion, 
if we did want a single API we could do something like:

# Attempt to automatically determine the type of test (mochitest-chrome, 
xpcshell, wpt, etc)
`./mach addtest path/to/test`

# If you want to pass extra arguments specific to that type, then you use a 
subcommand:
`./mach addtest mochitest --flavor=chrome 
toolkit/components/windowcreator/test/test_chrome.html`
`./mach addtest wpt testing/web-platform/tests/accelerometer/test.html 
--long-timeout`


I think the idea of a single mach command for test creation that, as far 
as possible, guesses the test type from its location is great. I'd be 
happy to provide whatever support is needed to make this replace the 
wpt-specific command.



Re: Creating a new mach command for adding tests

2019-04-02 Thread James Graham

On 01/04/2019 23:13, Steve Fink wrote:

On 4/1/19 11:36 AM, Brian Grinstead wrote:
Based on my own experience and discussions with others, the workflow 
for adding new mochitests isn't great. Commonly, it looks like: 
"copy/paste a test in the same directory, add the new test to the 
relevant manifest file, empty out the actual test bits, write your 
test". In my experience this is prone to issues like forgetting to add 
the new test to the manifest, or not fully replacing boilerplate like 
bug numbers from the copied test.


There's a script in tree I was unaware of until last week called 
gen_template.pl that's intended to help here, but it does leave a few 
issues open:


1) It doesn't help with finding the manifest file and adding the new 
test to it.
2) The boilerplate it generates is outdated (for example, it sets 
type="application/javascript" even in HTML documents, it doesn't 
include add_task, etc).

3) It supports only mochitest-chrome and mochitest-plain.

Last week I prototyped a new mach command to fix (1) and (2), and 
expand (3) to include browser-chrome mochitests. If it's helpful, it 
could be extended to more test types as well. When you run the command 
it will create a file with the appropriate boilerplate and add it to 
the manifest file (chrome.ini, mochitest.ini, browser.ini depending on 
the type). This way you can immediately run the test with `./mach 
mochitest`.


It sounds great to me, but I'm wondering if the generic name is 
intentional or not. Various groups within Mozilla assume different 
things by 'test'. Is`mach addtest` intended to only be for mochitests? 
If so, then perhaps `mach addmochitest` is a better name, even it's a 
bit of mouthful. My reasoning is that there's already a distinction 
between `mach mochitest` and `mach test`, where the latter attempts to 
be general and support a bunch of different kinds of tests. Having `mach 
test` assume mochitests would be highly confusing to me, at least. 
(Though I'm not sure that `mach test` really works; it seems like I 
usually have to run the more specific command.)


I'm also pretty worried by this. For web-platform features ensuring 
interop is critical and as such web-platform-tests should be preferred 
over mochitests where possible. But every time we build features with a 
mochitest-first approach it undermines that.


For web-platform-tests we already have ./mach wpt-create, so I think we 
should either roll that functionality in to the new command as part of 
the initial featureset or have one command per supported test type (i.e. 
call this mach mochitest-create).



Re: Intent to ship: CSS Containment

2019-03-18 Thread James Graham

On 18/03/2019 19:01, Daniel Holbert wrote:

As of today (March 18th 2019), I intend to turn CSS Containment
 on by default on all platforms, in
Firefox Nightly 68. It has been developed behind the
'layout.css.contain.enabled' preference.


Apologies if I've missed it, but I can't see any mention of whether this 
feature has — meaningful — cross browser (i.e. wpt) tests in the ItI 
thread or here.



Re: Intent to implement: Limit the maximum life-time of cookies set through document.cookie to seven days

2019-03-08 Thread James Graham

On 08/03/2019 15:06, Boris Zbarsky wrote:

On 3/7/19 7:31 PM, Ehsan Akhgari wrote:

*web-platform-tests*: This is an intervention which different engines do
not agree on yet.  Creating a web-platform-test for it would be very 
simple

but it will be failing in the engines that do not agree with the
intervention.  I'm not sure what the recommendation for testing these 
types

of changes is, would be happy to submit a test if there is a path for
getting them accepted into wpt.


Other vendors have been landing tests with ".tentative" as the last part 
of the filename before the suffixes the test harness expects (so e.g. 
"web-locks/mode-shared.tentative.https.any.js").


I think doing that here is fine; we may want the tests or the commit 
message involved to point to an explainer or something tracking the need 
for a spec change or something like that...


Yes, this seems correct to me too; a .tentative. test is the right way 
to land a test for something that isn't yet standardised, and it should 
somehow link to the relevant discussion but there isn't an explicit 
convention for how that should happen (commit message vs comment vs link 
element, for example). See the end of [1] for the documentation.


[1] https://web-platform-tests.org/writing-tests/file-names.html


Re: Searchfox support for mobile/android just got a little better

2019-02-28 Thread James Willcox
Really great, thanks!

James

On Thu, Feb 28, 2019 at 11:49 AM Kartikaya Gupta  wrote:

> As of today Searchfox is providing C++/Rust analysis for
> Android-specific code. So stuff in widget/android and behind android
> ifdefs should be turning up when you search for symbols. Note that
> we're using data from an armv7 build, so code that is specific to
> aarch64 or x86 is not covered. That can be added easily if there's a
> need for it.
>
> But wait, there's more! We now host a new top-level repo,
> "mozilla-mobile" [1] that includes a bunch of stuff from the
> mozilla-mobile github org, where much of the mobile work happens these
> days. This includes focus, the reference-browser and much more. At the
> moment searchfox only provides text search over this codebase (not
> even blame yet), but we'll add more functionality to this repo as time
> permits.
>
> Cheers,
> kats
>
> [1] https://searchfox.org/mozilla-mobile/source
> ___
> mobile-firefox-dev mailing list
> mobile-firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/mobile-firefox-dev
>


Re: Intent to implement and ship: Gamepad Extensions `multi touch` and `light indicator`

2019-02-27 Thread James Graham




On 26/02/2019 22:49, d...@mozilla.com wrote:

On Tuesday, February 26, 2019 at 2:15:57 AM UTC-8, James Graham wrote:

On 25/02/2019 19:44, Daosheng Mu wrote:


web-platform-tests: none exist (and I don't plan to write WPTs but we do
have gamepad mochitest, I will add new tests to cover these two new APIs.)


Why do you plan to not write web-platform-tests? I imagine there may be
technical challenges, but we should ensure that those are well
understood before falling back on browser-specific tests.

In the absence of web-platform-tests what's the strategy to ensure that
implementations of this feature are interoperable and we don't end up
fighting compat fires in the future?



Gamepad tests require a real gamepad to run them, so wpt/gamepad are all manual 
tests in Firefox [1]. Our solution is making a GamepadTestService to help us do 
these puppet tests; the GamepadTestService will be launched once we run our 
gamepad mochitests and perform as a real gamepad under our automated testing. 
Besides, there are no tests for the Gamepad extensions so far. Therefore, if there is 
no big change, I would continue following the same scenario as before.


The current thinking is that hardware interaction APIs which rely on 
mocks to test should specify the API for testing as part of the 
specification (e.g. [1]). So it seems like the same approach could be 
used here.


[1] https://webbluetoothcg.github.io/web-bluetooth/tests


Re: Intent to implement and ship: Gamepad Extensions `multi touch` and `light indicator`

2019-02-26 Thread James Graham




On 25/02/2019 19:44, Daosheng Mu wrote:


web-platform-tests: none exist (and I don't plan to write WPTs but we do
have gamepad mochitest, I will add new tests to cover these two new APIs.)


Why do you plan to not write web-platform-tests? I imagine there may be 
technical challenges, but we should ensure that those are well 
understood before falling back on browser-specific tests.


In the absence of web-platform-tests what's the strategy to ensure that 
implementations of this feature are interoperable and we don't end up 
fighting compat fires in the future?



Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-21 Thread James Graham

On 21/01/2019 10:18, Jan de Mooij wrote:

On Fri, Jan 18, 2019 at 10:36 PM Joel Maher  wrote:


Are there any concerns with this latest proposal?



This proposal sounds great to me. Thank you!


+1. This seems like the right first step to me.


Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-17 Thread James Graham

On 17/01/2019 16:42, jmaher wrote:

Following up on this, thanks to Chris we have fast artifact builds for PGO, so 
the time to develop and use try server is in parity with current opt solutions 
for many cases (front end development, most bisection cases).


Even as someone not making frequent changes to compiled code I 
occasionally want to both rebuild and run tests on opt (e.g. because 
some test changes also require changes to moz.build files that could 
break the build in a way that isn't caught by an artifact build). In 
this case adding an extra hour of end-to-end time on try is a pretty 
serious regression.


For my specific use case it might be enough if we could schedule 
artifact builds for PGO and full builds for debug. But I suspect it's 
going to work better for more people — and save more resources overall — 
to simply keep the default try configuration as-is and just turn off 
non-PGO opt builds (or at least tests) on integration branches / central.



Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-03 Thread James Graham

On 03/01/2019 18:16, Steve Fink wrote:

Good points, but given that most failures will show up in debug builds, it 
seems like a more relevant metric is the difference between time(Opt) vs 
min(time(debug), time(PGO)). Though debug builds may run slow enough 
that it boils down to what you said?


Looking at Windows 64-bit jobs from a random push ( 
https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=63027ff03effb04ed4bf53bbb0c9aa1bad4b4c9b 
), I see:


pgo: build=119min + Wd1=15min
opt: build=55min + Wd1=13min
debug: build=46min + Wd1=22min

So by that, you get opt and debug Wd1 results back at the same time 
(67-68min) and pgo Wd1 results take twice as long (134min). I imagine 
there are much slower test jobs that make this situation cloudier, but 
assuming the general pictures holds then it seems like opt is mostly 
redundant with debug.
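Summing those quoted build and test (Wd1) durations directly makes the comparison explicit:

```python
# End-to-end (build + Wd1 test) times in minutes, from the push quoted above.
times = {
    "pgo":   {"build": 119, "test": 15},
    "opt":   {"build": 55,  "test": 13},
    "debug": {"build": 46,  "test": 22},
}

totals = {cfg: t["build"] + t["test"] for cfg, t in times.items()}
print(totals)  # {'pgo': 134, 'opt': 68, 'debug': 68}
```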


I think a good rule of thumb is that debug tests are about twice as slow 
as opt, with the same chunking. So for a test job taking closer to an 
hour on opt (which some do), you can easily be at 45 minutes longer for 
opt results than debug. We could of course chunk more, but there's 
overhead there that would eat some of the regained capacity.


I wonder if an alternative would be running opt+debug on integration 
branches and pgo+debug on central. That would have the obvious 
disadvantage that pgo-only failures would be caught much later, but it 
would keep current end-to-end times for integration and slightly better 
capacity savings. I don't know how common pgo-only failures are compared 
to other things that we are only catching on central.



Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-03 Thread James Graham

On 03/01/2019 16:17, jmaher wrote:

What are the risks associated with this?
1) try server build times will increase as we will be testing on PGO instead of 
OPT
2) we could miss a regression that only shows up on OPT, but if we only ship 
PGO and once we leave central we do not build OPT, this is a very low risk.


Couldn't we leave opt enabled for try and just stop running it on 
integration/central branches? That would allow faster/cheaper try but 
preserve the benefits you list above without any additional increase in 
risk compared to today. I do wonder how that would interact with 
artifact builds though; maybe it would be worth running opt *builds* 
just not opt *tests* (which I think is your proposal anyway).



Re: Intent to implement and ship: Overflow media queries

2019-01-02 Thread James Graham

On 23/12/2018 10:59, Emilio Cobos Álvarez wrote:

web-platform-tests: Minimal parsing tests are being added to:

   https://wpt.fyi/results/css/mediaqueries/test_media_queries.html

Unfortunately WPT has no way to test print preview or pagination right 
now, so the rest of the reftests are Gecko-only.


Mind filing an issue on the wpt repo about the inability to test these 
features? It seems like something we could at least investigate, 
although I don't think there's currently any cross-browser API 
(including via e.g. WebDriver) for getting paginated layout.



Next Year in web-platform-tests - 2018/19 Edition

2018-12-07 Thread James Graham
Following the summary of what we've achieved in the previous year in 
web-platform-tests, I'd like to set out the plan for priorities in the 
coming year, and solicit feedback.


There are several key things that we would like to achieve over the 
course of the year:


* Give the platform team and others the tools to know where there are 
interop problems in the features they own, and the ability to triage 
those failures, so they are able to identify and prioritise issues 
likely to lead to web-compat problems.


* Continue to expand the reach of web-platform-tests by identifying and 
fixing cases where cross-browser web-platform features are untestable in wpt


* Improve the developer ergonomics of working with web-platform-tests, 
both by fixing pain points for test authoring and debugging, and by 
improving the documentation.


For the first point in particular, the current thinking is to create a 
dashboard based on wpt.fyi that focuses on cases where gecko has a 
different behaviour to Blink and/or WebKit. By annotating results with 
links to bugzilla and webcompat.com, it should be possible to 
effectively triage these differences and ensure that we can identify and 
fix the issues most likely to end up causing compatibility problems. 
There are also interesting possibilities to link in gecko-specific data 
like code coverage information. Feedback on what people would find most 
useful here is needed to ensure we get this right.


I would also very much like to hear about remaining pain points and 
reasons that developers choose to write browser-specific tests for 
web-platform features. Those are issues we need to prioritize fixing.


On a more general note, I believe this is a critical time for the web 
platform. With recent developments to Edge we are staring down a future 
in which the web is a product, defined by an implementation, not much 
different than Android today. Ensuring that we have excellent 
compatibility between different engines is a necessary—but certainly not 
sufficient—condition to ensure the long term health of the platform. 
Like performance, compatibility is complex and doesn't permit a single 
simple solution. Testing for interoperability is only one part of the 
puzzle. But as is the case with performance we can succeed if we have a 
culture that recognises that it's essential to our future success. We 
know that when people pull together on this we can win; large complex 
features like CSS grid have launched with a level of cross-browser 
consistency that was unheard of a few years ago.


Test suites such as web-platform-tests and test262 are the public 
health initiative of the web; not a cure for all ills, but a mechanism 
to prevent many issues upfront when it's cheap and easy.  The genesis of 
web-platform-tests came with the decline and eventual failure of Presto: 
it became clear to those working on it just how hard it is to fix 
compatibility issues after the fact. Success requires considerations of 
compatibility and interop to be an integral part of the process of 
building and shipping a browser. In the time since we have made a great 
deal of progress, but the web itself has got more complex and fragile. 
We need to carry that progress into 2019 and beyond to build the future 
we want to see.



This year in web-platform-tests - 2018 edition

2018-12-07 Thread James Graham
Welcome to the second annual update on progress in cross-browser interop 
testing through web-platform-tests. This year has seen big improvements 
to the platform coverage of wpt, as well as increasing the number of 
features that can be tested with wpt, and the visibility of test results 
in Gecko and in other browsers. In addition there have been numerous 
fixes intended to make the experience of working with wpt better and the 
results more reliable. So in no particular order:


== Gecko CI ==

* web-platform-tests are now enabled on CI on all tier-1 platforms i.e. 
Desktop Firefox on Linux, Windows and macOS, and mobile Firefox on 
Android (using the x86 emulator)


* web-platform-tests reftests run the full set of CSS tests on all 
platforms.


* web-platform-tests run with leak checking enabled in debug, and also 
run under LSAN. When tests are imported, any existing leaks are 
automatically added to the allowed list for those tests.


* wpt wasm tests are running in the jsshell.

== Testability ==
* Substantially increased the scope of the testdriver.js [1] API to 
allow writing tests that are not possible using standard DOM APIs. In 
particular the following things are now possible:


  - Sending trusted key and mouse events
  - Sending complex series of trusted pointer and key interactions for 
things like in-content drag and drop or pinch zoom

  - File upload

  Also added the 'bless' API for cases where something trusted is 
required but the details are unimportant.


* Implemented support for running tests on HTTP/2 (this is currently 
disabled by default due to problems with upstream CI that will be solved 
by migrating from Travis).


* Added support for fuzzy matching in reftests, to allow reftests to 
pass in the face of small unavoidable differences between test and ref 
(this is not yet merged but is expected to be landed by the end of the 
year).
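The comparison logic behind fuzzy matching can be sketched in a few 
lines. This is an illustration of the general technique only, not wpt's 
actual implementation (in wpt itself the allowed ranges are specified 
per-test via a meta annotation in the test file):

```python
def fuzzy_match(test_pixels, ref_pixels, max_difference, total_pixels):
    """Compare two images pixel-by-pixel, allowing small differences.

    max_difference: (lo, hi) allowed range for the largest per-channel delta.
    total_pixels:   (lo, hi) allowed range for the count of differing pixels.
    Each image is a flat list of (r, g, b) tuples.
    """
    differing = 0
    largest_delta = 0
    for (tr, tg, tb), (rr, rg, rb) in zip(test_pixels, ref_pixels):
        delta = max(abs(tr - rr), abs(tg - rg), abs(tb - rb))
        if delta > 0:
            differing += 1
            largest_delta = max(largest_delta, delta)
    return (max_difference[0] <= largest_delta <= max_difference[1] and
            total_pixels[0] <= differing <= total_pixels[1])

# Anti-aliasing-style noise: 2 pixels off by at most 3 channel values.
test = [(255, 255, 255), (252, 255, 255), (0, 0, 0), (3, 0, 0)]
ref  = [(255, 255, 255), (255, 255, 255), (0, 0, 0), (0, 0, 0)]
print(fuzzy_match(test, ref, max_difference=(0, 3), total_pixels=(0, 2)))  # True
print(fuzzy_match(test, ref, max_difference=(0, 1), total_pixels=(0, 2)))  # False
```

The point of using ranges rather than simple maxima is that a test can 
also assert that *some* difference is present, catching refs that have 
accidentally become identical to the test.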


* Added multiple top-level domains to wpt to allow cross-site tests.

* Changed wpt runner to run tests in a true top-level browsing context 
(i.e. with null `opener`).


== Results Collection and Viewability ==

* Many fixes and improvements to the wpt.fyi interface to allow 
efficient selection and comparison of multiple test runs.


* Set up Taskcluster to run all tests after each upstream commit on 
Firefox nightly and Chrome dev on Linux.


* Set up daily runs of stable and weekly runs of beta, also using 
Taskcluster.


* Regular, reliable, Edge and Safari Stable/TP runs using a custom 
buildbot setup and Sauce Labs for Edge.


== Sync ==

* We deployed a new, faster, sync for mozilla-central that enables us to 
sync test changes made in Gecko repositories as soon as they land in our 
repositories, creates Bugzilla bugs to track upstream PRs and 
corresponding changes to Firefox test results, and allows us to 
downstream changes from web-platform-tests much more frequently.


* Servo's sync system was updated to allow continuous syncing of PRs 
from servo/servo to web-platform-tests and frequent downstreaming.


* Since launching the new sync we have upstreamed over 500 changes from 
Gecko bugs to the web-platform-tests repository.


* Over the same time period we have seen about 1,500 changes synced from 
the Chromium repository to upstream web-platform-tests.


== Developer Ergonomics ==

* Stopped using the in-tree MANIFEST.json file; it is instead downloaded 
on demand. Apart from avoiding merge conflicts, this helps avoid 
breaking many pieces of tooling including Phabricator and git-cinnabar.


* Added reftest-analyzer compatible output to the default logger used by 
mach wpt.


* Improved multi-global tests by extending .any.js tests to be more 
flexible and support more kinds of global scope.


== GitHub Infrastructure ==

* Moved test-stability checks from Travis to Taskcluster for performance 
and reliability


* Started running PR checks in Safari using Azure Pipelines.

* Started checking for tests that are close to the timeout value and 
likely to become intermittent.
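A check like that reduces to flagging tests whose slowest observed run 
consumes most of their timeout budget; a toy sketch with made-up numbers 
and an invented threshold:

```python
TIMEOUT = 10  # seconds; a stand-in for a test's actual timeout budget

def near_timeout(runtime, timeout=TIMEOUT, threshold=0.8):
    """Flag tests whose slowest observed run uses most of their budget;
    such tests are likely to turn intermittent on slower machines."""
    return runtime > threshold * timeout

runs = {"a.html": 2.1, "b.html": 9.4, "c.html": 8.5}
flagged = sorted(t for t, r in runs.items() if near_timeout(r))
print(flagged)  # ['b.html', 'c.html']
```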


* Continued making more infrastructure Python 3 compatible.

* Running affected tests on Chrome, Firefox and Safari and showing 
regressions as PR statuses (currently in beta)


== Meta ==

* Moved the web-platform-tests GitHub repository to its own organisation.

* Started the process of creating a more formal governance structure for 
web-platform-tests, forming a provisional core team with membership from 
a range of stakeholders.


A couple of the mentioned items are still under review, but will almost 
certainly merge before the end of the year :) These improvements have 
been the result of a close collaboration between people across the 
web-platform-tests community, including the Interop Testing team at 
Mozilla, the Ecosystem Infra team at Google and many, many, others. Huge 
thanks to everyone involved.


[1] https://web-platform-tests.org/writing-tests/testdriver.html

Re: Intent to ship: set keyCode or charCode of "keypress" event to the other's non-zero value

2018-12-05 Thread James Graham

On 04/12/2018 02:23, Masayuki Nakano wrote:

On 2018/11/30 20:42, James Graham wrote:

On 30/11/2018 01:37, Masayuki Nakano wrote:
web-platform-tests: N/A due to requiring user input, but we have 
mochitests with synthesized events.


I think it should be possible to write web-platform-tests for this 
kind of thing now, using the testdriver API and in particular the 
actions support see e.g. [1], [2]


If this still doesn't meet your use case please let me know because we 
should work out how to make testing this kind of stuff possible 
cross-browser; as you well know UI events have been an interop 
nightmare in the past and we can't afford to let that situation 
continue into the future for new devices and APIs.


[1] https://web-platform-tests.org/writing-tests/testdriver.html
[2] 
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/infrastructure/testdriver/actions/multiDevice.html 



Thank you for the information.

I'm looking for the implementation of the keyboard event dispatchers, 
but I've not found it yet. Could you let me know where it is?


So, the full picture is that the test harness provides a js 
implementation of the testdriver API action_sequence method [1] which 
routes a message to the harness asking it to dispatch some actions. In 
the marionette backend this ends up at [2] which sends a 
WebDriver:PerformActions command to the browser. That ends up in [3] and 
then [4] and [5] which in the case of a key event goes through [6] and, 
skipping several layers of setup, eventually [7]. That is ultimately 
using nsITextInputProcessor, and the implementation looks extremely 
similar to synthesizeKey from EventUtils.js [8].


So I believe that unless one of the intervening layers is doing 
something wrong or putting unnecessary constraints on what's possible, 
it should be possible to use this API to generate the exact same events 
that you would get from EventUtils.js. If there are use cases you are 
unable to replicate using the testdriver API please let me know because 
they are likely bugs that ought to be fixed; the intent here is that the 
events looks as close as possible to real key events generated by real 
user interaction.


[1] 
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/tools/wptrunner/wptrunner/testdriver-extra.js#75
[2] 
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/tools/wptrunner/wptrunner/executors/executormarionette.py#374
[3] 
https://searchfox.org/mozilla-central/source/testing/marionette/listener.js#776
[4] 
https://searchfox.org/mozilla-central/source/testing/marionette/action.js#988
[5] 
https://searchfox.org/mozilla-central/source/testing/marionette/action.js#1096
[6] 
https://searchfox.org/mozilla-central/source/testing/marionette/action.js#1142
[7] 
https://searchfox.org/mozilla-central/source/testing/marionette/event.js#384
[8] 
https://searchfox.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/EventUtils.js#850



Re: Intent to ship: set keyCode or charCode of "keypress" event to the other's non-zero value

2018-11-30 Thread James Graham

On 30/11/2018 01:37, Masayuki Nakano wrote:
web-platform-tests: N/A due to requiring user input, but we have 
mochitests with synthesized events.


I think it should be possible to write web-platform-tests for this kind 
of thing now, using the testdriver API and in particular the actions 
support see e.g. [1], [2]


If this still doesn't meet your use case please let me know because we 
should work out how to make testing this kind of stuff possible 
cross-browser; as you well know UI events have been an interop nightmare 
in the past and we can't afford to let that situation continue into the 
future for new devices and APIs.


[1] https://web-platform-tests.org/writing-tests/testdriver.html
[2] 
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/infrastructure/testdriver/actions/multiDevice.html



Re: Workflow Apropos!

2018-11-28 Thread James Graham

On 28/11/2018 20:15, Mark Côté wrote:
We're still working through a longer-term vision that we'll share early 
next year, but I can answer some questions now.


Thanks, this is helpful!


* Have to make a choice early on about whether to learn a relatively
unfamiliar (to the majority of developers) VCS (mercurial), use a
slightly unorthodox git setup with slow initial clone (cinnabar), or
use
the largely unsupported (?) GitHub clone.


This is a very difficult problem. I can't see this problem going away 
entirely without some sort of executive decision to require everyone use 
a particular VCS. That said, Mercurial should still be seen as the 
default VCS, especially as we get partial-clone support 
(https://bugzilla.mozilla.org/show_bug.cgi?id=150). Git-cinnabar 
should be treated as an "advanced" option. Perhaps the docs could be 
clarified on this point.


I understand that mercurial is not going away :) Having said that it's a 
pretty hard sell to get someone to make an initial contribution if it 
involves learning a whole new VCS; that's a considerable investment of 
time on its own. I wonder if there's something we could do to help 
people here like run a taskcluster task on m-c to produce an artifact 
containing a cinnabar clone archived using git-bundle, and then set up 
the bootstrap.py script to give a choice between mercurial and git using 
cinnabar, and in the git case, do the initial clone by downloading a 
recent bundle and then fetching missing commits. There's probably a good 
reason that won't work, but I'm not sure what it is yet :)


My team has pretty much nothing to do with the gecko GitHub clone; we 
need to keep our focus on the "standard" workflow.


Sure, the problem is that it's an attractive nuisance for new 
contributors who find it and go "It's a GitHub repo! I know this" 
without realising it's largely unsupported.



* Cloning the repository doesn't provide you with the right tooling to
actually request review on a patch. You have to download something else
and — particularly if you wrote the patch as a series of commits —
there's a choice of tools at various levels of completeness. If you use
something backed by arcanist this probably involves installing
system-level dependencies that aren't handled by mach bootstrap.


Yes, this is an issue we'll be addressing. The first step is to stop 
using Arcanist in moz-phab; not only does it introduce other 
dependencies (PHP) but it is causing some performance issues in moz-phab 
as well. After that, we can see about installing it via mach bootstrap 
or such.


Sounds good. It would be great if we can get to a place where 
submitting/updating a review is just `mach review [commits]`, or similar.



* It's not obvious to people that patches can't go up for review
without
a preexisting bug, and won't actually be reviewed unless they specify a
reviewer in the commit message (or go into Phabricator and add a
reviewer after the fact).


Part of this problem has always existed (knowing to pick a reviewer and 
who); we've got plans to introduce suggested reviewers into the flow in 
an even better way than it's done in Bugzilla. Timeline here is a bit 
uncertain in part because there are some prerequisites.


Some system for auto-assigning reviewers where none are provided would 
be a big win; even as a regular contributor I sometimes make changes to 
parts of the tree where I have to guess a possible reviewer from the VCS 
logs.
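One minimal form of such a heuristic is just scoring authors by how 
often they recently touched the same files. A hypothetical sketch — the 
names, the log format, and the function itself are invented for 
illustration, not part of any real tooling:

```python
from collections import Counter

def suggest_reviewer(touched_files, vcs_log):
    """Suggest a reviewer by counting who most often touched the same
    files recently. vcs_log is a list of (author, files_changed) pairs;
    in real tooling this would be derived from `hg log` / `git log`."""
    touched = set(touched_files)
    scores = Counter()
    for author, files in vcs_log:
        scores[author] += len(touched & set(files))
    ranked = [author for author, n in scores.most_common() if n > 0]
    return ranked[0] if ranked else None

log = [
    ("alice", ["layout/reftest.py", "layout/frame.cpp"]),
    ("bob",   ["dom/events.cpp"]),
    ("alice", ["layout/frame.cpp"]),
]
print(suggest_reviewer(["layout/frame.cpp"], log))  # alice
```

A production version would also need to account for reviewer load and 
availability, which is presumably part of why suggested reviewers need 
prerequisites before they land.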


We could also make moz-phab more helpful when it comes to bugs. And of 
course there's still the controversial idea of not requiring bugs for 
all patches that comes up now and again, but that's a (big) policy question.


Yeah, I don't have a specific solution to suggest for the bugs thing, 
but it's a real issue that people have. Maybe there's some compromise 
where if you send commits for review without a bug the tooling can offer 
to file one for you using the changed files to guess at the 
product/component using the metadata in moz.build files and the commit 
message to set the bug summary/description.



Re: Workflow Apropos!

2018-11-28 Thread James Graham

On 27/11/2018 17:46, Kim Moir wrote:


Speeding up moz-phab

moz-phab[2] is Engineering Workflow’s officially supported custom
command-line interface to Phabricator, built in order to better support the
“stacked commits” workflow that is common in Firefox engineering.
Unfortunately some of the design decisions we made early on, such as
wrapping Arcanist in order to reduce long-term maintenance burden, have
made moz-phab painfully slow in some circumstances. We’ve spent some time
doing performance analysis and have put together a plan for
improvements[3], taking some inspiration from phlay[4] (which is Git only)
and phabsend[5] (which is Mercurial only). Phase 0 was completed last
week[6] and released yesterday[7]!


Can you share with us the long term vision for what the workflow is 
going to look like here? I've recently seen a few cases where 
experienced develoeprs who have either never contributed to gecko before 
or contribute infrequently tried to get things set up for a patch to get 
into review, and it seems like there was a lot of frustration caused by 
accidental complexity that's mostly hidden from people who are already 
up and running. Some of the issues encountered seemed to be:


* Have to make a choice early on about whether to learn a relatively 
unfamiliar (to the majority of developers) VCS (mercurial), use a 
slightly unorthodox git setup with slow initial clone (cinnabar), or use 
the largely unsupported (?) GitHub clone.


* Cloning the repository doesn't provide you with the right tooling to 
actually request review on a patch. You have to download something else 
and — particularly if you wrote the patch as a series of commits — 
there's a choice of tools at various levels of completeness. If you use 
something backed by arcanist this probably involves installing 
system-level dependencies that aren't handled by mach bootstrap.


* It's not obvious to people that patches can't go up for review without 
a preexisting bug, and won't actually be reviewed unless they specify a 
reviewer in the commit message (or go into Phabricator and add a 
reviewer after the fact).


I appreciate that moving to new tooling is a tricky process and that's 
why there are rough edges at the moment. But it would be really useful 
to be able to tell people that the issues they are facing are understood 
to be pain points and are going to go away in the future :)



Re: Announcing new test platform "Android 7.0 x86"

2018-11-13 Thread James Willcox
This should be running GeckoView in e10s mode. We only support one child
process right now, but we may try to increase that soonish. Resource
consumption is a big concern. The parent process is shared with the host
application right now, but we would like to split that out as well.

James

On Fri, Nov 2, 2018 at 3:00 PM Ehsan Akhgari 
wrote:

> Thanks a lot, this is great news!
>
> What's the process model configuration for this testing platform?  Do
> these tests run in single process mode or are they running in some e10s
> like environment?  Is there some documentation that explains what runs in
> which process?
>
> Thanks,
> Ehsan
>
> On Thu, Nov 1, 2018 at 5:44 PM Geoffrey Brown  wrote:
>
>> This week some familiar tier 1 test suites began running on a new test
>> platform labelled "Android 7.0 x86" on treeherder. Only a few test suites
>> are running so far; more are planned.
>>
>> Like the existing "Android 4.2" and "Android 4.3" test platforms, these
>> tests run in an Android emulator running in a docker container (the same
>> Ubuntu-based image used for linux64 tests).  The new platform runs an x86
>> emulator using kvm acceleration, enabling tests to run much, much faster
>> than on the older platforms. As a bonus, the new platform uses Android 7.0
>> ("Nougat", API 24) - more modern, more relevant.
>>
>> This test platform was added to support geckoview testing. Tests run in
>> the
>> geckoview-based TestRunnerActivity (not Firefox for Android).
>>
>> To reproduce the main elements of this test environment locally:
>>  - build for Android x86 (mozconfig with --target=i686-linux-android)
>>  - 'mach android-emulator' or explicitly 'mach android-emulator --version
>> x86-7.0'
>>  - install the geckoview androidTest apk
>>  - run your test command using --app to specify the geckoview test app,
>> something like 'mach mochitest ... --app=org.mozilla.geckoview.test'
>>
>> Great thanks to the many people who have helped enable this test platform,
>> especially :wcosta for help with taskcluster and :jchen for investigating
>> test failures.
>>
>
>
> --
> Ehsan
>


Re: Intent to implement and ship: Unprefix -moz-user-select, unship mozilla-specific values.

2018-11-13 Thread James Graham

On 11/11/2018 17:57, Emilio Cobos Álvarez wrote:

web-platform-tests: Test coverage for all the values is pre-existing. 
There's unfortunately little coverage in WPT, but a lot in our selection 
and contenteditable tests.


Can we upstream some of these tests to wpt? I don't know if there 
are/were technical barriers that would prevent us doing that, but if 
user gestures are required, the new testdriver APIs might fill the gap, 
and if there is some other piece of missing functionality I would be 
interested to know what that is.



Re: Intent to implement: Support Referrer Policy for

2018-11-01 Thread James Graham

On 01/11/2018 11:03, Thomas Nguyen wrote:

The link
https://searchfox.org/mozilla-central/search?q=script-tag%2Finsecure-protocol.keep-origin-redirect.http.html&path=
is not covered all the tests. Thanks James for pointing it out.
In fact, we have synced all script-tag tests which were added in 
https://github.com/web-platform-tests/wpt/pull/10976/commits/78a3837eb9cc4fb1bd55f21a9823eda82694d3d2
The tests should provide sufficient coverage of the feature. All the 
tests are disabled now, for example:


It looks like the tests are marked as expected: FAIL rather than 
disabled, and checking treeherder I'm finding results so I think they 
are indeed already running (sorry if this is a bit pedantic, I was just 
making sure I understood the situation).



Re: Intent to implement: Support Referrer Policy for

2018-11-01 Thread James Graham

On 31/10/2018 14:03, Thomas Nguyen wrote:

Summary: This implementation adds Referrer Policy support to the 

Re: web-platform-tests that fail only in Firefox (from wpt.fyi data)

2018-10-17 Thread James Graham

On 17/10/2018 10:12, James Graham wrote:

On 17/10/2018 01:23, Emilio Cobos Álvarez wrote:

Hi Philip,

Do you know how reftests are run in order to get that data?

I'm particularly curious about this Firefox-only failure:

   css/selectors/selection-image-001.html

It passes both on our automation and locally. I'm curious because I 
was the author of that test (whoops) and the Firefox fix (bug 1449010).


Does it use the same mechanism as our automation to wait for image 
decodes and such? Is there any way to see the test images?


It's using the same harness as we use in gecko, so it should be giving 
the same results, but of course it's possible that there's some 
difference in the configuration that could cause different results for 
some tests.


Unfortunately there isn't yet a way to see the images; because of the 
number of failures per run, and the number of runs, putting all the 
screenshots in the logs would be prohibitively large, but there is a 
plan to start uploading previously unseen screenshots to wpt.fyi [1]


OK, I investigated this and it turns out that we accidentally started 
uploading tbpl-style logs with screenshots for full runs when we turned 
on taskcluster for PRs. So the screenshot is available through


https://hg.mozilla.org/mozilla-central/raw-file/tip/layout/tools/reftest/reftest-analyzer.xhtml#logurl=https://taskcluster-artifacts.net/U6OIGr7ZTjurDYjy_KgyCg/0/public/results/log_tbpl.log


Re: web-platform-tests that fail only in Firefox (from wpt.fyi data)

2018-10-17 Thread James Graham

On 17/10/2018 01:23, Emilio Cobos Álvarez wrote:

Hi Philip,

Do you know how reftests are run in order to get that data?

I'm particularly curious about this Firefox-only failure:

   css/selectors/selection-image-001.html

It passes both on our automation and locally. I'm curious because I was 
the author of that test (whoops) and the Firefox fix (bug 1449010).


Does it use the same mechanism as our automation to wait for image 
decodes and such? Is there any way to see the test images?


It's using the same harness as we use in gecko, so it should be giving 
the same results, but of course it's possible that there's some 
difference in the configuration that could cause different results for 
some tests.


Unfortunately there isn't yet a way to see the images; because of the 
number of failures per run, and the number of runs, putting all the 
screenshots in the logs would be prohibitively large, but there is a 
plan to start uploading previously unseen screenshots to wpt.fyi [1]


Having said that the infrastructure is all containerised and it's 
possible to repeat the run locally with relatively little effort. I'm 
happy to help out with that if you like.


[1] https://github.com/web-platform-tests/wpt.fyi/issues/57


Re: Multi-browser web-platform-tests results dashboard

2018-10-03 Thread James Graham

On 03/10/2018 13:32, Boris Zbarsky wrote:

On 10/3/18 5:21 AM, James Graham wrote:
So the net effect of all this is that bug fixes won't appear on the 
dashboard until the first upstream wpt commit after they are present 
in a nightly release.


Right.

I assume for now the volume of commits to wpt is such that this is OK, 
but it might make sense to do updates of the experimental dashboard 
daily if nothing else has triggered an update...


There are currently about 100 PRs/week merged in web-platform-tests, and 
I see no reason that will go down, so the chance of going a full weekday 
without merges (excepting cases of infra bustage or similar) seems 
rather small. However if you notice this problem let me know; it's 
certainly possible to solve.



Re: Multi-browser web-platform-tests results dashboard

2018-10-03 Thread James Graham

On 02/10/2018 21:27, Boris Zbarsky wrote:

On 10/2/18 1:38 PM, James Graham wrote:
Experimental (i.e. nightly/dev) builds of Firefox and Chrome are run 
on Linux using Taskcluster after each commit to web-platform-tests.


Would a commit to Firefox that fixes some tests and just touches wpt 
.ini files to mark those tests as passing trigger such a run?  It sounds 
like it would not...


No; I should have been more clear. Commits to mozilla repos aren't 
involved; it's commits to upstream web-platform-tests repo on GitHub 
that trigger runs. We are using Taskcluster via (a slight customisation 
of) the GitHub integration not via the usual gecko scheduling.


So the net effect of all this is that bug fixes won't appear on the 
dashboard until the first upstream wpt commit after they are present in 
a nightly release.



Multi-browser web-platform-tests results dashboard

2018-10-02 Thread James Graham
https://wpt.fyi is a dashboard containing the results of the full 
web-platform-tests suite in multiple versions of current browsers.


The default view shows test results in the latest stable releases of 
desktop Firefox/Chrome/Edge/Safari. There are several alternative views 
that are likely to be of interest to gecko developers:


* The "label" url parameter can be used to change the browser channel. 
https://wpt.fyi/?label=experimental shows the latest test results in 
Firefox Nightly, Chrome Dev and latest Edge Insider / Safari TP. 
label=beta gives results for Chrome and Firefox betas.


* The diff parameter in combination with others can be used to construct 
a two-way comparison. For example 
https://wpt.fyi/results/?products=firefox[beta],firefox[stable]&aligned=true&diff 
shows Firefox beta vs stable, and 
https://wpt.fyi/results/?products=chrome[experimental],firefox[experimental]&aligned=true&diff 
shows a comparison between latest Firefox and Chrome. The diff 
comparison is currently lossy  as it shows the sum of changes, which can 
misrepresent the case where /different/ tests pass and fail across the 
two browsers; a solution to this is currently under development.
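The lossiness is easy to demonstrate with toy data: when each browser 
fails a different one of two tests, the aggregate difference is zero 
even though no test behaves the same in both (the data model here is 
invented for illustration, not the dashboard's actual one):

```python
results_a = {"test1": "PASS", "test2": "FAIL"}
results_b = {"test1": "FAIL", "test2": "PASS"}

# Summed counts look identical across the two browsers...
passes_a = sum(r == "PASS" for r in results_a.values())
passes_b = sum(r == "PASS" for r in results_b.values())
print(passes_a - passes_b)  # 0: the aggregate diff hides the divergence

# ...but a per-test comparison shows every single test differs.
differing = [t for t in results_a if results_a[t] != results_b[t]]
print(differing)  # ['test1', 'test2']
```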


* The "Interoperability" view (link on the main page) is an attempt to 
show which platform features have good/poor interop, as defined by the 
fraction of tests that have the same result in multiple browsers.


UI to replace the current LBI ("location bar interface" :) is planned work.

If there are changes that you need to make wpt.fyi better meet your 
needs please speak to me or file a bug at 
https://github.com/web-platform-tests/wpt.fyi/issues


Thanks to our colleagues in the Chrome Platform Predictability team and 
their associates at Bocoup who did the bulk of the work to make the 
dashboard happen.


= (Possibly) FAQ =

How often are the results updated?

Experimental (i.e. nightly/dev) builds of Firefox and Chrome are run on 
Linux using Taskcluster after each commit to web-platform-tests. The 
test runs can be found on 
https://github.com/web-platform-tests/wpt/commits/master (click on the 
green tick). Beta builds are run weekly and stable builds daily. Safari 
and stable Edge are run on a buildbot instance hosted by Bocoup and are 
updated roughly daily (there are also buildbot runs of Firefox and 
Chrome stable). Edge insider results are provided by Microsoft, sometimes.


What browser settings are used?

For Firefox we apply the same profile settings we use for internal CI 
from testing/profiles (but not test-specific prefs set in 
testing/web-platform/meta). For stable/beta builds the prefs are taken 
from the user.js files in release/beta repositories. Note that this 
means that non-experimental builds are run with whatever prefs are 
enabled there. I'd like to clean this up so that beta/release more 
closely match released beta/release. Chrome uses 
--enable-experimental-web-platform-features for experimental builds and 
only prefs required for the test harness to run in other configurations. 
I believe that Edge and Safari are also run with minimal changes from 
the default pref set.


How are the tests run?

Tests are run using the `wpt run` command in the web-platform-tests 
repository. For example you can replicate the results from firefox beta 
running something like `wpt run --channel beta --install-browser firefox 
`. The docker image used for Taskcluster runs is in tools/docker/. 
The underlying harness is wptrunner which also used for Gecko CI.




Re: PSA: wpt MANIFEST.json has moved out of tree

2018-09-26 Thread James Graham

On 24/09/2018 13:43, James Graham wrote:

If you notice regressions from this change, please file a bug in the 
Testing::web-platform-tests component and needinfo me.


Due to bug 1493674 we were mistakenly looking for test metadata in the 
objdir rather than the source dir. If you notice that web-platform-tests 
are unexpectedly failing or not using the expected prefs, please ensure 
you have the commits from that bug in your tree, and delete 
/_tests/web-platform/wptrunner.local.ini; it should be recreated 
with the correct paths when you next run wpt tests.


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1493674


Re: PSA: wpt MANIFEST.json has moved out of tree

2018-09-24 Thread James Graham



On 24/09/2018 14:01, Boris Zbarsky wrote:

On 9/24/18 8:43 AM, James Graham wrote:
Thanks to great work by Outreachy intern Ahilya Sinha (:Cactusmachete) 
[1], the in-tree wpt MANIFEST.json files are no longer used and will 
soon be removed.


That's great news.  :)

Invoking `mach wpt` will now cause a recent wpt manifest to be 
downloaded from Taskcluster into the objdir (if not already present) 
and updated to match the source tree.


Just to make sure I understand, what happens in an offline scenario? 
Does it basically "update" from an empty starting manifest or something 
else?


Yes, that. Nothing is supposed to break if you're entirely offline, but 
a full manifest update is rather slow (technical details: because all 
the metadata is extracted from the test files, and we currently parse 
them using a slow-but-correct Python HTML parser. Switching to html5ever 
or similar is the most obvious path to fix this, but comes with 
additional challenges in terms of workflow and compatibility).



PSA: wpt MANIFEST.json has moved out of tree

2018-09-24 Thread James Graham
Thanks to great work by Outreachy intern Ahilya Sinha (:Cactusmachete) 
[1], the in-tree wpt MANIFEST.json files are no longer used and will 
soon be removed.


Invoking `mach wpt` will now cause a recent wpt manifest to be 
downloaded from Taskcluster into the objdir (if not already present) and 
updated to match the source tree. Running `mach wpt-manifest-update` 
manually should no longer be necessary. Hopefully this fixes the many 
issues caused by having this file under source control.


The tradeoff for auto-updating the manifest is a corresponding delay in 
startup for wpt tests. In order to reduce this as much as possible, 
there is ongoing work to speed up manifest updates [2].


If you notice regressions from this change, please file a bug in the 
Testing::web-platform-tests component and needinfo me.


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1473915
[2] https://github.com/web-platform-tests/wpt/pull/12553


Re: Intent to Implement: Storage Access API

2018-09-10 Thread James Graham

On 07/09/2018 21:27, Ehsan Akhgari wrote:

Very cool, I did not know this!  It seems like test_driver.bless() is 
what we need here for simulating a user activation gesture.


However it sounds like in this case you may need to add test-only APIs
for manipulating internal browser state. There are two possible
approaches here:

* Add a feature to WebDriver for manipulating this data. This can be
specified in the StorageManager spec and is appropriate if we want to
add something that can be used by authors as part of the automated
testing for their website.

* Add test-only DOM APIs in the StorageManager spec that we can enable
when running the browser in test mode. Each browser would be
expected to
have some implementation-specific means to enable these APIs when under
test (e.g. a pref). WebUSB has an example of this approach.


This is something that I should get some feedback on from at least 
WebKit before deciding on a path forward, but from a Gecko perspective, 
we basically need to call one function at the beginning and one function 
at the end of each test, so looks like the first option would be 
sufficient.  Do you happen to have an example of that handy?  I've never 
done something like this before, and I do appreciate some pointers to 
get started.


You want an example of a spec adding a WebDriver-based test API? (I'm 
not 100% sure I interpreted your message correctly). The permissions 
spec has one [1].


[1] https://w3c.github.io/permissions/#automation


Re: Intent to Implement: Storage Access API

2018-09-07 Thread James Graham

web-platform-tests: Our implementation unfortunately doesn’t come with
web-platform-tests, for two reasons.  One is that there is currently no way
to mock user gestures in web platform tests [4], and the second reason is
that furthermore, our implementation also depends on being able to
manipulate the URL Classifier backend for testing purposes.


So this isn't true, however the present state of affairs may not yet be 
useful to you.


It is currently possible to make basic user actions in wpt using the 
testdriver API [1]. I am helping extend this to allow more general 
gestures via WebDriver-like actions [2].


However it sounds like in this case you may need to add test-only APIs 
for manipulating internal browser state. There are two possible 
approaches here:


* Add a feature to WebDriver for manipulating this data. This can be 
specified in the StorageManager spec and is appropriate if we want to 
add something that can be used by authors as part of the automated 
testing for their website.


* Add test-only DOM APIs in the StorageManager spec that we can enable 
when running the browser in test mode. Each browser would be expected to 
have some implementation-specific means to enable these APIs when under 
test (e.g. a pref). WebUSB has an example of this approach.



[4] https://github.com/web-platform-tests/wpt/issues/7156



[1] https://web-platform-tests.org/writing-tests/testdriver.html
[2] https://github.com/web-platform-tests/wpt/pull/12726
[3] https://wicg.github.io/webusb/test/


Re: ./mach try fuzzy: A Try Syntax Alternative

2018-08-06 Thread James Graham

On 06/08/2018 01:25, Botond Ballo wrote:

Is there an easy way to do a T-push (builds on all platforms, tests on
one platform only) with |mach try fuzzy|?

I usually do T-pushes using try syntax, but Trychooser seems to be out
of date when it comes to building a T-push syntax for Android, so I'm
at a loss as to how to do a T-push for Android right now.


There are a couple of options. Interactively you can select all the 
builds you want, press ctrl+a (or whatever the select-all keybinding you 
have configured is), then do the same again with the tests you want, 
then accept all your choices.


If you want to construct a single query string that can be reused with 
--save, something like 'test-linux64 | build !ccov !pgo !msvc' seems to 
select all builds and tests just on linux64. Unfortunately I can't 
figure out any way to logically group expressions, which does make 
composing multiple terms more tricky.
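For illustration, the fzf-style semantics these queries use (space-separated terms AND together, `|` ORs a term into the previous group, `!` negates, a leading `'` requests exact matching) can be modelled with a toy matcher. This is an assumption-laden sketch, not the real fzf or `mach try fuzzy` code; it treats every term as a plain substring test.

```python
def term_matches(task, term):
    # Toy semantics: `!` negates, a leading `'` is stripped (real fzf
    # uses it to force exact rather than fuzzy matching), and every
    # term is treated as a plain substring test.
    if term.startswith("!"):
        return term[1:] not in task
    return term.lstrip("'") in task

def query_matches(task, query):
    # Space-separated terms AND together; a `|` token ORs the next
    # term into the previous group, so "a | b c" means (a or b) and c.
    groups, or_next = [], False
    for tok in query.split():
        if tok == "|":
            or_next = True
            continue
        if or_next and groups:
            groups[-1].append(tok)
        else:
            groups.append([tok])
        or_next = False
    return all(any(term_matches(task, t) for t in g) for g in groups)
```

With the query from the message, `query_matches("build-linux64/opt", "'test-linux64 | build !ccov !pgo !msvc")` is true, while any ccov/pgo/msvc task is excluded.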



Re: Developer Outreach - Web Platform Research and Recommendations

2018-08-01 Thread James Graham

On 31/07/18 10:34, James Graham wrote:
One of the underlying concerns I have here is that there are a lot of 
separate groups working on different parts of this. As one of the people 
involved, I would nevertheless struggle to articulate the overall 
Mozilla strategy to ensure that the web remains a compelling platform 
supporting multiple engine implementations. I believe it's important 
that we do better here and ensure that all these different teams have a 
shared understanding to set processes and priorities.


As a followup, I just came across [1], which details an explicit 
cross-functional strategy at Google for developer experience and web 
compatibility.


[1] 
https://medium.com/ben-and-dion/mission-improve-the-web-ecosystem-for-developers-3a8b55f46411



Re: Developer Outreach - Web Platform Research and Recommendations

2018-07-31 Thread James Graham

On 27/07/2018 21:26, Dietrich Ayala wrote:

Additionally, much of what we're proposing is based directly on the
interviews we had with people in different roles in the development of the
web platform. Common themes were: lack of data for making
selection/prioritization decisions, visibility of what is in flight both in
Gecko and across all vendors, lack of overall coordination across vendors,
and visibility into adoption. Those themes are the first priority, and
drove this first set of actions.

Much of what you discuss is, as you noted, far better than in the past, so
maybe is why they didn't come up much in the interviews?


Without knowing what was in those interviews it's hard to conjecture 
about the reasons for any differences. All I can do is point out the 
issues I perceive.



2) Write a testsuite for each feature, ensure that it's detailed enough
to catch issues and ensure that we are passing those tests when we ship
a new feature.

2) We now have a relatively well established cross-browser testsuite in
web-platform-tests. We are still relatively poor at ensuring that
features we implement are adequately tested (essentially the only
process here is the informal one related to Intent to Implement emails)
or that we actually match other implementations before we ship a feature.

Can you share more about this, and some examples? My understanding is that
this lies mostly in the reviewer's hands. If we have these testsuites, are
they just not in automation, or not being used?


We have the testsuites and they are in automation, but our CI 
infrastructure is only designed to tell us about regressions relative to 
previous builds; it's not suitable for flagging general issues like "not 
enough of these tests pass".


Comparisons between browsers are (as of recently) available at wpt.fyi 
[1], but we don't have any process that requires people to look at this 
data.


We also know that many features which have some tests don't have enough 
tests (e.g. a recent XHR bugfix didn't cause any tests to start passing, 
indicating a problem with the coverage). This is a hard problem in 
general, of course, but even for new features we don't have any 
systematic approach to ensuring that the tests actually cover the 
feature in a meaningful way.



3) Ensure that the performance profile of the feature is good enough
compared to other implementations (in particular if it's relatively easy
to hit performance problems in one implementation, that may prevent it
being useful in that implementation even though it "works")

3) Performance testing is obviously hard and whilst benchmarks are a
thing, it's hard to make them representative of the entire gamut of
possible uses of a feature. We are starting to work on more
cross-browser performance testing, but this is difficult to get right.
The main strategy seems to be just to try to be fast in general.
Devtools can be helpful in bridging the gap here if it can identify the
cause of slowness either in general or in a specific engine.

There is a lot of focus and work on perf generally, so not something that
really came up in the interviews. I'm interested in learning about gaps in
developer tooling, if you have some examples.


I note that there's a difference between "perf generally" and 
"compatibility-affecting perf" (although both are important and the 
latter is a subset of the former). Perf issues affect compatibility when 
they don't exist in other engines with sufficiently high combined 
marketshare. So something that is slow in Firefox but fast in all other 
browsers is likely to be used in real sites, whereas a feature that's 
fast in Firefox but slow in all other browsers probably won't get used 
much in the wild.


In terms of specific developer tooling, I don't have examples beyond the 
obvious that developers should be able to profile in a way that allows 
them to figure out which part of their code is causing slowness in 
particular implementations, in much the same way you would expect in 
other development scenarios.



4) Ensure that developers using the feature have a convenient way to
develop and debug the feature in each implementation.

4) This is obviously the role of devtools, making it convenient to
develop inside the browser and possible to debug
implementation-specific problems even where a developer isn't using a
specific implementation all the time. Requiring devtools support for
new features where it makes sense seems like a good step forward.

We've seen success and excitement when features are well supported with
tooling. We're asserting that *always* shipping tooling concurrently with
features in release will amplify adoption.


I entirely agree that coordinating tooling with features makes sense.


5) Ensure that developers have a convenient way to do ongoing testing of
their site against multiple different implementations so that it
continues to work over time.

5) This is something we support via WebDriver, but it doesn't cover all
features, and there seems to be some movement toward vendor-specific
replacements (e.g. Google's Puppeteer), which prioritise making
development and testing in a single browser easy at the expense of
making cross-browser development and testing hard.

Re: Proposed W3C Charters: Accessibility (APA and ARIA Working Groups)

2018-07-27 Thread James Teh
A final clarification:

On Fri, Jul 27, 2018 at 4:36 PM, Tantek Çelik wrote:

> Even if we (Mozilla) are delayed with implementation, we can
> still champion this stuff. We can still nominate someone to
> participate in the WG with subject matter expertise to help guide what
> we think will be more implementable features.
>
1.  Superficially (I haven't dug into it in detail), I don't believe
anything proposed in ARIA 1.1 or 1.2 is likely to be "not implementable" or
even "costly to implement" for browser vendors. It's more that the Mozilla
accessibility team currently doesn't have anyone who can devote time to
working on new spec things. To put it melodramatically, with current
resourcing, it's likely to take us months to even get to reading the spec
or implement the simplest of spec additions. I really hope this does not
remain the case for too long, but that's how it is right now.
2. For the same reason, we also don't have anyone with subject matter
expertise that's able (due to time constraints) to participate meaningfully
in the WG.

Jamie


Re: Proposed W3C Charters: Accessibility (APA and ARIA Working Groups)

2018-07-26 Thread James Teh
TL;DR: Thanks for the further explanation/clarification. I (reluctantly)
agree that these concerns make sense and have nothing else to add as far as
the response goes.

On Fri, Jul 27, 2018 at 2:33 PM, Tantek Çelik wrote:

> > The only thing worth
> > noting is that while you say there's no need to delay for years, that may
> > well be what ends up happening, and Mozilla will essentially be "blocking
> > progress" on this front.
>
> If there were only two browser vendors (including Mozilla) then yes
> your statement would be correct.
>
> However, we have (at least) four major browser vendors, and thus it is
> incorrect to assert that Mozilla alone could be "blocking progress"
> when any 2 of the other 3 browser vendors could implement something
> and have it exit CR.
>
That's fair. I suppose there's some (now irrelevant) historical context
here: it used to be that Mozilla championed this stuff and drove others to
push accessibility forward. At present, that is not the case, and I'm
concerned it'll now be very hard to make much progress in accessibility.
Still, while that's kind of sad, I take your point that this is irrelevant
to the requirements of the charter.


> > We want "limited resources" to drive better
> > standards, yet with our resources in accessibility as limited as they
> are at
> > this point, it's entirely likely we won't get around to implementing new
> > ARIA stuff for years.
>
> That may well be. If that is your assessment, we should add that to
> our Charter response and be quite upfront that we are unlikely to
> implement new ARIA stuff for (a few?) years, and perhaps ask
> (non-F.O.) for the WG to be postponed accordingly.
>
Honestly, there is a lot of uncertainty at this point; I certainly couldn't
give any "formal" statement concerning what we might or might not
implement. FWIW, I believe Mozilla *should* implement this stuff, but that
all depends on me convincing leadership that we should provide more
resources for accessibility. :) Again, irrelevant to our charter response.


> In addition per your note about "still haven't implemented parts of
> ARIA 1.1, let alone ARIA 1.2.", if you know of any features in those
> specs which *no browser implements* we should call those out, and ask
> that the Charter explicitly dictate dropping them in the next version
> of ARIA for failure to get uptake.
>
I'd say there's at least one implementation (probably two) of most ARIA 1.1
stuff. I'm not sure about ARIA 1.2; I haven't even had a chance to look at
it yet.

Jamie


Re: Proposed W3C Charters: Accessibility (APA and ARIA Working Groups)

2018-07-26 Thread James Teh
That all seems reasonable from a process perspective. The only thing worth
noting is that while you say there's no need to delay for years, that may
well be what ends up happening, and Mozilla will essentially be "blocking
progress" on this front. We want "limited resources" to drive better
standards, yet with our resources in accessibility as limited as they are
at this point, it's entirely likely we won't get around to implementing new
ARIA stuff for years. At that point, we have a conflict: we have Mozilla
objecting to the minimum implementation exception, while at the same time
not resourcing accessibility sufficiently to make any reasonable progress
at all. I'm not sure we can have it "both ways".

Jamie

On Fri, Jul 27, 2018 at 11:27 AM, Tantek Çelik wrote:

> On Thu, Jul 26, 2018 at 6:04 PM, James Teh  wrote:
> > On Fri, Jul 27, 2018 at 2:09 AM, L. David Baron 
> wrote:
> >
> >> So some comments on the ARIA charter at
> >> https://www.w3.org/2018/03/draft-aria-charter :
> >> ...
> >> I guess it seems OK to have only one implementation
> >> if there's really only going to be one implementation on that
> >> platform... but allowing it in general (i.e.,  seems less than ideal,
> >
> > It is. However, the problem is that accessibility in general is severely
> > lacking in resources across browser vendors (especially Mozilla!; we're
> > currently working with just 2 engineers). Even where browser vendors
> agree
> > on how something *should* be done, it often takes months or years before
> it
> > gets implemented, primarily due to the aforementioned resource shortage.
> We
> > (Mozilla) still haven't implemented parts of ARIA 1.1, let alone ARIA
> 1.2.
> > The reality is that if multiple implementations were required for
> sign-off,
> > it'd probably delay the process for years.
>
> Respectfully, I disagree with that use of process, and those
> unimplemented parts of ARIA 1.1, let alone ARIA 1.2 should probably
> have been dropped and/or postponed to future versions.
>
> The reality is that if a standard does not reflect what is
> implemented/implementable (and yes, economic constraints / costs,
> resource are a legitimate reason to criticize something as not being
> implementable), then it should not be in the standard.
>
> A better answer when something lacks multiple implementations is:
> 1. if there is only one implementation, move it to an informative
> (non-normative) appendix
> 2. if there are zero implementations, cut it and postpone it to the
> next +0.1 version
>
> By following such a methodology, there is no need to delay "for
> years". You ship the spec (go to PR) with what happens to be supported
> as of that point in time, then work on the next +0.1 version to ship
> the next year and repeat, hopefully increasing the number of features
> that are interoperable implemented.
>
> > and
> >> allowing only 75% of mappings to be implemented to count as
> >> success seems pretty bad.
> >>
> > Same issue as above regarding limited resources.  Still, this one is a
> > little more concerning because it raises questions about whether the
> > remaining 25% will *ever* be implementable.
>
> Right, same issue with implementability, and same answer (1, 2 above).
>
> We (especially Mozilla) want "limited resources" to be a forcing
> function to drive better standards, simpler to implement, test, debug,
> secure, etc.
>
> No user benefits from unimplemented standards.
>
> If anything, such "specifiction" causes harm in that it can cause
> false expectations of what "works", wasting web developer time and
> resources.
>
> Tantek
>


Re: Proposed W3C Charters: Accessibility (APA and ARIA Working Groups)

2018-07-26 Thread James Teh
On Fri, Jul 27, 2018 at 2:09 AM, L. David Baron  wrote:

> So some comments on the ARIA charter at
> https://www.w3.org/2018/03/draft-aria-charter :
> ...
> I guess it seems OK to have only one implementation
> if there's really only going to be one implementation on that
> platform... but allowing it in general (i.e.,  seems less than ideal,

It is. However, the problem is that accessibility in general is severely
lacking in resources across browser vendors (especially Mozilla!; we're
currently working with just 2 engineers). Even where browser vendors agree
on how something *should* be done, it often takes months or years before it
gets implemented, primarily due to the aforementioned resource shortage. We
(Mozilla) still haven't implemented parts of ARIA 1.1, let alone ARIA 1.2.
The reality is that if multiple implementations were required for sign-off,
it'd probably delay the process for years.

and
> allowing only 75% of mappings to be implemented to count as
> success seems pretty bad.
>
Same issue as above regarding limited resources.  Still, this one is a
little more concerning because it raises questions about whether the
remaining 25% will *ever* be implementable.

Also, the two references to a deliverable of the SVG working group
> when the SVG working group isn't currently chartered seems
> problematic.
>
Ah, yes, that does seem like a problem.

Jamie


Re: Developer Outreach - Web Platform Research and Recommendations

2018-07-26 Thread James Graham

On 26/07/2018 19:15, Dietrich Ayala wrote:


Why are we doing this?

The goals of this effort are to ensure that the web platform technologies we're 
investing in are meeting the highest priority needs of today's designers and 
developers, and to accelerate availability and maximize adoption of the 
technologies we've prioritized to meet these needs.


I think this is a great effort, and all the recommendations you make 
seem sensible.


Taking half a step back, the overriding goal seems to be to make 
developing for the web platform a compelling experience. I think one way 
to subdivide this overall goal is into two parts


* Ensure that the features that are added to the platform meet the 
requirements of content creators (i.e. web developers).


* Ensure that once shipped, using the features is as painless as 
possible. In particular for the web this means that developing content 
that works in multiple implementations should not be substantially more 
expensive than the cost of developing for a single implementation.


The first point seems relatively well covered by your plans; it's true 
that so far the approach to selecting which features to develop has been 
ad-hoc, and there's certainly room to improve.


The second point seems no less crucial to the long term health of the 
web; there is a lot of evidence that having multiple implementations of 
the platform is not a naturally stable equilibrium and in the absence of 
continued effort to maintain one it will drift toward a single dominant 
player and de-facto vendor control. The cheaper it is to develop content 
that works in many browsers, the easier it will be to retain this 
essential distinguishing feature of the web.


There are a number of things we can do to help ensure that the cost to 
developers of targeting multiple implementations is relatively low:


1) Write standards for each feature, detailed enough to implement 
without ambiguity.


2) Write a testsuite for each feature, ensure that it's detailed enough 
to catch issues and ensure that we are passing those tests when we ship 
a new feature.


3) Ensure that the performance profile of the feature is good enough 
compared to other implementations (in particular if it's relatively easy 
to hit performance problems in one implementation, that may prevent it 
being useful in that implementation even though it "works")


4) Ensure that developers using the feature have a convenient way to 
develop and debug the feature in each implementation.


5) Ensure that developers have a convenient way to do ongoing testing of 
their site against multiple different implementations so that it 
continues to work over time.


There are certainly more things I've missed.

On each of those items we are currently at a different stage of progress:

1) Compared to 14 years ago, we have got a lot better at this. Standards 
are usually written to be unambiguous and produce defined behaviour for 
all cases. Where they fall short of this we aren't always disciplined at 
providing feedback on the problems, and there are certainly other areas 
we can improve.


2) We now have a relatively well established cross-browser testsuite in 
web-platform-tests. We are still relatively poor at ensuring that 
features we implement are adequately tested (essentially the only 
process here is the informal one related to Intent to Implement emails) 
or that we actually match other implementations before we ship a feature.


3) Performance testing is obviously hard and whilst benchmarks are a 
thing, it's hard to make them representative of the entire gamut of 
possible uses of a feature. We are starting to work on more 
cross-browser performance testing, but this is difficult to get right. 
The main strategy seems to be just to try to be fast in general. Devtools 
can be helpful in bridging the gap here if it can identify the cause of 
slowness either in general or in a specific engine.


4) This is obviously the role of devtools, making it convenient to 
develop inside the browser and possible to debug implementation-specific 
problems even where a developer isn't using a specific implementation 
all the time. Requiring devtools support for new features where it makes 
sense seems like a good step forward.


5) This is something we support via WebDriver, but it doesn't cover all 
features, and there seems to be some movement toward vendor-specific 
replacements (e.g. Google's Puppeteer), which prioritise making 
development and testing in a single browser easy at the expense of 
making cross-browser development / testing hard. This seems like an area 
where we need to do much better, by ensuring we can offer web developers 
a compelling story on how to test their products in multiple browsers.


So, to bring this back to your initiative, it seems that the only point 
above you really address is number 4 by recommending that devtools 
support is required for shipping new features. I fully agree that this 
is a good recommendation.

Re: PSA: Re-run old (non-syntax) try pushes with |mach try again|

2018-07-17 Thread James Graham

On 17/07/2018 21:16, Nicholas Alexander wrote:

Ahal,

On Tue, Jul 17, 2018 at 11:55 AM, Andrew Halberstadt wrote:


While |mach try fuzzy| is generally a better experience than try
syntax, there are a few cases where it can be annoying. One
common case was when you selected a bunch of tasks in the
interface and pushed. Then at a later date you wanted to push
the exact same set of tasks again. This used to be a really poor
experience as you needed to re-select all the same tasks
manually.

As of now, you can use |mach try again| instead. The general
workflow is:

This is awesome, thank you for building it!

Can it be extended to "named pushes"?  That is, right now I use my shell 
history to do `mach try fuzzy -q "'build-android | 'robocop", but nobody 
else will find that without me telling them, and it won't be 
automatically updated when robocop gets renamed.  That is, if I could 
`mach try fuzzy --named android-tier1` or something I could save myself 
some manual command editing and teach other people what a green try run 
means in my area.


./mach try fuzzy --save android-tier1 -q "'build-android | 'robocop"

And then run with

./mach try fuzzy --preset android-tier1

I think that's what you want? There isn't a way to share it or anything, 
but it works well for the use case of "I make the same set of try pushes 
repeatedly over many patches".



Re: Proposed W3C Charters: Accessibility (APA and ARIA Working Groups)

2018-07-11 Thread James Teh
I (and others in the accessibility team) think we should support these
charters. The ARIA working group is especially important in the future
evolution of web accessibility. I have some potential concerns/questions
regarding the personalisation semantics specifications from APA, but
they're more spec questions at this point and I don't think they need to be
raised with respect to charter. Certainly, cognitive disabilities is an
area that definitely needs a great deal more attention on the web, and the
APA are seeking to do that.

Thanks.

Jamie

On Wed, Jul 11, 2018 at 3:57 PM, L. David Baron  wrote:

> The W3C is proposing revised charters for:
>
>   Accessible Platform Architectures (APA) Working Group
>   https://www.w3.org/2018/03/draft-apa-charter
>
>   Accessible Rich Internet Applications (ARIA) Working Group
>   https://www.w3.org/2018/03/draft-aria-charter
>
>   https://lists.w3.org/Archives/Public/public-new-work/2018Jun/0003.html
>
> Mozilla has the opportunity to send comments or objections through
> Friday, July 27.
>
> The changes relative to the previous charters are:
> https://services.w3.org/htmldiff?doc1=https%3A%2F%
> 2Fwww.w3.org%2F2015%2F10%2Fapa-charter&doc2=https%3A%
> 2F%2Fwww.w3.org%2F2018%2F03%2Fdraft-apa-charter
> https://services.w3.org/htmldiff?doc1=https%3A%2F%
> 2Fwww.w3.org%2F2015%2F10%2Faria-charter&doc2=https%3A%
> 2F%2Fwww.w3.org%2F2018%2F03%2Fdraft-aria-charter
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
>
>


Re: Intent to ship: Changes to how offset*, client*, scroll* behave on tables

2018-07-10 Thread James Graham

On 10/07/2018 17:25, Boris Zbarsky wrote:

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=820891

Summary: In other browsers, and arguably per spec as far as cssom-view 
specs things, various geometry APIs on tables should report values for 
the table wrapper, not the table itself, because they are defined to 
work on the "first" box generated by the element.  That means that the 
caption is included in the returned values and that things like 
clientWidth should include the table border, modulo the various 
box-sizing weirdness around tables.


Right now, we are just applying the geometry APIs to the table box 
itself.  The patches in the above bug change this.


The behavior of getBoundingClientRect and getClientRects is not being 
changed here, though there is lack of interop around it as well; I filed 
https://bugs.webkit.org/show_bug.cgi?id=187524 and 
https://bugs.chromium.org/p/chromium/issues/detail?id=862205 on that.


Our new behavior aligns much better with other browsers and the spec, 
but this is a general heads-up in case there is compat fallout due to 
browser-sniffing or something...


Are there web-platform-tests covering this behaviour (both the part we 
are changing and the part we aren't)?



Re: PSA: pay attention when setting multiple reviewers in Phabricator

2018-07-05 Thread James Graham

On 05/07/2018 18:19, Mark Côté wrote:
I sympathize with the concerns here; however, changing the default would 
be a very invasive change to Phabricator, which would not only be 
complex to implement but troublesome to maintain, as we upgrade 
Phabricator every week or two.


This is, however, something we can address with our new custom 
commit-series-friendly command-line tool. We are also working towards 
the superior solution of automatically selecting reviewers based on 
module owners and peers and enforcing this in Lando.


Automatically selecting reviewers sounds like a huge improvement, 
particularly for people making changes who haven't yet internalised the 
ownership status of the files they are touching (notably any kind of 
first-time or otherwise infrequent contributor to a specific piece of 
code). So I'm very excited about this change.


That said, basing it on the list of module owners & peers seems like it 
may not be the right decision for a number of reasons:


* The number of reviews for a given module can be very large and being 
unconditionally selected for every review in a module may be overwhelming.


* The list of module owners and peers is not uniformly well maintained 
(in at least some cases it suggests that components are owned by people 
who have not been involved with the project for several years). Although 
this should certainly be cleaned up, the fact is that the current data 
is not reliable in many cases.


* Oftentimes there is substructure within a module that means that some 
people should be reviewers in certain files/directories but have no 
knowledge of other parts.


* It is usually desirable to have people perform code review for some time 
as part of the process of becoming a module owner or peer.


A better solution would be to have in-tree metadata files providing 
subscription rules for code review (e.g. a mapping of usernames to a 
list of patterns matching files). Module owners would be responsible for 
reviewing changes to these rules to ensure that automatic delegation 
happens to the correct people.
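A subscription-rules file of that sort could be consulted with only a few lines of code. This is a hedged sketch: the rule format, usernames, and patterns below are invented for illustration and are not an existing Phabricator or Lando feature.

```python
import fnmatch

# Hypothetical in-tree subscription rules: username -> file patterns.
# Both the format and the entries are illustrative only.
REVIEW_RULES = {
    "jgraham": ["testing/web-platform/*"],
    "heycam": ["layout/style/*"],
}

def suggested_reviewers(changed_files, rules=REVIEW_RULES):
    """Return every user whose patterns match at least one changed file."""
    reviewers = set()
    for user, patterns in rules.items():
        if any(fnmatch.fnmatch(path, pat)
               for path in changed_files for pat in patterns):
            reviewers.add(user)
    return reviewers
```

A module owner would then review changes to the rules file itself, which is what keeps the automatic delegation pointed at the correct people.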



Re: Fwd: WPT Developer Survey - June 2018

2018-06-20 Thread James Graham

On 19/06/2018 10:01, Andreas Tolfsen wrote:

If you run, write, or work with Web Platform Tests (WPT) in some
capacity, we would like to invite you to answer a short survey.

The survey helps us identify ergonomic problems so that we can
improve the tools for building an interoperable web platform.


Just to re-emphasise what Andreas said; I know there are a lot of 
surveys going on at the moment, and we had a Mozilla-internal wpt survey 
late last year, but if you work on gecko development or otherwise on 
web-platform features it will be really valuable to us if you take five 
minutes to respond to this survey and thus ensure that the cross-browser 
testing initiative understands, and can meet, the needs of the Mozilla 
community.



-- >8 --


From: Simon Pieters 
Subject: WPT Developer Survey - June 2018
Date: 19 June 2018 at 09:20:03 BST
To: "public-test-in...@w3.org" 
Resent-From: public-test-in...@w3.org

Hello public-test-infra!

We're gathering feedback about recent changes and pain points in
wpt. Please help us by filling out this survey (and passing it on
to others you know who work with wpt) so we can better prioritize
future work and improve the experience for everyone. We won't take
much of your time - promise!

https://goo.gl/forms/gO2hCgCMvqiAHCVd2

Thank you!




Re: Launch of Phabricator and Lando for mozilla-central

2018-06-07 Thread James Graham

On 06/06/2018 15:57, Mark Côté wrote:


Similarly, there are two other features which are not part of initial launch 
but will follow in subsequent releases:
* Stacked revisions. If you have a stack of revisions, that is, two or more 
revisions with parent-child relationships, Lando cannot land them all at once.  
You will need to individually land them. This is filed as 
https://bugzilla.mozilla.org/show_bug.cgi?id=1457525.


Have we considered the impact this will have on our CI load? If we 
currently have (say — I didn't bother to compute the actual number) an 
average of 2 commits per push, it seems like this change could increase 
the load on inbound by a corresponding factor of 2 (or perhaps less if 
the multiple-final-commit workflow is so bad that people start pushing 
fewer, larger, changes).



Re: Intent to ship: media-capabilities

2018-05-14 Thread James Graham

On 14/05/2018 16:19, Jean-Yves Avenard wrote:

Media Capabilities allows web sites to better determine what content to 
serve to the end user.
Currently a media element offers the canPlayType method 
(https://html.spec.whatwg.org/multipage/media.html#dom-navigator-canplaytype-dev) 
to determine whether a container/codec can be used, but the answer is limited 
to a maybe/probably-style answer.

It gives no ability to determine if a particular resolution can be played 
well/smoothly enough or be done in a power efficient manner (e.g. will it be 
hardware accelerated).

This has been a particular problem with sites such as YouTube that serve VP9 
under all circumstances, even if the user agent won't play it well (VP9 is 
mostly decoded in software and is CPU intensive). This has forced us to 
indiscriminately disable VP9 altogether.
For YouTube to know that VP9 could be used for low resolutions but not high-def 
ones would allow them to select the right codec from the start.

This issue is tracked in bug 1409664 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1409664).

The proposed spec is available at https://wicg.github.io/media-capabilities/ 


Chrome shipped this a while ago, and talking to several partners 
(including YouTube, Netflix, Facebook, etc.), Media Capabilities support has 
been the number one request.



What is the testing situation for this feature? Do we have 
web-platform-tests?



Re: Intent to ship: PerformanceServerTiming

2018-04-25 Thread James Graham

On 24/04/2018 22:36, Valentin Gosu wrote:
On 24 April 2018 at 22:44, James Graham wrote:


This affects web-compat, since per our "restrict new features to
secure
origins policy" the serverTiming attribute will be undefined on
unsecure
origins.
There is a bug on the spec to address this issue:
https://github.com/w3c/server-timing/issues/54

Link to the spec: https://w3c.github.io/server-timing/


What's the wpt test situation for this feature, and how do our
results compare to other browsers?


The WPT tests pass when run over HTTPS: 
https://w3c-test.org/server-timing/test_server_timing.html


If we are only supporting this in secure contexts, we should rename the 
test so that it has .https. in the filename which will cause it to be 
loaded over https when run (e.g. in our CI). If there is general 
agreement about restricting the feature to secure contexts, we should 
additionally add a test that it doesn't work over http.


I can't imagine this would be controversial, but if it is we should at 
least ensure that there's a copy of the test set up to run over https.
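The naming convention is mechanical enough to script. Here is a minimal sketch of the rename; the helper name is mine, and this is not part of wpt's actual tooling:

```python
def https_test_name(path):
    """Insert '.https.' before the final extension so the wpt runner
    serves the test over HTTPS (foo.html -> foo.https.html)."""
    base, dot, ext = path.rpartition(".")
    if not dot or base.endswith(".https"):
        return path  # no extension, or already marked as an https test
    return f"{base}.https.{ext}"
```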



Re: Intent to ship: PerformanceServerTiming

2018-04-24 Thread James Graham

On 24/04/2018 20:32, Valentin Gosu wrote:

Bug 1423495 is set
to land on m-c and we intend to let it ride the release train, meaning we
are targeting Firefox 61.

Chromium bug: https://bugs.chromium.org/p/chromium/issues/detail?id=702760

This affects web-compat, since per our "restrict new features to secure
origins policy" the serverTiming attribute will be undefined on unsecure
origins.
There is a bug on the spec to address this issue:
https://github.com/w3c/server-timing/issues/54

Link to the spec: https://w3c.github.io/server-timing/


What's the wpt test situation for this feature, and how do our results 
compare to other browsers?



Re: Intent To Require Manifests For Vendored Code In mozilla-central

2018-04-10 Thread James Graham

On 10/04/2018 14:34, Ted Mielczarek wrote:

On Tue, Apr 10, 2018, at 9:23 AM, James Graham wrote:

On 10/04/2018 05:25, glob wrote:

mozilla-central contains code vendored from external sources. Currently
there is no standard way to document and update this code. In order to
facilitate automation around auditing, vendoring, and linting we intend
to require all vendored code to be annotated with an in-tree YAML file,
and for the vendoring process to be standardised and automated.


The plan is to create a YAML file for each library containing metadata
such as the homepage url, vendored version, bugzilla component, etc. See
https://goo.gl/QZyz4x for the full specification.


So we now have moz.build that in addition to build instructions,
contains metadata for mozilla-authored code (e.g. bugzilla components)
and moz.yaml that will contain similar metadata but only for
non-mozilla-authored code, as well as Cargo.toml that will contain (some
of) that metadata but only for code written in Rust.

As someone who ended up having to write code to update moz.build files
programatically, the situation where we have similar metadata spread
over three different kinds of files, one of them Turing complete,
doesn't make me happy. Rust may be unsolvable, but it would be good if
we didn't have two mozilla-specific formats for specifying metadata
about source files. It would be especially good if updating this
metadata didn't require pattern matching a Python AST.


We are in fact rethinking the decision to put file metadata in moz.build files 
for these very reasons. I floated the idea of having it live in these same YAML 
files that glob is proposing for vendoring info since it feels very similar. I 
don't want to block his initial work on tangentially-related concerns, but I 
think we should definitely look into this once he gets a first version of his 
vendoring proposal working. I don't know if there's anything useful we can do 
about Cargo.toml--we obviously want to continue using existing Rust practices 
there. If there are specific things you need to do that are hard because of 
that I'd be interested to hear about them to see if there's anything we can 
improve.


That's great to hear! The main thing I currently have to do is 
automatically update bug component metadata when files move around 
during wpt imports. However one can certainly imagine having to script 
similar metadata updates. For example, I assume that wpt is not "third 
party" code according to the terms of this discussion, since it's also 
edited in-tree, and whatever tooling we have to support generic third 
party repos won't apply. But it would make sense to store the upstream 
revision of wpt in there rather than in a one-off custom file like we do 
currently. So reusing the same moz.yaml format everywhere rather than 
having one case for "local" code and one for "remote" would make sense 
to me as someone maintaining what amounts to an edge case.
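As a sketch of what a shared moz.yaml could buy us, here is a minimal manifest validator. The required field names are assumptions on my part; the authoritative schema is the one in the linked specification.

```python
# Required fields are assumed for illustration; see the spec draft for
# the real schema.
REQUIRED_FIELDS = {"name", "origin-url", "release", "bugzilla-component"}

def parse_flat_yaml(text):
    """Parse flat 'key: value' lines -- enough for this sketch, though a
    real tool would use a proper YAML library."""
    data = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        data[key.strip()] = value.strip()
    return data

def missing_fields(manifest_text):
    """Return the required fields absent from a manifest."""
    return REQUIRED_FIELDS - set(parse_flat_yaml(manifest_text))

EXAMPLE = """\
# moz.yaml (hypothetical contents)
name: libfoo
origin-url: https://example.org/libfoo
release: 1.2.3
bugzilla-component: Core :: Graphics
"""
```

A linter built on a check like this could run in CI and flag vendored directories whose manifests are incomplete or missing.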



Re: Intent To Require Manifests For Vendored Code In mozilla-central

2018-04-10 Thread James Graham

On 10/04/2018 05:25, glob wrote:
mozilla-central contains code vendored from external sources. Currently 
there is no standard way to document and update this code. In order to 
facilitate automation around auditing, vendoring, and linting we intend 
to require all vendored code to be annotated with an in-tree YAML file, 
and for the vendoring process to be standardised and automated.



The plan is to create a YAML file for each library containing metadata 
such as the homepage url, vendored version, bugzilla component, etc. See 
https://goo.gl/QZyz4x for the full specification.


So we now have moz.build that in addition to build instructions, 
contains metadata for mozilla-authored code (e.g. bugzilla components) 
and moz.yaml that will contain similar metadata but only for 
non-mozilla-authored code, as well as Cargo.toml that will contain (some 
of) that metadata but only for code written in Rust.


As someone who ended up having to write code to update moz.build files 
programatically, the situation where we have similar metadata spread 
over three different kinds of files, one of them Turing complete, 
doesn't make me happy. Rust may be unsolvable, but it would be good if 
we didn't have two mozilla-specific formats for specifying metadata 
about source files. It would be especially good if updating this 
metadata didn't require pattern matching a Python AST.



Re: Intent to ship: OpenType Variation Font support

2018-03-20 Thread James Graham

On 19/03/2018 22:32, Jonathan Kew wrote:
As of this week, for the mozilla-61 cycle, I plan to turn support for 
OpenType Font Variations on by default.


It has been developed behind the layout.css.font-variations.enabled and 
gfx.downloadable_fonts.keep_variation_tables preferences.


Other UAs shipping this or intending to ship it include:
   Safari (on macOS 10.13 or later)
   Chrome (and presumably other Blink-based UAs)
   MSEdge (on Windows 10 Fall Creators Update or later)

Bug to turn on by default: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1447163


This feature was previously discussed in this "intent to implement" 
thread: 
https://groups.google.com/d/topic/mozilla.dev.platform/_FacI6Aw2BQ/discussion 


Are there now (cross-browser) tests for this feature?


Re: Intent to ship: Module scripts (ES6 modules)

2018-02-14 Thread James Graham

On 14/02/2018 14:13, jcoppe...@mozilla.com wrote:

I intend to turn on 

Re: Improved wpt-sync now running experimentally

2018-02-14 Thread James Graham

On 12/02/2018 20:08, smaug wrote:

On 02/09/2018 10:39 PM, James Graham wrote:

On 09/02/2018 19:59, Josh Bowman-Matthews wrote:

On 2/9/18 1:26 PM, James Graham wrote:
* One bug per PR we downstream, filed in a component determined by 
the files changed in the PR.


What does this mean exactly? What is the desired outcome of these bugs?


They're tracking the process and will be closed when the PR lands in 
central. They are used for notifying gecko developers about the 
incoming change, and in particular contain the information about tests 
that went from passing to failing, and other problems during the import.


I guess I don't understand the bugmail. Most of the time I don't see any 
information about something failing. Am I supposed to look at the commit?

Or are new failures in bugmail like
"
Ran 2 tests and 44 subtests
OK : 2
PASS   : 34
FAIL   : 10
"

Are those 10 failures new failures, or failures from the test total?


That's the total failures. If that's all you see then nothing fell into 
one of the predefined categories of badness that get extra details added 
to the comment. If there is some information that you think should be 
present but is actually missing, please file an issue.



Re: Improved wpt-sync now running experimentally

2018-02-09 Thread James Graham

On 09/02/2018 19:59, Josh Bowman-Matthews wrote:

On 2/9/18 1:26 PM, James Graham wrote:
* One bug per PR we downstream, filed in a component determined by the 
files changed in the PR.


What does this mean exactly? What is the desired outcome of these bugs?


They're tracking the process and will be closed when the PR lands in 
central. They are used for notifying gecko developers about the incoming 
change, and in particular contain the information about tests that went 
from passing to failing, and other problems during the import.


They are not essential to the sync so if they end up not working well at 
keeping people informed we can revisit the approach.



Improved wpt-sync now running experimentally

2018-02-09 Thread James Graham
The new sync for web-platform-tests is now running experimentally. This 
provides two way sync between the w3c/web-platform-tests repository on 
GitHub and mozilla-central, so allowing gecko developers to contribute 
to web-platform-tests using their normal gecko workflow, and ensuring 
that we get all the upstream changes submitted by the community 
including engineers at Google, Apple, and Microsoft.


The new code is intended to provide the following improvements over the 
old periodic batch sync approach:


* Faster sync. The code to actually land changes to mozilla-central is 
still undergoing testing, but the intent is that we can get at least one 
wpt update per day once the system is fully operational.


* One bug per PR we downstream, filed in a component determined by the 
files changed in the PR.


* One PR per bug we upstream. Currently this will be created when a 
patch lands on inbound or autoland and should be merged when the patch 
reaches central. In some hypothetical future world in which there's a 
single entry point for submitting code to land in gecko (e.g. 
phabricator) this will change so that the PR is created when the code is 
submitted for review, so that upstream test results are available before 
landing (see next point).


* Upstream CI jobs run on PRs originating from gecko repositories. 
Previously we skipped upstream travis jobs on pushes we landed, 
occasionally causing breakage as a result. Now these jobs are run on all 
our pushes and the original bug should get a notification if the jobs fail.


* Notifications of notable changes introduced by upstream PRs. In 
particular we will add a comment when tests that used to pass start to 
not pass, when there are crashes or disabled tests, and for new tests 
that fail. This notification happens in the bug for the sync, but there 
is already an issue open to move things that obviously require attention 
(e.g. crashes) into their own bug.
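The "component determined by the files changed" step mentioned above can be pictured as a simple vote over path prefixes. The mapping below is invented for the sketch; the real sync would derive it from in-tree metadata.

```python
from collections import Counter

# Hypothetical prefix -> Bugzilla component table; illustrative only.
COMPONENT_MAP = {
    "testing/web-platform/tests/dom": "Core :: DOM: Core & HTML",
    "testing/web-platform/tests/css": "Core :: CSS Parsing and Computation",
}
DEFAULT_COMPONENT = "Testing :: web-platform-tests"

def component_for_pr(changed_files):
    """Pick the component matching the most changed files, with a
    fallback for paths no rule covers."""
    votes = Counter()
    for path in changed_files:
        for prefix, component in COMPONENT_MAP.items():
            if path.startswith(prefix + "/"):
                votes[component] += 1
    return votes.most_common(1)[0][0] if votes else DEFAULT_COMPONENT
```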


If you notice problems with the sync, please file an issue [1] or 
complain in #wpt-sync on irc.  The project team consists of:


* jgraham and maja_zf (development, primary contacts)
* AutomatedTester (project management)

Issues are not unanticipated at this time, so thanks in advance for your 
patience as we work out the kinks in the system.


[1] https://github.com/mozilla/wpt-sync/issues


