On 27/07/2018 21:26, Dietrich Ayala wrote:
Additionally, much of what we're proposing is based directly on the
interviews we had with people in different roles in the development of the
web platform. Common themes were: lack of data for making
selection/prioritization decisions, visibility of what is in flight both in
Gecko and across all vendors, lack of overall coordination across vendors,
and visibility into adoption. Those themes are the first priority, and
drove this first set of actions.
Much of what you discuss is, as you noted, far better than in the past, so
maybe that is why they didn't come up much in the interviews?

Without knowing what was in those interviews it's hard to conjecture about the reasons for any differences. All I can do is point out the issues I perceive.

2) Write a testsuite for each feature, ensure that it's detailed enough
to catch issues and ensure that we are passing those tests when we ship a
new feature.
2) We now have a relatively well established cross-browser testsuite in
web-platform-tests. We are still relatively poor at ensuring that features
we implement are adequately tested (essentially the only process here is
the informal one related to Intent to Implement emails) or that we actually
match other implementations before we ship a feature.

Can you share more about this, and some examples? My understanding is that
this lies mostly in the reviewer's hands. If we have these testsuites, are
they just not in automation, or not being used?

We have the testsuites and they are in automation, but our CI infrastructure is only designed to tell us about regressions relative to previous builds; it's not suitable for flagging general issues like "not enough of these tests pass".

Comparisons between browsers are (as of recently) available at wpt.fyi [1], but we don't have any process that requires people to look at this data.
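If we wanted such a process, it could start from something as small as a script that pulls the run data and makes the cross-browser comparison hard to ignore. A rough Node sketch that just lists the available runs (the /api/runs endpoint and the field names here are my guesses at what wpt.fyi exposes, not a documented contract):

    // Rough sketch: list recent wpt.fyi runs and which browsers they cover.
    // The endpoint and the response fields are assumptions for illustration.
    const https = require("https");

    https.get("https://wpt.fyi/api/runs?label=experimental", res => {
      let body = "";
      res.on("data", chunk => (body += chunk));
      res.on("end", () => {
        for (const run of JSON.parse(body)) {
          console.log(run.browser_name, run.browser_version, run.created_at);
        }
      });
    });

Something along these lines, wired into a recurring report, would at least give people a reason to look at the data regularly.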

We also know that many features which have some tests don't have enough tests (e.g. a recent XHR bugfix didn't cause any tests to start passing, indicating a problem with the coverage). This is a hard problem in general, of course, but even for new features we don't have any systematic approach to ensuring that the tests actually cover the feature in a meaningful way.
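As a rule of thumb, a bugfix that doesn't flip any test should prompt us to add the missing web-platform-test, and those are usually tiny. A sketch in the .any.js style (the resource path and the expected body are made up for illustration, and I'm assuming the usual testharness.js auto-loading for .any.js files):

    // xhr-body.any.js -- minimal coverage for the response body path.
    // Resource path and expected contents are placeholders.
    async_test(t => {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "resources/example.txt");
      xhr.onload = t.step_func_done(() => {
        assert_equals(xhr.status, 200);
        assert_equals(xhr.responseText, "expected body\n");
      });
      xhr.send();
    }, "XHR delivers the response body");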

3) Ensure that the performance profile of the feature is good enough
compared to other implementations (in particular if it's relatively easy to
hit performance problems in one implementation, that may prevent it being
useful in that implementation even though it "works")
3) Performance testing is obviously hard and whilst benchmarks are a
thing, it's hard to make them representative of the entire gamut of
possible uses of a feature. We are starting to work on more cross-browser
performance testing, but this is difficult to get right. The main strategy
seems to be just to try to be fast in general. Devtools can help bridge the gap here if they can identify the cause of slowness, either in general or in a specific engine.

There is a lot of focus and work on perf generally, so not something that
really came up in the interviews. I'm interested in learning about gaps in
developer tooling, if you have some examples.

I note that there's a difference between "perf generally" and "compatibility-affecting perf" (although both are important and the latter is a subset of the former). Perf issues affect compatibility when they don't exist in other engines with sufficiently high combined marketshare. So something that is slow in Firefox but fast in all other browsers is likely to be used in real sites, whereas a feature that's fast in Firefox but slow in all other browsers probably won't get used much in the wild.

In terms of specific developer tooling, I don't have examples beyond the obvious that developers should be able to profile in a way that allows them to figure out which part of their code is causing slowness in particular implementations, in much the same way you would expect in other development scenarios.
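The lowest common denominator here is probably the User Timing API, which works in every engine and shows up in each browser's profiler. A minimal sketch (the function being measured is obviously a placeholder):

    // Wrap the suspect code path in User Timing marks so any browser's
    // profiler or devtools timeline can attribute the cost to it.
    performance.mark("feature-start");
    doTheExpensiveThing();  // placeholder for the code path under suspicion
    performance.mark("feature-end");
    performance.measure("feature", "feature-start", "feature-end");

    const [entry] = performance.getEntriesByName("feature");
    console.log(`feature took ${entry.duration.toFixed(1)}ms in ${navigator.userAgent}`);

That at least gives developers a number they can compare across engines, even if it doesn't tell them why one engine is slower.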

4) Ensure that developers using the feature have a convenient way to
develop and debug the feature in each implementation.
4) This is obviously the role of devtools, making it convenient to
develop inside the browser and possible to debug implementation-specific
problems even where a developer isn't using a specific implementation all
the time. Requiring devtools support for new features where it makes sense
seems like a good step forward.

We've seen success and excitement when features are well supported with
tooling. We're asserting that *always* shipping tooling concurrently with
features in release will amplify adoption.

I entirely agree that coordinating tooling with features makes sense.

5) Ensure that developers have a convenient way to do ongoing testing of
their site against multiple different implementations so that it continues
to work over time.
5) This is something we support via WebDriver, but it doesn't cover all
features, and there seems to be some movement toward vendor-specific
replacements (e.g. Google's Puppeteer), which prioritise the goal of making
development and testing in a single browser easy, at the expense of making
cross-browser development and testing hard. This seems like an area where we
need to do much better, by ensuring we can offer web developers a
compelling story on how to test their products in multiple browsers.

Definitely agree on easing cross-browser development. There are a few
services that do it, but a paid service is a huge barrier, and they're not
standardized, so they're not integrated into tooling. It's not where developers
are already working.

Btw, I implemented a subset of the Puppeteer API for Firefox so that I
could easily run the same tests against Chrome and Firefox:

https://github.com/autonome/puppeteer-fx

Oh, this is cool! But perhaps not yet something that we should expect people to depend on in production.
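For now the cross-engine path is still WebDriver, and the kind of thing we should be able to make trivially easy is running one script against every engine. A minimal Node sketch using the selenium-webdriver package (the URL and the check are placeholders):

    // Run the same smoke test against Firefox and Chrome via WebDriver.
    const { Builder, until } = require("selenium-webdriver");

    async function smokeTest(browserName) {
      const driver = await new Builder().forBrowser(browserName).build();
      try {
        await driver.get("https://example.com/");  // page under test (placeholder)
        await driver.wait(until.titleContains("Example"), 5000);
        console.log(`${browserName}: ok`);
      } finally {
        await driver.quit();
      }
    }

    (async () => {
      for (const browser of ["firefox", "chrome"]) {
        await smokeTest(browser);
      }
    })();

If puppeteer-fx matures, the same pattern could work from the Puppeteer side too, assuming the API subset it implements covers the calls a given test needs.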

So, to bring this back to your initiative, it seems that the only point
above you really address is number 4, by recommending that devtools support
be required for shipping new features. I fully agree that this is a good
recommendation, but I think we need to go further and ensure that we are
improving on all the areas listed above.

Yes, lots more work to do in the areas you listed, by a number of different
groups! Thanks for sharing your thoughts.

One of the underlying concerns I have here is that there are a lot of separate groups working on different parts of this. As one of the people involved, I would nevertheless struggle to articulate the overall Mozilla strategy to ensure that the web remains a compelling platform supporting multiple engine implementations. I believe it's important that we do better here and ensure that all these different teams have a shared understanding from which to set processes and priorities.

[1] https://wpt.fyi/results/?label=experimental