Please stop using keypress event to handle non-printable keys

2018-01-17 Thread Masayuki Nakano

Hello, everyone.

Please stop using the keypress event to handle non-printable keys when 
you write new code and new automated tests. To conform to UI Events and 
to address web-compatibility problems, Firefox will stop dispatching 
keypress events for non-printable keys. (A non-printable key is a key or 
key combination which won't cause a character to be input, e.g., the 
arrow keys, the Backspace key, Ctrl (and/or Alt) + "A", etc.)


In most cases you can simply use the keydown event instead. 
KeyboardEvent.key and KeyboardEvent.keyCode are always the same for 
keydown and keypress events on non-printable keys. The difference 
between the keydown event and the keypress event is 
KeyboardEvent.charCode for printable keys (and Ctrl/Alt + printable 
keys). So, when you only need key or keyCode, please use the keydown 
event. Only when you need charCode should you keep using the keypress 
event.
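As a rough illustration (not an official Gecko heuristic — the helper name and the plain-object event shape here are made up for the sketch), a keydown-based handler can distinguish non-printable keys along these lines:

```javascript
// Sketch: decide whether a key event is "non-printable" in the sense above
// (no character input: arrows, Backspace, Ctrl/Alt combinations).
// A plain object stands in for a real KeyboardEvent here.
function isNonPrintable(event) {
  // Multi-character .key values ("ArrowLeft", "Backspace", ...) name
  // non-printable keys; a single-character .key is printable unless a
  // Ctrl/Alt modifier suppresses character input. This is a rough
  // heuristic for the sketch, not an exhaustive classification.
  return event.key.length > 1 || !!event.ctrlKey || !!event.altKey;
}

// In real code you would attach this logic to keydown, not keypress:
// element.addEventListener("keydown", e => {
//   if (e.key === "ArrowLeft") { /* move caret, etc. */ }
// });

console.log(isNonPrintable({ key: "ArrowLeft" })); // true
```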


Background:

We need to fix bug 968056 (*1) for web-compat issues.

Currently, Firefox dispatches keypress events for all keys except 
modifier keys. This is traditional behavior inherited from Netscape 
Navigator. However, it is now invalid behavior from the point of view of 
the standards (*2).


I'm going to start working on the bug next week. However, it requires 
rewriting a large number of keypress event handlers in our internal code 
and our automated tests. So, please stop using the keypress event when 
you want to handle non-printable keys, at least in new code.


Thanks in advance.

1: https://bugzilla.mozilla.org/show_bug.cgi?id=968056
2: https://w3c.github.io/uievents/#legacy-keyboardevent-event-types

--
Masayuki Nakano 
Software Engineer, Mozilla
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jeff Gilbert
It's way cheaper to get build clusters rolling than to get beefy
hardware for every desk.
Distributed compilation or other direct build optimizations also allow
continued use of laptops for most devs, which definitely has value.

On Wed, Jan 17, 2018 at 11:22 AM, Jean-Yves Avenard
 wrote:
>
>
>> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
>>
>> Something simple with the jobserver logic might work here, but I think we
>> want to complete the long-term project of getting a complete dependency
>> graph available before looking at that kind of optimization.
>
> Just get every person needing to work on mac an iMac Pro, and those on 
> Windows/Linux a P710 or better and off we go.
>


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jean-Yves Avenard


> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
> 
> Something simple with the jobserver logic might work here, but I think we
> want to complete the long-term project of getting a complete dependency
> graph available before looking at that kind of optimization.

Just get every person needing to work on mac an iMac Pro, and those on 
Windows/Linux a P710 or better and off we go.



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Ralph Giles
On Wed, Jan 17, 2018 at 10:27 AM, Steve Fink  wrote:


> Would it be possible that when I do an hg pull of mozilla-central or
> mozilla-inbound, I can also choose to download the object files from the
> most recent ancestor that had an automation build?


You mention 'artifact builds' so I assume you know about `ac_add_options
--enable-artifact-builds`, which does this for the final libXUL target,
greatly speeding up the first build for people working on the parts of
Firefox outside Gecko.
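For reference, enabling that is a one-line mozconfig change (the file location shown is the usual convention):

```shell
# In your mozconfig (e.g. ~/.mozconfig or mozconfig in the source root):
# download prebuilt compiled components from CI instead of building them.
ac_add_options --enable-artifact-builds
```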

In the build team we've been discussing for a while if there's a way to
make this more granular. The most concrete plan is to use sccache again.
This tool already supports multi-level (local and remote) caches, so it
could certainly pull the latest object files from a CI build; it already
does this when running in automation. There are still some 'reproducible
build' issues which block general use of this: source directory prefixes
not matching, __FILE__ and __DATE__, different build flags between
automation and the default developer builds, that sort of thing. These
prevent cache hits when compiling the same code. There aren't too many
left; help would be welcome working out the last few if you're interested.

We've also discussed having sccache race local build and remote cache fetch
as you suggest, but not the kind of global scheduling you talk about.
Something simple with the jobserver logic might work here, but I think we
want to complete the long-term project of getting a complete dependency
graph available before looking at that kind of optimization.

FWIW,
 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Simon Sapin

On 17/01/18 19:27, Steve Fink wrote:

Would it be possible that when I do an hg pull of mozilla-central or
mozilla-inbound, I can also choose to download the object files from the
most recent ancestor that had an automation build? (It could be a
separate command, or ./mach pull.) They would go into a local ccache (or
probably sccache?) directory.


I believe that sccache already has support for Amazon S3. I don’t know 
if we already enable that for our CI infra. Once we do, I imagine we 
could make that store world-readable and configure local builds to use it.


--
Simon Sapin


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Steve Fink

On 1/16/18 2:59 PM, smaug wrote:

On 01/16/2018 11:41 PM, Mike Hommey wrote:

On Tue, Jan 16, 2018 at 10:02:12AM -0800, Ralph Giles wrote:
On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard wrote:

But I would be interested in knowing how long that same Lenovo P710 
takes to compile *today*…



On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux

debug -Og build with gcc: 12:34
debug -Og build with clang: 12:55
opt build with clang: 11:51

Interestingly, I can almost no longer get any benefit when using 
icecream: with 36 cores it saves 11s, and with 52 cores it saves only 
50s…



Are you saturating all 52 cores during the builds? Most of the increase 
in build time is new Rust code, and icecream doesn't distribute Rust. So 
in addition to some long compile times for final crates limiting the 
minimum build time, icecream doesn't help much in the run-up either. 
This is why I'm excited about the distributed build feature we're adding 
to sccache.


Distributed compilation of rust won't help unfortunately. That won't
solve the fact that the long pole of rust compilation is a series of
multiple long single-threaded processes that can't happen in parallel
because each of them depends on the output of the previous one.

Mike



Distributed compilation also won't help those remotees who may not have 
machines on which to set up icecream or distributed sccache.
(I just got a new laptop because Rust compilation is so slow.)
I'm hoping the Rust compiler itself gets some heavy optimizations.


I'm in the same situation, which reminds me of something I wrote long  
ago, shortly after joining Mozilla:  
https://wiki.mozilla.org/Sfink/Thought_Experiment_-_One_Minute_Builds  
(no need to read it, it's ancient history now. It's kind of a fun read  
IMO, though you have to remember that it long predates mozilla-inbound,  
autoland, linux64, and sccache, and was in the dawn of the Era of  
Sheriffing so build breakages were more frequent and more damaging.) But  
in there, I speculated about ways to get other machines' built object  
files into a local ccache. So here's my latest handwaving:


Would it be possible that when I do an hg pull of mozilla-central or  
mozilla-inbound, I can also choose to download the object files from the  
most recent ancestor that had an automation build? (It could be a  
separate command, or ./mach pull.) They would go into a local ccache (or  
probably sccache?) directory. The files would need to be atomically  
updated with respect to my own builds, so I could race my build against  
the download. And preferably the download would go roughly in the  
reverse order as my own build, so they would meet in the middle at some  
point, after which only the modified files would need to be compiled. It  
might require splitting debug info out of the object files for this to  
be practical, where the debug info could be downloaded asynchronously in  
the background after the main build is complete.


Or, a different idea: have Rust "artifact builds", where I can download  
prebuilt Rust bits when I'm only recompiling C++ code. (Tricky, I know,  
when we have code generation that communicates between Rust and C++.)  
This isn't fundamentally different from the previous idea, or  
distributed compilation in general, if you start to take the exact  
interdependencies into account.





Re: Use Counters were previously over-reporting, are now fixed.

2018-01-17 Thread Chris Hutten-Czapski
It depends on what you're measuring. The aggregates are recorded by
build_id, submission_date, OS, OS version, channel, major version, e10s
enabled setting, application name, and architecture. They can be returned
aggregated against any of those dimensions except the first two. So the
error depends on the values distributed amongst those dimensions... which
is a lot of things to check.

Generally speaking we don't add a use counter for anything that is used a
lot. Even people who see these features being used don't see them used a
lot. The only other candidate for largest change would be GetPreventDefault
(https://mzl.la/2mQqYMR), in the sub-2% category. Most use counters are
less than 0.1% usage.

So, on the count of "baseline sense for how much I should care": if you
have put in a Use Counter and thought usage was too high to remove that
feature, look again. Re-evaluate with the new, correct data.

If you have considered using Use Counters but heard they couldn't be
trusted, they can. Now.

If neither of those things apply, this is just a note that Use Counters
exist and can be useful. Caring is optional :)

:chutten

On Wed, Jan 17, 2018 at 12:51 PM, Steve Fink  wrote:

> On 1/17/18 7:57 AM, Chris Hutten-Czapski wrote:
>
>> Hello,
>>
>>Use Counters[0] as reported by the Telemetry Aggregator (via the HTTPS
>> API, and the aggregates dashboards on telemetry.mozilla.org) have been
>> over-reporting usage since bug 1204994[1] (about the middle of September,
>> 2015). They are now fixed [2], and in the course of fixing it, :gfritzsche
>> prepared a nifty view [3] of them that performs the fix client-side.
>>
>
> Can you give a sense for the size of the error in observed counts? (Is
> this a 15% should-have-been 10% type of thing, or almost always a 15.001%
> should-have-been 15% type of thing?)
>
> Could you order all of the counts by percentage error and send out a list
> of the top 10 or so? (eg in your blog post, you had PROPERTY_FILL_DOCUMENT
> going from 10.00% -> 9.46%, for an error of 0.54%. If that 0.54% is in the
> top 10, report the measure with its old and new percentages.) Really, it
> ought to be the list of measures, if any, that crossed over some "decision
> threshold", but I'm assuming that for the most part we don't have specific
> thresholds.
>
> I'm just trying to get a baseline sense for how much I should care. :-)
>
>
> Thanks,
>
> Steve
>
>


Re: Use Counters were previously over-reporting, are now fixed.

2018-01-17 Thread Steve Fink

On 1/17/18 7:57 AM, Chris Hutten-Czapski wrote:

Hello,

   Use Counters[0] as reported by the Telemetry Aggregator (via the HTTPS
API, and the aggregates dashboards on telemetry.mozilla.org) have been
over-reporting usage since bug 1204994[1] (about the middle of September,
2015). They are now fixed [2], and in the course of fixing it, :gfritzsche
prepared a nifty view [3] of them that performs the fix client-side.


Can you give a sense for the size of the error in observed counts? (Is 
this a 15% should-have-been 10% type of thing, or almost always a 
15.001% should-have-been 15% type of thing?)


Could you order all of the counts by percentage error and send out a 
list of the top 10 or so? (eg in your blog post, you had 
PROPERTY_FILL_DOCUMENT going from 10.00% -> 9.46%, for an error of 
0.54%. If that 0.54% is in the top 10, report the measure with its old 
and new percentages.) Really, it ought to be the list of measures, if 
any, that crossed over some "decision threshold", but I'm assuming that 
for the most part we don't have specific thresholds.
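To make the request concrete, the ranking could be computed along these lines — the counter names and percentages below are hypothetical stand-ins (PROPERTY_FILL_DOCUMENT's numbers are the ones from the blog post mentioned above):

```javascript
// Hypothetical use-counter data: [name, oldPct, newPct].
const counters = [
  ["PROPERTY_FILL_DOCUMENT", 10.00, 9.46],
  ["GETPREVENTDEFAULT", 1.80, 1.52],
  ["SOME_RARE_COUNTER", 0.10, 0.09],
];

// Rank counters by the absolute size of the reporting error.
const topByError = counters
  .map(([name, oldPct, newPct]) => ({
    name,
    oldPct,
    newPct,
    error: +(oldPct - newPct).toFixed(2), // round float noise to 2 decimals
  }))
  .sort((a, b) => b.error - a.error)
  .slice(0, 10); // top 10 (or fewer)

for (const c of topByError) {
  console.log(`${c.name}: ${c.oldPct}% -> ${c.newPct}% (error ${c.error}%)`);
}
```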


I'm just trying to get a baseline sense for how much I should care. :-)


Thanks,

Steve




Use Counters were previously over-reporting, are now fixed.

2018-01-17 Thread Chris Hutten-Czapski
Hello,

  Use Counters[0] as reported by the Telemetry Aggregator (via the HTTPS
API, and the aggregates dashboards on telemetry.mozilla.org) have been
over-reporting usage since bug 1204994[1] (about the middle of September,
2015). They are now fixed [2], and in the course of fixing it, :gfritzsche
prepared a nifty view [3] of them that performs the fix client-side.

  Of all the problems to have with use-counters, _over_estimating usage is
the kind of problem that hurts the web least, as at least we weren't
retiring features that were used more than we reported.

  For a narrative-form description of the events, I wrote this blog post:
[4]. In short, we goofed. But it's fixed now, and the aggregator (and,
thus, telemetry.mozilla.org's telemetry aggregates dashboards) now report
the correct values for all queries, current and historical.

  So go forth and enjoy your use counters! They're pretty neat, actually.

:chutten

[0]:
https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/telemetry/collection/use-counters.html
[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1204994
[2]: https://github.com/mozilla/python_mozaggregator/pull/59
[3]: http://georgf.github.io/usecounters/index.html
[4]:
https://chuttenblog.wordpress.com/2018/01/17/firefox-telemetry-use-counters-over-estimating-usage-now-fixed/


Revised proposal to refactor the observer service

2018-01-17 Thread Gabriele Svelto
Hello all,
I've tried to take into account all the suggestions from the discussion
of my previous proposal [1] and have come up with a new plan which
should cover all bases. Don't hesitate to point out things that might
still be problematic; since this is going to be a large refactoring,
it's better to get it right from the start.

The biggest sticking point of my previous proposal was that it would
prevent any modifications to the observer list in artifact builds.
Addressing that particular issue effectively requires keeping the
existing service in place, so instead of replacing it altogether I'd do
the following:

1) Introduce a new observer service that would live alongside the
current one (nsIObserverService2?). This will use a machine-generated
list of topics that will be held within the interface itself instead of
a separate file as I originally proposed. This will become possible
thanks to bug 1428775 [2]. The only downside of this is that the C++
code will not use an enum but just integer constants. The upside is that
this will need only one code generator and only one output file (the
IDL) versus two generators and three files in my original proposal.

2) Migrate all C++-only and mixed C++/JS users to use the new service.
Since the original service would still be there this can be done
incrementally. Leave JS-only users alone.

3) Consider writing a JS-only pub/sub service that would be a better fit
than the current observer service. If we can come up with something
that's better than the observer service for JS then it can be used to
retire the old service for good.
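For point 3, a minimal sketch of what such a JS-only pub/sub service might look like — every name here is hypothetical, not a proposed API:

```javascript
// Rough sketch of a JS-only pub/sub service; topic strings mirror the
// free-form topics the current observer service uses.
class PubSub {
  constructor() {
    this._subscribers = new Map(); // topic string -> Set of callbacks
  }
  subscribe(topic, callback) {
    if (!this._subscribers.has(topic)) {
      this._subscribers.set(topic, new Set());
    }
    this._subscribers.get(topic).add(callback);
    // Return an unsubscriber, avoiding the removeObserver boilerplate.
    return () => this._subscribers.get(topic).delete(callback);
  }
  notify(topic, data) {
    for (const cb of this._subscribers.get(topic) ?? []) {
      cb(data);
    }
  }
}

const bus = new PubSub();
const seen = [];
const unsub = bus.subscribe("profile-after-change", data => seen.push(data));
bus.notify("profile-after-change", { startup: true });
unsub();
bus.notify("profile-after-change", { startup: false }); // no longer delivered
console.log(seen.length); // 1
```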

So, how does this sound?

 Gabriele

[1]
https://lists.mozilla.org/pipermail/dev-platform/2018-January/020935.html
[2] Make it possible to generate IDL files
https://bugzilla.mozilla.org/buglist.cgi?quicksearch=1428775





Re: Intent to enable scrollbars by default for windows opened by window.open()

2018-01-17 Thread Andrew Overholt
We got a bug report that things are not working here:
https://bugzilla.mozilla.org/show_bug.cgi?id=1429900. Ben, can you take a
look?

On Mon, May 23, 2016 at 2:25 AM, Ben Tian  wrote:

> Hi,
>
> I’m planning to enable scrollbars by default for windows opened by
> window.open().
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1257887
>
> This change intends to enable scrollbars by default when “scrollbars”
> doesn't appear in the feature argument of window.open(), and to disable
> them only when opener pages explicitly say so. Currently window.open()
> disables scrollbars by default unless the feature argument explicitly
> enables them or is an empty string. However, in the majority of cases the
> user will want to be able to scroll, and a lot of accessibility docs
> recommend enabling scrollbars by default.
>
> Chrome and Safari don't provide a way to disable scrollbars, according to
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1257887#c0
>
> If you have any concerns or know of regressions on pages relying on the
> current behavior, please let me know.
>
>
> -Ben
>
> --
> Ben Tian,
> Engineering Manager
> System Engineering Lead
> Mozilla Corporation


Re: Requiring secure contexts for new features

2018-01-17 Thread Anne van Kesteren
On Wed, Jan 17, 2018 at 12:02 AM, Martin Thomson  wrote:
> Either of these criteria are sufficient, right?  However, I expect
> that we'll want to hold the line in some cases where other browsers
> ship anyway.  How do we plan to resolve that?  One potential
> resolution to that sort of problem is to ship in secure contexts
> anyway and ask other browsers to do the same.
>
> My expectation is that we'll discuss these and exercise judgment.  But
> I thought that I'd raise this point here.  I want to avoid creating an
> expectation here that we're happy with lowest common denominator when
> it comes to these issues.

I was hoping that the section "Exceptions to requiring secure
contexts" made it quite clear that it is indeed an appeals process.
We already have a process for shipping new features ("intent to
implement/ship"), and now you need to present justification if you want
to ship a feature available in insecure contexts. It is likely that an
exception will be granted for either of the reasons given, but it's not a
guarantee. The dev-platform community can still object, and if that
objection is sustained by the Distinguished Engineers I would expect us
to not ship the feature (or to ship it restricted to secure contexts).

I have clarified https://wiki.mozilla.org/ExposureGuidelines to state
that secure contexts need to be addressed in "intent to implement"
emails, and have also linked the secure-contexts post from the suggested
implementation process.


-- 
https://annevankesteren.nl/