Looking for examples of bad IPC latency

2020-04-27 Thread Adam Roach

Hey, folks --

I've been looking at the amount of time IPC messages spend in flight. So 
far, I've been using general web pages (e.g., modern news sites like 
CNN) to generate the profile information that I'm analyzing.


Given that several people I spoke to in Berlin believed that IPC latency 
was a major concern, I'm interested in finding out whether any of you 
have specific use cases that you know or believe are hampered by IPC 
performance, to make sure I look at them in particular. If you know of 
any such cases, please let me know (either via email, or by pinging me 
on the Matrix server -- I'm abr:mozilla.org).


Thanks!

/a

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Autodiscovery of WebExtension search engines

2020-02-19 Thread Adam Roach

On 2/14/2020 5:05 PM, Daniel Veditz wrote:

On Fri, Feb 14, 2020 at 11:50 AM Dale Harvey  wrote:


We’re proposing a new mime-type [...]: “x-xpinstall” for WebExtension
search
engines. Example: 

This is confusingly similar to "application/x-xpinstall" which we use to
trigger extension installs from link clicks. Since standard media-type
syntax is "type/subtype", some authors will tend to fill in the
"missing" bit and get it wrong, and others will complain that the syntax is
non-standard and broken.

Does this code enforce that the .xpi we download and attempt to install is
actually a search type and not an arbitrary WebExtension? If any extension
type will work then re-using the full application/x-xpinstall is
appropriate, but that sounds like it would go against user expectation and
might trick users into doing something dangerous. "This page would like to
install 'Steal all your data from every page search engine'. OK?" If the
code does enforce only search type add-ons will it be confusing to use the
generic media-type? Or maybe it's OK anyway, since rel="search" is required
and can be taken as requiring that subset.

If you _do_ invent a new one shared with other browser vendors, please
don't use an "x-" prefix in anything new.
https://tools.ietf.org/html/rfc6648 [2012] (hey -- our very own St. Peter)



I had a response composed, and then realized that Dan had covered most 
of what I wanted to say. The only additional point I would like to make 
is: unless you're re-using a media type already in use (e.g., 
application/x-xpinstall), or planning to run this through a standards 
process first, this should look something like 
"application/vnd.mozilla.webextension." See 
 for 
details.
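The slash confusion Dan describes is easy to check mechanically. A minimal sketch of an RFC 6838-style well-formedness test (illustrative only -- this is not the parser Gecko actually uses, and the grammar is simplified):

```javascript
// Illustrative check of RFC 6838 media-type shape: "type/subtype".
// A bare token like "x-xpinstall" has no slash and is not a media type;
// "application/x-xpinstall" and "application/vnd.mozilla.webextension" are.
function isWellFormedMediaType(s) {
  // Simplified restricted-name grammar from RFC 6838 section 4.2.
  const token = /^[A-Za-z0-9][A-Za-z0-9!#$&^_.+-]*$/;
  const parts = s.split("/");
  return parts.length === 2 && parts.every(p => token.test(p));
}
```

By this test, "x-xpinstall" is rejected, which is exactly the "missing" bit that authors would be tempted to fill in themselves.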


/a



Re: Proposed W3C Charter: Timed Text (TT) Working Group

2019-08-29 Thread Adam Roach
Most of the deltas range from editorial to general good hygiene. The 
only changes of any real consequence that I see are:


 * Updating their previous work to new versions
 * Charter item to work on a profile of TTML2 to support audio-only use
   cases
 * Catch-all clause at the bottom of §2.1 that grants the WG carte
   blanche to work on any random thing they want

Having little background in this technology, I'm pretty ambivalent about 
the first two changes. I think we should object to the third change: 
charters serve both to guide work and to limit scope, and this clause
removes all scope limitations.


/a

On 8/28/19 5:41 PM, L. David Baron wrote:

The W3C is proposing a revised charter for:

   Timed Text (TT) Working Group
   https://www.w3.org/2019/08/ttwg-proposed-charter.html
   https://lists.w3.org/Archives/Public/public-new-work/2019Aug/0004.html

The comparison to the group's previous charter is:
   
https://services.w3.org/htmldiff?doc1=https%3A%2F%2Fwww.w3.org%2F2018%2F05%2Ftimed-text-charter.html&doc2=https%3A%2F%2Fwww.w3.org%2F2019%2F08%2Fttwg-proposed-charter.html

Mozilla has the opportunity to send comments or objections through
Tuesday, September 10.

Please reply to this thread if you think there's something we should
say as part of this charter review, or if you think we should
support or oppose it.

-David





Re: open socket and read file inside Webrtc

2018-07-04 Thread Adam Roach

On 7/4/18 7:24 AM, amantell...@gmail.com wrote:

Hi,
I'm very new with firefox (as developer, of course).
I need to open a file and tcp sockets inside webrtc.
I read the following link
https://wiki.mozilla.org/Security/Sandbox#File_System_Restrictions
there is the sandbox that does not permit to open sockets or file descriptors.
could you give me the way how I can solve these my problems?
Thank you very much


For files, you want to use the File API. See 
https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications 
for a good introduction.
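A minimal sketch of that approach (names illustrative; in a page the File would come from an input element or a drop event, and File inherits Blob's standard .text() method):

```javascript
// Read user-granted file content through the File API instead of opening
// a file descriptor directly (which the content sandbox forbids).
// In a page you would obtain the File with something like:
//   const file = document.querySelector("input[type=file]").files[0];
// Here a Blob stands in for it, since File inherits from Blob.
async function readAsText(fileOrBlob) {
  // Standard Blob/File method; resolves to the contents as a string.
  return fileOrBlob.text();
}
```

The key design point: the user picks the file, the browser hands the content to script, and nothing in content process code ever touches the filesystem directly.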


/a


Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-09 Thread Adam Roach

On 5/9/18 12:11 PM, L. David Baron wrote:

It's useful for tracking down regressions no matter how old the
regression is; I pretty regularly see mozregression finding useful
data on bugs that regressed multiple years ago.


I want to agree with David -- I recall one incident in particular where 
I used mozregression to track a problem down to a three-year-old change 
that was only exposed when we flipped the big "everyone gets e10s now" 
switch. I would have been pretty lost figuring out the root cause 
without older builds.


/a


Re: New Policy: Marking Bugzilla bugs for features riding behind a pref

2018-05-03 Thread Adam Roach

On 5/3/18 12:18 PM, Nicholas Alexander wrote:

Not all features are feasible to ship behind feature flags.


I'm pretty sure the proposed policy isn't intended to change anything 
regarding features that ship without associated feature flags, nor is it 
trying to get more features to ship behind flags than currently do. It's 
just trying to rationalize a single, more manageable process for those
that *do* ship behind flags.


/a


Re: Proposed W3C Charter: JSON-LD Working Group

2018-04-27 Thread Adam Roach

On 4/27/18 2:02 PM, L. David Baron wrote:

On Friday 2018-04-27 10:07 +0300, Henri Sivonen wrote:

For this reason, I think we should resist introducing dependencies on
JSON-LD in formats and APIs that are relevant to the Web Platform. I
think it follows that we should not support this charter. I expect
this charter to pass in any case, so I'm not sure us saying something
changes anything, but it might still be worth a try to register
displeasure about the prospect of JSON-LD coming into contact with
stuff that Web engines or people who make Web apps or sites need to
deal with and to register displeasure with designing formats whose
full processing model differs from how the format is evangelized to
developers (previously: serving XHTML as text/html while pretending to
get benefits of the XML processing model that way).

Yeah, I'm not quite sure how to register such displeasure.  In
particular, I think it's probably poor form to object to maintenance
work on a base specification, even if we're opposed to that
specification's use elsewhere.  At least, assuming we don't want to
make the argument that the energy being spent on that maintenance
shouldn't be.

I'm inclined to leave this one alone, unless somebody else comes up
with a better position we could take.


With the caveat that I have very limited knowledge about JSON-LD and am 
basing this mostly on the preceding exchange:


If there's a set of behaviors defined by the 1.0 spec, and a different 
set of behaviors implemented, deployed, and evangelized, I think it 
would be reasonable to object (on that basis) to a charter that does not 
explicitly include work items to bring the spec into line with reality.


/a


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-24 Thread Adam Roach
I'm hearing general agreement that we think turning this off is the 
right thing to do; that maintaining compatibility with Chrome's behavior 
is important (since that's what existing code will presumably be tested 
against); and -- as bz points out -- we don't want to throw an exception 
here for spec compliance purposes. I propose that we move forward with a 
plan to immediately deny permission in non-secure contexts. Kan-Ru's 
proposal that we put this behind a pref seems like a good one -- that 
way, if we discover that something unexpected happens in deployment, 
it's a very simple fix to go back to our current behavior.


I would be hesitant to over-analyze additional complications, such as 
https-everywhere or user education on this topic. We are, after all, 
simply coming into alignment with the rest of the web ecosystem here.
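A sketch of the behavior being proposed, assuming we match Chrome and Safari (all names here are illustrative, not Gecko internals): in a non-secure context the error callback fires immediately with PERMISSION_DENIED, as if the user had declined the prompt.

```javascript
// GeolocationPositionError.PERMISSION_DENIED is code 1 per the spec.
const PERMISSION_DENIED = 1;

// Illustrative wrapper: deny immediately in non-secure contexts instead of
// throwing, so existing page error-handling paths keep working unchanged.
function getCurrentPositionShim(isSecureContext, realImpl, onSuccess, onError) {
  if (!isSecureContext) {
    onError({ code: PERMISSION_DENIED,
              message: "Geolocation requires a secure context" });
    return;
  }
  realImpl(onSuccess, onError);
}
```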


/a

On 10/22/16 12:05, Ehsan Akhgari wrote:

On 2016-10-22 10:16 AM, Boris Zbarsky wrote:

On 10/22/16 9:38 AM, Richard Barnes wrote:

I'm not picky about how exactly we turn this off, as long as the
functionality goes away.  Chrome and Safari both immediately call the
error
handler with the same error as if the user had denied permission.  We
could
do that too, it would just be a little more code.

Uh...  What does the spec say to do?

It seems like the geolocation spec just says the failure callback needs
to be called when permission is defined, with the PERMISSION_DENIED
code, but doesn't mention anything about non-secure contexts.  The
permissions spec explicitly says that geolocation *is* allowed in
non-secure contexts <https://w3c.github.io/permissions/#geolocation>.
The most relevant thing I can find is
<https://w3c.github.io/webappsec-secure-contexts/#legacy-example>, which
is an implementation consideration.  But as far as I can tell, this is
not spec'ed.


Your intent, and the whole "sites that would break are already broken"
thing sounded like we were going to match Chrome and Safari behavior; if
that was not the plan you really needed to explicitly say so!

Yes, indeed.  It seems that making Navigator.geolocation [SecureContext]
is incompatible with their implementation.


We certainly should not be shipping anything that will change behavior
here to something _different_ from what Chrome and Safari are shipping,
assuming they are shipping compatible things.  Again, what does the spec
say to do?

-Boris




--
Adam Roach
Principal Engineer, Mozilla


Re: Action Script 4

2016-08-08 Thread Adam Roach

On 8/7/16 12:45, Jonathan Moore wrote:

I was wondering about how one would go about integrating ActionScript
into Gecko.



I'd start by looking at Shumway <http://mozilla.github.io/shumway/> -- I 
believe it only does ActionScript 1, 2, and 3, and that the support is 
only partial, but it's probably easier than trying to start something 
from scratch. Note that Shumway is no longer under active development.


I'd also encourage you to back up to first principles and ask why you're 
not just targeting HTML5 directly. My understanding is that Adobe Flash 
Professional can target JS just as easily as it can ActionScript: 
http://tv.adobe.com/watch/adobe-technology-sneaks-2012/export-to-html5-from-flash-professional/


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Group Photo

2016-06-15 Thread Adam Roach

Here is the group photo from this morning's session:

https://www.flickr.com/photos/9361819@N04/27686864545/in/album-72157669699014665/

https://www.flickr.com/photos/9361819@N04/27686865615/in/album-72157669699014665/


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Basic Auth Prevalence (was Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies)

2016-06-10 Thread Adam Roach

On 4/18/16 09:59, Richard Barnes wrote:

Could we just disable HTTP auth for connections not protected with TLS?  At
least Basic auth is manifestly insecure over an insecure transport.  I
don't have any usage statistics, but I suspect it's pretty low compared to
form-based auth.


As a follow up from this: we added telemetry to answer the exact 
question about how prevalent Basic auth over non-TLS connections was. 
Now that 49 is off Nightly, I pulled the stats for our new little counter.


It would appear telemetry was enabled for approximately 109M page 
loads[1], of which approximately 8.7M[2] used HTTP auth -- or 
approximately 8% of all pages. (This is much higher than I expected -- 
approximately 1 out of 12 page loads uses HTTP auth? It seems far less 
dead than we anticipated).


749k of those were unencrypted basic auth[2]; this constitutes 
approximately 0.7% of all recorded traffic.
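For anyone checking the arithmetic, the percentages above fall out of the raw counts like this (counts are approximate, as quoted from the dashboards in [1] and [2]):

```javascript
// Approximate counts from the linked telemetry dashboards.
const pageLoads = 109e6;      // total page loads with telemetry enabled
const httpAuthLoads = 8.7e6;  // page loads using HTTP auth
const insecureBasic = 749e3;  // unencrypted Basic auth page loads

const pctHttpAuth = (httpAuthLoads / pageLoads) * 100;      // ~8%, or 1 in ~12.5
const pctInsecureBasic = (insecureBasic / pageLoads) * 100; // ~0.7%
```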


I'll look at the 49 Aurora stats when it has enough data -- it'll be 
interesting to see how much of it is nontrivially different.


/a


[1] 
https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&end_date=2016-06-06&keys=__none__!__none__!__none__&max_channel_version=nightly%252F49&measure=HTTP_PAGELOAD_IS_SSL&min_channel_version=null&product=Firefox&sanitize=1&sort_keys=submissions&start_date=2016-05-04&table=0&trim=1&use_submission_date=0


[2] 
https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&end_date=2016-06-06&keys=__none__!__none__!__none__&max_channel_version=nightly%252F49&measure=HTTP_AUTH_TYPE_STATS&min_channel_version=null&product=Firefox&sanitize=1&sort_keys=submissions&start_date=2016-05-04&table=0&trim=1&use_submission_date=0 




--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: FF49a1: Page load of jumping points doesn't work like it should in Wikipedia

2016-05-20 Thread Adam Roach
Ah, I think I spoke too quickly -- the jumping is caused by javascript, 
but not by javascript scrolling. It's certainly possible that javascript 
hiding of large elements would be treated as reflow events by this 
approach...


/a

On 5/20/16 15:19, Adam Roach wrote:

There is one FAQ on that page, and I think it basically says the opposite.

/a

On 5/20/16 12:35, Kartikaya Gupta wrote:

Note that this might get fixed in chrome with their new "scroll
anchoring" feature -
https://developers.google.com/web/updates/2016/04/scroll-anchoring?hl=en

kats

On Fri, May 20, 2016 at 12:15 PM, Adam Roach  wrote:

On 5/20/16 10:13, Gijs Kruitbosch wrote:

On 20/05/2016 16:11, Tobias B. Besemer wrote:

Plz open e.g. this URL:

https://en.wikipedia.org/wiki/Microsoft_Windows#Alternative_implementations

FF49a1 loads the page, jumps to "Alternative implementations", stays
there for 1-2 sec and then go ~1 screen-high (page) down.

Can someone verify this bug?

The same thing happens in Chrome, so it seems like it's more likely to be
an issue with Wikipedia.

The fact that turning JavaScript off prevents this behavior would certainly
seem to support that supposition.

--
Adam Roach
Principal Platform Engineer
Office of the CTO




--
Adam Roach
Principal Platform Engineer
Office of the CTO



--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: FF49a1: Page load of jumping points doesn't work like it should in Wikipedia

2016-05-20 Thread Adam Roach

There is one FAQ on that page, and I think it basically says the opposite.

/a

On 5/20/16 12:35, Kartikaya Gupta wrote:

Note that this might get fixed in chrome with their new "scroll
anchoring" feature -
https://developers.google.com/web/updates/2016/04/scroll-anchoring?hl=en

kats

On Fri, May 20, 2016 at 12:15 PM, Adam Roach  wrote:

On 5/20/16 10:13, Gijs Kruitbosch wrote:

On 20/05/2016 16:11, Tobias B. Besemer wrote:

Plz open e.g. this URL:

https://en.wikipedia.org/wiki/Microsoft_Windows#Alternative_implementations

FF49a1 loads the page, jumps to "Alternative implementations", stays
there for 1-2 sec and then go ~1 screen-high (page) down.

Can someone verify this bug?


The same thing happens in Chrome, so it seems like it's more likely to be
an issue with Wikipedia.


The fact that turning JavaScript off prevents this behavior would certainly
seem to support that supposition.

--
Adam Roach
Principal Platform Engineer
Office of the CTO




--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: FF49a1: Page load of jumping points doesn't work like it should in Wikipedia

2016-05-20 Thread Adam Roach

On 5/20/16 10:13, Gijs Kruitbosch wrote:

On 20/05/2016 16:11, Tobias B. Besemer wrote:

Plz open e.g. this URL:
https://en.wikipedia.org/wiki/Microsoft_Windows#Alternative_implementations 



FF49a1 loads the page, jumps to "Alternative implementations", stays 
there for 1-2 sec and then go ~1 screen-high (page) down.


Can someone verify this bug?


The same thing happens in Chrome, so it seems like it's more likely to 
be an issue with Wikipedia. 


The fact that turning JavaScript off prevents this behavior would 
certainly seem to support that supposition.


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-05-13 Thread Adam Roach

On 5/13/16 14:26, Ben Hearsum wrote:
I intend to make sure that Beta/Release/ESR is configured in such a 
way that users get the most up to date release possible. Eg: serve 
10.6-10.8 users the latest 48.0 point release, then give them a 
deprecation notice. 


Presumably, the deprecation notice will mention ESR as a way to continue 
to get security updates for several more months?



--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-05-03 Thread Adam Roach

On 5/3/16 4:59 PM, Justin Dolske wrote:

On 5/3/16 12:21 PM, Gregory Szorc wrote:

* The update server has been reconfigured to not serve Nightly 
updates to

10.6-10.8 (bug 1269811)


Are we going to be showing some kind of notice to affected users upon 
Release? That is, if I'm a 10.6 user and I update to Firefox 48, at 
some point should I see a message saying I'll no longer receive future 
updates?


Even better, is there any way to get the update system to automatically 
move such users over to 45ESR?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Intent to (sort of) unship SSLKEYLOGFILE logging

2016-04-26 Thread Adam Roach
I think we need to have reasonable answers to Patrick's questions before 
landing this patch. It's clear what we're losing, but unclear what we're 
gaining.


/a

On 4/26/16 08:30, Patrick McManus wrote:

I don't think the case for making this change (even to release builds) has
been successfully made yet and the ability to debug and iterate on the
quality of the application network stack is hurt by it.

The Key Log - in release builds - is part of the debugging strategy and is
used fairly commonly in the network stack diagnostics. The first line of
defense is dev tools, the second is NSPR logging, and the third is
wireshark with a key log because sometimes what is logged is not what is
really happening on the 'wire' (thus the need to troubleshoot).

Bug reporters are often not developers and sometimes do not have the option
of (or willingness to) running other builds. Removing functionality that
helps with that is damaging to our strategic goal of building our Core and
emphasizing quality. Bug 1188657 suggests that this functionality is for
diagnosing tricky TLS bugs, but its just as helpful for diagnosing anything
using TLS which we of course hope to make be everything.

But of course if it represents a security hole then it is medicine that
needs to be swallowed - I wouldn't argue against that. That's why I say the
case hasn't been made yet.

The mechanism requires machine level control to enable - the same level of
control that can alter the firefox binary, or annotate the CA root key
store or any number of other well understood things. Daniel suggests that
Chrome will keep this functionality. The bug 1183318 handwaves around
social engineering attacks against this - but of course that's the same
vector for machine level control of those other attacks as well - I don't
see anything really improved by making this change, but our usability and
ability to iterate on quality are damaged. Maybe I'm misunderstanding the
attack this change ameliorates?

Minimally we should be having this discussion about a change in
functionality for  Firefox 49 - not something that just moved up a
release-train channel.

Lastly, as a more strategic point I think reducing the tooling around HTTPS
serves to dis-incentivize HTTPS. Obviously, we don't want to do that.
Sometimes there are tradeoffs to be made, I'm skeptical of this one though.


On Tue, Apr 26, 2016 at 12:44 AM, Martin Thomson  wrote:


In NSS, we have landed bug 1183318 [1], which I expect will be part of
Firefox 48.

This disables the use of the SSLKEYLOGFILE environment variable in
optimized builds of NSS.  That means all released Firefox channels
won't have this feature as it rides the trains.

This feature is sometimes used to extract TLS keys for decrypting
Wireshark traces [2].  The landing of this bug means that it will no
longer be possible to log all your secret keys unless you have a debug
build.

This is a fairly specialized thing to want to do, and weighing
benefits against risks in this case is an exercise in comparing very
small numbers, which is hard.  I realize that this is very helpful for
a select few people, but we decided to take the safe option in the
absence of other information.

(I almost forgot to send this, but then [3] reminded me in a very
timely fashion.)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1183318
[2]
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
[3]
https://lists.mozilla.org/pipermail/dev-platform/2016-April/014573.html






--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-03-10 Thread Adam Roach

On 3/10/16 5:17 PM, Trevor Saunders wrote:

On Thu, Mar 10, 2016 at 04:01:15PM -0700, Tyler Downer wrote:

The other thing to note is many of those users can still update to 10.11,
and I imagine that over the next year that number will continue to go down.

given they haven't upgraded from 10.6 - 10.8 why do you believe they are
likely to in the future?


Or even can? As I point out in my other message, a lot of the Intel Mac 
hardware cannot go past 10.6.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-03-10 Thread Adam Roach

On 3/10/16 4:50 PM, Ryan VanderMeulen wrote:
25% is pretty close for 10.6-10.8 combined. However, the current 
proposal includes security patches for nearly a year still (putting 
them on the ESR45 train), so construing this as abandoning those users 
seems like it's going a bit far.


I'm not sure the difference between "abandoning" and "irreversibly 
locking into being abandoned in ~1 year" is all that great. After 
initial drop-off, these versions have a pretty stable tail on them.


http://lowendmac.com/2015/the-rise-and-fall-of-mac-os-x-versions-2009-to-2015/

OS X 10.7, in particular, was the first release to leave behind the Core 
Duo and Core Solo Intel hardware, which is still pretty capable and 
(apparently) still used by some sizable portion of the Mac community 
[1]. You'll notice, for example, that 10.6 has many more users than 10.7 
or 10.8 does; and, in fact, appears to still account for 1 out of every 
10 Mac users.


To be clear: these users _cannot_ upgrade to 10.9 or later. It simply 
won't install.


To put this in perspective: we continue to support XP, some 9 years 
after the January 2007 release date of its successor. OS X 10.9 
didn't come out until October of 2013, which is only two and a half 
years ago.



[1] Full disclosure: I have and continue to use such hardware personally.

--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Requiring a try job prior to autolanding to inbound

2016-01-22 Thread Adam Roach

On 1/22/16 06:12, Daniel Minor wrote:

Another difference is that sheriffs require a try run before they will land
a patch flagged "checkin-needed." In Bug 1239281 we're proposing to
implement this requirement for autolanding.


I'm always wary of using tools to enforce policy, since you frequently 
end up with a "tail wagging the dog" situation of policies that are 
driven by the limitations of the tool. I'm all for adding some kind of 
notice reminding people that they should try their patch if it's 
anything more than a trivial change, but making the tool enforce this is 
going to cause more harm -- in the form of development friction -- than 
good.


My understanding is that the autolander is available only to developers 
with Level 3 access, right? Given that this is the same group of people 
who can do a manual check-in, I don't see why we would make autolanding 
have to clear a higher bar than manual landing.


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: HTML mozbrowser frames on desktop Firefox?

2016-01-08 Thread Adam Roach
Regardless of technical feasibility, I believe we're discouraging new 
uses of XUL in Firefox.


/a

On 1/8/16 04:55, Tim Guan-tin Chien wrote:

What prevents you from using ? Is it because the parent
frame is (X)HTML?

I don't know what prevents browser-element from being enabled on desktop
though -- it's tests are running on desktop, and the actual feature is
hidden behind a permission so we won't expose it to the web content even if
we turn it on.


On Fri, Jan 8, 2016 at 3:31 PM, J. Ryan Stinnett  wrote:


(CCing dev-platform as suggested on IRC.)

On Thu, Jan 7, 2016 at 9:58 PM, J. Ryan Stinnett  wrote:

DevTools is working on rebuilding the responsive design UI in an HTML,
chrome-scoped page. This page will want to manage child frames to show
the page content, which could be remote frames. So, I would want to
use  for cases like these.

However, I noticed mozbrowser frames are currently preffed off
(dom.mozBrowserFramesEnabled) on desktop. Is there a reason for this?
Can it be turned on, or is there some kind of work still needed before
it is usable?

I assume we would eventually want to enable this anyway, so that HTML
frames can be used in the primary browser UI.

- Ryan






--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 1:00 PM, Adam Roach wrote:
One of the points that Benjamin Smedberg has been trying to drive home 
is that data collection is everyone's job.


After sending, I realized that this is a slight misquote. It should have 
been "data is everyone's job" (i.e.: there's more to data than collection).


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 12:29 PM, Daniel Holbert wrote:

I had a similar thought, but I think it's too late for such telemetry to
be effective. The vast majority of users who are affected will have
already stopped using Firefox, or will immediately do so, as soon as
they discover that their webmail, bank, google, facebook, etc. don't work.


That's a valid point for the first batch of users that is hit with the 
issue on day one. (Aside: I wonder what the preponderant behavior will 
be when Chrome also starts choking on those sites.) It'll be interesting 
to see whether there's a detectable decline in user count that 
correlates with the beginning of the year.


At the same time, I know that Google tends to measure quite a bit about 
Chrome's behavior. Lacking our own numbers, perhaps we reach out to them 
and ask if they're willing to share what they know.


In any case, people install new things all the time. While it is too 
late to catch the large wave of users who are running into the problem 
this week, it would be nice to have data about this problem on an 
ongoing basis.



(We could have used this sort of telemetry before Jan 1 if we'd forseen
this potential problem.  I don't blame us for not forseeing this, though.)


You're correct: given our current habits, it's understandable that no 
one thought to measure this. I think there's an object lesson to be 
learned here.


Mozilla has a clear and stated intention to be more data driven in how 
we do things. One of the points that Benjamin Smedberg has been trying 
to drive home is that data collection is everyone's job. In the same way 
that we would never land code without thinking about how to test it, we 
need to develop a mindset in which we don't land code without 
considering whether and how to measure it. It's not a perfect analogy, 
since many things won't need specific new metrics, but it should be part 
of the mental checklist: "did I think about whether we need to measure 
anything about this feature?"


If just asking that question were part of our culture, I'm certain we 
would have thought of landing exactly this kind of telemetry as part of the 
same patch that disabled SHA-1; or, even better, in advance of it.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 2:19 AM, Daniel Holbert wrote:

I'm not sure what action we should (or can) take about this, but for
now we should be on the lookout for this, and perhaps consider writing a
support article about it if we haven't already.


I propose that we minimally should collect telemetry around this 
condition. It should be pretty easy to detect: look for cases where we 
reject very young SHA-1 certs that chain back to a CA we don't ship. 
Once we know the scope of the problem, we can make informed decisions 
about how urgent our subsequent actions should be.


It would also be potentially useful to know the cert issuer in these 
cases, since that might allow us to make some guesses about whether the 
failures are caused by malware, well-intentioned but kludgy malware 
detectors, or enterprise gateways. Working out how to do that in a way 
that respects privacy and user agency may be tricky, so I'd propose we 
go for the simple count first.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-12-02 Thread Adam Roach
In case you missed it, Kev Needham (the Add-Ons Product Manager) has put 
together a blog post on this topic:

https://blog.mozilla.org/addons/2015/12/01/de-coupling-reviews-from-signing-unlisted-add-ons/

He also sent the same information to the addons-user-experience mailing 
list:

https://groups.google.com/forum/#!topic/mozilla.addons.user-experience/iwjQbLIb-Fo

I recommend that interested parties who wish to continue the discussion 
respond in one of those two forums.


Thanks!

--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Adam Roach

On 11/30/15 09:38, Henri Sivonen wrote:

The only known realistic source of ISO-2022-JP-2 data is Apple's Mail
application under some circumstances, which may impact Thunderbird and
SeaMonkey.


Does this mean it might interact with webmail services as well? Or do 
they tend to do server-side transcoding from the received encoding to 
something like UTF8?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship

2015-08-26 Thread Adam Roach

On 8/26/15 08:36, Ehsan Akhgari wrote:
Have you considered the implications of making the alias falsey in 
conditions, similar to document.all?


The issue with doing so is that we see code in the wild that looks like 
this:


var NativeRTCPeerConnection = (window.webkitRTCPeerConnection ||
    window.mozRTCPeerConnection);

And a falsey value would simply make things not work.

For all the cases I can think of (at least, in short order), making the 
alias falsey breaks as many things as simply removing it.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Busy indicator API

2015-07-13 Thread Adam Roach

On 7/13/15 10:36, smaug wrote:

On 07/13/2015 01:50 PM, Richard Barnes wrote:

Obligatory: Will this be restricted to secure contexts? 
But given that web pages can already achieve something like this using 
document.open()/close(), at least on Gecko, perhaps exposing the API 
to certainly-not-secure-contexts wouldn't be too bad. 


Going by memory, we're only considering "new features" to be "those 
things that can't be achieved with a polyfill." Since we can polyfill a 
busy indicator, I don't think it qualifies as a "new feature" under our 
"all new features should be on secure origins only" policy.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use-case for consideration, which will be difficult post-NPAPI

2015-06-25 Thread Adam Roach
I would look over the discussion in 
https://bugzilla.mozilla.org/show_bug.cgi?id=988781 regarding future SC 
support via the WebCrypto JS APIs. I would hope that having a W3C spec 
for a smartcard API would encourage a common, cross-browser way to do 
this without plugins or addons.


/a

On 6/25/15 22:29, James May wrote:

Have you considered using a local web server? That way you can use any
native code you want, and it's a reasonably common approach.

On many platforms you can even use socket activation to avoid the need for
a always running server process.



On 25 June 2015 at 21:04, Alex Taylor 
wrote:


Good morning.

I have a use-case which will be difficult to reproduce in the post-NPAPI
world:

The use-case is a Java/NPAPI applet which uses the javax.smartcardio
library to communicate with USB-connected contactless smartcard readers,
from a web-page. Extremely useful functionality for our customers.

Currently the applet will work in Firefox, Chrome and IE.

With the deprecation of NPAPI, we are looking into ways to continue
offering that functionality, and need to continue to target all three of
those browsers if possible.


For Chrome, I have looked into re-implementing the Java applet as a Chrome
App, or using NaCl/PPAPI etc. I have not found any equivalent technology
for Firefox as yet.

Chrome Apps can connect to USB ports via the chrome.usb API, but there is
currently no implementation of PC/SC for it (the smartcard access
specifications that javax.smartcardio is also built on). Due to time
constraints, re-implementing PC/SC ourselves is an option we would only
choose as a last resort. In any case, that would only solve the problem for
Chrome, not Firefox.

Unfortunately, no technology I have looked into so far to solve this
problem is able to offer the cross-browser support that Java/NPAPI enjoyed,
and has an available PC/SC library.


I flag this use-case for consideration in a future web-platform. I am sure
we are not the only company who have combined smartcard io functionality
with the web, and wish to continue doing so.


If anyone knows of any technology or open-source project which might be
useful for this situation, please let me know.


Alex Taylor | Lead Developer


T: +44 (0)1753 27 99 27 | DD: +44 (0)1753 378 144
E: alex.tay...@referencepoint.co.uk | Lync: alex.tay...@referencepoint.co.uk

W: www.referencepoint.co.uk<http://www.referencepoint.co.uk/>

A: Reference Point Limited, Technology House, 2-4 High Street, Chalfont
St. Peter, Gerrards Cross, SL9 9QA

Right People. Right Skills. Right Place. Right Time.

Registered in England No. 02156356

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: WebRTC Working Group

2015-06-12 Thread Adam Roach

On 6/12/15 13:27, L. David Baron wrote:

The W3C is proposing a revised charter for:

   Web Performance Working Group
   http://www.w3.org/2015/05/webperf-charter.html
   https://w3c.github.io/charter-webperf/
   https://lists.w3.org/Archives/Public/public-web-perf/2015Jun/0066.html



I think the subject line may have confused things here. Do you mean 
WebRTC or WebPerf?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-09 Thread Adam Roach

On 6/9/15 17:00, Justin Dolske wrote:

On 6/9/15 2:24 PM, Chris Peterson wrote:


I vote for bugs as a polite (sneaky?) way to watch a bug's bugmail
without spamming all the other CCs by adding myself to the bug's "real"
CC list.


I think if Bugzilla, with its long and complex history, ever has a 
hope of being untangled into something better, we can't keep every 
feature because of all the possible ways it might be used. :) 

OBxkcd: http://xkcd.com/1172/

/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Replacing PR_LOG levels

2015-05-22 Thread Adam Roach

On 5/22/15 15:51, Eric Rahm wrote:

I agree, we shouldn't make it harder to turn on logging. The easy solution is 
just to add a separate logger for verbose messages (if we choose not to add 
Verbose/Trace).


I don't know why we wouldn't just add a more verbose log level (Verbose, 
Trace... I don't care what we call it). The presence of "DEBUG + 1" in 
our code is evidence of a clear, actual need.


Making this a separate mechanism implies that the means of controlling 
these more verbose messages is going to change at some point, and it 
would be a change with no clear benefit. This means that, for example, 
web pages intended to facilitate bug reporting [1] will need to be 
updated to have variant instructions depending on the version of the 
browser; and some such instructions are virtually guaranteed to be missed.



[1] See, for example, <https://wiki.mozilla.org/Media/WebRTC/Logging>: 
"For frame-by-frame logging, use mediamanager:6"


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changing the style guide's preference for loose over strict equality checks in non-test code

2015-05-14 Thread Adam Roach

On 5/14/15 16:33, Gijs Kruitbosch wrote:
Can you give a concrete example where you had to change a 
contributor's patch in frontend gaia code to prefer === to prevent 
real bugs? 


From what I've seen, it's typically a matter of making the results 
unsurprising for subsequent code maintainers, because the rules of what 
gets coerced to what are not intuitive.


I'll crib from Crockford's examples (cf. "Appendix B: The Bad Parts" 
from "JavaScript: The Good Parts"). How many of these can you correctly 
predict the result of?


1. '' == '0'
2. 0 == ''
3. 0 == '0'

4. false == 'false'
5. false == '0'

6. false == undefined
7. false == null
8. null == undefined

9. '\t\r\n' == 0


I've posted the answers at https://pastebin.mozilla.org/8833537

If you had to think for more than a few moments to reach the right 
conclusion about any of these -- or, heaven forbid, actually got one 
wrong -- then I think you need to ultimately concede that the use of == 
is more confusing than it needs to be.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-07 Thread Adam Roach
> On May 6, 2015, at 22:51, Eric Shepherd  wrote:
>
> would have been nice to have more notice


The plan that has been outlined involves a staged approach, with new
JavaScript features being withheld after some date, followed by a
period during which select older JavaScript features are gradually
removed. I'll note that actually turning off http isn't part of the
outline.

Most importantly, all of these steps are to be taken at dates that are
still under discussion. You can be part of that discussion.

Which leaves us with a conundrum regarding your plea for more notice:
it's a bit hard to seriously consider complaints that "at some future
date yet to be determined" is "too soon."

/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand("cut"/"copy")

2015-05-06 Thread Adam Roach

On 5/6/15 20:32, Ehsan Akhgari wrote:

If this falls under the definition of a "new
feature," and if it's going to be released after the embargo date, then
the security properties of clipboard manipulation don't really enter
into the evaluation.


I admit that I didn't read the entire HTTP deprecation plan thread 
because of the length and the tone of some of the participants, so 
perhaps I missed this, but reading 
<https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/> 
seems to suggest that there is going to be a date and criteria for 
what new features mean, but I see no mention of what that date is, or 
what the definition of new features is. 


That's why there were two predicates qualifying the statement.

My point is that the answer to Jonas' question may -- and I'll emphasize 
"may" -- turn on an overarching strategic security policy, rather than 
the security properties of the feature itself.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand("cut"/"copy")

2015-05-06 Thread Adam Roach

On 5/6/15 13:32, Jonas Sicking wrote:

Like Ehsan, I don't see what advantages limiting this to https brings?


In some ways, that depends on what we decide to define "new features" to 
mean, and the release date of this feature relative to the date we 
settle on in the announced security plan [1] of " Setting a date after 
which all new features will be available only to secure websites."


If we use the example definition of "new features" to mean "features 
that cannot be polyfilled," then this would qualify.


Keep in mind the thesis of that plan isn't that we restrict 
security-sensitive features to https -- it's that /all new stuff/ is 
restricted to https. If this falls under the definition of a "new 
feature," and if it's going to be released after the embargo date, then 
the security properties of clipboard manipulation don't really enter 
into the evaluation.



[1] 
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand("cut"/"copy")

2015-05-06 Thread Adam Roach

On 5/6/15 13:13, Gervase Markham wrote:

On 06/05/15 18:36, Tom Schuster wrote:

I think the ribbon would be really useful if it allowed the user to
restore the previous clipboard content. However this is probably not
possible for all data that can be stored in clipboards, i.e. files.

Which is why we wouldn't overwrite the clipboard until the permission
was granted :-)



Well, that makes it scantly better than a doorhanger, which is what 
Martin was objecting to (and I agree with him). The model that we really 
want here is "this thing happened, click here to undo it" rather than 
"this thing is about to happen, but won't unless you take additional 
action." I think this position is pretty strongly bolstered by Dave 
Graham's message about GitHub behavior: "Although IE 11 supports this 
API as well, we have not enabled it yet. The browser displays a popup 
dialog asking the user for permission to copy to the clipboard. 
Hopefully this popup is removed in Edge so we can start using JS there too."


Basically, requiring the extra step of requiring the user to click on an 
"okay, do it" button is high enough friction that the function loses its 
value.


In any case, we should have a better technical exploration of the 
assertion that restoring a clipboard isn't possible in all cases before 
we take it as given. A cursory examination of the OS X clipboard API 
leads me to believe that this would be trivially possible (I believe we 
can just store the array of pasteboardItems from the NSGeneralPBoard off 
somewhere so that they can be moved back if necessary). I'd be a little 
surprised if this weren't also true for Linux and Windows.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand("cut"/"copy")

2015-05-06 Thread Adam Roach

On 5/6/15 10:49, Martin Thomson wrote:

On Wed, May 6, 2015 at 8:42 AM, Doug Turner  wrote:

This is important.  We could mitigate by requiring https, only allowing the top 
level document access these clipboard apis, and doorhangering the API.  
Thoughts?

A doorhanger seems like overkill here.  Making this conditional on an
"engagement gesture" seems about right.  I don't believe that we
should be worry about surfing - and interacting with - strange sites
while there is something precious on the clipboard.

"Ask forgiveness, not permission" seems about the right balance here.
If we can find a way to revoke permission for a site that abuses the
privilege, that's better.  (Adding this to about:permissions with a
default on state seems about right, which leads me to think that we
need the same for the fullscreen thing.)


Going fullscreen also gives the user UI at the time of activation, 
allowing them to manipulate permissions in an obvious way:


https://www.dropbox.com/s/c0sbknrlz4pbybk/Screenshot%202015-05-06%2011.33.42.png?dl=0

Perhaps an analogous yellow ribbon informing the user that the site has 
copied data onto their clipboard, with buttons to allow them to prevent 
it from happening in the future, would be a good balance (in particular 
if denying permission restored the clipboard to its previous state) -- 
it informs the user and provides clear recourse without *requiring* 
additional action.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-04 Thread Adam Roach

On 5/4/15 11:24, Florian Bösch wrote:
On Mon, May 4, 2015 at 3:38 PM, Adam Roach <a...@mozilla.com> wrote:


others who want to work for a better future

A client of mine whom I polled if they can move to HTTPs with their 
server stated they do not have the time and resources to do so. So the 
fullscreen button will just stop working. That's an amazing better 
future right there.


You have made some well-thought-out contributions to conversations at 
Mozilla in the past. I'm a little sad that you're choosing not to 
participate in a useful way here.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-04 Thread Adam Roach

On 5/2/15 05:25, Florian Bösch wrote:
I now mandate that you (and everyone you know) shall only do ethernet 
trough pigeon carriers. There are great advantages to doing this, and 
I can recommend a number of first rate pigeon breeders which will sell 
you pigeons bred for that purpose. I will not discuss with you any 
notion that pigeons shit onto everything and that cost might rise 
because pigeons are more expensive to keep than a copper line. 
Obviously you're a pigeon refusenik and preventer of great progress. 
My mandate for pigeons is binding will come into effect because I 
happen to have a controlling stake in all utility companies and come 
mid 2015 copper lines will be successively cut. Please refrain from 
disagreeing my mandate in vulgar terms, also I refuse any notion that 
using pigeons for ethernet by mandate is batshit insane (the'yre 
pigeons, not bats, please).


It's clear you didn't see it as such, but Nicholas was trying to do you 
a favor.


You obviously have input you'd like to provide on the topic, and the 
very purpose of this thread is to gather input. If you show up with 
well-reasoned arguments in a tone that assumes good faith, there's a 
real chance for a conversation here where people reach a common 
understanding and potentially change certain aspects of the outcome.


If all you're willing to do is hurl vitriol from the sidelines, you're 
not making a difference. Even if you have legitimate and 
well-thought-out points hidden in the venom, no one is going to hear 
them. Nicholas, like I, would clearly prefer that the time of people on 
this mailing list be spent conversing with others who want to work for a 
better future rather than those who simply want to be creatively 
abusive. You get to choose which one you are.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Adam Roach

On 5/1/15 05:03, Matthew Phillips wrote:

All mandatory https will do is discourage people from participating in
speech unless they can afford the very high costs (both in dollars and
in time) that you are now suggesting be required.


Let's be clear about the costs and effort involved.

There are already several deployed CAs that issue certs for free. And 
within a couple of months, it will take users two simple commands, zero 
fiscal cost, and several tens of seconds to obtain and activate a cert:


https://letsencrypt.org/howitworks/

There is great opportunity for you to update your knowledge about how 
the world of CAs has changed in the past decade. Seize it.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Adam Roach

On 5/1/15 02:54, 王小康 wrote:

P.S.:And finally, accept Cacert or a easy to use CA.


CAs can only be included at their own request. As it stands, CACert has 
withdrawn its request to be included in Firefox until they have 
completed an audit with satisfactory results. If you want CACert to be 
included, contact them and ask what you can do to help.


In the meanwhile, as has been brought up many times in this thread 
already, there are already deployed or soon-to-be-deployed "easy to use 
CAs" in the world.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-16 Thread Adam Roach

On 4/16/15 07:16, david.a.p.ll...@gmail.com wrote:

For example:
- You say "there is only secure/not secure".  Traditionally, we have things like defense 
in depth, and multiple levels of different sources of authentication.  I am hearing: "You will 
either have a Let's Encrypt certificate or you don't".  Heck, let's get rid of EV certificate 
validation too while we are at it: we don't want to have to do special vetting for banking and 
medical websites, because that doesn't fit in with Let's Encrypt's business model.


You're pretty far off in the weeds here. I'll try to help you with some 
of your misconceptions.


First, no one is proposing that Let's Encrypt should become the sole 
source of TLS certificates. Let's Encrypt was started to solve a 
specific set of valid complaints about the complexity and financial 
issues surrounding acquiring a TLS certificate for certain individuals.


Second, Let's Encrypt is run by ISRG, not Mozilla -- Mozilla is one of 
several supporters for ISRG, but we are separate entities.


Finally, ISRG is a 501(c)(3) non-profit public benefit corporation. 
There's no business model in the traditional sense, since the goal is 
not profit. The goal is to fulfill its mission, which is "to reduce 
financial, technological, and education barriers to secure communication 
over the Internet." Accusing ISRG of having a pro-TLS agenda is akin to 
accusing a soup kitchen of having a pro-soup agenda: it shows a 
fundamental misunderstanding of what they're doing and why.



- You don't want to hear about non-centralized security models.  DANE...


...is a centralized security model. The difference is that you're 
trading a set of predominantly commercial CA entities for a different 
set of governmental or governmentally-contracted entities. It is 
arguably more centralized than the current CA system.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 16:32, northrupthebandg...@gmail.com wrote:

*By logical measure*, the [connection] that is encrypted but unauthenticated is 
more secure than the one that is neither encrypted nor authenticated, and the 
fact that virtually every HTTPS-supporting browser assumes the precise opposite 
is mind-boggling.


That depends on what kind of resource you're trying to access. If the 
resource you're trying to reach (in both circumstances) isn't demanding 
security -- i.e., it is an "http" URL -- then your logic is sound. 
That's the basis for enabling OE.


The problem here is that you're comparing:

 * Unsecured connections working as designed

with

 * Supposedly secured connections that have a detected security flaw


An "https" URL is a promise of encryption _and_ authentication; and when 
those promises are violated, it's a sign that something has gone wrong 
in a way that likely has stark security implications.


Resources loaded via an "http" URL make no such promises, so the 
situation isn't even remotely comparable.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 15:35, emmanueldeloge...@gmail.com wrote:

Will Mozilla start to offer certificates to every single domain name owner ?


Yes [1].

https://letsencrypt.org/



[1] I'll note that Mozilla is only one of several organizations involved 
in making this effort happen.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 10:53, justin.kru...@gmail.com wrote:

Dynamic DNS might be difficult to run on HTTPS as the IP address needs to 
change when say your cable modem IP updates.  HTTPS only would make running 
personal sites more difficult for individuals, and would make the internet 
slightly less democratic.


I'm not sure I follow. I have a cert for a web site running on a dynamic 
address using DynDNS, and it works just fine. Certs are bound to names, 
not addresses.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Adam Roach

On 3/12/15 12:26, Aryeh Gregor wrote:

Because unless things have changed a lot in the last three years or
so, HTTPS is a pain for a few reasons:

1) It requires time and effort to set up.  Network admins have better
things to do.  Most of them either are volunteers, work part-time,
computers isn't their primary job responsibility, they're overworked,
etc.

2) It adds an additional point of failure.  It's easy to misconfigure,
and you have to keep the certificate up-to-date.  If you mess up,
browsers will helpfully go berserk and tell your users that your site
is trying to hack their computer (or that's what users will infer from
the terrifying bright-red warnings).  This is not a simple problem to
solve -- for a long time, https://amazon.com would give a cert error,
and I'm pretty sure I once saw an error on a Google property too.  I
think Microsoft too once.

3) Last I checked, if you want a cert that works in all browsers, you
need to pay money.  This is a big psychological hurdle for some
people, and may be unreasonable for people who manage a lot of small
domains.

4) It adds round-trips, which is a big deal for people on high-latency
connections.  I remember Google was trying to cut it down to one extra
round-trip on the first connection and none on subsequent connections,
but I don't know if that's actually made it into all the major
browsers yet.

These issues seem all basically fixable within a few years


As an aside, the first three are not just fixable, but actually fixed 
within the next few months: https://letsencrypt.org/



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: WebRTC Working Group

2015-03-06 Thread Adam Roach

On 3/6/15 17:27, Martin Thomson wrote:

On Fri, Mar 6, 2015 at 3:13 PM, Adam Roach  wrote:

The only thing that I think we'd want to see in here that is currently
missing is an explicit statement that the "new set of low level
object-oriented APIs for real-time communication" (called "WebRTC NG" in
the deliverables) will be backwards-compatible extensions to the
existing WebRTC 1.0 APIs, rather than a parallel system for performing
similar operations.


I think that we're capable of making that comment in the working group
proper (and I think that someone already has).  No need to raise it at
this level unless someone feels that our concern was ignored.


I think the language in the charter is mildly biased towards implying 
the opposite, which causes me some concern.




Re: Proposed W3C Charter: WebRTC Working Group

2015-03-06 Thread Adam Roach
On 3/2/15 12:53, L. David Baron wrote:
> The W3C is proposing a revised charter for:
>
>   Web Real-Time Communications Working Group
>   http://www.w3.org/2015/02/webrtc-charter.html
>   https://lists.w3.org/Archives/Public/public-new-work/2015Feb/0004.html
>
> Mozilla has the opportunity to send comments or objections through
> Friday, March 13.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review.  (Given our involvement, it
> seems to me like we should support the charter in general, possibly
> with comments.)
>  

I agree that we should support the charter. I've read over the proposed
charter, and find it congruent with the discussions about future
direction that have taken place within the working group.

The only thing that I think we'd want to see in here that is currently
missing is an explicit statement that the "new set of low level
object-oriented APIs for real-time communication" (called "WebRTC NG" in
the deliverables) will be backwards-compatible extensions to the
existing WebRTC 1.0 APIs, rather than a parallel system for performing
similar operations.



Re: http-schemed URLs and HTTP/2 over unauthenticated TLS

2014-11-19 Thread Adam Roach

On 11/19/14 04:50, Patrick McManus wrote:

There are basically 2 arguments against OE here: 1] you don't need OE
because everyone can run https and 2] OE somehow undermines https

I don't buy them because [1] remains a substantial body of data and [2] is
unsubstantiated speculation and borders on untested FUD.


I agree, and find the assertion of [2] to be further perplexing: it 
completely discounts the fact that OE can (and ideally will) be opt-out 
for most server configurations, while HTTPS remains opt-in -- even for 
the Let's Encrypt setup.


There's a radical difference in penetration between opt-in and opt-out, 
and we base substantial portions of our privacy decisions on this fact. 
I'm a bit baffled that it's not immediately obvious to everyone in this 
conversation that this distinction translates to the deployment of 
encryption.


I'm all for the drive to have authenticated encryption everywhere, and 
am very excited about the Let's Encrypt initiative. But there's no 
reason to leave traffic gratuitously unencrypted while we drive towards 
100% HTTPS penetration.




Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-29 Thread Adam Roach

On 9/29/14 03:02, Anne van Kesteren wrote:

On Mon, Sep 29, 2014 at 2:02 AM, Adam Roach  wrote:

Yes, I saw that. Your proposal didn't see a lot of support in that venue.

So far for geolocation there is nobody that is opposed.


I'm responding on the topic of gUM, but I'll point out that a response 
of "this is nonsense as stated" (Richard Barnes) does sound like an 
objection to me. I also read Karl Dubost's response as being less than 
favorable, and Chris Peterson 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1072859#c2) seems to be 
worried about how your proposal breaks existing websites.


Based on your statement, I do wonder what constitutes opposition in your 
mind. For clarity, I am opposed to your proposal for gUM.



For getUserMedia() there are claims of extensive discussion that is
not actually recorded in text. There was also a lot of pointing to
geolocation which does not seem like a valid argument. I don't think
they've made their case.


Sure, but determination of consensus is the job of the chairs. What goes 
into a spec is based on direction of the working group, not on the 
criteria of "Anne van Kesteren thinks the working group has made its case."


Fundamentally, the problem is that you're coming to the conversation 
late, and you're not willing to do the work to catch up on the 
conversations that have already occurred. There's no WG historian whose 
job it is to research decision rationale for newcomers.


The reason you're not seeing anyone responding with compelling reasons 
on the working group mailing list is that the issue has been discussed 
and closed. No one has much energy to relitigate old issues, especially 
when objections are brought forth in the rather weak form of "I'm not 
sure this was a good idea." If you wish to challenge the status quo, 
then the burden of proof is on you to come up with arguments that are 
both compelling and lucid enough to change the minds of the working 
group participants, not on the working group to justify its past work to 
you.


I note that one of the chairs has taken the exceptionally gracious step 
of inviting you to make your arguments in a more sensible, 
threat-analysis-based form. There's your opening -- take his suggestion, 
and make your case.




Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-28 Thread Adam Roach

On 9/27/14 02:24, Anne van Kesteren wrote:

On Fri, Sep 26, 2014 at 11:11 PM, Adam Roach  wrote:

This is a matter for the relevant specification, not some secret cabal.

I was not proposing doing anything in secret.

I also contacted the relevant standards lists.




Yes, I saw that. Your proposal didn't see a lot of support in that 
venue. And that's why taking it to a Mozilla mailing list rather than 
continuing the discourse that you already started feels like an 
attempted end-run around the standards process. Surely you understand 
why that appears unseemly, right?




Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-26 Thread Adam Roach

On 9/26/14 14:58, Anne van Kesteren wrote:

Exposing geolocation on unauthenticated origins was a mistake. Copying
that for getUserMedia() is too. I suggest that to protect our users we
make some noise about deprecating this practice.


There have already been extensive discussions on this specific topic 
within the W3C, and the conclusion that has been reached does not match 
what you are proposing. I would be extremely loathe to propose that we 
implement outside the spec on a security issue that's already received 
adequate discussion in the relevant venue.



More immediately we should make it impossible to make persistent
grants for these features on unauthenticated origins.


Our implementation of getUserMedia already does this, and the 
getUserMedia spec has RFC 2119 "MUST" strength language requiring such 
behavior.



I can reach out to Google (and Apple & Microsoft I suppose, though I
haven't seen much from them on the pro-TLS front) to see if they would
be on board with this and help us spread the message.



The email address you're looking for is "public-media-capt...@w3.org". 
This is a matter for the relevant specification, not some secret cabal.





Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Adam Roach

On 9/15/14 11:08, Anne van Kesteren wrote:

Google seems to have the right trade off
and the "IETF consensus" seems to be unaware of what is happening
elsewhere.


You're confused.

The whole line of argumentation that web browsers and servers should be 
taking advantage of opportunistic encryption is explicitly informed by 
what's actually "happening elsewhere." Because what's *actually* 
happening is an overly-broad dragnet of personal information by a wide 
variety of both private and governmental agencies -- activities that 
would be prohibitively expensive in the face of opportunistic encryption.


Google's laser focus on preventing active attackers to the exclusion of 
any solution that thwarts passive attacks is a prime example of 
insisting on a perfect solution, resulting instead in substantial 
deployments of nothing. They're naïvely hoping that finding just the 
right carrot will somehow result in mass adoption of an approach that 
people have demonstrated, with fourteen years of experience, significant 
reluctance to deploy universally.


This is something far worse than being simply unaware of "what's 
happening elsewhere": it's an acknowledgement that pervasive passive 
monitoring is taking place, and a conscious decision not to care.




Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Adam Roach

On 9/12/14 10:07, Trevor Saunders wrote:

[W]hen it comes to the NSA we're pretty much just not going to be able
to force everyone to use something strong enough they can't beat it.


Not to get too far off onto this sidebar, but you may find the following 
illuminating; not just for potentially adjusting your perception of what 
the NSA can and cannot do (especially in the coming years), but as a 
cogent analysis of how even the thinnest veneer of security can temper 
intelligence agencies' overreach into collecting information about 
non-targets:


http://justsecurity.org/7837/myth-nsa-omnipotence/

While not the thesis of the piece, a highly relevant conclusion the 
author draws is: "[T]hose engineers prepared to build defenses against 
bulk collection should not be deterred by the myth of NSA omnipotence.  
That myth is an artifact of the post-9/11 era that may now be outdated 
in the age of austerity, when NSA will struggle to find the resources to 
meet technological challenges."


(I'm hesitant to appeal to authority here, but I do want to point out 
the "About the Author" section as being important for understanding 
Marshall's qualifications to hold forth on these matters.)




Re: WebCrypto for http:// origins

2014-09-11 Thread Adam Roach

On 9/11/14 11:08, Anne van Kesteren wrote:

On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes  wrote:

Most notably, even over non-secure origins, application-layer encryption can 
provide resistance to passive adversaries.

See https://twitter.com/sleevi_/status/509723775349182464 for a long
thread on Google's security people not being particularly convinced by
that line of reasoning.



The brief detour into discussing opportunistic encryption in that 
rambling thread [1] highlights a place where Ryan differs from the 
growing consensus, at least within the IETF, that something is better 
than nothing. He is out of step with the recognition that our historic 
stance of "perfect or absent" is counterproductive. Theodore actually 
puts it pretty succinctly in one of the IETF mailing list messages that 
Henri cites: "For too long, I think, we've let the perfect be the enemy 
of the good."


When you force people into an "all or nothing" situation regarding 
security, "nothing" is the easy choice. If you provide tools for much 
easier incremental improvement, people will be far more likely to deploy 
something. Absolutism isn't the way to make progress: a transition path 
with small, incremental steps that yield small, incremental improvements 
that gets you to where you want to be eventually.


By contrast, forcing people to swallow everything all at once only 
serves to discourage adoption of any security at all.



[1] Which is now my favorite example of Twitter's shortcomings as a 
communications medium.




Re: Intent to land: Voice/video client ("Loop")

2014-05-30 Thread Adam Roach

On 5/30/14 10:14, Anne van Kesteren wrote:

On Fri, May 30, 2014 at 5:03 PM, Adam Roach  wrote:

Link to standard: N/A

I take it this means there's no web-exposed API?




That is correct. This is a browser feature, not accessible from content.




Intent to land: Voice/video client ("Loop")

2014-05-30 Thread Adam Roach

Summary:

The Loop project aims to create a user-visible real-time communications 
service for existing Mozilla products, leveraging the WebRTC platform. 
One version of the client will be integrated with Firefox Desktop. It is 
intended to be interoperable with a Firefox OS application (not part of 
Gaia) that will be shipped by a third party on the Firefox OS 2.0 platform.


The implementation of the client has already reached a proof-of-concept 
stage in a set of github repositories. This announcement of intent is 
being sent in advance of integration into the mozilla-central 
repositories. This integration will be a two-step process. The first 
step, which has already taken place, is to land the existing 
implementation in the Elm tree to validate proper integration into the 
RelEng systems. After testing, the code will be merged into m-c, and 
subsequent development will take place in the m-i/m-c trees.


The code is currently controlled by the MOZ_LOOP preprocessor 
definition, which is only enabled for Nightly builds. The feature will 
iterate on Nightly until it is considered complete enough to ride the 
trains out.



For more details:

https://wiki.mozilla.org/Loop
https://wiki.mozilla.org/Media/WebRTC
https://blog.mozilla.org/futurereleases/2014/05/29/experimenting-with-webrtc-in-firefox-nightly/

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=loop_mlp

Link to standard: N/A

Platform coverage: Firefox on Desktop.

Estimated or target release: Firefox 33 or 34 (33 is a stretch goal for 
the team, 34 is a committed date).


Preference behind which this will be implemented: For initial landing on 
Nightly, none. Will be behind "loop.enabled" before riding the trains.




Re: Mozilla style guide issues, from a JS point of view

2014-01-08 Thread Adam Roach

On 1/8/14 12:03, Martin Thomson wrote:

On 2014-01-08, at 09:57, Adam Roach  wrote:


Automated wrapping to a column width is less than optimal. If you look back at 
bz's example about how he would chose to wrap a specific conditional, it's 
based on semantic intent, not the language syntax. By and large, this goes to 
author's intent and his understanding of the problem at hand. It's not the kind 
of thing that can be derived mechanically.

 From that I infer that you would prefer to leave wrapping choices to 
individuals.  That’s in favour of the latter option: enforce line length, but 
don’t reformat to it.


My observation has two key implications. That's the first.

The second is that we need to be careful if we decide to run a 
reformatter over the code wholesale, since you can actually lose useful 
information about author's intent. I'm not the first to raise that point 
in this discussion; I'm simply agreeing with it.





Re: Mozilla style guide issues, from a JS point of view

2014-01-08 Thread Adam Roach

On 1/8/14 11:28, Martin Thomson wrote:

So maybe this can bifurcate the bike shedding discussion further: do you want 
to have a tool wrap to X, or do you want a tool to block patches that exceed X?


Automated wrapping to a column width is less than optimal. If you look 
back at bz's example about how he would chose to wrap a specific 
conditional, it's based on semantic intent, not the language syntax. By 
and large, this goes to author's intent and his understanding of the 
problem at hand. It's not the kind of thing that can be derived 
mechanically.




Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Adam Roach

On 1/7/14 14:23, Boris Zbarsky wrote:
One reason I've seen 2 preferred to 4 (apart from keeping line lengths 
down)...


Thanks. I was just about to raise the issue that choosing four over two 
has no identified benefits, and only serves to exacerbate *both* sides 
of the argument over line length limits.




Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Adam Roach

On 1/7/14 12:16, Martin Thomson wrote:

On 2014-01-06, at 19:28, Patrick McManus  wrote:


I strongly prefer at least a 100 character per line limit. Technology
marches on.

Yes.  I’ve encountered too many instances where 80 was not enough.


Since people are introducing actual research information here, let's run 
some numbers. According to Paterson et al. [1], reading comprehension 
speed is actively hindered by lines that are either too short or too 
long, which they define as 9 picas (1.5 inches) and 43 picas (~7 
inches), respectively. Comprehension is significantly faster at 19 picas 
(~3 inches).


Using the default themes that ship with the OS X "Terminal" app, an 
80-character-wide terminal is on the order of 4 inches wide on a 15-inch 
monitor. 100 columns pushes this to nearly 5 inches.


Now, I'm not arguing for a 60-character line length here. However, it 
would seem that moving from 80 to 100 is going in the wrong direction 
for comprehension speed.



[1] http://psycnet.apa.org/journals/xge/27/5/572/


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Adam Roach

On 1/7/14 03:07, Jason Duell wrote:
Yes--if we jump to >80 chars per line, I won't be able to keep two 
columns open in my editor (vim, but emacs would be the same) on my 
laptop, which would suck.


(Yes, my vision is not what it used to be--I'm using 10 point font. 
But that's not so huge.) 


I'm not just sympathetic to this argument; I've made it myself in other 
venues. Put me down as emphatically agreeing.





Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Adam Roach

On 1/6/14 12:22, Axel Hecht wrote:
In the little version control archaeology I do, I hit "breaks blame 
for no good reason" pretty often already. I'd not underestimate the 
cost for the project of doing changes just for the sake of changes. 


Do you have a concrete reason to believe that Martin's workaround for 
retaining blame information doesn't work, or did you just miss that part 
of his message?





Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Adam Roach

On 1/6/14 09:50, Gavin Sharp wrote:

A concise summary of the changes you're proposing would be useful -
here's my attempt at one.

 From what I gather, the changes you're proposing to the style guide are:

* remove implicit discouragement of changing code to conform to "Mozilla style"
** style changes should never be combined with functional changes
** (other specifics about how this should be accomplished and with
what exceptions may need elaborating)
* remove the text suggesting that module owners can dictate different
code styles


You missed:

* Don't reformat third-party code.

/a


Re: java click to run problem on Firefox

2013-10-10 Thread Adam Roach

On 10/10/13 14:42, Ehsan Akhgari wrote:
There is a java compiler that targets llvm somewhere which I can't 
find a link to right now, but it's unmaintained and IIRC it did not 
cover all of the Java syntax.  But the real problem is in the huge API 
set that the JVM provides for various platform services and in order 
to use emscripten on java code for anything except for pure 
computations you will need a javascript implementation of those bits 
(similar to the javascript libc implementation in 
<https://github.com/kripken/emscripten/blob/master/src/library.js>. To 
the best of my knowledge nobody has ever signed up to do that work.


Interesting. It would seem to me that, if we're really serious about 
getting plugins out of the browser, we would place a pretty high 
priority on giving developers tools to migrate Java to JavaScript.





Re: java click to run problem on Firefox

2013-10-10 Thread Adam Roach

On 10/10/13 12:06, Thierry Milard wrote:
There are still a few things, like speedy 3D, that HTML/JavaScript does 
not do well enough


http://www.unrealengine.com/html5/




Re: java click to run problem on Firefox

2013-10-10 Thread Adam Roach

On 10/10/13 11:09, Benjamin Smedberg wrote:
We encourage you to transition your site away from Java as soon as 
possible. If there are APIs which you need in the web platform in 
order to make that possible, please let me know and we will try to 
make adding those a priority.


I haven't personally seen it done, but it seems to me that you could do 
something like:


[java source] --javac-> [java bytecode] --llvm-> [llvm bitcode] 
--emscripten-> [javascript]


I'm not claiming that this would be possible without some porting 
effort; my point is that one does not need to start from scratch to make 
this transition.


Has anyone here played around with the toolchain I describe above?

/a


Re: Detection of unlabeled UTF-8

2013-09-09 Thread Adam Roach

On 9/9/13 02:31, Henri Sivonen wrote:

We don't have telemetry for the question "How often are pages that are not
labeled as UTF-8, UTF-16 or anything that maps to their replacement
encoding according to the Encoding Standard and that contain non-ASCII
bytes in fact valid UTF-8?" How rare would the mislabeled UTF-8 case need
to be for you to consider the UI that you're proposing not worth it?


I'd think it would depend somewhat on the severity of the misencoding. 
For example, interpreting a page of UTF-8 as Windows-1252 isn't 
generally going to completely ruin a page with the occasional accented 
Latin character, although it will certainly be an obvious defect. I'd be 
happy to leave the situation be if this happened to fewer than 1% of 
users over a six week period.


On the other hand, misrendering a page of UTF-8 that consists 
predominantly of a non-Latin character set is pretty catastrophic, and 
is going to tend to happen to the same subset of users over and over 
again. For that situation, I think I'd like to see fewer than 0.1% of 
users who have a build that has been localized into a non-Latin 
character set impacted over a six-week period before I was happy leaving 
things as-is.
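
The difference in severity between the two cases is easy to see in a 
few lines of Python (an illustration of the mojibake itself, not of 
anything in Gecko's decoding path):

```python
# UTF-8 bytes for accented Latin text, misread as Windows-1252:
# each accented character becomes two junk characters, but the
# text stays mostly legible.
latin = "résumé".encode("utf-8")
print(latin.decode("windows-1252"))  # prints "rÃ©sumÃ©"

# UTF-8 bytes for Greek text, misread the same way: every single
# character is garbled, and the text is unreadable.
# (errors="replace" because some bytes are undefined in Windows-1252.)
greek = "καλημέρα".encode("utf-8")
print(greek.decode("windows-1252", errors="replace"))
```

For the Latin case a reader can still limp along; for the Greek case 
not a single character survives, which is why I weight that case so 
much more heavily above.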



However, we do have telemetry for the percentage of Firefox sessions in
which the  current character encoding override UI has been used at least
once. See https://bugzilla.mozilla.org/show_bug.cgi?id=906032 for the
results broken down by desktop versus Android and then by locale.


I don't think measuring the behavior of those few people who know about 
this feature is particularly relevant. The status quo works for them, by 
definition. I'm far more concerned about those users who get garbled 
pages and don't have the knowledge to do anything about it.



I would accept  a (performance-conscious) patch for gathering telemetry for
the UTF-8 question in the HTML parser.  However, I'm not volunteering to
write one myself immediately, because I have bugs on my todo list that have
been caused by previous attempts of Gecko developers to be well-intentioned
about DWIM and UI around character encodings. Gotta fix those first.


Great. I'll see if I can wedge in some time to put one together 
(although I'm similarly swamped, so I don't have a good timeframe for 
this). If anyone else has time to roll one out, that would be even better.



Even non-automatic correction means authors can take the attitude that
getting the encoding wrong is no big deal since the fix is a click away for
the user.


I'll repeat that it's not our job to police the web. I'm firmly of the 
opinion that those developers who don't care about doing things right 
won't do them right no matter how big a stick you personally choose to 
beat them with. On the other hand, I'm quite worried about collateral 
damage to our users in your crusade to control publishers.


Give the publishers the tools to understand their errors, and the users 
the tools to use the web the way they want to use it. Those publishers 
who aren't bad actors will correct their own behavior -- those who _are_ 
bad actors aren't going to behave anyway. There's no point getting 
authoritarian about it and making the web a less accessible place as a 
consequence.




Re: Detection of unlabeled UTF-8

2013-09-06 Thread Adam Roach

On 9/6/13 04:25, Henri Sivonen wrote:

We do surface such UI for https deployment errors, inspiring academic
papers about how bad it is that users are exposed to such UI.


Sure. It's a much trickier problem (and, in any case, the UI is 
necessarily more intrusive than what I'm suggesting). There's no good 
way to explain the nuanced implications of security decisions in a way 
that is both accessible to a lay user and concise enough to hold the 
average user's attention.



On Thu, Sep 5, 2013 at 6:15 PM, Adam Roach  wrote:

As to the "why," it comes down to balancing the need to let the publisher
know that they've done something wrong against punishing the user for the
publisher's sins.

Two problems:
  1) The complexity of the platform increases in order to address a fringe case.
  2) Making publishers' misdeeds less severe in the short term makes it
more OK for publishers to engage in the misdeeds, which in the light
of #1 leads to long-term problems. (Consider the character encoding
situation in Japan and how HTML parsing in Japanese Firefox is worse
than in other locales as a result.)


To the first point: the increase in complexity is fairly minimal for a 
substantial gain in usability. Absent hard statistics, I suspect we will 
disagree about how "fringe" this particular exception is. Suffice it to 
say that I have personally encountered it as a problem as recently as 
last week. If you think we need to move beyond anecdotes and personal 
experience, let's go ahead and add telemetry to find out how often this 
arises in the field.


Your second point is an argument against automatic correction. Don't get 
me wrong: I think automatic correction leads to innocent publisher 
mistakes that make things worse over the long term. I absolutely agree 
that doing so trades short-term gain for long-term damage. But I'm not 
arguing for automatic correction.


But it's not our job to police the web.

It's our job to... and I'm going to borrow some words here... give users 
"the ability to shape their own experiences on the Internet." You're 
arguing _against_ that for the purposes of trying to control a group of 
publishers who, for whatever reason, either lack the ability or don't 
care enough to fix their content even when their tools clearly tell them 
that their content is broken.




Re: Detection of unlabeled UTF-8

2013-09-05 Thread Adam Roach

On 9/5/13 09:10, Henri Sivonen wrote:

Why should we surface this class of authoring error to the UI in a way 
that asks the user to make a decision considering how rare this class 
of authoring error is?


It's not a matter of the user judging the rarity of the condition; it's 
the user being able to, by casual observation, look at a web page and 
tell that something is messed up in a way that makes it unusable for them.



Are there other classes of authoring errors
that you think should have UI for the user to second-guess the author?
If yes, why? If not, why not?


In theory, yes. In practice, I can't immediately think of any instances 
that fit the class other than this one and certain Content-Encoding issues.


If you want to reduce it to principle, I would say that we should 
consider it for any authoring error that is (a) relatively common in the 
wild; (b) trivially detectable by a lay user; (c) trivially detectable 
by the browser; (d) mechanically reparable by the browser; and (e) has 
the potential to make a page completely useless.


I would argue that we do, to some degree, already do this for things 
like Content-Encoding. For example, if a website attempts to send 
gzip-encoded bodies without a Content-Encoding header, we don't simply 
display the compressed body as if it were encoded according to the 
indicated type; we pop up a dialog box to ask the user what to do with 
the body.
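
For what it's worth, the "trivially detectable by the browser" part of 
that check is real: gzip data announces itself with a fixed two-byte 
magic number, so the sniff is a single comparison. A sketch (the helper 
name is mine, not anything from Necko):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # magic number defined in RFC 1952

def looks_gzipped(body: bytes) -> bool:
    """Heuristic check: does this HTTP body start with the gzip magic?"""
    return body[:2] == GZIP_MAGIC

compressed = gzip.compress(b"<html>hello</html>")
print(looks_gzipped(compressed))             # prints True
print(looks_gzipped(b"<html>hello</html>"))  # prints False
```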


I'm proposing nothing more radical than this existing behavior, except 
in a more user-friendly form.


As to the "why," it comes down to balancing the need to let the 
publisher know that they've done something wrong against punishing the 
user for the publisher's sins.





Re: Detection of unlabeled UTF-8

2013-09-04 Thread Adam Roach

On 9/2/13 13:36, Joshua Cranmer 🐧 wrote:
I don't think there *is* a sane approach that satisfies everybody. 
Either you break "UTF8-just-works-everywhere", you break legacy 
content, you make parsing take inordinate times...


I want to push on this last point a bit. Using a straightforward UTF-8 
detection algorithm (which could probably stand some optimization), it 
takes my laptop somewhere between 0.9 ms and 1.4 ms to scan a _Megabyte_ 
buffer in order to check whether it consists entirely of valid UTF-8 
sequences (the speed variation depends on what proportion of the 
characters in the buffer are higher than U+007F). That hardly even rises 
to the level of noise.
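For the curious, here is a minimal sketch of such a scan in Python (leaning on the built-in decoder rather than the hand-rolled state machine my numbers came from); the sample buffer is about a megabyte of mixed ASCII and multi-byte text.

```python
import time

def is_valid_utf8(buf: bytes) -> bool:
    """Report whether buf consists entirely of valid UTF-8 sequences."""
    try:
        buf.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Roughly 1 MB of mixed ASCII and non-ASCII text (20 bytes per repetition).
sample = ("Ünicode détection " * 50000).encode("utf-8")

start = time.perf_counter()
ok = is_valid_utf8(sample)
elapsed_ms = (time.perf_counter() - start) * 1000

assert ok
# An orphan continuation byte anywhere in the buffer fails the scan.
assert not is_valid_utf8(b"valid ascii then \x80 stray byte")
```

On current hardware this kind of scan stays on the order of a millisecond per megabyte, consistent with the figures above.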





Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 13:41, Anne van Kesteren wrote:

Where did the text file come from? There's a source somewhere... And
these days that's hardly how people create content anyway.


Maybe not for the content _you_ consume, but the Internet is a bit 
larger than our ivory tower.


Check out, for example:

https://www.rfc-editor.org/rse/wiki/lib/exe/fetch.php?media=design:future-unpag-20130820.txt

In particular, when you look at that document, tell me what you think 
the parenthetical phrase after the author's name is supposed to look 
like -- because I can guarantee that Firefox isn't doing the right thing 
here.



And again, it has already been pointed out we cannot scan the entire byte stream


Sure we can. We just can't fix things on the fly: we'd need something 
akin to a user prompt and probably a page reload. Which is what I'm 
proposing.





Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 12:24, Mike Hoye wrote:

On 2013-08-30 11:17 AM, Adam Roach wrote:

It seems to me that there's an important balance here between (a) 
letting developers discover their configuration error and (b) 
allowing users to render misconfigured content without specialized 
knowledge.


For what it's worth, Internet Explorer handled this (before UTF-8 and 
caring about JS performance were a thing) by guessing what encoding to 
use, comparing a letter-frequency analysis of a page's content against 
a table of which bytes are most common in which encodings of which 
languages.

...
From both the developer and user perspectives, it amounted to 
"something went wrong because of bad magic."


I'd like to clarify two points about what I'm proposing.

First, I'm not proposing that we do anything without explicit user 
intervention, other than present an unobtrusive bar helping the user 
understand why the headline they're trying to read renders as "Ð' 
Ð"оÑ?дÑfме пÑEURедложили оÑ,обÑEURаÑ,ÑOE "Ð?обелÑ?" 
Ñf Ðz(бамÑ< " rather than "? ??? ??  "??" ? 
?". (No political statement intended here -- that's just the leading 
headline on Pravda at the moment).


If the user is happy with the encoding, they do nothing and go about 
their business.


If the user determines that the rendering is, in fact, not what they 
want, they can simply click on the "Yes" button and (with high 
probability), everything is right with the world again.


Also note that I'm not proposing that we try to do generic character set 
and language detection. That's fraught with the perils you cite. The 
topic we're discussing here is UTF-8, which can be easily detected with 
extremely high confidence.
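To make that concrete, here is a small sketch (illustrative only) of why a UTF-8 validity scan separates genuine UTF-8 from mislabeled single-byte legacy text: the bytes above 0x7F that encodings like windows-1251 or KOI8-R produce almost never line up as valid UTF-8 lead-byte/continuation-byte pairs.

```python
def is_valid_utf8(buf: bytes) -> bool:
    """Report whether buf consists entirely of valid UTF-8 sequences."""
    try:
        buf.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# The same Russian text encoded three different ways.
text = "В Госдуме предложили"
utf8_bytes = text.encode("utf-8")
cp1251_bytes = text.encode("windows-1251")
koi8_bytes = text.encode("koi8-r")

# The genuine UTF-8 bytes pass the scan...
assert is_valid_utf8(utf8_bytes)
# ...while the single-byte legacy encodings fail it: their high bytes
# don't form valid lead-byte/continuation-byte sequences.
assert not is_valid_utf8(cp1251_bytes)
assert not is_valid_utf8(koi8_bytes)
```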




Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 14:11, Adam Roach wrote:
...helping the user understand why the headline they're trying to read 
renders as "Ð' Ð"оÑ?дÑfме пÑEURедложили 
оÑ,обÑEURаÑ,ÑOE "Ð?обелÑ?" Ñf Ðz(бамÑ< " rather than "? 
??? ??  "??" ? ?".


Well, *there's* a heavy dose of irony in the context of this thread. I 
wonder what rules our mailing list server applies for character set 
decimation.


When I sent that out, the question marks were a perfectly readable 
string of Cyrillic characters.


Which provides a strong object lesson in the fact that character set 
configuration is hard. If we can't get this right internally, I think 
we've lost the moral ground in saying that others should be able to, and 
tough luck if they can't.


/a


Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 05:08, Nicholas Nethercote wrote:

On Fri, Aug 30, 2013 at 8:03 PM, Henri Sivonen  wrote:

I think we should encourage Web authors to use UTF-8  *and* to *declare* it.

I'm no expert on this stuff, but Henri's point sure sounds sensible to me.



It seems to me that there's an important balance here between (a) 
letting developers discover their configuration error and (b) allowing 
users to render misconfigured content without specialized knowledge.


Both of these are valid concerns, and I'm afraid that we're not 
assigning enough weight to the user perspective.


I think we can find some middle ground here, where we help developers 
discover their misconfiguration, while also handing users the tool they 
need to fix it. Maybe an unobtrusive bar (similar to the password save 
bar) that says something like: "This page's character encoding appears 
to be mislabeled, which might cause certain characters to display 
incorrectly. Would you like to reload this page as Unicode? [Yes] [No] 
[More Information] [x]".





Re: Rethinking build defaults

2013-08-16 Thread Adam Roach
I think the key argument against this approach is that system components 
are never truly isolated. Sure, some of them can be compiled out and 
still produce a working system. That doesn't mean that testing without 
those components is going to have good test coverage.


What I'm worried about, if we start disabling various modules, is that 
we're going to have regressions that go unnoticed on developer systems, 
blow up on m-i, and then take a _long_ time to track down. We already 
have m-i closed for about four hours a day as it is, frequently during 
prime working hours for a substantial fraction of Mozilla's 
contributors. Further varying developers' local build environments from 
those of the builders will only make this problem worse.


/a

On 8/16/13 04:32, Mike Hommey wrote:

Hi everyone,

There's been a lot of traction recently about our builds getting slower
and what we could do about it, and what not.

Starting with bug 904979, I would like to change the way we're thinking
about default flags and options. And I am therefore opening a discussion
about it.

The main thing bug 904979 does is to make release engineering builds (as
well as linux distros, palemoon, icecat, you name it) use a special
--enable-release configure flag to use flags that we deem necessary for
a build of Firefox, the product. The flip side is that builds without
this flag, which matches builds from every developer, obviously, would
use flags that make the build faster. For the moment, on Linux systems,
this means disabling identical code folding and dead code removal (which,
while they make the binary size smaller, increase link time), and
forcing the use of the gold linker when it's available but is not system
default. With bug 905646, it will mean enabling -gsplit-dwarf when it's
available, which make link time on linux really very much faster (<4s
on my machine instead of 30s). We could and should do the same kind
of things for other platforms, with the goal of making linking
libxul.so/xul.dll/XUL faster, making edit-compile-edit cycles faster.
If that works reliably, we should, for instance, use incremental
linking. Please feel free to file Core::Build Config bugs
for what you think would help on your favorite build platform (and if
you do, for better tracking, make them depend on bug 904979).

That being said, this is not the discussion I want to have here, that
was merely an introduction.

The web has grown in the past few years, and so has our code base, to
support new technologies. As Nathan noted on his blog[1] disabling
webrtc calls for great build time improvements. And I think it's
something we should address by a switch in strategy.

- How many people are working on webrtc code?
- How many people are working on peripheral code that may affect webrtc?
- How many people are building webrtc code they're not working on and
   not using?

I'm fairly certain the answer to the above is that the latter population
is much bigger than the other two, by probably more than an order of
magnitude.

So here's the discussion opener: why not make things like webrtc (I'm
sure we can find many more[2]) opt-in instead of opt-out, for local,
developer builds? What do you think are good candidates for such a
switch?

Mike

1. 
https://blog.mozilla.org/nfroyd/2013/08/15/better-build-times-through-configury/
2. and we can already start with ICU, because it's built and not even
used. And to add insult to injury, it's currently built without
parallelism (and the patch to make it not do so was backed out).





OS X: deprecate Apple clang 4.1?

2013-08-14 Thread Adam Roach
Over the past few weeks, I've had the build completely break three times 
due to issues with Apple clang 4.1, which tells me that we're not doing 
any regular builds with Apple clang 4.1 (cf. Bug 892594, Bug 904108, 
and the fact that the current tip of m-i won't link with Apple clang 4.1).


I'll note that the bugs I mention above are both working around actual 
bugs in clang, not missing features.


Any time I ask in #developers, the answer seems to be that our minimum 
version for Apple clang is still 4.1. I would propose that (unless we're 
adapting some of our infra builders to check that we can at least 
compile and link with 4.1), we formally abandon 4.1 as a supported compiler.




Re: Making proposal for API exposure official

2013-06-21 Thread Adam Roach

On 6/21/13 15:45, Andrew Overholt wrote:
I'd appreciate your review feedback.  Thanks. 



I'm having a hard time reconciling these two passages, which seem to be 
in direct contradiction:


1. "Note that at this time, we are specifically focusing on /new/ JS
   APIs and not on CSS, WebGL, WebRTC, or other existing
   features/properties"

2. "This policy is new and we have work to do to clean up previous
   divergences from it."


I expect that the first statement is the correct one, given that the 
goal here is to prevent wholesale breaking of deployed sites. It also 
aligns with what I've heard from parties who have been involved in the 
conversations to date.


Of course, I could just be misreading the second statement, in which 
case I'd think that a clarification might be in order.


/a



Re: Standalone GTests

2013-05-08 Thread Adam Roach

On 5/8/13 12:10, Gregory Szorc wrote:
I think this is more a question for sheriffs and people closer to 
automation. Generally, you need to be cognizant of timeouts enforced 
by our automation infrastructure and the scorn people will give you 
for running a test that isn't efficient. But if it is efficient and 
passes try, you're generally on semi-solid ground.


The issue with the signaling tests is that there are necessarily a lot 
of timers involved, so they'll always take a long time to run. They're 
pretty close to optimally "efficient" inasmuch as there's really no 
faster way to do what they're doing. I suspect you mean "runs in a very 
short amount of time" rather than "efficient."


It should be noted that not running the signaling_unittests on checkin 
has bitten us several times, as we'll go for long periods of time with 
regressions that would have been caught if they'd been running (and then 
it's a lot more work to track down where the regression was introduced).


/a