Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)
Thanks Kris for all this information and the beginning of the first issue of this newsletter!

2018-07-10 20:19 GMT+02:00 Kris Maglione:
> The problem is thus: In order for site isolation to work, we need to be
> able to run *at least* 100 content processes in an average Firefox session

I've seen this figure of 100 content processes in a couple of places, but I haven't been able to find the rationale for it. How was the number 100 picked? Would 90 prevent a release of Project Fission?

How will the rollout happen? Will it happen progressively (e.g. 2 content processes soon, 4 soon after, 10 some time after, etc.) or does it have to go from 1 (the current situation, IIUC) straight to 100?

> * Andrew McCreight created a tool for tracking JS memory usage, and
> figuring out which scripts and objects are responsible for how much of it
> (https://bugzil.la/1463569).

How often is this code run? Is there a place to find the daily output of this tool applied to a nightly build, for instance?

Thanks again,
David
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
Re: Eliminating nsIDOM* interfaces and brand checks
Hi,

Sorry, arriving a bit late to the party. I was about to propose something related to @@toStringTag, but reading the discussions about how it may/will work [1][2][3], I realize it may not be your preferred solution. Maybe @@toStringTag will end up not working well enough for your needs anyway.

But another solution could be to define a chrome-only symbol for the brand:

    obj[Symbol.brand] === 'HTMLEmbedElement'

(`Symbol.brand` is chrome-only. `obj[Symbol.brand]` too.)

No function call, no string to allocate, nothing to import (`Symbol` is a standard ECMAScript global). It might look weird because symbols are new, but maybe it's just something to get used to; hard to tell.

David

[1] https://github.com/heycam/webidl/issues/54
[2] https://github.com/heycam/webidl/pull/357
[3] https://github.com/heycam/webidl/issues/419

On Friday, September 1, 2017 at 17:01:58 UTC+2, Boris Zbarsky wrote:
> Now that we control all the code that can attempt to touch
> Components.interfaces.nsIDOM*, we can try to get rid of these interfaces
> and their corresponding bloat.
>
> The main issue is that we have consumers that use these for testing what
> sort of object we have, like so:
>
>     if (obj instanceof Ci.nsIDOMWhatever)
>
> and we'd need to find ways to make that work. In some cases various
> hacky workarounds are possible in terms of property names the object has
> and maybe their values, but they can end up pretty non-intuitive and
> fragile. For example, this:
>
>     element instanceof Components.interfaces.nsIDOMHTMLEmbedElement
>
> becomes:
>
>     element.localName === "embed" &&
>     element.namespaceURI === "http://www.w3.org/1999/xhtml"
>
> and even that is only OK at the callsite in question because we know it
> came from
> http://searchfox.org/mozilla-central/rev/51b3d67a5ec1758bd2fe7d7b6e75ad6b6b5da223/dom/interfaces/xul/nsIDOMXULCommandDispatcher.idl#17
> and hence we know it really is an Element...
>
> Anyway, we need a replacement.
> Some possible options:
>
> 1) Use "obj instanceof Whatever". The problem is that we'd like to
> maybe kill off the cross-global instanceof behavior we have now for DOM
> constructors.
>
> 2) Introduce chromeonly "is" methods on all DOM constructors. So
> "HTMLEmbedElement.is(obj)". Possibly with some nicer but uniform name.
>
> 3) Introduce chromeonly "isWhatever" methods (akin to Array.isArray) on
> DOM constructors. So "HTMLEmbedElement.isHTMLEmbedElement(obj)". Kinda
> wordy and repetitive...
>
> 4) Something else?
>
> Thoughts? It really would be nice to get rid of some of this stuff
> going forward. And since the web platform seems to be punting on
> branding, there's no existing solution we can use.
>
> -Boris
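To make the brand-symbol idea concrete, here is a small userland sketch. This is illustrative only: the proposal is for a chrome-only `Symbol.brand`, which does not exist; a module-private symbol and the helper names below stand in for it.

```javascript
// Userland model of the proposed chrome-only brand symbol. `brand` is a
// module-private symbol standing in for the hypothetical Symbol.brand.
const brand = Symbol("brand");

// In this model, the platform would tag each DOM object with its brand at
// creation time; here we tag a plain object by hand.
function makeFakeEmbedElement() {
  return { localName: "embed", [brand]: "HTMLEmbedElement" };
}

// The brand check: no method call on the object, no string parsing, and
// not forgeable by code that cannot see the symbol.
function isEmbedElement(obj) {
  return obj !== null &&
         typeof obj === "object" &&
         obj[brand] === "HTMLEmbedElement";
}
```

Compared with the `localName`/`namespaceURI` duck check quoted above, the symbol lookup cannot collide with author-defined properties, since the symbol itself is unreachable from content code.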
Re: Please consider whether new APIs/functionality should be disabled by default in sandboxed iframes
Hi Boris,

Did a particular feature trigger your message?

Would it make sense to add the question to the "Intent to Implement" email template? https://wiki.mozilla.org/WebAPI/ExposureGuidelines#Intent_to_Implement

"Intent to" emails seem like a good time to raise this question:
* the feature is not implemented yet
* other browser vendors read the "intent to" emails, so there is an opportunity for the question to be settled in an interoperable manner

David

On Wednesday, January 11, 2017 at 18:34:56 UTC+1, Boris Zbarsky wrote:
> When adding a new API or CSS/HTML feature, please consider whether it
> should be disabled by default in sandboxed iframes, with a sandbox token
> to enable.
>
> Note that this is impossible to do post-facto to already-shipped APIs,
> due to breaking compat. But for an API just being added, this is a
> reasonable option and should be strongly considered.
>
> -Boris
Re: So, what's the point of Cu.import, these days?
On Tuesday, September 27, 2016 at 14:49:36 UTC+2, David Teller wrote:
> I have opened bug 1305669 with one possible strategy for migrating
> towards RequireJS.

RequireJS [1] is a peculiar choice for chrome code, especially if your goal is static analysis. From this thread and what I read in the bug, it doesn't seem you actually want RequireJS.

I'll take a minute to try to set up some vocabulary. There are currently three module formats mostly in use in front-end development:

* Asynchronous Module Definition (AMD), whose first/main loader implementation is RequireJS [1]. Its motivation was mostly to provide a way to load modules asynchronously in front-end code.
* CommonJS (Node.js is its main user right now, but front-end code can use this syntax via tools like Browserify and webpack). This format imports modules via the syntax `var x = require('x')`.
* ES6 modules, which define both the static syntax (the `import` and `export` keywords) and the (dynamic) module loader [2]. The latter is not ready yet but is getting there.

From what I've read, it looks like you want to transition to CommonJS, not AMD. If you want a CommonJS loader, be aware that there is already one at https://hg.mozilla.org/mozilla-central/file/66a77b9bfe5d/devtools/shared/Loader.jsm

On the topic of transitioning, I don't maintain the Firefox codebase, so feel free to ignore anything I say below. But for one-time top-level imports, the ES6 syntax seems like a better bet, given that from what I've read it is supported in chrome code and is the end-game. As for dynamic/conditional imports, there doesn't seem to be much value in moving from Cu.import() to require(), given that it's unlikely static analysis tools will do anything with either anyway (I'm interested in being proven wrong here, though) and that the standard module loader [2] will beg for another rewrite eventually.
Hope that helps,
David

[1] http://requirejs.org/
[2] https://whatwg.github.io/loader/ & https://github.com/whatwg/loader/pull/152/files
Re: Reproducible builds
On Monday, July 18, 2016 at 20:57:12 UTC+2, Gregory Szorc wrote:
> On Sun, Jul 17, 2016 at 9:38 AM, David Bruant <bruan...@gmail.com> wrote:
>
> We already have deterministic packaging in some parts of Firefox (notably
> most XPIs and omni.ja files). We've done this by implementing our own
> jar/zip archiving layer
> (https://dxr.mozilla.org/mozilla-central/source/python/mozbuild/mozpack/mozjar.py)
> which pins times, sorts files before writing, etc. We just haven't applied
> this to all parts of packaging yet. We know what we have to do here.

Out of curiosity, do you have a bug number off the top of your head tracking this work?

> A significant obstacle to even comparable builds is "private" data
> embedded within Firefox. e.g. Google API Keys. I /think/ we're also
> shipping some DRM blobs. Then of course there is build signing, which
> takes a private key and cryptographically signs builds/installers. With
> these in play, there is no way for anybody not Mozilla to do a
> bit-for-bit reproduction of most (all?) of the Firefox distributions at
> https://www.mozilla.org/en-US/firefox/all/. The best we can do is ask you
> to compare the extracted/packaged files - modulo pieces like the Google
> API Key - to what a 3rd party entity has produced. Unfortunately, I'm not
> sure that will be trivial, as I believe these private blobs of data are
> embedded within libxul. So your comparison tool would have to know how to
> read library headers and possibly even assembly code. At some point, the
> ability to audit a Firefox distribution is undermined enough that a
> security professional may not feel comfortable saying it looks good.

Blah, anything that's more than unzip + file traversal (with a blacklist) + byte comparison seems too complicated to audit to be worth it. I'm delighted to read the follow-up answers explaining that some things are downloaded on Firefox first run.
For the private data, I'm tempted to ask whether it could live in a separate file (which the comparator could safely ignore) and be loaded dynamically, but I guess there is a trade-off with Mozilla's wish to keep it "private".

In any case, thank you for your answer!
David
Reproducible builds
Hi,

Two recent comments on the Linux reproducible build bug [1] suggest that the bug has no clear end goal. In this email, I'll try to describe what I understand of the problem and discuss the outline of a possible end goal. I felt that the topic covers a wide enough range of responsibilities, so I'm posting to dev-platform. If there is a better forum, please tell me which one, and sorry for the noise.

# Context

I believe Brendan Eich's post captures the threat fairly well [2]:

> Every major browser today is distributed by an organization within reach
> of surveillance laws. As the Lavabit case suggests, the government may
> request that browser vendors secretly inject surveillance code into the
> browsers they distribute to users. We have no information that any browser
> vendor has ever received such a directive. However, if that were to
> happen, the public would likely not find out due to gag orders.

# End goal

Brendan Eich suggests:

> Through international collaboration of independent entities we can give
> users the confidence that Firefox cannot be subverted without the world
> noticing, and offer a browser that verifiably meets users' privacy
> expectations.

The end goal would be that someone (the EFF or equivalents, security researchers, etc.) downloads something from https://www.mozilla.org/en-US/firefox/all/ and can either build a bit-identical version from source, or build a version that allows verifying that the downloaded one has not been altered with a backdoor or anything else. The verification could be assisted by a tool that looks at the various files and verifies that the content of the important ones is bit-identical.

# Answering some points

https://bugzilla.mozilla.org/show_bug.cgi?id=885777#c21
> 1) The NSS .chk files are always different (see #c8).
> 2) Timestamps of the files inside the .tar.bz2 package will differ, but
> untarring them and using a recursive diff will reveal no differences
> (except for the aforementioned .chk files)

The second point sort of solves both. As part of making things verifiable, Mozilla could publish a program that makes a byte-by-byte comparison only of the files that matter after unpacking. If they're not that important, .chk files could be ignored (blacklisted from the comparison). Same for file timestamps. That would be acceptable IMHO, since a backdoor cannot be hidden in .chk files or file timestamps (right?).

That could be called "comparable builds" and seems closer to something reachable than actual bit-equality. This shifts the problem a bit, because another program now verifies Firefox. However, this verifying program is a combination of gunzip + directory traversal + bit comparison, and seems simple enough that it cannot itself become a target for alteration.

Out of curiosity, how has the Tor team handled points 1 and 2?

https://bugzilla.mozilla.org/show_bug.cgi?id=885777#c22
> Docker images are notoriously not very reproducible (because `yum
> update/install` installs the latest version of packages advertised on
> servers and that can change over time).

Does this affect the bit-to-bit file comparison between what can be downloaded from https://www.mozilla.org/en-US/firefox/all/ and what can be built from source?

> However, paranoid people will want to reproduce those independently. It's
> turtles all the way down of course. The question is how far do we want to
> go.

In my opinion, enabling independent organizations to point out whether what can be downloaded from https://www.mozilla.org/en-US/firefox/all/ has been altered would be an amazing first milestone. I wouldn't worry too much about "paranoid people" for now.

> We /could/ publish the Docker images Mozilla uses (they are probably
> already public for all I know).
Publishing the images used by Mozilla would probably be enough for now, IMHO. People can always audit an image by traversing its file system to see whether they find something fishy.

Does a comparable build seem like a good end goal?

David

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=885777#c21
[2] https://brendaneich.com/2014/01/trust-but-verify/
Re: ESLint is now available in the entire tree
Hi,

Just a drive-by comment to inform folks that there is an effort to transition the Mozilla JavaScript codebase to standard JavaScript. The main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617

And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about removing non-standard features from SpiderMonkey. Of course this can rarely be done right away and most often requires dependent bugs to move code to standard ECMAScript (with a period of warnings about the usage of the non-standard feature).

On 2015-11-27 at 23:53, Dave Townsend wrote:
> We also know that some of our JS syntax isn't supported by ESLint. Array
> generators look like they might not be standardized anyway so please
> switch them to Array.map. If anyone notices such cases and don't
> necessarily have the time to fix them right away, please make a bug that
> blocks https://bugzilla.mozilla.org/show_bug.cgi?id=1220564

...and of course, you meant Array.prototype.map, given that Array.map is non-standard and meant to disappear ;-)
https://bugzilla.mozilla.org/show_bug.cgi?id=1222547

> For conditional catch expressions we may want to add support to eslint
> somehow but for now any of those cases will just have to be ignored.

I created https://bugzilla.mozilla.org/show_bug.cgi?id=1228841 for the eventual removal of this non-standard feature. Adding support to ESLint might be a sensible choice in the meantime, indeed.

David
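For reference, the standard replacements for the two non-standard SpiderMonkey features mentioned above can be sketched as follows (illustrative only; `safeParse` is an invented example name):

```javascript
// 1) Non-standard static Array.map vs standard Array.prototype.map:
//    write [..].map(fn), not Array.map([..], fn).
const doubled = [1, 2, 3].map(x => x * 2);

// 2) A non-standard conditional catch such as
//      catch (e if e instanceof SyntaxError) { ... }
//    becomes a standard catch with an explicit re-throw, so that only the
//    errors the conditional catch targeted are swallowed.
function safeParse(json) {
  try {
    return JSON.parse(json);
  } catch (e) {
    if (!(e instanceof SyntaxError)) {
      throw e; // anything else propagates, as with the conditional catch
    }
    return null;
  }
}
```

The re-throw pattern is also what ESLint-friendly code ended up using, since the parser never has to see the non-standard syntax.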
Re: Help with removing __iterator__ from JS
Hi Paolo,

The ES6 iterator protocol is what you're looking for. See:
* https://hacks.mozilla.org/2015/04/es6-in-depth-iterators-and-the-for-of-loop/
* https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/iterator

Along with the computed property syntax, you should be able to achieve what you want. Something along the lines of:

    function makeIterableEventList(el) {
      el[Symbol.iterator] = function() { /* define the iterator here */ };
      return el;
    }

    events: {
      mainview: makeIterableEventList({
        focus(event) {},
      }),
      filter: makeIterableEventList({
        input(event) {},
      }),
    },

    this.boundEventHandlers = [
      for ([elementName, events] of this.events)
      for ([eventName, handler] of events)
      [elementName, eventName, handler.bind(this)]
    ];

(With the computed property syntax, the iterator could instead be written inline in each object literal as `[Symbol.iterator]: function() { /* ... */ }`.)

It should work in Firefox today (at least the Symbol.iterator and computed object property parts; not 100% sure about the array comprehension, as I haven't tested).

David

On 2015-07-22 at 14:42, Paolo Amadini wrote:
> On 7/21/2015 12:07 PM, Tom Schuster wrote:
>> Aside: Please also try avoid using Iterator().
>
> What would be the alternative to Iterator() when I need to iterate and
> easily assign both keys and values of an object to local variables? See
> for example https://bugzilla.mozilla.org/show_bug.cgi?id=980828:
>
>     this.boundEventHandlers = [
>       for ([elementName, events] of Iterator(this.events))
>       for ([eventName, handler] of Iterator(events))
>       [elementName, eventName, handler.bind(this)]
>     ];
>
> There's more context on the bug for the specific example (the bug is about
> supporting destructuring in comprehensions in the first place) but my
> concern in general is that I've failed to find an alternative to
> Iterator() that is as expressive.
>
> Cheers,
> Paolo
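A self-contained, runnable version of the pattern suggested above (names such as `makeEntriesIterable` are illustrative):

```javascript
// Make a plain object iterable over its [key, value] pairs via
// Symbol.iterator, similar to what the non-standard Iterator() provided.
function makeEntriesIterable(obj) {
  obj[Symbol.iterator] = function* () {
    // Object.keys skips symbol keys, so the iterator itself is not yielded.
    for (const key of Object.keys(this)) {
      yield [key, this[key]];
    }
  };
  return obj;
}

const events = makeEntriesIterable({
  focus: () => "focused",
  input: () => "typed",
});

// for-of with destructuring now binds both key and value, as Iterator() did.
const names = [];
for (const [name, handler] of events) {
  names.push(name + ":" + handler());
}
// names is ["focus:focused", "input:typed"]
```

The generator function is doing the work here: each `yield` produces one `[key, value]` entry, and destructuring in the `for-of` head splits it into the two local variables.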
Re: Firebug in embedded XULRunner.
On Wednesday, February 19, 2014 at 13:43:57 UTC+1, apransk...@gmail.com wrote:
> We need to debug and measure execution time of JavaScript inside the
> pages we display.

For this sort of work, web developers tend to use PhantomJS or CasperJS nowadays.
http://phantomjs.org/
http://casperjs.org/ (built on top of PhantomJS (WebKit) or SlimerJS (Gecko))

I have no idea as to the Java embedding story for these, though.

David
Re: Parallel sandboxed iframes
Hi,

So far, nobody said that the idea was either stupid or impossible/impractical, so I went ahead and filed https://bugzilla.mozilla.org/show_bug.cgi?id=961689

David
Parallel sandboxed iframes
Hi,

Machines are getting more cores, and web devs want to make use of that opportunity. We (webdev hat on) have WebWorkers; that's cool, but one can only do ~math in there. It's not possible to update a UI in real time from a WebWorker. It's not possible to update a canvas or WebGL context either. New APIs are being designed and implemented to cover this specific use case.

We have cross-origin iframes, which are very, very close to being able to run in parallel. Communication between an iframe and its cross-origin parent is more or less restricted to:
* postMessage + the message event
* resizing the iframe from the parent (which modifies the viewport dimensions inside the iframe?)

That's the ideal view, because of course web reality kicks in: document.domain allows a frame and its parent to dynamically change origin (at which point synchronous communication can begin), and there is also named access on cross-origin Windows, which has proven to be web-required [1]. It looks like cross-origin iframes are a dead end from a practical point of view when it comes to parallelism.

Enter: sandboxed iframes! A sandboxed iframe has the same properties as cross-origin iframes, with the benefit of document.domain being disabled [2] (I need to look at the named-access thing). So in theory, it could run in parallel with its parent.

I'm writing this message to ask about the practical aspects. From what I understand, it's already possible to create process-separated frames [3], which I feel isn't that far away from a parallel iframe (though for now, I haven't really been able to make it work :-/). Sandboxed iframes have flags to disable a bunch of things (plugins, top navigation, etc.) if that can help find a workable subset that can run in parallel with its parent.

I believe parallel sandboxed iframes would be a good thing also because it would encourage web developers to use them for their security characteristics too.
It would also save the work of creating new WebWorker-specific APIs.

Thoughts?
David

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=916939
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=907892
[3] the remote option described in https://developer.mozilla.org/en-US/Add-ons/SDK/Low-Level_APIs/frame_utils
Re: Parallel sandboxed iframes
On Thursday, January 16, 2014 at 18:31:59 UTC+1, Boris Zbarsky wrote:
> It's pretty far away in Gecko as it stands, because we have all sorts of
> global state (class statics, say!) that is not parallel-safe.
> Process-separated is likely to be simpler to get to work without pesky
> things like memory corruption.

I don't understand the last sentence. Is "memory corruption" referring to the global state you mentioned before? When you wrote "parallel-safe" above, did you implicitly mean thread-parallelism?

As a web developer, I'm happy with process-separated if it's easier, at least for a first shot, but I can imagine the costs aren't the same at scale. Maybe the current rarity of sandboxed iframes on the web makes it a practical idea?

I forgot to mention that Blink is planning [1] to work on site isolation [2] (a site is an eTLD, IIUC). I think it follows that sandboxed iframes will be process-isolated (since a sandboxed iframe is in its own site). I believe this lowers the chance of Blink adopting the WebWorker-specific APIs to draw on canvas/WebGL contexts that Mozilla is working on.

David

[1] https://groups.google.com/a/chromium.org/d/msg/blink-dev/Z5OzwYh3Wfk/IWooaY5FZowJ
[2] http://www.chromium.org/developers/design-documents/oop-iframes