Re: Test failures in mochitest-browser caused by doing work inside DOM event handlers
On 8 May 2014, at 22:02, Irving Reid irv...@mozilla.com wrote:

> I've recently fought my way through a bunch of intermittent test failures in the Add-on Manager mochitest-browser suite, and there's a common anti-pattern where tests receive a Window callback, usually unload, and proceed to do significant work inside that event handler (e.g. opening/closing/focusing other windows; see one detailed case at https://bugzilla.mozilla.org/show_bug.cgi?id=608820#c281). The 'unload' event is signalled before the window is completely unloaded; I don't know the fine details, but the stack is in a state where other window operations sometimes fail.
>
> Specifically, I found it a bit surprising that the run_next_test() function in the async test harness starts the next test immediately on top of the current JS stack, inside whatever callbacks are currently in progress. I just filed bug 1007906 proposing that we modify the run_next_test function in the mochitest framework to always schedule the next test for a future spin of the event loop, to allow the stack to unwind - the xpcshell test harness already does this; see http://dxr.mozilla.org/mozilla-central/source/testing/xpcshell/head.js#1443.
>
> We should also update the MDN documentation about writing mochitests to strongly advise making all DOM and Window callback listeners as small as possible; my preference is to advocate using Promises as callback listeners, because the stack is always unwound before the .then handler is invoked.

It looks like a good place to put this information would be as a subsection of https://developer.mozilla.org/en/docs/Mochitest#Writing_tests

Can you add in a brief section covering this point, along with a brief code snippet illustrating a good and a bad way to do it, perhaps? Just some rough notes would be fine; I'd be happy to edit it afterwards.
Many thanks, Chris Mills Senior tech writer || Mozilla developer.mozilla.org || MDN cmi...@mozilla.com || @chrisdavidmills ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
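Rough notes toward the requested snippet (a toy sketch only, not the actual mochitest harness APIs; dispatchUnload here is a stand-in for DOM event dispatch). The point it demonstrates: work done inside the handler runs on top of the dispatch stack, while a .then handler always runs after that stack has unwound.

```javascript
// BAD vs GOOD shapes for an unload listener. The "bad" handler does its
// heavy work directly on the dispatch stack; the "good" handler only
// resolves a promise, and the heavy work happens in .then, after unwinding.
const log = [];

function dispatchUnload(handlers) {
  handlers.forEach(h => h());        // simulated event dispatch
  log.push("dispatch stack unwound");
}

const unloaded = new Promise(resolve => {
  dispatchUnload([
    () => log.push("bad: heavy work on the dispatch stack"),  // anti-pattern
    resolve,                                                  // good pattern
  ]);
});

unloaded.then(() => {
  log.push("good: heavy work after the stack unwound");
  console.log(log.join("\n"));
});
```

Running this prints the "bad" work first, then "dispatch stack unwound", and only then the "good" work, which is exactly why the promise-based listener avoids operating on a half-torn-down window.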
Re: Test failures in mochitest-browser caused by doing work inside DOM event handlers
On 25/06/2014 15:16:04, Chris Mills wrote:

> It looks like a good place to put this information would be as a subsection of https://developer.mozilla.org/en/docs/Mochitest#Writing_tests Can you add in a brief section covering this point, along with a brief code snippet illustrating a good and a bad way to do it perhaps?

Also: https://developer.mozilla.org/en-US/docs/Mozilla/QA/Avoiding_intermittent_oranges
Re: Are you interested in doing dynamic analysis of JS code?
Thanks for bringing this to dev-platform. Dynamic analysis is something the security teams are particularly interested in. Tainting user input, especially, is something we could make use of across the project: existing security efforts for Firefox OS, Firefox Desktop, Firefox Mobile and our websites would all greatly benefit from it, as it could help prevent Cross-Site Scripting and other content injection attacks.

Some people may know the work Stefano Di Paola has done to develop his DOM-XSS scanner, DOMinator. There's also been an attempt to develop it in-tree within the security mentorship program, but the outcome wasn't fit to be merged into mozilla-central (bug 811877). A Mozilla-owned API would help make all future endeavors last.

I have also been in contact with folks in academia and the industry who are interested in both implementation and consumption of the API. I will make sure their attention is directed to this thread to provide additional feedback.
Re: BzAPI Compatibility API has been rolled out to production BMO
Those are just the API root paths, for reference. For example, to view a bug, they would be https://bugzilla.mozilla.org/bzapi/bug/35 https://bugzilla.mozilla.org/rest/bug/35 Mark On 2014-06-22, 4:42 AM, Josh Matthews wrote: [5] https://bugzilla.mozilla.org/bzapi [6] https://bugzilla.mozilla.org/rest These URLs do not lead to real pages. Cheers, Josh On 06/20/2014 10:12 PM, David Lawrence wrote: Until recently, Bugzilla supported only older Web technologies, namely XMLRPC and JSONRPC. The BMO team created a new REST API in the summer of 2013 to provide a modern Web interface to Bugzilla. Prior to the native REST API[1], a separate proxy service called BzAPI[2] was created that provided a REST API using data obtained through the older RPC interfaces as well as various other Bugzilla data sources, including CSV representations. This was a great interim solution, but now that we have a native API, and since the system hosting the proxy is not maintained by Mozilla IT, the BzAPI service will need to be decommissioned at some point. Check out the wiki page[3] for the differences between BzAPI and the native API. To ease the transition, we have created a native BzAPI compatibility layer (bug 880669[4]) that acts almost exactly the same as BzAPI but will translate the queries to the native API layer. Thus clients who currently use BzAPI will just need to change the REST URL to the built-in API[5], which is slightly different from the native one[6]. Even though we've done our own testing, we are interested in having more people test the compat API by changing their dashboards, scripts, apps, etc. to point to the compat API URL instead of the BzAPI proxy. Then try to see if anything doesn't display properly, is missing, or generates an error of some kind. We have a component[7] in Bugzilla under the BMO product that we would like people to use to let us know. You can also browse[8] for bugs that have already been submitted. 
We plan to leave the compat API in place for the foreseeable future, but we do not plan to make any major changes or enhancements to it. We will be working with the upstream Bugzilla community to enhance the native REST API instead, so any requests for improvements or new features will need to be directed to the native API component[9].

Thanks,
Mozilla BMO Team

[1] https://wiki.mozilla.org/BMO/REST
[2] https://wiki.mozilla.org/Bugzilla:REST_API
[3] https://wiki.mozilla.org/Bugzilla:API_Comparison
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=880669
[5] https://bugzilla.mozilla.org/bzapi
[6] https://bugzilla.mozilla.org/rest
[7] https://bugzilla.mozilla.org/enter_bug.cgi?product=bugzilla.mozilla.org&component=Extensions%3A%20BzAPI%20Compatibility
[8] https://bugzilla.mozilla.org/buglist.cgi?product=bugzilla.mozilla.org&component=Extensions%3A%20BzAPI%20Compatibility
[9] https://bugzilla.mozilla.org/enter_bug.cgi?product=Bugzilla&component=WebService
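As the post says, the migration is essentially a base-URL swap. A tiny sketch of "view bug" URLs under the two roots quoted above (the helper is illustrative, not part of any Bugzilla client library):

```javascript
// Build a "view bug" URL against either API root from the announcement.
const roots = {
  compat: "https://bugzilla.mozilla.org/bzapi",  // BzAPI compatibility layer
  native: "https://bugzilla.mozilla.org/rest",   // native REST API
};
const bugUrl = (api, id) => `${roots[api]}/bug/${id}`;

console.log(bugUrl("compat", 35)); // https://bugzilla.mozilla.org/bzapi/bug/35
console.log(bugUrl("native", 35)); // https://bugzilla.mozilla.org/rest/bug/35
```

A client currently pointed at the BzAPI proxy would only need to change its root to the compat URL to test against the new layer.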
Re: Are you interested in doing dynamic analysis of JS code?
Yes!!

On 6/25/14, 8:15 AM, Jason Orendorff wrote:

> We're considering building a JavaScript API for dynamic analysis of JS code. Here's the sort of thing you could do with it:
>
> - Gather code coverage information (useful for testing/release mgmt?)

Yes! We'd absolutely love to show code coverage in the debugger's source editor! We've played with implementations based on Debugger.prototype.onEnterFrame and hidden breakpoints, but it is pretty slow and feels like a huge hack. Would this work likely yield improved performance over that approach?

> - Trace all object mutation and method calls (useful for devtools?)

Absolutely! A thousand times yes! We've made a prototype tracing debugger (again, based on Debugger.prototype.onEnterFrame) but (again) it is pretty slow, so it isn't pref'd on by default (a few other reasons as well; it needs polish). We (devtools) have talked a lot about wanting to be the best printf debugger, to lower the barriers to entry for people who normally print-debug. Manually adding printfs is like a really crappy, temporary tracing debugger; I see huge opportunities for developer productivity here! (Flip devtools.debugger.tracer to see the prototype: http://i.imgur.com/E3jtaxv.jpg)

> - Record/replay of JS execution (useful for devtools?)

I'm drooling.

> - Detect when a mathematical operation returns NaN (useful for game developers?)

This would be a great fit for a console warning, IMO.

> Note that the API would not directly offer all these features. Instead, it would offer some powerful but mind-boggling way of instrumenting all JS code. It would be up to you, the user, to configure the instrumentation, get useful data out of it, and display or analyze it. There would be some overhead when you turn this on; we don't know how much yet.

Would the API be something like DTrace? Just want to figure out what kind of thing we are talking about here.
> We would present a detailed example of how to use the proposed API, but we are so early in the process that we're not even sure what it would look like. There are several possibilities. We need to know how to prioritize this work. We need to know what kind of API we should build. So we're looking for early adopters. If that's you, please speak up and tell us how you'd like to instrument JS code.

/me raises hand

Mostly interested in tracing calls and mutations. Also code coverage, but to a bit of a lesser extent. Record/replay is such a holy grail (eclipsed only by time traveling / reverse and replay interactive (as opposed to a static recording) debugging with live, on-stack code editing) that I hesitate to even get my hopes up...

Nick
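For the record, the "huge hack" flavour of coverage the devtools folks mention can be approximated in plain JS by wrapping functions and counting entries — a toy sketch, nothing like the real Debugger.prototype.onEnterFrame machinery (which sees every frame, not just wrapped entry points):

```javascript
// Toy function-entry counter: wrap a function and bump a counter per call,
// roughly what an onEnterFrame-style hook would report for that function.
const hits = new Map();

function instrument(name, fn) {
  return function (...args) {
    hits.set(name, (hits.get(name) || 0) + 1);  // record the entry
    return fn.apply(this, args);                // then run the real code
  };
}

const add = instrument("add", (a, b) => a + b);
add(1, 2);
add(3, 4);
console.log(hits.get("add")); // 2
```

The obvious weakness — and why an engine-level API would be so much better — is that only functions you remembered to wrap are counted, and the wrapping itself perturbs identity and performance.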
Re: Are you interested in doing dynamic analysis of JS code?
On 6/25/14, 8:15 AM, Jason Orendorff wrote: We're considering building a JavaScript API for dynamic analysis of JS code. Here's the sort of thing you could do with it: - Gather code coverage information (useful for testing/release mgmt?) As someone who develops JS for Firefox features, code coverage is near the top of my list of wants. I want it as a development aid to ensure the tests I write cover the code I want/need them to. Just a few weeks ago we had an FHR shutdown hang because of a typo that was getting hit in a (fortunately rare) branch I thought we had test coverage for. I want code coverage so I know when I'm working with unfamiliar code I have an idea of how robust the test coverage is so I can adjust my patch and review rigor appropriately. I want code coverage so automation could potentially do things like warn when changed lines in a patch aren't being tested - something that will make me think twice about granting review. I want code coverage to quantify how much our zeal to disable intermittent failures is leading to reduced test coverage - something that will snowball into increased technical debt and lead to lower productivity. I want code coverage because we can use it in automation to identify what changes impact what tests, which potentially leads to us executing a more minimal set of tests for changes. Code coverage is a flashlight that illuminates a very dark alley of JavaScript development today. I can't wait to have it so I can apply more rigor to developing JavaScript. I also believe that good code coverage tools can increase developer productivity and allow us to move faster. On behalf of JavaScript developers everywhere, I implore you to build this functionality. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Are you interested in doing dynamic analysis of JS code?
On 6/25/14 5:15 PM, Jason Orendorff wrote: We're considering building a JavaScript API for dynamic analysis of JS code. Here's the sort of thing you could do with it: I usually don't do this, but since the others have mentioned all the good reasons and I am likewise totally excited about code coverage: +1 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: The future of HAL vs DOM for device APIs?
How does a gamepad differ from, say, a keyboard? What is the purpose of the actor? I just figured I'd ask these basic questions to help clarify some of the details that might otherwise be missing. I'm guessing that dealing with analog joysticks wouldn't be event-based, but would be more of a poll, while button presses would be more event-based. Some gamepads have vibration feedback, and I'm guessing new ones probably have accelerometers and such. What kinds of sensors and what types of frequencies are we looking at here?

Dave Hylands

- Original Message -
From: Kyle Machulis kmachu...@mozilla.com
To: dev-platform@lists.mozilla.org
Sent: Tuesday, June 24, 2014 3:03:05 PM
Subject: The future of HAL vs DOM for device APIs?

I'm working on Gamepad OOP (bug 852944). We want to have an actor-per-device-per-process model for IPC. Right now, Gamepad's platform specifics live in HAL, as it's just using the observer broadcast model. Moving to the new actor model, it doesn't feel like gamepad will really fit in HAL anymore, since adding things to PHal usually assumes observers for device -> content information broadcasts and synchronous calls for content -> device queries.

My first guess at implementing this would be to move gamepad's platform specifics over to DOM, and implement this all under the dom/gamepad directory, similar to how we've done other more complicated device webapi projects (like bluetooth). It feels like HAL assumes a broadcast model specifically for sensors, switches, etc., while more complex things get moved into DOM. I talked to dhylands (the HAL module owner) about this, and he directed me to a thread started by cjones a few years ago, which has even more ideas in it: https://groups.google.com/forum/#!searchin/mozilla.dev.platform/rfc$3A$20hal/mozilla.dev.platform/g72lXZpFLrg/k3bry9mgC1wJ

So, does it make sense to keep HAL how it is right now, and have more complicated APIs live in DOM?
Or should we look at extending HAL to handle things like actor creation/management for devices (if that's even possible)? I do admit that some of the confusion comes from the name of HAL, since hardware encompasses a lot more now than it did when the HAL layer started. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
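To make the poll-vs-event distinction concrete: axes are typically just sampled every frame, while button "events" can be synthesized by diffing successive polls. A toy sketch (not any proposed Gecko API):

```javascript
// Synthesize press/release events by diffing two successive polls of
// gamepad button state (arrays of booleans, one entry per button).
function diffButtons(prev, next) {
  const events = [];
  next.forEach((pressed, i) => {
    if (pressed && !prev[i]) events.push({ button: i, type: "press" });
    if (!pressed && prev[i]) events.push({ button: i, type: "release" });
  });
  return events;
}

console.log(diffButtons([false, true], [true, true]));
// [ { button: 0, type: 'press' } ]
```

This is the usual shape whether the polling happens in content (as with the web Gamepad API's navigator.getGamepads()) or in whatever backend ends up owning the device actor.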
Re: Are you interested in doing dynamic analysis of JS code?
Tainting could also be of use in a particular problem area for Content Security Policy (CSP): allowing modifications to CSP-protected pages caused by add-ons or bookmarklets. At the moment, such modifications (e.g. an add-on injecting tags into a page) are indistinguishable from malicious content injection attacks, and so are blocked. Unfortunately, the end result for users is that their add-ons and bookmarklets break on these CSP-protected pages - a problem that will only worsen as CSP adoption increases.

It is debatable whether such modifications should be allowed. According to the priority of constituencies, the user/user agent takes precedence over the site, so add-ons and bookmarklets (typically interpreted as acting with the knowledge and consent of the user) should be able to override a CSP provided by the site. However, it is not clear that this interpretation is always valid (consider malicious add-ons, crapware add-ons, or default add-ons included without the user's knowledge by the OS or manufacturer).

Either way, tainting support in the JS engine, and general instrumentation/metadata for JS calls, would probably help in achieving this goal (although we'd probably also have to add taint information to DOM objects as well, so CSP knows when it should be bypassed).

On 06/25/2014 08:33 AM, Frederik Braun wrote:

> Thanks for bringing this to dev-platform. Dynamic analysis is something the security teams are particularly interested in. Especially tainting user input is something we could make use of across the project: Existing security efforts for Firefox OS, Firefox Desktop, Firefox Mobile and our websites would all greatly benefit from it, as it could help prevent Cross-Site Scripting and other content injection attacks. Some people may know the work Stefano Di Paola has done to develop his DOM-XSS scanner DOMinator. There's also been an attempt to develop it in-tree within the security mentorship program, but the outcome wasn't fit to be merged into mozilla-central (bug 811877). A Mozilla-owned API would help make all future endeavors last. I have also been in contact with folks in academia and the industry who are interested in both implementation and consumption of the API. I will make sure their attention is directed to this thread to provide additional feedback.
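As a toy illustration of the kind of source/sink tagging being discussed (a wholly hypothetical API; real engine-level tainting would propagate through concatenation, slicing, DOM round-trips, etc., which this cannot do):

```javascript
// Tag values arriving from untrusted sources and check the tag at a sink.
// Boxing into String objects is only a trick to give strings identity for
// the WeakSet; it is what makes this a toy rather than a real taint system.
const tainted = new WeakSet();

function fromUserInput(s) {
  const boxed = new String(s);
  tainted.add(boxed);        // mark as attacker-controlled
  return boxed;
}

const isTainted = v => v instanceof String && tainted.has(v);

const input = fromUserInput("<script>alert(1)</script>");
console.log(isTainted(input));              // true
console.log(isTainted(new String("safe"))); // false
```

A sink such as a CSP enforcement point could then distinguish "script injected by a tainted value" from "script injected by trusted add-on code", which is the distinction the paragraph above is after.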
Re: BzAPI Compatibility API has been rolled out to production BMO
This is terrific! The docs make mention of POST under bz_rest_options. Do you now (or will you at some point) support bug creation via API? Would you do full CRUD at some point? I'm excited to tinker with this. I'd guess REST support will eventually lead to more creative interfaces on top of Bugzilla. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Are you interested in doing dynamic analysis of JS code?
On 6/25/2014 10:15 AM, Jason Orendorff wrote: We're considering building a JavaScript API for dynamic analysis of JS code. Here's the sort of thing you could do with it: - Gather code coverage information (useful for testing/release mgmt?) I've begged this several times, and, as I mentioned in another recent thread, I've grown skeptical of any code coverage approach not based on the JS runtime engine itself. If you add only one new feature, this is the one you should add. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Are you interested in doing dynamic analysis of JS code?
On 25 Jun 2014, at 20:04, Fitzgerald, Nick nfitzger...@mozilla.com wrote:

> Record/replay is such a holy grail (eclipsed only by time traveling / reverse and replay interactive (as opposed to a static recording) debugging with live, on-stack code editing) that I hesitate to even get my hopes up…

Oh, the joy I had while working with Microsoft Script Editor over 10 years ago, with which I could step out, step out, step out… move the arrow in the gutter pointing at the current line upward… hit 'Next'… magic! If only I could relive those days with our devtools debugger! That'd be a dream come true!

Mike

Dreamy.
Re: Are you interested in doing dynamic analysis of JS code?
On 06/25/2014 01:04 PM, Fitzgerald, Nick wrote: Yes! We'd absolutely love to show code coverage in the debugger's source editor! We've played with implementations based on Debugger.prototype.onEnterFrame and hidden breakpoints, but it is pretty slow and feels like a huge hack. Would this work likely yield improved performance over that approach? If this pans out, I'd expect it to be faster, but how much depends on a lot of variables. I wish I could be more specific. Note that the API would not directly offer all these features. Instead, it would offer some powerful but mind-boggling way of instrumenting all JS code. It would be up to you, the user, to configure the instrumentation, get useful data out of it, and display or analyze it. There would be some overhead when you turn this on; we don't know how much yet. Would the API be something like DTrace? Just want to figure out what kind of thing we are talking about here. Very good question. I too am interested in figuring out what kind of thing we're talking about. One proposal is to build something like strace: it would be impossible to modify the instrumented code or its execution, only observe. User-specified data about JS execution would be logged, then delivered asynchronously. (Don't read too much into this -- the implementation would be completely unlike strace. Note too that records would *not* be delivered synchronously and would not contain stacks, though you could recover the stack from enter/leave records.) An alternative involves letting you modify JS code just before it's compiled (source-to-source transformation). This is more general (you could modify the instrumented code arbitrarily, and react synchronously as it executes) but maybe that's undesirable. It's not clear that transformed source would interact nicely with other tools, like the debugger. And a usable API for this is a tall order. So. Tradeoffs. /me raises hand Mostly interested in tracing calls and mutations. 
Also code coverage, but to a bit of a lesser extent. Great, we'll get in touch off-list! Record/replay is such a holy grail (eclipsed only by time traveling / reverse and replay interactive (as opposed to a static recording) debugging with live, on-stack code editing) that I hesitate to even get my hopes up... Don't get your hopes up. A dynamic analysis API would help with the record side of rr-style record/replay **of JS code alone**. You'd draw a boundary around all JS code and capture all input across that boundary. But replay requires more work in the engine. A bigger problem is that in such a scheme, replay only reproduces what happens inside the boundary, and the DOM, in this scenario, is still on the outside. A real record/replay product would have to support the DOM too. We're far from being able to do that. The dynamic analysis API would only be one part of the puzzle. I'm sorry for the misleading gloss. -j ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
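A crude illustration of the source-to-source alternative Jason describes (a toy regex transform; a real implementation would parse the source properly and emit a source map, and __cov is a made-up name):

```javascript
// Inject a coverage call at the top of every function body before
// "compiling" (here: eval'ing) the transformed source.
const counts = Object.create(null);
function __cov(name) { counts[name] = (counts[name] || 0) + 1; }

function instrumentSource(src) {
  // Rewrite `function name(args) {` to also call __cov("name").
  return src.replace(/function\s+(\w+)\s*\(([^)]*)\)\s*\{/g,
                     'function $1($2) { __cov("$1");');
}

// Wrapping in parens makes eval return the function value directly.
const square = eval("(" + instrumentSource(
  "function square(x) { return x * x; }") + ")");

console.log(square(3));     // 9
console.log(counts.square); // 1
```

This shows why S2S is so flexible (the transform can synchronously observe or even change behaviour) and also why it is fragile: the transform runs before compilation, so anything it breaks — scoping, stepping in the debugger, source display — breaks for every tool downstream.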
Re: Are you interested in doing dynamic analysis of JS code?
On 6/25/14, 1:06 PM, Jason Orendorff wrote:

> An alternative involves letting you modify JS code just before it's compiled (source-to-source transformation). This is more general (you could modify the instrumented code arbitrarily, and react synchronously as it executes) but maybe that's undesirable. It's not clear that transformed source would interact nicely with other tools, like the debugger. And a usable API for this is a tall order.

We explored source-level instrumentation using the proposed onBeforeSourceCompiled hook (https://bugzilla.mozilla.org/show_bug.cgi?id=884602), and that was a speed improvement (although it required a reload to enable). Unfortunately, priorities caught up with us and we haven't had a chance to revisit. It /should/ be fine with the debugger so long as you aren't making changes that are too invasive (e.g. rearranging scopes) and you generate a source map between the original source and the new source. Unfortunately, that can also slow down the transformation quite a bit.
Re: Are you interested in doing dynamic analysis of JS code?
On Thu, Jun 26, 2014 at 8:06 AM, Jason Orendorff jorendo...@mozilla.com wrote: An alternative involves letting you modify JS code just before it's compiled (source-to-source transformation). This is more general (you could modify the instrumented code arbitrarily, and react synchronously as it executes) but maybe that's undesirable. It's not clear that transformed source would interact nicely with other tools, like the debugger. And a usable API for this is a tall order. Why is a usable S2S API difficult to produce? A while ago I spent a few years doing dynamic analysis of Java code. Although the VMs had a lot of tracing and logging hooks, bytecode instrumentation was always more flexible and, done carefully, almost always more performant. I had to write some libraries to make it easy but tool builders enjoy writing and reusing those :-). For JS of course we wouldn't want to expose our internal bytecode, hence S2S. Rob
Testing compartment-per-addon change
Hi everyone,

Brian Hackett, Bobby Holley, and I have been working on some changes to the way that add-ons run (bug 990729). These changes make possible a lot of awesome new features for monitoring add-on performance and memory usage (which I'll talk about below). However, there's the possibility that some add-ons might break. The changes have landed on nightly but they're hidden behind a pref. Before we can enable them, I'd like to ask people to enable the pref locally and do some simple testing. Here's what you need to do if you'd like to help out:

1. Get the latest nightly.
2. Go to about:config and set the dom.compartment_per_addon preference to true (you should see it set to false by default).
3. Do whatever you normally do.
4. If you notice any abnormal behavior with your add-ons, file a bug under XPConnect, CC me (:billm), block bug 1030420, and include details about the add-on and what's going wrong.

If something goes wrong, it will probably be related to global variables attached to the window that can't be found. However, these errors could manifest in all sorts of ways, so please look for anything strange.

WHAT IS THE CHANGE?

We're trying to isolate add-on code so that it doesn't run in the same compartments as normal Firefox code. For add-on code in JSMs and components, we already do that. Bug 990729 makes it so that code attached to XUL overlays (script elements and inline event handlers) runs in a different compartment from the Firefox window. Bug 1017310 additionally makes it so that chrome XBL code runs in a separate compartment as well. There are still some holes in the scheme, though. Content XBL code isn't separated out. Flashblock is the only add-on we know of that uses content XBL, so we're still deciding whether that's worth fixing.

WHAT ARE THE BENEFITS?

Right now there aren't any. However, a lot of cool things should be possible with these patches:

* Previously, a lot of add-on memory usage was mixed in with chrome code.
With this change, about:memory will be able to completely separate out memory allocated by add-ons from memory allocated by Firefox. Eventually, it should also be possible to track how much memory is kept alive by add-ons--that is, memory that would be released if an add-on weren't holding on to it.

* Every time we start running code for an add-on, we'll have to go through cross-compartment wrappers. With the right wrappers, we'll be able to track how much time we spend running add-on code. This information could be really useful for the profiler or for telemetry.

* We'll be able to track which APIs are used by add-ons. Searching mxr.m.o/addons is really useful if you're considering deprecating an API, but it doesn't tell you anything about non-AMO add-ons. With this change, we'll be able to instrument the browser to report which add-ons are calling a given API and to report that information via telemetry or something.

* We'll be able to expose a different API surface to add-ons than we do to the rest of the browser. So we'll be able to deprecate an API in Firefox but still offer it to add-ons. We'll also be able to hide certain APIs from add-ons.

* We're going to use this stuff for electrolysis to intercept any add-on calls that might touch content objects so that we can take an alternate path. The hope is that we'll be able to make a lot more add-ons compatible with e10s. This is the original motivation for the bug.

Only the electrolysis stuff is being actively worked on right now. If any of this other stuff sounds interesting and you'd like to work on it, please contact me!

HOW DOES THIS AFFECT MEMORY USAGE?

Even though they sound similar, this change shouldn't have anywhere near the effect on memory that compartment-per-global did. Most users have only one Firefox window open and only a few add-ons installed, so we'll only be creating an extra 3 or 4 compartments. A typical Firefox user already has ~500 compartments, so this change won't affect the total much.
-Bill ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
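If you'd rather script the about:config step than click through it, the same pref can be set from a profile's user.js (assuming the standard prefs mechanism; set it back to false, or remove the line, to undo):

```javascript
// user.js in your profile directory
user_pref("dom.compartment_per_addon", true);
```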
Re: Are you interested in doing dynamic analysis of JS code?
Record/replay would be incredibly useful for game developers trying to do automated testing or go back and analyze a rare failure case that happens occasionally. I already do a bunch of this on my end by recording API calls and results and such, having it at a lower JS level would be incredibly useful. Being able to detect operations that produced NaN is also handy for that sort of debugging or even as a way to just assert that it never happens in debug builds of your game. The other features listed sound useful as well but I don't know if they'd ever get used by game developers. I'm not sure I'd use them for JSIL either. Maybe you could use mutation/method call tracing to do a profile-guided-optimization equivalent for JS where you record argument types and clone functions per-callsite to ensure everything is monomorphic? I could see that being a big performance boon, especially if you can do it statically and automatically by using instrumentation. On Wed, Jun 25, 2014 at 3:40 PM, Robert O'Callahan rob...@ocallahan.org wrote: On Thu, Jun 26, 2014 at 8:06 AM, Jason Orendorff jorendo...@mozilla.com wrote: An alternative involves letting you modify JS code just before it's compiled (source-to-source transformation). This is more general (you could modify the instrumented code arbitrarily, and react synchronously as it executes) but maybe that's undesirable. It's not clear that transformed source would interact nicely with other tools, like the debugger. And a usable API for this is a tall order. Why is a usable S2S API difficult to produce? A while ago I spent a few years doing dynamic analysis of Java code. Although the VMs had a lot of tracing and logging hooks, bytecode instrumentation was always more flexible and, done carefully, almost always more performant. I had to write some libraries to make it easy but tool builders enjoy writing and reusing those :-). For JS of course we wouldn't want to expose our internal bytecode, hence S2S. 
Rob -- Jtehsauts tshaei dS,o n Wohfy Mdaon yhoaus eanuttehrotraiitny eovni le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o Whhei csha iids teoa stiheer :p atroa lsyazye,d 'mYaonu,r sGients uapr,e tfaokreg iyvoeunr, 'm aotr atnod sgaoy ,h o'mGee.t uTph eann dt hwea lmka'n? gBoutt uIp waanndt wyeonut thoo mken.o w ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
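The "assert it never happens in debug builds" idea for NaN can of course be approximated today at the library level (a sketch; checkedMul and DEBUG are made-up names), though an engine-level hook would catch every arithmetic operation without any manual wrapping:

```javascript
// Wrap arithmetic at API boundaries and assert the result is not NaN.
const DEBUG = true;

function checkedMul(a, b) {
  const r = a * b;
  if (DEBUG && Number.isNaN(r)) {
    throw new Error(`NaN produced by ${a} * ${b}`);
  }
  return r;
}

console.log(checkedMul(2, 3)); // 6
try {
  checkedMul(0, Infinity);     // 0 * Infinity is NaN
} catch (e) {
  console.log(e.message);      // NaN produced by 0 * Infinity
}
```

The pain point the thread identifies is exactly that this only covers operations you remembered to wrap, which is why an instrumentation API (or a console warning, as suggested above) would be so much more useful for game code.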
Re: Changes in how Gecko code linkage is defined in the build system
On Tue, Nov 19, 2013 at 11:02:52AM +0900, Mike Hommey wrote:

> Hi,
>
> I am going to land a long series of patches that changes how Gecko code linkage is defined. Currently, when you add new code (like, a new module) to Gecko, you:
>
> - Add a new directory
> - Edit the parent moz.build to add the directory
> - Add your source files
> - Add a new moz.build defining at least: SOURCES, LIBRARY_NAME, LIBXUL_LIBRARY, MODULE
> - Edit toolkit/library/Makefile.in, layout/build/Makefile.in, or layout/media/Makefile.in to add your library to either libxul, libgklayout or libgkmedias, with the right ifdefs.
> - Edit toolkit/library/nsStaticXULComponents.cpp to add your module, with the right #ifdefs (if it's a xpcom module you're adding).
>
> I think that's all. That's already a lot. With the upcoming landing, this becomes:
>
> - Add a new directory
> - Edit the parent moz.build to add the directory
> - Add your source files
> - Add a new moz.build defining at least: SOURCES, FINAL_LIBRARY
> - Done.
>
> There are two bits of magic involved here:
>
> - FINAL_LIBRARY defines what library your code is going to be linked into. That needs to match an existing LIBRARY_NAME in some other moz.build. Most code will go in either xul, gkmedias or gklayout. Note gklayout may go away in the future, because it's just an implementation detail and a heritage of the past, but there are some ordering issues involved with removing it, so we're keeping it for the moment. There are other remaining values of LIBRARY_NAME; they will fade away in the future and shouldn't matter for most people.
> - The xpcom module list is going to be built with some linker magic. The error-prone list in nsStaticXULComponents.cpp is no longer required.

So, this very last part had to be backed out back then (in November) because of some subtle dependency issues in existing XPCOM components' shutdown. I thought I had sent a message in that regard, but it looks like I didn't.
Anyways, I'm happy to announce today that this last bit landed again (bug 938437) on fx-team and should (will!) make its way to other branches. Cheers, Mike ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Are you interested in doing dynamic analysis of JS code?
Hello all,

I'm one of the maintainers of the Jalangi dynamic analysis framework for JavaScript: https://github.com/SRA-SiliconValley/jalangi

Jalangi works via source-to-source transformation, and we already have implementations of many of the clients you listed (e.g., record/replay, taint analysis, NaN detection). One of our key pain points when analyzing web apps is trying to instrument all loaded code, so having a supported API to do so would be hugely helpful. We would find a source-to-source API most useful, and I agree with Rob that supporting S2S is a good way to go in terms of maximizing flexibility for tool builders. Apart from source-to-source transformation, it would be useful to have a supported way to load some scripts at initialization time (in our case, the Jalangi runtime libraries), so that instrumented code can call into those scripts.

I will forward this thread to others who have worked on Jalangi to see if they have further feedback. We are highly supportive of this effort; I think a supported instrumentation API would make Firefox the browser of choice for those doing research on JavaScript dynamic analysis.

Best,
Manu

- Manu Sridharan, Samsung Research America, http://manu.sridharan.net

On Wednesday, June 25, 2014 8:15:50 AM UTC-7, Jason Orendorff wrote:

We're considering building a JavaScript API for dynamic analysis of JS code. Here's the sort of thing you could do with it:

- Gather code coverage information (useful for testing/release mgmt?)
- Trace all object mutation and method calls (useful for devtools?)
- Record/replay of JS execution (useful for devtools?)
- Implement taint analysis (useful for the security team or devtools?)
- Detect when a mathematical operation returns NaN (useful for game developers?)

Note that the API would not directly offer all these features. Instead, it would offer some powerful but mind-boggling way of instrumenting all JS code. It would be up to you, the user, to configure the instrumentation, get useful data out of it, and display or analyze it. There would be some overhead when you turn this on; we don't know how much yet.

We would present a detailed example of how to use the proposed API, but we are so early in the process that we're not even sure what it would look like. There are several possibilities. We need to know how to prioritize this work and what kind of API we should build, so we're looking for early adopters. If that's you, please speak up and tell us how you'd like to instrument JS code.

-- Nicolas B. Pierron, Jason Orendorff (JavaScript engine developers)

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
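To make the NaN-detection use case above concrete, here is a hedged sketch of what instrumented output from a source-to-source transform might look like. The `checkNaN` helper and the wrapping convention are invented for illustration; they are not part of any proposed API.

```javascript
// Hypothetical runtime helper the instrumentation would call.
// It passes the value through unchanged, warning if it is NaN.
function checkNaN(value, exprText) {
  if (Number.isNaN(value)) {
    console.warn(`NaN produced by: ${exprText}`);
  }
  return value;
}

// Original source:  function compute(a, b, c) { return a / b + Math.sqrt(c); }
// A S2S transform could rewrite it so every arithmetic result is checked:
function computeInstrumented(a, b, c) {
  return checkNaN(
    checkNaN(a / b, "a / b") + checkNaN(Math.sqrt(c), "Math.sqrt(c)"),
    "a / b + Math.sqrt(c)"
  );
}

computeInstrumented(1, 0, -1); // warns twice: Math.sqrt(c) is NaN, and so is the sum
```

The same shape works for the other clients listed: swap `checkNaN` for a coverage counter, a taint propagator, or a record/replay logger.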
Re: Are you interested in doing dynamic analysis of JS code?
I agree with Rob that S2S would give the most flexibility. Something similar to the java.lang.instrument API would be immensely useful (http://docs.oracle.com/javase/7/docs/api/java/lang/instrument/Instrumentation.html). I am a maintainer and developer of Jalangi (https://github.com/SRA-SiliconValley/jalangi), and I believe a java.lang.instrument-like API would help us make Jalangi easily accessible to developers.

On Wednesday, June 25, 2014 3:40:26 PM UTC-7, Robert O'Callahan wrote:

On Thu, Jun 26, 2014 at 8:06 AM, Jason Orendorff wrote: An alternative involves letting you modify JS code just before it's compiled (source-to-source transformation). This is more general (you could modify the instrumented code arbitrarily, and react synchronously as it executes), but maybe that's undesirable. It's not clear that transformed source would interact nicely with other tools, like the debugger. And a usable API for this is a tall order.

Why is a usable S2S API difficult to produce? A while ago I spent a few years doing dynamic analysis of Java code. Although the VMs had a lot of tracing and logging hooks, bytecode instrumentation was always more flexible and, done carefully, almost always more performant. I had to write some libraries to make it easy, but tool builders enjoy writing and reusing those :-). For JS, of course, we wouldn't want to expose our internal bytecode, hence S2S.

Rob

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
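To illustrate what a java.lang.instrument-style registration API could look like translated to JS, here is a rough sketch. Everything here is hypothetical: the function names mirror `Instrumentation.addTransformer` / `ClassFileTransformer` but are invented, and a real implementation would live inside the engine, not in user code.

```javascript
// Hypothetical analogue of java.lang.instrument's addTransformer:
// registered transformers see each script's source text just before
// compilation and may return modified source.
const transformers = [];

function addScriptSourceTransformer(fn) {
  transformers.push(fn);
}

// The engine would invoke this hook for every script it is about to
// compile; a transformer returning a non-string means "no change".
function applyTransformers(source, url) {
  for (const t of transformers) {
    const out = t(source, url);
    if (typeof out === "string") {
      source = out;
    }
  }
  return source;
}

// Example transformer: tag every script with a coverage preamble.
addScriptSourceTransformer(
  (src, url) => `/* instrumented: ${url} */\n` + src
);

applyTransformers("var x = 1;", "app.js");
```

Chaining transformers this way is what makes the Java API pleasant for tool builders: several analyses can coexist without knowing about each other.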