W3C Proposed Recommendations: RDF 1.1
There are eight W3C Proposed Recommendations for RDF 1.1 (two of which are actually Proposed Edited Recommendations):

  RDF Schema 1.1: W3C Proposed Edited Recommendation 09 January 2014
    http://www.w3.org/TR/rdf-schema/
  RDF 1.1 XML Syntax: W3C Proposed Edited Recommendation 09 January 2014
    http://www.w3.org/TR/rdf-syntax-grammar/
  RDF 1.1 N-Quads: W3C Proposed Recommendation 09 January 2014
    http://www.w3.org/TR/n-quads/
  RDF 1.1 N-Triples: W3C Proposed Recommendation 09 January 2014
    http://www.w3.org/TR/n-triples/
  RDF 1.1 Concepts and Abstract Syntax: W3C Proposed Recommendation 09 January 2014
    http://www.w3.org/TR/rdf11-concepts/
  RDF 1.1 Semantics: W3C Proposed Recommendation 09 January 2014
    http://www.w3.org/TR/rdf11-mt/
  RDF 1.1 TriG: W3C Proposed Recommendation 09 January 2014
    http://www.w3.org/TR/trig/
  RDF 1.1 Turtle: W3C Proposed Recommendation 09 January 2014
    http://www.w3.org/TR/turtle/

There's a call for review to W3C member companies (of which Mozilla is one) open until February 9. If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.)

My inclination is to explicitly abstain, to indicate this is something we're not interested or involved in.

-David

-- 
𝄞   L. David Baron                         http://dbaron.org/   𝄂
𝄢   Mozilla                          https://www.mozilla.org/   𝄂
             Before I built a wall I'd ask to know
             What I was walling in or walling out,
             And to whom I was like to give offense.
               - Robert Frost, Mending Wall (1914)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
Re: Target Milestone field in bugzilla
On 01/09/2014 12:47 PM, Gavin Sharp wrote:
> It should be possible to have the field label change only for a specific set of products, in theory. Having the ability to customize on a per-product basis like this would make a lot of these proposals easier. I think we should ask our b.m.o devs (dkl, glob) to determine the feasibility of that solution.

Currently Bugzilla does not support relabeling fields in the UI based on some criterion; the field names are pretty much hard-coded. It would be a non-trivial amount of work to add the support, and it would definitely not be something we could shift upstream, so we would need to maintain the customization going forward.

With the build-out work we are doing on the webservices API, it seems to me it would be more appropriate to just incorporate the name changes in the various third-party UIs and dashboards that the different teams use. They can change the milestone values using the alternative UI.

dkl

-- 
David Lawrence
d...@mozilla.com
Re: Reftests execute differently on Android or b2g?
On Wednesday 2014-01-08 19:22 +, Neil wrote:
> I've tried switching to XHTML to avoid the problem, and the good news is that my XHR now loads before the screenshot is taken. The bad news is that the test still fails, as if the patch wasn't in place. I guess I need to test with a server rather than just local files.

You could mark the reftest as HTTP to make it load via a server on all platforms.

-David
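For reference, marking a reftest to load over the test HTTP server is a one-line annotation in the reftest.list manifest; a sketch (the file names here are invented, and the exact server-root form, e.g. HTTP(..), depends on where the manifest lives relative to the files):

```
# reftest.list: load both the test and its reference via the test HTTP server
HTTP == xhr-test.xhtml xhr-test-ref.xhtml
```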
Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]
On Tuesday 2014-01-07 09:13 +0100, Ms2ger wrote:
> On 01/07/2014 01:11 AM, Joshua Cranmer wrote:
>> Since Benjamin's message of November 22: news://news.mozilla.org/mailman.11861.1385151580.23840.dev-platf...@lists.mozilla.org (search for "Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS" if you don't have an NNTP client). Although there wasn't any prior discussion of the wisdom of this change, whether or not to use NS_ENSURE_SUCCESS-like patterns has been the subject of very acrimonious debates in the past, and given the voluminous discussion on style guides in recent times, I'm not particularly inclined to start yet another one.
> FWIW, I've never seen much support for this change from anyone other than Benjamin, and it's only in his modules that the NS_ENSURE_* macros have been effectively deprecated.

I'm happy about getting rid of NS_ENSURE_*.

-David
Re: Mozilla style guide issues, from a JS point of view
On Tuesday 2014-01-07 10:23 -0800, Bobby Holley wrote:
> On Tue, Jan 7, 2014 at 9:38 AM, Adam Roach a...@mozilla.com wrote:
>> On 1/7/14 03:07, Jason Duell wrote:
>>> Yes--if we jump from 80 chars per line, I won't be able to keep two columns open in my editor (vim, but emacs would be the same) on my laptop, which would suck. (Yes, my vision is not what it used to be--I'm using 10 point font. But that's not so huge.)
>> I'm not just sympathetic to this argument; I've made it myself in other venues. Put me down as emphatically agreeing.
> Me too. With the font size that works for me, I can do 2 side-by-side columns with 80 chars, but not 100.

Same for me, for my laptop display. On my external monitors (office and home; different sizes), I can fit two at 100, but only barely so for my external monitor at home, and it might not stay that way the next time the available fonts change or font rasterization changes and I have to reconfigure. (The reality for me is that I don't use side-by-side much when writing code, but I do when doing code reviews.)

-David
Re: Mozilla style guide issues, from a JS point of view
On Monday 2014-01-06 18:46 -0600, Jeff Walden wrote:
> I'm writing this list, so obviously I'm choosing what I think is on it. But I think there's rough consensus on most of these among JS hackers.
>
> JS widely uses 99ch line lengths (allows a line-wrap character in 100ch terminals). Given that C++ symbol names, especially with templates, get pretty long, it's a huge loss to revert to 80ch because of how much has to wrap. Is there a reason Mozilla couldn't increase to 99 or 100? Viewability on-screen seems a pretty weak argument in this era of generally large screens. Printability's a better argument, but it's unclear to me that files are printed often enough for this to matter. I do it one or two times a year, myself, these days.

My argument against it is that there's a clear standard around 80ch, since it's a longstanding terminal width, and if we start increasing the width, it'll keep increasing. People need to get an editor and terminal setup that works with the widest code they work with; however that works, it'll likely be able to deal with something a little bit wider than needed. Good enforcement of a defined standard does reduce the risk of creep due to this problem, but it doesn't reduce the chance that somebody will come along next year and ask for 120, etc. So if we're switching to 99 or 100, I'd like to understand how you picked that number and to have confidence that it's not just going to keep going up. I'm also concerned about what happens as we get older and have poorer vision.

I tend to think that we should either:
 * stick to 80, or
 * require no wrapping, meaning that comments must be one paragraph per line and boolean conditions must all be single-line, and assume that people will deal, using an editor that handles such code usefully.

> I don't think most JS hackers care for abuse of Hungarian notation for scope-based (or const) naming. Every member/argument having a capital letter in it surely makes typing slower. And extra noise in every name but locals seems worse for new-contributor readability. Personally this doesn't bother me much (although aCx will always be painful compared to cx as two no-cap letters, I'm sure), but others are much more bothered.

On the flip side, I strongly dislike the JS style in which member variables and constructor arguments frequently shadow each other. I'm fine with switching here, as long as it's *to* something that doesn't yield this sort of shadowing.

> JS people have long worked without bracing single-liners. With any style guide's indentation requirements, they're a visually redundant waste of space. Any style checker that checks both indentation and bracing (of course we'll have one, right?) will warn twice for the error single-line bracing prevents. I think most of us would discount the value of being able to add more to a single-line block without changing the condition line. So I'm pretty sure we're all dim on this one.

I prefer non-bracing visually, but I've found the bracing to be useful often enough when inserting debugging printfs that I've come to prefer it, even though I think it's ugly and wastes space.

-David
W3C Last Calls: High Res Time 2, CSS Shapes, DOM Parsing and Serialization, XSLT3
Since I frequently email this list after it's too late to send most comments on W3C specifications (during the formal vote on a Recommendation), I figured I may as well send a note here about some specifications that are currently in last call, which is the phase when the group producing the spec thinks it's good and really wants anyone who disagrees to tell them now. (The comment deadlines are often flexible, especially if you ask.) Comments should go to the lists requested in the boilerplate at the start of the spec, not to this list.

  High Resolution Time Level 2
    http://www.w3.org/TR/hr-time-2/
    Comment deadline: January 8

  CSS Shapes Module Level 1
    http://www.w3.org/TR/css-shapes-1/
    Comment deadline: January 7

  DOM Parsing and Serialization (DOMParser, XMLSerializer, innerHTML, and similar APIs)
    http://www.w3.org/TR/DOM-Parsing/
    Comment deadline: January 7

  XSL Transformations (XSLT) Version 3.0
    http://www.w3.org/TR/xslt-30/
    Comment deadline: February 10

(If you think this email is useful, please let me know; it may affect whether I send future notices of this sort. This data is also available via an RSS feed at http://www.w3.org/blog/news/feed .)

-David
Re: js-inbound as a separate tree
Personally I find the branches we have annoying; they paper over the real problem, which is that our feedback cycles once something has landed are far too long. For that reason alone I am against the idea. I think if we can solve the build/test scheduling problem and be smart about how we do our testing, we can greatly reduce the time the tree is closed. More comments inline.

David

On 19/12/2013 18:48, Jason Orendorff wrote:
> On dev-tech-js-engine-internals, there's been some discussion about reviving a separate tree for JS engine development. The tradeoffs are like any other team-specific tree. Pro:
> - protect the rest of the project from closures and breakage due to JS patches

mozilla-inbound has been closed for on average ~4 days a month (data at the end of this email). This includes the 8 days in November because we weren't monitoring leaks properly. These ~4 days haven't been split into infrastructure failures vs. test/build failures causing the closure, and they do include known downtime from RelEng when they do work.

> - protect the JS team from closures and breakage on mozilla-inbound

See my comment above.

> - avoid perverse incentives (rushing to land while the tree is open)

When auto-land is ready we will be able to throttle landings for people adding checkin-needed to bugs, since the tree is fragile on reopening. Currently the sheriffs watch for that and land things accordingly; they do the throttling themselves.

> Con:
> - more work for sheriffs (mostly merges)

If mostly merges, are you suggesting there will be little traffic on the branch, or that the JS team will watch the tree for failures? If the former, is there value in having another branch when there is low traffic?

> - breakage caused by merges is a huge pain to track down

Yup! Not to mention merge conflicts that can happen between branches. Today there was a complaint in #jsapi when someone was trying to fix an issue but the test framework was currently out of sync and no merge was imminent. This was between b2g-inbound and mozilla-inbound. Adding another inbound feels like it's going to make that even harder.

> - makes it harder to land stuff that touches both JS and other modules

I already have this pain working on something that B2G uses too. The B2G team has been working with RelEng to try to mitigate it, but it's still painful.

> We did this before once (the badly named tracemonkey tree), and it was, I dunno, OK. The sheriffs have leveled up a *lot* since then. There is one JS-specific downside: because everything else in Gecko depends on the JS engine, JS patches might be extra likely to conflict with stuff landing on mozilla-inbound, causing problems that only surface after merging (the worst kind). I don't remember this being a big deal when the JS engine had its own repo before, though. We could use one of these to start: https://wiki.mozilla.org/ReleaseEngineering/DisposableProjectBranches
> Thoughts?
> -j

(treestatus)☁ mozilla-inbound python treestatus-stats.py --tree mozilla-inbound
Added on: 2012-05-14T09:59:46
Tree has been closed for a total of 64 days, 23:18:12 since it was created on 2012-05-14T09:59:46
2012-08 : 1 day, 1:26:57
2012-09 : 1 day, 3:31:16
2012-10 : 2 days, 21:33:14
2012-11 : 20:45:45
2012-12 : 2 days, 1:19:51
2013-01 : 2 days, 8:17:55
2013-02 : 4 days, 0:24:59
2013-03 : 6 days, 3:13:09
2013-04 : 4 days, 17:51:39
2013-05 : 5 days, 13:33:49
2013-06 : 2 days, 15:42:37
2013-07 : 6 days, 13:46:11
2013-08 : 4 days, 5:42:17
2013-09 : 4 days, 20:59:41
2013-10 : 4 days, 21:22:40
2013-11 : 8 days, 4:58:30
2013-12 : 2 days, 16:47:42
Re: js-inbound as a separate tree
On 19/12/2013 23:56, Jason Orendorff wrote:
> On 12/19/13 4:55 PM, David Burns wrote:
>> On 19/12/2013 18:48, Jason Orendorff wrote:
>>> Con:
>>> - more work for sheriffs (mostly merges)
>> If mostly merges, are you suggesting there will be little traffic on the branch or that the JS team will watch the tree for failures?
> Neither; I'm just saying the overall rate of broken patches wouldn't increase much, which I think shouldn't be controversial. That is, sheriffing is not watching trees, it's fighting bustage. Each busted patch and each intermittent orange creates a ton of work. It stands to reason that diverting some patches to a separate tree won't increase the volume of patches, except to the degree it actually improves developer efficiency (and let's have that problem, please).

For context, I manage the sheriffs, so I want to be sure what I am signing them up for. If the overall rate of broken patches wouldn't increase much, why can't we keep things on inbound and, when the tree is closed, just use the checkin-needed keyword and let the sheriffs continue to manage the bustage and start landing patches again?

>> 2013-07 : 6 days, 13:46:11
>> 2013-08 : 4 days, 5:42:17
>> 2013-09 : 4 days, 20:59:41
>> 2013-10 : 4 days, 21:22:40
>> 2013-11 : 8 days, 4:58:30
>> 2013-12 : 2 days, 16:47:42
> I know the point of including these numbers was "hey look, it's not that bad", but this is really shocking.

I know it's bad, and this is why I am tracking this information! I am watching how many backouts are affecting closures[1] and what the backout-to-push ratio[2] is. Currently these figures scare me, and the default stance that I get from platform engineers is "It's probably cheaper to push and get backed out than to push to try." This comes back to my point about papering over the cracks by spreading things around.

> We're looking at an average of something like 125 hours per month that developers can't check stuff in. Even if the breakage is evenly distributed across time zones (optimistic) we're looking at zero 9s of availability.

I know that RelEng are looking into how to do scheduling better; I am not sure where they are with this or whether it has started, but it's a good first step. The whole "a push can take hours to build/test" issue is the thing we need to be pushing against. I think if we solve that problem there will be a significant drop in bad pushes. A bad push is 3 times more expensive than a good push just in compute hours (we have 1 backout in every 15 pushes on average), never mind the cost of someone doing a pull after a bad push and trying to work out why things don't build.

> We've all gotten used to it, but it's kind of nuts.

Couldn't agree more!

> -j

David

[1] https://secure.theautomatedtester.co.uk/owncloud/public.php?service=filest=f54a3e2edabb70771d64e473b30780ac
[2] https://secure.theautomatedtester.co.uk/owncloud/public.php?service=filest=ca3312fa7e0914e8352e96d44a48569f
Re: On the usefulness of style guides (Was: style guide proposal)
On Thursday 2013-12-19 17:11 -0500, Ehsan Akhgari wrote:
> See, that right there is the root problem! Programmers tend to care too much about their favorite styles. I used to be like that, but over the years I've mostly stopped caring about which style is better, and what I want now is consistency, even if the code looks ugly to *me*.

For what it's worth, I care about style as a measure of how careful people are -- both the patch submitters, and the original authors of the code. I tend to operate on the assumption that there's a correlation between how careful people are at following local style and maintaining consistent style, and how careful they are at doing the other, more important, things that led to the patch. Maybe that assumption is wrong, but I think I've been implicitly assuming it for years. If we automated style checking, we'd lose that data, but we'd also save the time these careful people spend on getting code style right, so I guess it's probably a win.

-David
Re: We should write memory reporters for new features as they're being developed
Generally, I like the idea. Is it possible to write memory reporters for JS-implemented code? Also, is it possible to write memory reporters for Chrome Worker code?

Cheers,
David

On 12/17/13 4:57 AM, Nicholas Nethercote wrote:
> So I want to propose something: if you're working on a change that will introduce significant new causes of memory consumption, you should write a memory reporter for it at the same time, rather than (maybe) doing it later, or letting someone else do it. And in this context, "significant" may be smaller than you expect. For example, we have numerous reporters for things that are typically only 100s of KBs. On B2G, 100KB per process is significant.

-- 
David Rajchenbach-Teller, PhD
Performance Team, Mozilla
W3C Proposed Recommendation: Progress Events
W3C recently published the following proposed recommendation (the stage before W3C's final stage, Recommendation):

  Progress Events
  http://www.w3.org/TR/progress-events/

There's a call for review to W3C member companies (of which Mozilla is one) open until January 17. If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.)

-David
Re: W3C Proposed Recommendation: CORS
On Monday 2013-12-16 12:16 +, Anne van Kesteren wrote:
> On Mon, Dec 16, 2013 at 8:37 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:
>> I think we should indicate support and choose the "intend to implement" option.
> Except at this point http://fetch.spec.whatwg.org/ is what we should implement (not that they are different, I think). I guess we still support publishing it though, for IPR reasons.

I submitted the vote in support of publishing as REC, with all the "intend to implement" boxes checked ("produces products addressed by this specification", "expects to produce products conforming to this specification", "expects to produce content conforming to this specification", and "expects to use products conforming to this specification"). It's changeable until the deadline if other feedback comes in.

-David
W3C Proposed Recommendations: Performance Timeline, User Timing, JSON-LD
W3C recently published the following proposed recommendation (the stage before W3C's final stage, Recommendation):

  Cross-Origin Resource Sharing (CORS)
  http://www.w3.org/TR/cors/

There's a call for review to W3C member companies (of which Mozilla is one) open until January 14. If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.)

-David
W3C Proposed Recommendation: CORS
[ resending with correct subject line ]

W3C recently published the following proposed recommendation (the stage before W3C's final stage, Recommendation):

  Cross-Origin Resource Sharing (CORS)
  http://www.w3.org/TR/cors/

There's a call for review to W3C member companies (of which Mozilla is one) open until January 14. If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.)

-David
(un)safety of NS_LITERAL_STRING(...).get()
Recently bug 539710 landed[0] to fix an unnecessary and apparently unsafe operation:

  const PRUnichar *comma = NS_LITERAL_STRING(",").get();

Curious, I did a quick search for other examples of NS_LITERAL_STRING combined with .get() and found that this appears to be common[1]. So, I have a few questions for anyone with some insight:

- Is this pattern always unsafe?
- Is it sometimes unsafe? (if so, when/why?)
- Should we do some cleanup and avoid things like this? (or maybe this is an outdated concern and isn't an issue anymore?)

Cheers,
David

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=539710
[1] https://mxr.mozilla.org/mozilla-central/search?string=NS_LITERAL_STRING.*getregexp=1case=1find=findi=filter=^[^\0]*%24hitlimit=tree=mozilla-central
Re: (un)safety of NS_LITERAL_STRING(...).get()
On Thursday 2013-12-12 16:52 -0800, David Keeler wrote:
> Recently bug 539710 landed[0] to fix an unnecessary and apparently unsafe operation:
>
>   const PRUnichar *comma = NS_LITERAL_STRING(",").get();
>
> Curious, I did a quick search for other examples of NS_LITERAL_STRING combined with .get() and found that this appears to be common[1]. So, I have a few questions for anyone with some insight:
> - Is this pattern always unsafe?
> - Is it sometimes unsafe? (if so, when/why?)
> - Should we do some cleanup and avoid things like this? (or maybe this is an outdated concern and isn't an issue anymore?)

It used to be unsafe on platforms where we didn't use wide string literals. We've since dropped support for such platforms (in bug 904985), so it's no longer unsafe. However, given that we no longer support such platforms, it's also a bunch of extra complexity that we no longer need. The preferred form would now be:

  #include "mozilla/Char16.h"

  const PRUnichar *comma = MOZ_UTF16(",");

-David
Re: Deciding whether to change the number of unified sources
On Wednesday 2013-12-04 16:36 -0500, Ehsan Akhgari wrote:
> On Tue, Dec 3, 2013 at 2:47 PM, L. David Baron dba...@dbaron.org wrote:
>> I'd certainly hope that nearly all of the difference in size of libxul.so is debugging info that wouldn't be present in a non-debug build. But it's worth testing, because if that's not the case, there are some serious improvements that could be made in the C/C++ toolchain...
> Well, not really. In the case of unified builds, the compiler sees larger translation units, so it has better opportunity for optimizations such as inlining, DCE, etc.

I did a set of builds on changeset 8648aa476eef, on 64-bit Linux with gcc, to test these theories.

The sizes I get from the build (with -g) are:

                 neither      -DDEBUG      optimized
  Nonunified     834700176    905487728    907369496
  Unified        520194760    563257880    55484

If I run strip --strip-debug on these, this removes the vast majority of the size difference:

  Nonunified     155896907    176296907    87984550
  Unified        152048885    168215761    87616359

And if I further run strip on these, I get:

  Nonunified     111273928    128967104    65150320
  Unified        109413312    123112424    64863456

readelf -WS (on the optimized pair) showed size differences in many of the sections, but the largest differences were the .debug_* ones. So indeed, *most* of the size difference is debugging info. But there is indeed still a small size difference without the debugging info (about 0.5% for optimized and not -DDEBUG, more without optimization or with -DDEBUG).

(I still have all of the above binaries if people are interested in more information from them.)

-David
Re: Deciding whether to change the number of unified sources
On Tuesday 2013-12-03 10:18 -0800, Brian Smith wrote:
> Also, I would be very interested in seeing the size of libxul.so for fully-optimized (including PGO, where we normally do PGO) builds. Do unified builds help or hurt libxul size for release builds? Do unified builds help or hurt performance in release builds?

I'd certainly hope that nearly all of the difference in size of libxul.so is debugging info that wouldn't be present in a non-debug build. But it's worth testing, because if that's not the case, there are some serious improvements that could be made in the C/C++ toolchain...

-David
Re: On closing old bugs
On Tuesday 2013-12-03 21:15 -0800, Lawrence Mandel wrote: I'm taking a stronger stance and suggesting that we should be able to wontfix bugs that likely aren't worth anyone's time or attention. As a concrete example, what is the value in keeping the following bugs open? bug 3246 - Core::Layout:Block and Inline P3 opened 14 years 9 months ago I don't think this should be wontfixed; it's a valid bug, and I think worth fixing, although as part of other architectural changes. I'd like to be able to use Bugzilla to track the known issues in our code rather than being forced to copy all the data into code comments. (At least, I sometimes would. Other times I'd rather use a version control system for tracking bugs.) In fact, there at 6925 bugs across all Bugzilla products currently in the new or unconfirmed state that were opened more than 10 years ago. I would assert that if a bug hasn't been fixed in 10 years it probably isn't important enough to spend time on now. We can always reopen or refile if the issue becomes more pressing (by anyone's judgement). I don't think that's true; both priorities and costs really do change over time. A good example of priorities changing over time is https://bugzilla.mozilla.org/show_bug.cgi?id=63895 . When filed, we may have been the only Web layout engine advanced enough for it to make sense to report the bug; today all the others have caught up and we're the only one with the bug. I also think the idea that you should wontfix bugs as a function of age just leads to messing with the bugs of components that have been around for a long time. See also http://dbaron.org/log/20080515-age-of-bugs . And it sends a bad message to the people who are interested in seeing those components improve, reporting and commenting in bugs, etc. Many of these old bugs are actually bugs that people care about, and that Web developers stumble into frequently. 
Some of them also contain useful information about how to fix the problem described -- information that wouldn't necessarily be there if they were wontfixed and new bugs filed. I tend to think we should be putting more effort into some of them than we currently are. (If there's a valid aging threshold, I think it's bugs that have been around long enough that they've shipped in a release. I think it's a meaningful threshold because it proves they weren't bad enough that we had to fix them in order to ship. But I think it's far from saying they should be wontfixed.) -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: [RFC] Cleaning up sessionstore.js
On 11/29/13 4:47 AM, Robert Kaiser wrote: Just for my understanding (I have commented to users with huge, e.g. ~100MB sessionstore.js in bugs as well), I thought we were working on a rewrite of session store anyhow that would not keep info of all tabs in one file? I think I have heard that we'd need to do this because of e10s anyhow at some point, and that it also would help our startup if we didn't need to load all session store data for all tabs at once but could do it per tab when they are actually restored/loaded. This redesign is bound to happen, sooner or later, although we are not actively working on this at the moment. It was blocked in part by the fact that the implementation of Session Restore had become a mess (that's now mostly solved) and by the fact that it was completely monolithic (also solved in many cases). I hope that we can now resume that work in Q1, but that's not a promise. Note that this is actually not required by e10s, but if done well, this should indeed improve startup, as well as memory use. Now I fully agree with trying to not store things we probably should not keep anyhow, I just wonder if it might make sense to take into account where session store is going. Well, any change that would split sessionstore.js is going to need even more effort in ensuring that we collect the garbage, so I believe that this is a useful first step. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
[RFC] Cleaning up sessionstore.js
As many of you know, Session Restore is something of a performance hog, for many reasons – we have reports of users with extremely large sessionstore.js files. One of the reasons is that we store so very many things in sessionstore.js and sometimes keep stuff for a very long time. As a followup to bug 943352, we are considering automatically cleaning up some of the contents of sessionstore.js. Since people have mentioned webcompat and user data loss in the context of sessionstore.js, I'd appreciate some feedback before we proceed. So, here are a few things that I believe we could clean up: 1. get rid of closed windows after a while; 2. get rid of closed tabs after a while; 3. get rid of old history entries of open tabs after a while; 4. get rid of POST data of history entries after a while; 5. get rid of DOM storage contents of open tabs after a while; 6. get rid of form data content of open tabs after a while; ... Note that we don't have space usage numbers for each of these (bug 942340 should provide more insight). If anybody feels that we are going to break one million websites (or one million profiles), we would be interested to hear about this. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
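The age-based cleanup described in items 1 and 2 could be sketched roughly like this. This is a minimal sketch, not Session Restore code: the field names (`closedAt`, `_closedTabs`, `_closedWindows`) are only loosely modeled on the sessionstore.js format, and the one-month TTL is an arbitrary placeholder.

```javascript
// Sketch: drop closed windows/tabs whose close time exceeds a TTL.
// Field names are illustrative; the real sessionstore.js schema differs.
const TTL_MS = 30 * 24 * 60 * 60 * 1000; // placeholder: one month

function cleanupSession(session, now = Date.now()) {
  // An entry is "fresh" if it was closed less than TTL_MS ago.
  const fresh = (entry) => now - (entry.closedAt ?? now) < TTL_MS;
  return {
    windows: session.windows.map((win) => ({
      ...win,
      _closedTabs: (win._closedTabs || []).filter(fresh),
    })),
    _closedWindows: (session._closedWindows || []).filter(fresh),
  };
}
```

Items 3–6 could reuse the same predicate, keyed on a tab's last activity time rather than its close time.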
Re: [RFC] Cleaning up sessionstore.js
On 11/28/13 1:33 PM, Till Schneidereit wrote: This would all be tackled after we did other things like getting rid of all history entries for iframes, which won't be restored in any case, right? We're not sure about the relative priorities of this cleanup vs. removing the history entries for dynamic iframes that cannot be restored. That will probably depend on the results of bug 942340. As a concrete example, I start writing my Status Board[1] entries during the week, and only send them off on Mondays. If we were to get rid of form data after some period of time, I couldn't do this anymore. Even worse: I might not know when exactly form data is discarded, so it'd *seem* to work just fine for a while, and I might invest quite some time in writing my update, only to lose it all of a sudden because Firefox decided that I don't need this data anymore. Good point. If we head in this direction, we definitely need to mark that data (form, POST or DOM storage) as unneeded only if the tab hasn't been active at all during the interval. Also, I was thinking of a time to live of at least 1 week, but I'm willing to make it 1 month. The key idea is to make sure that data eventually disappears, rather than staying forever. Similar scenarios can probably be thought up/occur for 4 and 5, too. As for 3, we could maybe gradually get rid of entries, oldest first. So it wouldn't be a hard cut-off, but a gradual loss of entries which are less and less likely to be of interest, anyway. That was the idea, yes. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: [RFC] Cleaning up sessionstore.js
On 11/29/13 12:15 AM, Matthew N. wrote: On 11/28/13, 7:15 AM, Honza Bambas wrote: On 11/28/2013 12:56 PM, David Rajchenbach-Teller wrote: As many of you know, Session Restore is something of a performance hog, for many reasons – we have reports of users with extremely large sessionstore.js files. One of the reasons is that we store so very many things in sessionstore.js and sometimes keep stuff for a very long time. Do we know that these issues affect a large number of users and not just tab hoarders like myself? We have reports of single tabs taking 10+Mb (bug 942601, bug 934935). We're in the process of adding telemetry to find out how widespread this kind of situation is and acquire more visibility (bug 942340). It's not uncommon for me to go back to a tab group for a side project that I haven't worked on for a few months and want to resume where I left off. #3, #4, and #6 would definitely hinder that. It's hard to say if #5 would without knowing how websites are using DOM storage. To be clear, is that just referring to window.sessionStorage? It is. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: Reacting more strongly to low-memory situations in Firefox 25
It seems that the 12MB reservation was aborting due to an invalid parameter. I've filed bug 943051. - Original Message - From: Benjamin Smedberg benja...@smedbergs.us To: Ehsan Akhgari ehsan.akhg...@gmail.com, dev-platform@lists.mozilla.org Sent: Monday, November 25, 2013 9:18:02 AM Subject: Re: Reacting more strongly to low-memory situations in Firefox 25 On 11/25/2013 12:11 PM, Ehsan Akhgari wrote: Do we know how much memory we tend to use during the minidump collection phase? No, we don't. It seems that the Windows code maps all of the DLLs into memory again in order to extract information from them. Does it make sense to try to reserve an address space range large enough for those allocations, and free it up right before trying to collect a crash report to make sure that the crash reporter would not run out of (V)memory in most cases? We already do this with a 12MB reservation, which had no apparent effect (bug 837835). --BDS
W3C Proposed Recommendations: Performance Timeline, User Timing, JSON-LD
W3C recently published the following proposed recommendations (the stage before W3C's final stage, Recommendation): http://www.w3.org/TR/performance-timeline/ Performance Timeline http://www.w3.org/TR/user-timing/ User Timing http://www.w3.org/TR/json-ld/ JSON-LD 1.0: A JSON-based Serialization for Linked Data http://www.w3.org/TR/json-ld-api/ JSON-LD 1.0 Processing Algorithms and API There's a call for review to W3C member companies (of which Mozilla is one) open until November 28 (for the first two) and December 5 (for the latter two). If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.) -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: W3C Proposed Recommendations: XQuery, XPath, XSLT, EXI, API for Media Resources
On Tuesday 2013-10-29 12:01 +0200, Henri Sivonen wrote: On Tue, Oct 29, 2013 at 1:39 AM, Ralph Giles gi...@mozilla.com wrote: On 2013-10-28 2:11 PM, L. David Baron wrote: API for Media Resources 1.0 http://www.w3.org/TR/mediaont-api-1.0/ ... Thus I think we can be positive about this recommendation I would reply abstain and don't plan to implement for this API and on the XML related specs in this batch. I did so for the API spec; I managed to miss the deadline for the XML-related specs, though. Sorry about that. -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: Intent to replace Promise.jsm and promise.js with DOM Promises
On 11/20/13 1:09 PM, Till Schneidereit wrote: How about logging them with console.info? That seems the right logging level to me, and it's easy to turn off if it gets in the way. Well, the problem is that of logging uncaught rejections. You can log them only if you catch them. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
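The constraint David describes — that a rejection can only be logged by code that observes it — is why any logging helper has to catch and re-throw. A minimal sketch (`logged` is a hypothetical helper, not Promise.jsm API):

```javascript
// Sketch: the only way to log a rejection is to observe (catch) it.
// This helper logs and then re-throws, so callers still see the rejection.
function logged(promise, label = "promise") {
  return promise.catch((err) => {
    console.info(`${label} rejected:`, err.message);
    throw err; // propagate so the rejection remains visible downstream
  });
}
```

Later platform work addressed this at a lower level (the DOM's `unhandledrejection` event and Node's `unhandledRejection` process event), but that postdates this thread.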
Re: Unified builds
On Monday 2013-11-18 18:44 -0500, Ehsan Akhgari wrote: On 2013-11-17 7:50 PM, L. David Baron wrote: On Sunday 2013-11-17 16:45 -0800, Jonas Sicking wrote: On Thu, Nov 14, 2013 at 2:49 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: I've started to work on a project in my spare time to switch us to use unified builds for C/C++ compilation. The way that unified builds work is by using the UNIFIED_SOURCES instead of the SOURCES variable in moz.build files. With that, the build system creates files such as: // Unified_cpp_path_0.cpp #include Source1.cpp #include Source2.cpp // ... Doesn't this negate the advantage of static global variables? I.e. when changing how you use a static global, rather than just auditing the .cpp file where that static global lives, you now have to audit all .cpp files that are unified together. Could we do a static analysis to look for things whose semantics are changed by this unification, such as statics with the same name between files that are/might be unified? (And could we make the tree turn red if it failed?) That analysis is quite hard to perform if we try to catch all kinds of such leakage. I think a periodic non-unified build is a much better way of discovering such problems. I'm inclined to think it'll need to be more than periodic, actually. I expect that otherwise we'd get pretty frequent bustage with people forgetting to add #includes, and that bustage would then show up when we add or remove files, which would make it a huge pain to add or remove files. But I'm actually also worried (perhaps *more* worried) about cases where there are semantic differences but things will still compile, such as two static variables of the same name and type, in different files (e.g., static bool gInitialized). We could end up with breakage both because of code that expects such variables to be distinct, or from new code that expects such variables to be merged. -David -- L.
David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: Unified builds
On Tuesday 2013-11-19 17:08 +1300, Robert O'Callahan wrote: Fortunately two static variables with the same name in the same translation unit is an error in C++, at least with gcc. Ah, indeed. I'd tested in C, where it wasn't an error, but I also see an error with gcc in C++. -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: Unified builds
On Sunday 2013-11-17 16:45 -0800, Jonas Sicking wrote: On Thu, Nov 14, 2013 at 2:49 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: I've started to work on a project in my spare time to switch us to use unified builds for C/C++ compilation. The way that unified builds work is by using the UNIFIED_SOURCES instead of the SOURCES variable in moz.build files. With that, the build system creates files such as: // Unified_cpp_path_0.cpp #include Source1.cpp #include Source2.cpp // ... Doesn't this negate the advantage of static global variables? I.e. when changing how you use a static global, rather than just auditing the .cpp file where that static global lives, you now have to audit all .cpp files that are unified together. Could we do a static analysis to look for things whose semantics are changed by this unification, such as statics with the same name between files that are/might be unified? (And could we make the tree turn red if it failed?) -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: Shared Desktop and Metro profile work on mozilla-central
I seem to remember that metro and desktop have a completely different format and implementation of session restore (bug 886336). Has this been somehow fixed? Cheers, David On 11/13/13 3:34 PM, Brian R. Bondy wrote: Over the past few weeks, we've been working on Metro and Desktop shared profiles. You can find some background information about this work on my blog [here][1]. Within the next week, if QA gives us the OK, we'll be uplifting the Metro and Desktop shared profile work from the oak branch to mozilla-central. As a side effect of this, if you have data in your Metro profile, it will no longer be accessible. Instead, you'll see your Firefox Desktop profile data inside the Metro environment. This message is just a heads up: if you see anything breaking, please post a new bug, CC :bbondy, and put a dependency on bug 924860. If you can think of any reason why this should not land, please speak up :) I'd like to land it sooner than later so that it has a bit of time to bake on mozilla-central. [1]: http://www.brianbondy.com/blog/id/155/shared-profiles-for-metro-firefox-and-desktop-firefox -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
[RFC] Should we persist dynamically generated iframes?
*** The problem We are currently faced with a problem on facebook.com (see bug 934935) that brings Firefox to its knees because our session restore accumulates huge amounts of dead data. This problem is most likely a Facebook bug, and we are in touch with the Facebook team to see if they can address it, but we are also thinking about counter-measures. The problem seems to be due to the combination of the following things: - facebook.com uses huge URLs (1kb), presumably for some kind of ad-tracking (annoying, but not a facebook bug); - facebook.com opens iframes and never closes them (presumably a facebook bug); - facebook.com uses the history API to push states (which increases the number of states that we need to save to disk). The net result is that users who spend lots of time on Facebook without Adblock end up with thousands of (conceptually dead) iframes saved to sessionstore.js, which is quite bad. *** Discussions We could get rid of the issue by not saving dynamically generated iframes to sessionstore.js. Or we could not save dynamic iframes that are not visible. Or we could not save dynamic iframes in non-current positions in the history. etc. All of these choices would change the semantics of sessionstore.js and would alter the user experience when reopening/recovering from crash on some sites that make good use of dynamic iframes. I would like people's opinion on such changes or possible other countermeasures. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
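A back-of-envelope estimate shows how the three ingredients above compound. The numbers below are illustrative assumptions, not measurements from the bug:

```javascript
// Back-of-envelope sketch: bytes session restore would save for a tab where
// every pushed state is kept and dynamically created iframes are never
// closed. All numbers are illustrative assumptions, not measured data.
function estimateTabBytes({ states, iframesPerState, urlBytes }) {
  // Each saved state carries its own URL plus the URL of every live iframe.
  return states * (1 + iframesPerState) * urlBytes;
}

const bytes = estimateTabBytes({ states: 2000, iframesPerState: 10, urlBytes: 1024 });
console.log((bytes / (1024 * 1024)).toFixed(1) + " MB"); // → "21.5 MB"
```

So with 1 KB URLs, a single long-lived tab only needs a couple of thousand retained states and a handful of leaked iframes each to push sessionstore.js into the tens of megabytes.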
Re: [RFC] Should we persist dynamically generated iframes?
On 11/13/13 10:27 PM, Robert O'Callahan wrote: When you say iframes you mean content documents that aren't toplevel content documents, right? Indeed. Can you explain why sessionstore.js needs to observe non-toplevel-content documents at all? I assume there's an obvious answer, I just don't know what it is :-). Well, iframes (& co.) can contain forms, have their own history, their own DOM Storage, session cookies, etc. This is, of course, recursive. And we save all of this to be able to restore if the user relaunches Firefox, or in case of crash. Does this answer your question? Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: [RFC] Should we persist dynamically generated iframes?
We could do that. This might make the behavior of Firefox a little harder to predict for web devs, though. Cheers, David On 11/13/13 7:38 PM, Mike de Boer wrote: Perhaps we could take a nuanced version of this option... Or we could not save dynamic iframes that are not visible. …changing it to ‘Put a reasonable cap on the amount of history we store for invisible, dynamic iframes, using a FIFO queue’? Mike. -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: Booting to the Web
Hi, As far as I remember, thread scheduling in Firefox OS is handled by the Linux kernel, so if you are looking for documentation, you should probably look in that direction. Cheers, David On 11/12/13 6:14 AM, saurabhlnt...@gmail.com wrote: Hi.. I am presenting on the topic Firefox OS. I need your help to develop some slides for Thread Scheduling in Firefox OS. I am not able to find out any data regarding thread scheduling. Kindly help. -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: HWA and OMTC on Linux
On Thursday 2013-11-07 17:37 -0800, Andreas Gal wrote: On Nov 7, 2013, at 3:06 PM, L. David Baron dba...@dbaron.org wrote: On Thursday 2013-11-07 13:24 -0800, Andreas Gal wrote: On Nov 7, 2013, at 1:19 PM, Karl Tomlinson mozn...@karlt.net wrote: Will any MoCo developers be permitted to spend some time fixing these or the already-known issues? It's not a priority to fix Linux/X11. We will happily take contributed patches, and people are welcome to fix issues they see, as long as it's not at the expense of the things that matter. I think having Linux/X11 be working and in good shape is important for attracting contributors to the Mozilla project, particularly those who write code. (Though I haven't seen recent data on OS use of Mozilla contributors who aren't paid to work on Mozilla. I'd be very surprised if it wasn't a much higher proportion of developers than users, though.) I don't think anyone disagrees with you here, except if you are saying that somehow keeping the non-OMTC Linux code is critical to attract contributors to Mozilla. I don't think that's the case and I don't think you are trying to say that. That's what the post was all about. We want to get rid of the old non-OMTC code because it's blocking making OMTC better everywhere, including Linux. Nope, I'm definitely not trying to say anything about which set of code we're using on Linux; I'm just saying that having Linux/X11 as a first tier platform is important for attracting contributors. -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Re: Pushes to Backouts on Mozilla Inbound
On Thursday 2013-11-07 14:13 +0200, Aryeh Gregor wrote: On Wed, Nov 6, 2013 at 6:46 PM, Ryan VanderMeulen rya...@gmail.com wrote: I'm just afraid we're going to end up in the same situation we're already in with intermittent failures where the developer looks at it and says that couldn't possibly be from me and ignores it. We already see "Try results look good" backouts on a depressingly-regular basis. The entire situation with how intermittent failures are handled strikes me as mostly a technical problem. Known intermittent failures should be flagged and automatically suppressed, not require manual judgment calls every single time. To ensure that they don't get made non-intermittent, they could be automatically rerun a couple of times (just the file, not the whole suite) if they fail to make sure they pass at least once, and get reported as a real failure if they fail five times in a row or something. Trying to persuade people to be careful of something that isn't a problem 90% of the time is a losing battle -- the signal-to-noise ratio needs to be a lot higher before people will pay attention. I think this depends on what you mean by known intermittent failures. If a known intermittent failure is the result of any regression that leads to a previously-passing test failing intermittently, I'd be pretty uncomfortable with this. There have been quite a few JS engine changes that led to style system mochitests failing intermittently; I wouldn't want all of the style system's test coverage to be progressively turned off as a result. But if you're talking about new tests that aren't yet passing reliably, or other cases where the module owner of the test recognizes that the regression is acceptable, then that seems ok. We need to get better about identifying and backing out changes that cause previously-passing tests to start failing intermittently. This requires better tools for doing it. -David -- L.
David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
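Aryeh's rerun proposal quoted above could be sketched as follows. `runTest` is a hypothetical async runner returning true on a pass; the "pass at least once" and "five failures in a row" thresholds come straight from his message:

```javascript
// Sketch of the rerun policy proposed in the thread: after an initial
// failure, rerun the failing test file up to `maxRetries` times. If it
// passes at least once it is flagged intermittent; only an unbroken run
// of failures is reported as a real failure.
async function classifyFailure(runTest, { maxRetries = 5 } = {}) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    if (await runTest()) {
      // attempts counts the initial failure plus the reruns so far
      return { verdict: "intermittent", attempts: attempt + 1 };
    }
  }
  return { verdict: "failure", attempts: maxRetries + 1 };
}
```

David's caveat still applies: such a policy classifies a failure as intermittent without asking whether a recent change *made* it intermittent, which is the case the thread argues needs better tooling.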
Re: HWA and OMTC on Linux
On Thursday 2013-11-07 13:24 -0800, Andreas Gal wrote: On Nov 7, 2013, at 1:19 PM, Karl Tomlinson mozn...@karlt.net wrote: Will any MoCo developers be permitted to spend some time fixing these or the already-known issues? It's not a priority to fix Linux/X11. We will happily take contributed patches, and people are welcome to fix issues they see, as long as it's not at the expense of the things that matter. I think having Linux/X11 be working and in good shape is important for attracting contributors to the Mozilla project, particularly those who write code. (Though I haven't seen recent data on OS use of Mozilla contributors who aren't paid to work on Mozilla. I'd be very surprised if it wasn't a much higher proportion of developers than users, though.) -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/
Measuring power usage
Context: I am currently working on patches designed to improve performance of some subsystems in Firefox Desktop by decreasing disk I/O, but I hope that they will also have an effect (hopefully beneficial) on power/battery usage. I'd like to confirm or refute that hypothesis. Measuring and collecting performance improvements is relatively easy, thanks to Telemetry. Measuring power usage, though? That looks harder. So, here are my questions: - do we already have a good way to measure power usage by some thread between two points in time? - if not, would there be interest in developing a library for this purpose? Note that I don't even know if that's possible in userland. - do we already have a good way to measure total power usage by a xpcshell test, perhaps by interfacing with powertop or Intel Power Gadget? Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: Measuring power usage
Good point. Just accessing the battery level is rather imprecise, but Telemetry + large numbers should help us see trends. If we go that way, this probably doesn't deserve a new library, but possibly a few utility functions in e.g. Telemetry or TelemetryStopwatch. Cheers, David On 11/5/13 4:49 PM, Andreas Gal wrote: If you can access the remaining battery status of a large enough population over time it should be easy to use telemetry to measure this pre and post patch. Andreas -- David Rajchenbach-Teller, PhD Performance Team, Mozilla
Re: Pushes to Backouts on Mozilla Inbound
On 05/11/2013 18:11, Steve Fink wrote: These stats are *awesome*! I've been wanting them for a long time, but never got around to generating them myself. Can we track these on an ongoing basis? Sure! Since we need to be working on the engineering productivity as a whole I think this could be a good metric to see if other efforts are paying off. On 11/05/2013 07:09 AM, Ed Morley wrote: On 05 November 2013 14:44:27, David Burns wrote: We appear to be doing 1 backout for every 15 pushes on a rough average[4]. I've been thinking about this some more - and I believe the ratio is probably actually even worse than the numbers suggest, since: Yeah, 1 backout for every 15 pushes sounds quite a bit better than I'd expect. * Depending on how the backouts are performed, the backout of several changesets/bugs are sometimes folded into one commit. Can this be factored into the stats? As in, parse the backout commit messages, gather the bug numbers (or infer them from the changeset if not given), then map them back to the pushes for that bug? It still won't be 100% right, but it'll be closer. qbackout does a little bit of this when it tries to find the right commit message to reuse when you run with --apply. But it doesn't have access to (nor need) the pushlog, which would be required for this. I am happy to make tweaks. The data I get is quite raw so happy to dive in deeper to get better data. * The 'total commits' figure includes merges and other automated/non-dev commits. Can this be fixed? Sure, this should be trivial to fix. The benefits of this approach are: * Available local compute time scales linearly with the number of devs hired, unlike our Tryserver automation. That doesn't seem like a fundamental property to me. At least theoretically, much of the tryserver automation scales with the Amazon cloud (aka it scales with the load on some corporate credit card that I'm glad I don't have to see the statements for). 
Again theoretically, we could be buying a local build/test box for every dev hire and active volunteer, and setting up automation that bridges the gap between a dev's main box and the try server. (More on this below.) There is an efficiency that we are missing here, but that is a different discussion when there is more data. * Local dep builds are much quicker than Try clobber builds. Let's split that up into builds vs tests. For the stuff I work on, building is normally not a problem. But it can be during heavy times, because doing builds means losing push races. With wide-ranging stuff (where the probability of failures due to rebases is high), this means you either have to push without a final build or get repeatedly bumped to a later day. This should get better with the current build system improvements, so perhaps this isn't much of a problem anymore, but I'm running into it a fair amount right now. For tests, it depends on the test suite. But many of them just really suck to run locally. mach magic to identify a minimal subset of tests to run would help a lot with this, but that's going to be a substantial amount of work. For the most part, I think the try server is the way to go for tests. As for resource usage, my personal opinion is that if you restrict the tests to a single platform (a T push, which you can generate by selecting something under Restrict tests to platform(s) on http://trychooser.pub.build.mozilla.org/ ), then you're fine. I'd rather people run tests on one try platform than whittle down the specific tests to be run. (Well, for the first push. If you're working through a particular issue on try, it makes sense to just test that one test suite.) In short: use the try server. Build on everything. Test on one platform. Run all the tests. If any fail, iterate on just the failed test suites (unless you think your changes may break others.) I don't have the data to prove it, but my guess is that this would result in the lowest overall load. 
(Backouts are expensive! Especially in hard-to-measure people time.) I'm hopeful that with the build peer's ongoing overhaul of our build system, dep build times for an average patch are going to be short enough that there really is no excuse not to build locally. Add to that ongoing work on improving mach commands to ease running just a subset of the tests (for bonus points making use of the applied MQs to guess which ones), and it really shouldn't be too onerous of a request. Other ideas: Would it be possible to restrict the statistics to only the active times of day? It sucks when the tree is closed on a weekend or in the middle of my night, but it's way way less of a problem when only a few devs are impacted. The problem I see is tree closures when lots of people need to land. Tree closures at other times are a different problem, and can be addressed separately if needed. (You could even say backouts don't matter if there's no queue in front of any test machines, which isn't true when you consider
Re: Measuring power usage
On 11/5/13 4:49 PM, Andreas Gal wrote: If you can access the remaining battery status of a large enough population over time it should be easy to use telemetry to measure this pre and post patch. Andreas Sent from Mobile. As it turns out, the platform currently offers an abstract notion of battery level as a number in [0.0, 1.0] and/or discharging time. I believe that we would need something a bit closer to the metal, e.g. a number of W•s. I wonder if I should pursue this as a xpcom/js-ctypes library or whether we should work on extending the BatteryManager WebAPI. Note that I am interested in using it in workers. Any thoughts? Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Pushes to Backouts on Mozilla Inbound
On Tuesday 2013-11-05 14:44 +, David Burns wrote: We appear to be doing 1 backout for every 15 pushes on a rough average[4]. This number I am sure you can all agree is far too high especially if we think about the figures that John O'Duinn suggests[5] for the cost of each push for running and testing. With the offending patch + backout we are using 508 computing hours for essentially doing no changes to the tree and then we do another 254 computing hours for the fixed reland. Note that the 508 hours doesn't include retriggers done by the Sheriffs to see if it is intermittent or not. This is a lot of wasted effort when we should be striving to get patches to stick first time. Let's see if we can try to make this figure 1 in 30 patches getting backed out. If your goal is saving compute time, I suspect a message like this could actually worsen our use of compute time, since the increase in try server compute time usage as a result of people being more careful may well be larger than the savings in mozilla-inbound compute time from fewer backouts, at least for many people. That said, I think there are other reasons to want to improve this number, because broken trees have other effects on developer productivity. However, there's also definitely a point where it's not worth it for an individual to spend more time for a small reduction in the probability of wasting others' time. I think it's worth tracking both the resource consumption rates and backout rates of individual committers, because they're substantially different, and if we want people working at the correct optimum we should be giving opposite advice to different committers. (I'm aware we track resource consumption on try at [1].) I think some committers are too careful and overconsume compute resources on try, and some are not careful enough and overconsume resources (computer and human) on inbound. 
So we should gather data so that we can give the correct advice to different committers rather than giving blanket advice that's going to be correct for some people and wrong for others. I also don't buy the argument raised elsewhere in this thread that testing on developer machines scales better than testing on consolidated hardware. Purchasing and issuing machines to developers, and then having those developers spend time setting up a development environment on those machines takes time, just as increasing our build and test infrastructure does. I think if it's cheaper than making an equivalent increase in our build infrastructure (of identically-configured machines) then it seems to me that there's something wrong with the way we build up that infrastructure. -David [1] https://secure.pub.build.mozilla.org/builddata/reports/reportor/daily/highscores/highscores.html -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. - Robert Frost, Mending Wall (1914) signature.asc Description: Digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Bug Number for Add-on File Registration PRD?
On 10/30/2013 2:55 PM, Jorge Villalobos wrote: Cross posting to dev.planning, where I originally intended this to be. Please follow up to dev.planning. Jorge On 10/30/13 3:42 PM, Jorge Villalobos wrote: Hello! As many of you know, the Add-ons Team, User Advocacy Team, Firefox Team and others have been collaborating for over a year in a project called Squeaky [1]. Our aim is to improve user experience for add-ons, particularly add-ons that we consider bad for various levels of bad. Part of our work consists of pushing forward improvements in Firefox that we think will significantly achieve our goals, which is why I'm submitting this spec for discussion: https://docs.google.com/document/d/1SZx7NlaMeFxA55-u8blvgCsPIl041xaJO5YLdu6HyOk/edit?usp=sharing The Add-on File Registration System is intended to create an add-on file repository that all add-on developers need to submit their files to. This repository won't publish any of the files, and inclusion won't require more than passing a series of automatic malware checks. We will store the files and generated hashes for them. On the client side, Firefox will compute the hashes of add-on files being installed and query the API for it. If the file is registered, it can be installed, otherwise it can't (there is a planned transition period to ease adoption). There will also be periodic checks of installed add-ons to make sure they are registered. All AMO files would be registered automatically. This system will allow us to better keep track of add-on IDs, be able to easily find the files they correspond to, and have effective communication channels to their developers. It's not a silver bullet to solve add-on malware problems, but it raises the bar for malware developers. We believe this strikes the right balance between a completely closed system (where only AMO add-ons are allowed) and the completely open but risky system we currently have in place. 
Developers are still free to distribute add-ons as they please, while we get a much-needed set of tools to fight malware and keep it at bay. There are more details in the doc, so please give it a read and post your comments and questions on this thread. Jorge Villalobos Add-ons Developer Relations Lead [1] https://wiki.mozilla.org/AMO/Squeaky Is there a bugzilla.mozilla.org bug report for this change? If so, what is the bug number? -- David E. Ross http://www.rossde.com/ Where does your elected official stand? Which politicians refuse to tell us where they stand? See the non-partisan Project Vote Smart at http://votesmart.org/. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Removing a window from the session store
Please don't do that, we are going to remove the nsISupportsString from sessionstore-state-write soon. (So far, there is a single add-on that uses it, so we're busy preparing an API for that add-on before we remove that. Please don't make our life harder :) ) On Thu Oct 24 21:56:06 2013, Matthew Gertner wrote: The private browsing approach didn't work for me since there is a visible indication in the title bar that the window is private, which I don't want. I ended up solving this by observing sessionstore-state-write and changing the session data manually (possible since it is passed in as the subject using nsISupportsString). Not very convenient but it works. I also filed https://bugzilla.mozilla.org/show_bug.cgi?id=930713. Matt ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform -- David Rajchenbach-Teller, PhD Performance Team, Mozilla signature.asc Description: OpenPGP digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Removing a window from the session store
At the moment, there is no good way to do what you need. The only solution I can think of would be to configure your popup to be in private browsing mode. Would that work for you? On 10/23/13 2:33 AM, Ehsan Akhgari wrote: On 2013-10-22 12:52 PM, Matthew Gertner wrote: I am trying to close a popup browser.xul window during Firefox shutdown so that it won't get loaded on restart by the session saver. I close the window before the browser shuts down (e.g. on quit-application-requested) but it is still opened when I start the browser again. After trawling through SessionStore.jsm, it looks like the problem is that the session store freezes the session on quit-application-requested so that it doesn't accidentally lose windows that are closed as a normal part of the shutdown process. It wasn't immediately obvious to me how to circumvent this behavior. The only idea I have is to grab the state with SessionStore.getBrowserState(), remove my window manually and then set it back with SessionStore.setBrowserState(). Is there an easier way to do this? That won't work well, since it will close all but one of the windows and tabs and reopen them all again. Cheers, Ehsan ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Removing a window from the session store
Yes, please do. There's a component Session Restore. Cheers, David On 10/23/13 2:39 PM, Matthew Gertner wrote: On Wednesday, October 23, 2013 2:36:12 PM UTC+2, David Rajchenbach-Teller wrote: At the moment, there is no good way to do what you need. The only solution I can think of would be to configure your popup to be in private browsing mode. Would that work for you? That might be a good solution. The side effects (not adding the page to history, etc.) are probably things we want anyway. Should I file a bug about this? It seems to me that it should be possible to close a window during shutdown without it being restored on restart. The most flexible option might be an API to cause a window to opt out of session saving completely. Matt ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
W3C Proposed Recommendation: CSS Style Attributes
CSS Style Attributes is a W3C proposed recommendation: http://www.w3.org/TR/css-style-attr This means the W3C membership (including Mozilla) has the chance to vote on its advancement to Recommendation. I currently intend to vote in support, without comments. If there are any comments or objections you think Mozilla should make, please bring them up in this thread. -David -- L. David Baron http://dbaron.org/ Mozilla http://www.mozilla.org/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Faster builds, now ; on windows, too.
Wouldn't it be interesting to also have a ./mach build frontend that repackages XUL and js code? On 10/21/13 6:53 PM, Gregory Szorc wrote: So what's the difference between |./mach build| and |./mach build binaries|? would such difference exist also after updating mozillabuild with the new mozmake (or the new make)? https://ci.mozilla.org/job/mozilla-central-docs/Build_Documentation/build-targets.html answers the first part. In addition, mozmake should be faster than pymake in almost all circumstances. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Proposed W3C Charter: Web Application Security Working Group
I managed not to send this out for review until right before the deadline, but there's a new charter proposal for the Web Application Security working group: http://www.w3.org/2013/07/webappsec-charter.html which replaces the previous charter http://www.w3.org/2011/08/appsecwg-charter.html and the diff is visible at: http://services.w3.org/htmldiff?doc1=http%3A%2F%2Fwww.w3.org%2F2011%2F08%2Fappsecwg-charter.html&doc2=http%3A%2F%2Fwww.w3.org%2F2013%2F07%2Fwebappsec-charter.html Mozilla participants in the group that I've talked to support these changes, and the diff looks relatively straightforward to me, so I plan to submit a positive review of the proposal (with a few nitpicks in the wording of the Secure Mixed Content deliverable). Sorry for not sending this out with enough time for others to review, though it's possible I could get any comments taken into consideration if they come in soon. -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. - Robert Frost, Mending Wall (1914) signature.asc Description: Digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [RFC] Changing the behavior of safe file output stream
Sorry, I meant flush() (lower-case), aka PR_Sync. On Fri Oct 18 16:11:43 2013, Neil wrote: Are we looking at the same stream? Finish() calls Flush() because otherwise Close() discards the file. -- David Rajchenbach-Teller, PhD Performance Team, Mozilla signature.asc Description: OpenPGP digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
[Ann] Displaying uncaught asynchronous errors
I am happy to inform you that we have recently landed Bug 903433, which ensures that uncaught errors or rejections in Promise.jsm or Task.jsm are now displayed. So, from now on, if you are attempting to debug asynchronous code using Promise.jsm/Task.jsm, don't forget to look in the browser console output, you should find the error that you are looking for. Additionally, since bug 908955 landed, Promise.jsm/Task.jsm will dump() any programming error (e.g. TypeError, SyntaxError and the ilk) immediately, regardless of whether it is caught. So, keep an eye on your stderr, too. Both changes should make debugging async code much easier. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: What platform features can we kill?
I'd be happy if we could progressively kill FileUtils.jsm and make nsIFile [noscript]. Don't know if this qualifies as platform feature, though. Cheers, David ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: What platform features can we kill?
I'd be happy to mentor someone to rewrite them using OS.File. On 10/11/13 3:28 PM, Axel Hecht wrote: Both are heavily used in the js build system for gaia, fwiw. Axel ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Extensibility of JavaScript modules
Don't hesitate to ping me when it's time. Cheers, David On 10/10/13 12:04 AM, Jason Orendorff wrote: On 10/9/13 12:56 PM, David Rajchenbach-Teller wrote: I am interested, although my buglist is rather full. What kind of help would be useful? When it's time, we'll need to: 1. write Loader hooks to make the `import` keyword behave like Cu.import 2. somehow have those hooks installed by default in every chrome window And maybe: 3. migrate existing Cu.import call sites to ES6 `import` 4. reimplement Cu.import and friends on top of the Loader API But I'm not sure 3 and 4 are possible. ES6 modules are designed for the web and so are inherently asynchronous. Cu.import is synchronous. Switching poses some risks. -j -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: java click to run problem on Firefox
Being the Firefox person who suggested that Thierry should discuss this on dev-platform, let me chime in. I seem to understand that Thierry's issue is partially related to the fact that the UX of his site was designed with the assumption that if Java didn't start, then Java either wasn't installed or wasn't installed correctly. The site instantly fell back to the Java-less version, which was basically instructions on how to install Java. We discussed alternative detection mechanisms with Thierry, so this part is probably a solved problem, I believe. If I recall, Thierry would have liked a way for his users to quickly allow Java for the page. Thierry, does the Page Information dialog do what you need? You can open that dialog from the small icon on the left of the address bar. Cheers, David On 10/10/13 6:09 PM, Benjamin Smedberg wrote: On 10/10/2013 11:44 AM, Thierry Milard wrote: I have a java Web application (www.free-visit.net). The way Mozilla manages the java player is ... killing my users' experience: they have no choice but to go to Chrome, because I can not do otherwise: java won't run even if they have the latest-of-the-latest java ... which is java7 update 40 Can you describe the usability problem in detail? We absolutely are planning to continue to make all versions of Java click-to-play by default. Users should be presented with the choice to always activate Java on your site. -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Extensibility of JavaScript modules
A (not very) long time ago, our extension model was based on XPCOM – if you didn't like a component, you could just replace it in an add-on. These days, we have shifted to providing JavaScript modules and suggesting JavaScript add-ons. Now, by default, any JavaScript module can be monkey-patched. Some developers prefer to Object.freeze() them, to ensure that this doesn't happen, while others leave them open voluntarily and use monkey-patching in test suites. Both approaches have their pros and cons. Do we/should we have a policy? Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Coding style for functions called by Task.jsm tasks
I'm going to claim that the latter method of returning a new task promise is the one we should use in general. It makes the function more easily usable outside of a task since you're just getting a promise back. It is also what Task.jsm does internally for generators anyway. I fully agree with Dave. On 10/8/13 9:06 PM, Marco wrote: I usually prefer writing functions called by tasks as generators, because it makes the code a little more readable (it avoids indentation). It also makes the function impossible to use outside of a Task, so people in future will not be able to just call it and forget that it's asynchronous. If someone wanted to use the function outside of a task, they'd just need to add the Task wrapper. I actually believe that it's quite easy to forget adding the Task wrapper and end up with something wrong. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Audit your code if you use mozilla::WeakPtr
Isn't it ultimately up to the developer to get it right? Someone could just as well forget to use |if (object)| from your example. Here's a sample usage from the header file: * // Test a weak pointer for validity before using it. * if (weak) { * weak->num = 17; * weak->act(); * } - Original Message - From: Ehsan Akhgari ehsan.akhg...@gmail.com To: dev-platform@lists.mozilla.org Sent: Tuesday, October 8, 2013 3:54:17 PM Subject: Audit your code if you use mozilla::WeakPtr I and Benoit Jacob discovered a bug in WeakPtr (bug 924658) which makes its usage unsafe by default. The issue is that WeakPtr provides convenience methods such as operator->, which mean that the consumer can directly dereference it without the required null checking first. This means that you can have code like the below: WeakPtr<Class> foo = realObject->asWeakPtr(); // ... foo->Method(); That will happily compile and will crash at runtime if the object behind the weak pointer is dead. The correct way of writing such code is: Class* object = foo.get(); if (object) { object->Method(); } I don't know enough about all of the places which use WeakPtr myself to fix them all, but if you have code using this in your module, please spend some time auditing the code, and fix it. Please file individual bugs for your components blocking bug 924658. Thanks! -- Ehsan http://ehsanakhgari.org/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Audit your code if you use mozilla::WeakPtr
Does it? I don't think I'm any more or less likely to omit a validity check using operator->() vs get(). Maybe it's just me. It seems like get() might actually be *more* prone to failure. Imagine: Class* object = foo.get(); if (object) { object->Method(); } // ... A lot of stuff happens and the ref blows up ... if (object) { object->Method(); // oops } - Original Message - From: Ehsan Akhgari ehsan.akhg...@gmail.com To: David Major dma...@mozilla.com Cc: dev-platform@lists.mozilla.org Sent: Tuesday, October 8, 2013 4:27:03 PM Subject: Re: Audit your code if you use mozilla::WeakPtr On 2013-10-08 7:10 PM, David Major wrote: Isn't it ultimately up to the developer to get it right? Someone could just as well forget to use |if (object)| from your example. Here's a sample usage from the header file: * // Test a weak pointer for validity before using it. * if (weak) { * weak->num = 17; * weak->act(); * } Sure, but that convenience operator makes it natural to write incorrect code by default. It's better to be explicit and correct, than implicit and wrong. :-) Cheers, Ehsan - Original Message - From: Ehsan Akhgari ehsan.akhg...@gmail.com To: dev-platform@lists.mozilla.org Sent: Tuesday, October 8, 2013 3:54:17 PM Subject: Audit your code if you use mozilla::WeakPtr I and Benoit Jacob discovered a bug in WeakPtr (bug 924658) which makes its usage unsafe by default. The issue is that WeakPtr provides convenience methods such as operator->, which mean that the consumer can directly dereference it without the required null checking first. This means that you can have code like the below: WeakPtr<Class> foo = realObject->asWeakPtr(); // ... foo->Method(); That will happily compile and will crash at runtime if the object behind the weak pointer is dead. 
The correct way of writing such code is: Class* object = foo.get(); if (object) { object->Method(); } I don't know enough about all of the places which use WeakPtr myself to fix them all, but if you have code using this in your module, please spend some time auditing the code, and fix it. Please file individual bugs for your components blocking bug 924658. Thanks! -- Ehsan http://ehsanakhgari.org/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Async APIs: Make wishes for Q4 and beyond
Dear platformers, As you may be aware, we have been busy for the past few months/years adding platform APIs to simplify everybody's task of writing asynchronous or, even better, off-main thread code [1]. Do you have wishes for Q4 or beyond? [De]compressing files on chrome workers? Accessing sqlite from chrome workers? More tooling for Promise? New preference APIs? Anything else? Please drop a line, either here or on the blog: http://wp.me/52O1 Cheers, David P.S.: If you want to discuss this IRL, I'll be in Brussels for the Summit. [1] From the top of my head, we have added and gradually improved Promise.jsm, Task.jsm, OS.File, Sqlite.jsm, mozIStorageAsyncConnection, AsyncShutdown.jsm, nsIBackgroundFileSaver, add_task for xpcshell and mochitest-browser, async transactions for places, the chrome worker module loader, ... -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: C++ style question: virtual annotations on methods
On Wednesday 2013-09-04 14:28 +1000, Cameron McCormack wrote: Bobby Holley wrote: +1. EIBTI. I agree, though MOZ_OVERRIDE does imply that the function is virtual already, so it may not be so necessary there. I also support repeating virtual as good documentation. The introduction of MOZ_OVERRIDE (which is newer than most of our existing code) perhaps offers a reason not to bother anymore, though. But I think it's useful to have |virtual| be explicit. There are many cases of member function declarations like: /* virtual */ void theFunction(); I don't recall that convention for declarations, but what I do write quite often is the same thing in function *definitions*, where virtual (and static, for static methods) aren't allowed to be repeated. In other words, I generally write: class Foo { virtual void do_something(); }; /* virtual */ void Foo::do_something() { } -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. - Robert Frost, Mending Wall (1914) signature.asc Description: Digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Changes to how EXPORTS are handled
On Wednesday 2013-09-04 00:31 -0700, Gregory Szorc wrote: Assuming it sticks, bug 896797 just landed in inbound and changes how EXPORTS/headers are installed. This may impact your developer workflow if you modify EXPORTS in a moz.build file to add/remove headers. Previously, headers were installed incrementally as part of make directory traversal. In the new world, we write out a manifest of headers when the build config is read from moz.build files and then we install them in bulk at the top of the build. Does this undo the protection that the build tiers were designed for, which is to prevent backwards dependencies between parts of the build? (In other words, things like preventing XPCOM from depending on headers in layout, so that XPCOM could be used standalone.) Do we still care about ensuring this? If so, should we have some other mechanism (like having standalone builds and showing them on tbpl)? -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. - Robert Frost, Mending Wall (1914) signature.asc Description: Digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Changes to how EXPORTS are handled
On Wednesday 2013-09-04 18:45 +0900, Mike Hommey wrote: On Wed, Sep 04, 2013 at 10:28:21AM +0100, L. David Baron wrote: On Wednesday 2013-09-04 00:31 -0700, Gregory Szorc wrote: Assuming it sticks, bug 896797 just landed in inbound and changes how EXPORTS/headers are installed. This may impact your developer workflow if you modify EXPORTS in a moz.build file to add/remove headers. Previously, headers were installed incrementally as part of make directory traversal. In the new world, we write out a manifest of headers when the build config is read from moz.build files and then we install them in bulk at the top of the build. Does this undo the protection that the build tiers were designed for, which is to prevent backwards dependencies between parts of the build? (In other words, things like preventing XPCOM from depending on headers in layout, so that XPCOM could be used standalone.) Do we still care about ensuring this? If so, should we have some other mechanism (like having standalone builds and showing them on tbpl)? The way the tier build works is that we effectively make export in all directories of the same tier before make libs. In practice, this means xpcom had access to every header in platform already, and any tier built before platform, for that matter. So only app headers weren't available to xpcom, and that's not a lot of them. So, really, nothing was there to prevent backwards dependencies. At least not in a very long time (I don't remember if we ever did (make export; make libs) recursively directory by directory instead of tier by tier.) So I was assuming that xpcom was in a different tier from layout; apparently that's not the case. (I thought it was originally, but my memory could be wrong.) But is it correct that this change means we no longer have the backwards-dependency checking for things in different tiers? -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. - Robert Frost, Mending Wall (1914) signature.asc Description: Digital signature ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: On builds getting slower
On Saturday 2013-08-03 13:36 +1000, Nicholas Nethercote wrote: # Header dependency hell I've recently done a bunch of work on improving the header situation in SpiderMonkey. I can break it down into two main areas. == MINIMIZING #include STATEMENTS == There's a clang tool called include-what-you-use, a.k.a. IWYU (http://code.google.com/p/include-what-you-use/). It tells you exactly which headers should be included in all your files. I've used it to minimize #includes somewhat already (https://bugzilla.mozilla.org/show_bug.cgi?id=634839) and I plan to do some more Real Soon Now (https://bugzilla.mozilla.org/show_bug.cgi?id=888768). There are still a couple of hundred unnecessary #include statements in SpiderMonkey. (BTW, SpiderMonkey has ~280 .cpp files and ~370 .h files.) This tool sounds great. I suspect there's even more to be gained that it can't detect, though, from things that are used, but could easily be made not used. I did a few passes of poking through .deps/*.pp files, and looking for things I thought didn't belong. It's been a while, though. (See bug 64023.) khuey was also recently working on something to reduce some pretty bad #include fanout related to the new DOM bindings generation. (I'm not sure if it's landed.) -David -- L. David Baron http://dbaron.org/ Mozilla https://www.mozilla.org/ Before I built a wall I'd ask to know What I was walling in or walling out, And to whom I was like to give offense. - Robert Frost, Mending Wall (1914) ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Using C++0x auto
On Friday 2013-07-19 12:15 +1200, Robert O'Callahan wrote: On Fri, Jul 19, 2013 at 3:34 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: On 2013-07-18 5:48 AM, mscl...@googlemail.com wrote: r-value references 4.3@10.0! Yes This is very useful. I believe the JS engine already rolls their own tricks to implement this semantics. With this we can get rid of already_AddRefed and just pass nsRefPtr/nsCOMPtr/RefPtr around, right? Is the idea here that nsRefPtr/nsCOMPtr/etc. would have move constructors, and we'd just return them, and the move constructors plus return value optimizations would take care of avoiding excess reference counting? Or does it involve something more complicated like returning rvalue references? (Is such a thing possible?) -David -- L. David Baron http://dbaron.org/ Mozilla http://www.mozilla.org/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
We now have a module loader for chrome workers
This is a short announcement: chrome workers now support modules. For your future developments involving chrome workers, please make use of the module system. All the documentation may be found here: https://developer.mozilla.org/en-US/docs/Mozilla/ChromeWorkers/Chrome_Worker_Modules Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Three RDFa-related W3C Proposed (Edited) Recommendations
The W3C has released three RDFa-related documents, one proposed recommendation: HTML+RDFa 1.1: http://www.w3.org/TR/2013/PR-html-rdfa-20130625/ and two proposed edited recommendations (which contain only editorial changes): RDFa 1.1 Core: http://www.w3.org/TR/2013/PER-rdfa-core-20130625/ XHTML+RDFa 1.1: http://www.w3.org/TR/2013/PER-xhtml-rdfa-20130625/ There's a call for review to W3C member companies (of which Mozilla is one) open until Tuesday, July 23 (one week from today). If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.) There was one formal objection earlier in the process, whose history is documented in http://lists.w3.org/Archives/Public/public-rdfa-wg/2013Jan/0057.html -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
On Thursday 2013-07-11 00:14 -0700, Robert O'Callahan wrote: We can't have a rigid rule about 24 hours. Someone requesting a review from me on Thursday PDT probably won't get a response until Monday if neither of us work during the weekend. But I think it's reasonable to expect developers to process outstanding review requests (and needinfos) at least once every regular work day. Processing includes leaving a comment with an ETA. So, partly, I'm really bad at figuring out ETAs for small tasks, since I find the priority order of small tasks to be relatively dynamic, given how often small things come up. For example, reading and responding to this thread (something I've spent at least 15 minutes on so far this morning, and it'll probably end up being more than 30, which is probably 10% of my non-meeting working day today). Should I have prioritized that above doing code reviews, or should I come back to this thread and give you my thoughts in 3 weeks (as I'm currently doing on the prefixing policy thread, which I feel requires more thought)? I spend a pretty big portion of my time on things that come up at the last minute: questions from colleagues, discussions on lists (Mozilla lists and standards lists, etc.). (Another interesting question: should I prioritize questions / needinfos from people in the *middle* of writing a patch over code reviews which are at the *end* of writing a patch? Right now I think I sometimes do, and sometimes treat them equally.) Like Boris, I feel guilty about not getting to reviews, and I feel like I'm bad at figuring out how to prioritize them. I suppose what leaving an ETA would do is force me to try to stick to what I've promised, which in turn means doing code reviews rather than doing things like reading email or responding to this thread. -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: review stop-energy (was 24hour review)
[responding to the two months' worth of email that just resulted from https://bugzilla.mozilla.org/show_bug.cgi?id=891906] On Tuesday 2013-07-09 12:14 -0700, Taras Glek wrote: a) Realize that reviewing code is more valuable than writing code as it results in higher overall project activity. If you find you can't write code anymore due to prioritizing reviews over coding, grow more reviewers. Agreed, as long as the reviews are for things that we actually agree are important. b) Communicate better. If you are an active contributor, you should not leave r? patches sitting in your queue without feedback. "I will review this next week because I'm (busy reviewing ___ this week | away at a conference)." I think bugzilla could use some improvements there. If you think a patch is lower priority than your other work, communicate that. c) If you think saying nothing is better than admitting that you won't get to the patch for a while**, that's passive aggressiveness (https://en.wikipedia.org/wiki/Passive-aggressive_behavior). This is not a good way to build a happy coding community. Managers, look for instances of this on your team. I think there's a distinction between review requests: some of the review requests I receive are assertions of "I believe this code is right; could you check?" Some of them aren't; they're "this seems to work, but I really have no idea if it's correct; is it?" I think we should perhaps be able to have an expectation of fast response on the first set of review requests, but I don't think we should have an expectation of fast response on the second set, since many of them require the reviewer to do more work than the patch author. (I think I get a pretty substantial number of this form of review request, at least when counting percentage of time rather than percentage of requests.) But sometimes it's also not clear which category a review request is in, or sometimes it's somewhere in between. (Maybe we should ask people to distinguish the types? 
Should people then be embarrassed to get a review- on a patch of the first type where they're told to go back to the drawing board?) Or maybe I should just summarily minus review requests that appear to be of the second form, perhaps with pointers as to how the patch author should learn what's needed to figure out the necessary information? I also agree with Boris's comments about things that patch authors should do to make patches easier to review. I should probably be better about using review- when patch authors don't do these things, though I often feel bad about doing that when I've been away for a week and spent a few days catching up, and it's a patch that's already been sitting there for ten days. I guess I should just do it anyway. My list of things would be: * make the summary of the bug reflect the problem so that there's a clear description of what the patch is trying to fix * split things into small, logical, patches * write good commit messages that describe what's changing between old and new code (which, if it can't be summarized in less than about 100-150 characters, should have a short summary on the first line and a longer description on later lines) * write good code comments that describe the state of the new code, and if the patch is of nontrivial size, point to the important comments in the non-first lines of the commit message -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Code coverage take 2, and other code hygiene tools
On Monday 2013-06-24 18:50 -0700, Clint Talbert wrote: So, the key things I want to know: * Will you support code coverage? Would it be useful to your work to have a regularly scheduled code coverage build test run? * Would you want to additionally consider using something like JS-Lint for our codebase? For what it's worth, I found the old code coverage data useful. It was useful to me to browse through it for code that I was responsible for, to see: * what code was being executed during our test runs and how that matched with what I thought was being tested (it didn't always match, it turns out) * what areas might be in need of better tests When I was looking at it, I was mostly focusing on the mochitests in layout/style/test/. (I worry I might have been one of a very small number of people doing this, though.) I think using code coverage tools separately on standards-compliance test suites might also be interesting, e.g., to see what sort of coverage the test suite for a particular specification gives us, and whether there are tests we could contribute to improve it. -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Making proposal for API exposure official
On Monday 2013-06-24 20:08 -0700, Brian Smith wrote: These clarifications would greatly help me (and probably owners and peers of other modules) scope our participation in this discussion. As far as the DOM module is concerned, I am mostly part of the peanut gallery so my judgement of whether this is a good idea is not so important. I generally trust the DOM module owner and peers to do the right thing for their module anyway. At the same time, I doubt such a policy is necessary or helpful for the modules that I am owner/peer of (PSM/Necko), at least at this time. In fact, though I haven't thought about it deeply, most of the recent evidence I've observed indicates that such a policy would be very harmful if applied to network and cryptographic protocol design and deployment, at least. But, let's not derail this discussion of DOM module policy with further discussions of things for which it is not relevant. I think it is intended to be substantially broader than the DOM module. Why do you think it's not relevant to network protocol design? One answer to that question I can come up with is that the constraints may be different in cases where the other side is implemented in a very small number of pieces of software. (For example, that would be true for a new crypto algorithm to be used in SSL, but false for a new HTTP header with semantics relevant only to the client that just needs to be written by an author into an .htaccess file or python script.) But I'm not sure if that's the answer you were thinking of. (Also, I hope to send more comments on the proposal soon.) -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Sandboxed, off-screen pages for thumbnail capture
On 6/18/13 3:01 PM, Gavin Sharp wrote: On Tue, Jun 18, 2013 at 8:10 AM, David Rajchenbach-Teller dtel...@mozilla.com wrote: If I understand correctly, we are doubling both network and disk activity (possibly CPU activity, too) for this purpose. Performance- and battery-wise, that's not a very good idea. doubling for the thumbnails we capture using this service, yes. We don't need to use this service to capture _all_ thumbnails - it was primarily designed to address requirement a) from Drew's original post (with some potential responsiveness benefits from d) being somewhat I don't understand when we wouldn't use this service. At the moment, we capture thumbnails for all pages, so if we do not change that strategy, the sandbox would effectively double at least all non-ajax network/disk activity. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Tree Closure for infrastructure work, Saturday June 1 from 1400 PDT to 2000 PDT
On Wednesday 2013-05-29 19:00 -0700, Hal Wine wrote: All trees will be closed during this period. Jobs in progress at start of treeclosure will likely burn, and can be retrigged afterwards if needed. Given that people are supposed to watch many trees (e.g., mozilla-central, mozilla-aurora, mozilla-beta) to ensure their pushes are green, at least the trees that people are required to watch should be closed enough time in advance of things going down so that this doesn't happen (as for all downtimes like this). -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [RFC] Modules for workers
On 5/27/13 7:34 PM, Jonas Sicking wrote: The alternative is to use C++ workers. This doesn't work for addons obviously, but those aren't yet a concern for B2G. Well, my main concern is front-end- and add-on-accessible code. Normally, it shouldn't influence B2G. Weren't we moving addons into separate processes anyway? This has been discussed, but I haven't heard about it in ages. / Jonas -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [Advance warning] Session Restore is changing, will break add-ons
On 5/23/13 8:45 AM, Tim Taubert wrote: I talked to Gavin yesterday and we think the best approach would be to back out the Session Restore changes for now as they don't provide a real benefit other than code cleanup (and don't block any other work). The plan would then be to re-land them *with* a kill switch in the same cycle that brings Australis - so we would need to prepare and keep those patches ready. The reasoning is that we indeed will break different add-ons than Australis but at least there will only be one release with a couple of add-ons breaking instead of two in a row. Yes, I believe that's best. For bug 838577 we will probably need to maintain a shadow tree as Johnathan mentioned. I would suggest we talk to Ehsan as he has experience in doing this successfully. I'll get started on that. Expect the complicated patch to become more complicated :) Cheers, David - Tim -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [RFC] Modules for workers
Well, if we do not want the main thread to collapse under its weight, we have to move code off the main thread and to encourage add-ons to do likewise. I'm not sure I see an alternative here. Cheers, David On 5/24/13 1:12 AM, Jonas Sicking wrote: My main concern is that Workers created by Gecko are really expensive memory-wise. See the thread started by Justin Lebar titled Rethinking the amount of system JS we use in Gecko on B2G. The short of it is that each Worker requires a separate JS Runtime and we simply haven't optimize runtimes for having lots of them. This is especially a problem for B2G where we are very short on memory and where we are running multiple copies of Gecko. I would expect the same thing to be an issue on Firefox for Android, though maybe less so since we're generally running on higher-end hardware with more memory. So creating Workers from frontend desktop-only code seems fine. But it's something that would worry me if we start doing in cross platform Gecko code. / Jonas -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [Advance warning] Session Restore is changing, will break add-ons
Unfortunately, we do not. For the current batch of clean-up changes, it is certainly possible to add a kill switch. Time-consuming, certainly not nice (the kill switch will creep into dozens of places in the code, if not hundreds), but possible. For the upcoming set of rewrite-half-of-the-code changes, though, having a kill switch pretty much means forking the code of Session Restore into an old session restore and a new one. Do we have a policy on these things? Cheers, David On 5/22/13 5:16 AM, Ehsan Akhgari wrote: Do we have a kill switch for the new stuff (a build-time flag or a runtime pref) which we can use to turn this off on Beta if the add-on compatibility problem proves to be bad enough that we would need to wait for a while before we can ship this? I have experience maintaining a branch to build and test Firefox with per-window private browsing turned off at build-time which I used when Firefox 20 migrated through our release channel and finally shipped. I would be happy to help you on doing the same thing if needed. Cheers, Ehsan -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [Advance warning] Session Restore is changing, will break add-ons
Opening Bug 874817 to add that kill switch. Just for clarification: we might kill add-ons that specifically look at the contents of undocumented private data structures. The advance warning is here because we know that some such add-ons exist. Given that all these refactorings take place on a single file, it might make sense to just backout the changes if necessary. Cheers, David On 5/22/13 3:35 PM, Johnathan Nightingale wrote: Policy[1] is that whenever something lands on central, it is the developer's responsibility to provide for the ability to turn it off. Usually that's just a kill switch in cases where it makes sense, or a backout where the patch is unlikely to accumulate much change on top of itself in a given release. In cases where neither of those works (Ehsan's private browsing changes, or dmandelin's landing of ionmonkey in FF18) engineers have had to work harder to maintain that ability, e.g. by maintaining a shadow tree that doesn't have their patches, but has all the other landings. That's what Ehsan's pointing to in his reply. I agree with Ehsan that, one way or another, we'll need to be able to disable these changes if they cause too much bustage (though I'm very happy to know that we're cleaning up that code in general). J [1] http://mozilla.github.io/process-releases/draft/development_overview/ Ancient, and shows it, but still relevant for this case. See Moving work from one channel to another --- Johnathan Nightingale VP Firefox Engineering @johnath -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: [RFC] Modules for workers
It should be possible to share some modules between Jetpack and Workers, for Jetpack modules that do not depend on DOM or XPCOM and Worker modules that do not depend on Worker-only code. This is not an immediate goal, but it is considered a nice-to-have. Cheers, David On 5/20/13 8:53 PM, Dave Townsend wrote: On the face of it it looks like it should be possible for Jetpack's module loader to load these worker modules. Is that something that seems desirable or are these modules not useful outside of workers? -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
[Advance warning] Session Restore is changing, will break add-ons
As part of project Async, we have been working on refactoring Firefox's Session Restore to ensure that it does not block the main thread. Part of the work has been cleaning up the code and the data structures involved in Session Restore, both to give us some maneuverability and to improve the chances of catching refactoring errors. Unfortunately, a large number of add-ons seem to rely upon these undocumented data structures. Some of their features might therefore stop working. If you are the author of one such add-on, you should carefully monitor bug 874381 and its blockers. If you realize that we are about to break your add-on, please inform us asap, so that we can work out a solution. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Proposed W3C Charter: HTML Working Group
On Friday 2013-02-08 14:37 -0800, L. David Baron wrote: W3C is proposing a revised charter for the HTML Working Group. For more details, see: http://lists.w3.org/Archives/Public/public-new-work/2013Feb/0009.html http://www.w3.org/html/wg/charter/2012/ Mozilla has the opportunity to send comments or objections through Tuesday, March 12. Please reply to this thread if you think there's something we should say. A bit of followup here. One of the pieces of feedback I got as part of the previous round of review was that we should push for at least allowing *experimentation* with more open document licenses than the W3C currently allows. As a result, the W3C has proposed a revised charter with this modification and a few other small modifications resulting from the review. I've described the rationale for this and the sequence of events in a bit more detail here: http://dbaron.org/log/20130522-w3c-licensing This means there's currently another charter review period going on, to review this new charter: http://lists.w3.org/Archives/Public/public-new-work/2013May/.html http://www.w3.org/html/wg/charter/2013/ Given the previous review, I'd like to be able to support this revision without making further comments. But nonetheless I'm posting the revised charter here in case others have comments that we ought to submit as part of this charter review (deadline: May 29). -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
[RFC] Modules for workers
Hi everyone, As part of the ongoing effort to make (Chrome) Workers useful for platform refactorings, we have been working on a lightweight module loader for workers (bug 872421). This loader implements a minimal version of CommonJS modules, aka require.js. Example:

  // Setup the loader. We need this once per worker.
  importScripts("resource://gre/modules/workers/loader.js");

  // Import a few modules
  let Logger = require("resource://gre/modules/workers/logger.js");
  let Storage = require("resource://gre/modules/workers/storage.js");

  // ...
  // All values that are not exported are private to the module
  // ...
  exports.foo = function() { ... }; // Export a value |foo|
  exports.bar = 5;                  // Export a value |bar|

Once this loader lands, we will need some convention for where to place modules for workers. Unfortunately, main thread modules (both .jsm and Jetpack) can generally not be used by workers, due to different module semantics and, more importantly, due to the fact that most main thread modules depend indirectly on XPCOM/XPConnect. Given that main thread modules are rooted in resource://gre/modules/ and Jetpack modules are rooted in resource://gre/modules/commonjs/, I would like to place worker modules in resource://gre/modules/workers/. Any comments? Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Storage in Gecko
On 5/16/13 2:26 AM, Robert Kaiser wrote: David Rajchenbach-Teller schrieb: I'd even go as far as limiting it to 16kb. (possibly with a transition phase during which going above 16kb only prints warnings) I think most of us agree, but the problem is that apparently a number of add-ons rely on large prefs atm, so right now we did set to 1MB. Adding a warning for everything over 10KB or 16KB or something and targeting to move the limit down to that at some point would surely be a good idea, and I'd be happy about someone filing a bug and patch about this. Filed: https://bugzilla.mozilla.org/show_bug.cgi?id=872980 https://bugzilla.mozilla.org/show_bug.cgi?id=872981 Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Ordering shutdown observers?
On 5/16/13 3:03 PM, Ehsan Akhgari wrote: Is this not the OS.File issue that Vladan mentioned? My point is that there doesn't seem to be enough use cases to warrant a new infrastructure to handle shutdown dependencies. Well, as we expand our use of OS.File, we start observing a number of issues, most of which do not seem to be due to OS.File itself, but more generally to (chrome) workers. Here are a few: - clients of OS.File need to write their data before OS.File shuts down – that's Vladan's remark; - JS Workers (including OS.File's I/O worker) need to be properly initialized before shut down or to cancel themselves nicely upon shutdown – that's Gabriele's remark; - OS.File itself needs to be informed of shut down to (asynchronously) collect information and print warnings about leaking file descriptors, and also to start rejecting additional requests. That's from the top of my head, I am sure that I am missing a few. As we move as much code as possible to workers/threads, I believe that we are going to suffer from a growing number of such issues. So, yes, I am convinced that we need a way to handle dependencies. Moreover, I believe that we need to make dependencies somewhat explicit, otherwise we will at some point end up with unsatisfiable implicit dependencies and we will need large refactorings to get around these. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
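[Editorial note: to make the idea of explicit shutdown dependencies above concrete, here is one possible shape for it, sketched in plain JavaScript. This is a hypothetical design — the class and phase names are invented for illustration, not an existing Gecko API: each shutdown phase is a barrier, clients register blockers with a reason, and the phase completes only once every blocker has resolved.]

```javascript
// Hypothetical sketch of explicit shutdown dependencies (not a real
// Gecko API). A phase completes only when all registered blockers have.
class Barrier {
  constructor(name) {
    this.name = name;
    this.blockers = [];
  }
  addBlocker(reason, promise) {
    // |reason| would let us log which client is holding up shutdown.
    this.blockers.push({ reason, promise });
  }
  wait() {
    return Promise.all(this.blockers.map(b => b.promise));
  }
}

const profileBeforeChange = new Barrier("profile-before-change");
const log = [];

// Two hypothetical clients with work that must finish before shutdown:
profileBeforeChange.addBlocker(
  "Session store: flush pending write",
  Promise.resolve().then(() => log.push("session data flushed"))
);
profileBeforeChange.addBlocker(
  "OS.File: close remaining file descriptors",
  Promise.resolve().then(() => log.push("descriptors closed"))
);

async function shutdown() {
  await profileBeforeChange.wait(); // no client gets cut off mid-write
  log.push("shutdown complete");
  return log;
}
```

The point of the explicit |reason| is exactly the diagnosability asked for in the thread: when shutdown hangs, we can name the dependency that is blocking it instead of guessing.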
Re: OS.File and shutdown
On 5/14/13 8:35 PM, Felipe Gomes wrote: Should profile-before-change then be my call to stop accepting changes to the data and call writeAtomic to flush it? I've seen some code nearby doing it at quit-application-granted. Or perhaps there's no correct answer and it varies case by case (or anything goes that works and is early enough..) profile-before-change should be good. Any OS.File call posted before xpcom-shutdown will be completed before we exit Firefox. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Ordering shutdown observers?
On Wednesday 2013-05-15 14:32 -0700, Gregory Szorc wrote: I think the more compelling use case is service startup. Proper dependencies should allow us to more intelligently start services on demand. This should lead to lower resource utilization and faster startup times. Shutdown times should also speed up if there are fewer services to shut down. This is what we do already; we don't create an XPCOM service until somebody asks for it. Now, I'm not saying that all of our code is perfect about not *asking* for the service until it's needed. But in many cases that's more trouble than it's worth; there are many things we know we'll need during startup, and it's not worth the extra overhead of checking every time if we've already called getService. -David -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
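[Editorial note: the create-on-first-request behavior described above can be modeled in a few lines of plain JavaScript. This is only a sketch — the real XPCOM getService machinery is C++, and the names below are invented for illustration.]

```javascript
// Sketch of on-demand service creation (illustrative only). A service's
// factory runs the first time the service is requested; later requests
// reuse the cached instance.
const instances = new Map();
const constructed = []; // records which services were actually built

function getService(name, factory) {
  if (!instances.has(name)) {
    instances.set(name, factory()); // construct on first request only
    constructed.push(name);
  }
  return instances.get(name);
}

// A hypothetical service; it does not exist until someone asks for it.
const makePrefs = () => ({ values: new Map() });

const prefs1 = getService("prefs", makePrefs);
const prefs2 = getService("prefs", makePrefs); // cached: no re-construction
```

Because getService itself caches, callers do not need their own "have I already fetched this?" bookkeeping — which is the overhead the post argues is usually not worth adding.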
Re: OS.File and shutdown
On 5/10/13 10:45 PM, Felipe Gomes wrote: Hi, does OS.File guarantee that write tasks that have started will be completed if a shutdown occurs? My use case is for writeAtomic but I'm interested in the behavior of both write and writeAtomic. Corner case: what if I call write/writeAtomic from an xpcom-shutdown observer? In theory: - every call to OS.File queued *before* xpcom-shutdown will be completed; - every call to OS.File queued after xpcom-shutdown will throw an asynchronous exception (once bug 845190 has landed). Note, however, that this has not been thoroughly tested yet. Another question: are the write tasks queued and completed in order, or can two writeAtomic calls to the same file race each other and the 2nd call finish first (only to have the 1st call finish and write older data)? All tasks to OS.File are queued and completed in order. Cheers, David -- David Rajchenbach-Teller, PhD Performance Team, Mozilla ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
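[Editorial note: the in-order guarantee described above can be modeled as a single promise chain — each request is chained onto the previous one, so a later write to the same file can never complete before an earlier one. A sketch, illustrative only: the real OS.File serializes requests on its worker, and the names below are hypothetical.]

```javascript
// Sketch of in-order request completion (illustrative only; not the
// actual OS.File implementation). All work is chained onto one promise,
// so requests complete strictly in the order they were posted.
const completed = [];
let queue = Promise.resolve();

function post(label, work) {
  // Even if |work| for a later request finishes quickly, it cannot run
  // until every earlier request has completed.
  queue = queue.then(work).then(() => completed.push(label));
  return queue;
}

// A slow first write and a fast second write to the same "file":
post("write #1", () => new Promise(resolve => setTimeout(resolve, 50)));
const done = post("write #2", () => Promise.resolve());
```

When |done| resolves, completed is ["write #1", "write #2"]: the fast second write waited for the slow first one, so it cannot clobber the file with older data.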
Re: PSA: inbound/central/birch closed
On Saturday 2013-05-11 20:46 -0400, Ehsan Akhgari wrote: I tried backing out everything that looked suspicious on the birch side of the merge but the leaks continued to persist. Not sure what to do next, and I won't be around for further investigation today. The trees remain closed for now. I bisected the birch/mozilla-central side of the merge by merging various points along the birch/mozilla-central side of the bad merge with the merge's parent on the mozilla-inbound side of the bad merge. Based on this, I backed out https://bugzilla.mozilla.org/show_bug.cgi?id=863732 in https://hg.mozilla.org/integration/mozilla-inbound/rev/9ec0ad6f7e09, though I don't know the bad changeset on the other half. I've reopened the trees. -David On Sat, May 11, 2013 at 6:03 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote: The latest merge from m-c to inbound resulted in all debug unit tests going orange. This is caused by bad interaction of things that are landed on birch and inbound. I've closed birch/inbound/central until this issue is resolved in order to prevent further damage. I've tried backing out bug 868312 from inbound, but that did not help. The next suspect on the list is bug 861903 landed on birch, which I'm trying to back out: https://tbpl.mozilla.org/?tree=Try&rev=ccc66ba18f23. If that fixes the bustage, then I will back it out from inbound and merge to birch. Otherwise, I'm not sure what to do. -- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂 ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
It's time to remove plugin support from Firefox mobile
[bcc'd to many lists for wide visibility - discussion should probably be on mobile.firefox.dev (https://mail.mozilla.org/listinfo/mobile-firefox-dev )] TL;DR: Now is a good time to remove plugin support from Firefox for Android. Consider: * We do not support plugins for Firefox OS and do not plan to * The only plugin that most users care about is Flash. Adobe stopped development for Flash on Android in November of 2011, which is a year and a half ago[1]. * Popular sites that use plugins have native apps. This includes YouTube, Netflix, Hulu, and so on. Other sites can follow suit or use modern web technologies like HTML5. Addons are also an option. * Plugins are a security hazard * Plugins drain battery life and make Firefox seem slow Let's be bold, let's protect our users, and let's move the web forward. [1] http://blogs.adobe.com/conversations/2011/11/flash-focus.html ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Storage in Gecko
Whatever you do, please, please, please make sure that everything is worker-friendly. If we can't write (or at least read) contents to that key-value store from a worker, we will need to reimplement everything in a few months.

Cheers,
David

- Original Message -
From: Gregory Szorc g...@mozilla.com
To: Lawrence Mandel lman...@mozilla.com
Cc: David Rajchenbach-Teller dtel...@mozilla.com, Taras Glek tg...@mozilla.com, dev-platform dev-platform@lists.mozilla.org
Sent: Friday, May 3, 2013 1:36:15 AM
Subject: Re: Storage in Gecko

On 5/2/2013 4:13 PM, Lawrence Mandel wrote:
> - Original Message -
>> Great post, Taras! Per IRC conversations, we'd like to move subsequent discussion of actions into a meeting so we can more quickly arrive at a resolution. Please meet in Gregory Szorc's Vidyo Room at 1400 PDT Tuesday, April 30. That's 2200 UTC. Apologies to the European and east coast crowds. If you'll miss it because it's too late, let me know and I'll consider moving it. https://v.mozilla.com/flex.html?roomdirect.htmlkey=yJWrGKmbSi6S
>
> Did someone post a summary of this meeting? Is there a link to share?

Notes at https://etherpad.mozilla.org/storage-in-gecko

We seemed to converge on a (presumably C++-based) storage service that has named branches/buckets with specific consistency, flushing, etc. guarantees. Clients would obtain a handle on a branch and perform basic I/O operations, including transactions. Branches could be created ad hoc at run time, so add-ons could obtain their own storage namespace with the storage guarantees of their choosing. Under the hood, storage would be isolated so failures in one component wouldn't affect everybody.

We didn't have enough time to get into prototyping or figuring out who would implement it. Going forward, I'm not sure who should own this initiative on a technical level. In classic Mozilla fashion, the person who brings it up is responsible. That would be me. However, I haven't written a single line of C++ for Firefox, and I have serious doubts I'd be effective. Perhaps we should talk about it at the next Platform meeting.
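To make the proposal above concrete, here is a minimal, purely illustrative sketch of the kind of API being described: a service handing out named branch/bucket handles, each created ad hoc with its own (hypothetical) durability setting, supporting basic get/set I/O and all-or-nothing transactions. None of these names come from an actual Gecko design; this is just one way the converged-on shape could look, in Python for brevity rather than the presumed C++.

```python
class Branch:
    """A named storage namespace (bucket) with its own key-value data."""

    def __init__(self, name, durability="lazy"):
        self.name = name
        self.durability = durability  # hypothetical per-branch guarantee
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value

    def transaction(self):
        return _Transaction(self)


class _Transaction:
    """All-or-nothing batch of writes against one branch."""

    def __init__(self, branch):
        self._branch = branch
        self._pending = {}

    def __enter__(self):
        return self

    def set(self, key, value):
        self._pending[key] = value

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            # Commit: apply all pending writes at once.
            self._branch._data.update(self._pending)
        # On error, pending writes are simply discarded (rollback).
        return False


class StorageService:
    """Hands out branch handles, creating branches ad hoc at run time,
    so each client (or add-on) gets an isolated namespace."""

    def __init__(self):
        self._branches = {}

    def open_branch(self, name, durability="lazy"):
        if name not in self._branches:
            self._branches[name] = Branch(name, durability)
        return self._branches[name]
```

A client would then do something like `b = service.open_branch("my-addon")`, write via `b.set(...)` or a `with b.transaction() as tx:` block, and never touch any other component's branch — which is the isolation property discussed above.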
Re: Some data on mozilla-inbound
The messaging around this should not be to tell people "always test on try." It should be to help them figure out how to make better judgement calls on this. This is a skill that people develop and are not born with, and without data it's hard as an individual to judge how good I am at it.

One idea might be to give developers feedback on the consequences of a particular push, e.g. the AWS cost, a proxy for the time during which developers couldn't push, or some other measurable metric. Right now each push probably feels as expensive as every other.
Re: Some data on mozilla-inbound
On 04/23/13 02:17, Ed Morley wrote:
> On 23 April 2013 09:58:41, Neil wrote:
>> Hopefully a push never burns all platforms because the developer tried it locally first, but stranger things have happened!
> This actually happens quite often. On occasion it's due to warnings-as-errors (switched off by default on local machines due to toolchain differences)

I would like to know a bit more about this. Is our list of supported toolchains so diverse that building with one version versus another would report so many false positives as to be useless? I enabled warnings-as-errors on my local machine after pushing something to inbound that failed to build because of this, and I've had no problems since then. Enabling this by default seems like an easy way to remove instances of this problem.

> but more often than not the developer didn't even try compiling locally :-/

So there are instances where developers didn't use the try servers and also didn't compile locally at all before pushing to inbound? I don't think we as a community should be okay with that kind of irresponsible behavior.
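For anyone wanting to match the automation builds locally, enabling this in the 2013-era build system was a one-line mozconfig change (shown here as a sketch; check your tree's configure options if the flag name has changed):

```
# In your mozconfig: treat compiler warnings as errors,
# as the tinderbox/inbound builds do.
ac_add_options --enable-warnings-as-errors
```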
W3C Proposed Recommendation: Web Storage
The Web Apps Working Group at W3C has published a Proposed Recommendation, Web Storage (the stage before W3C's final stage, Recommendation):

http://www.w3.org/TR/2013/PR-webstorage-20130409/

There's a call for review to W3C member companies (of which Mozilla is one) open until Tuesday, May 7. If there are comments you think Mozilla should send as part of the review, or if you think Mozilla should voice support or opposition to the specification, please say so in this thread. (I'd note, however, that there have been many previous opportunities to make comments, so it's somewhat bad form to bring up fundamental issues for the first time at this stage.)

-David

-- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂
Re: Nightly *very* crashy on OSX
Mine is crashing on startup. Can't even get to the profile chooser dialog.

On Sun, Apr 21, 2013 at 6:55 AM, Axel Hecht l...@mozilla.com wrote:

> Hi,
>
> I'm having a very crashy Nightly, uptime below an hour, not really bound to a site. Might be https://bugzilla.mozilla.org/show_bug.cgi?id=864125, but I've experienced a bunch of crashes, all with pretty nonexistent stack traces of zero or one frame.
>
> bp-48ad9b29-145f-49ec-b282-5538f2130421 4/21/13 3:27 PM
> bp-bea9322a-ab85-4586-8f26-bfbcb2130421 4/21/13 2:45 PM
> bp-b3b43fa7-4c37-4f92-8d17-c82802130420 4/20/13 10:59 PM
> bp-7e7f70e9-85c9-4fd2-a2d6-31c892130420 4/20/13 8:34 PM
> bp-3faed1dd-98bb-4448-997b-db6f22130420 4/20/13 8:16 PM
> bp-5440caa0-7ebc-48e2-bd15-7fcf12130416 4/17/13 12:44 AM
> bp-3dbd9606-7d63-4a90-957a-98f772130416 4/17/13 12:32 AM
> bp-2b7ac91d-1110-4780-9370-89a372130416 4/17/13 12:31 AM
>
> Any ideas?
> Axel
Proposed W3C Charter: Web Performance Working Group
W3C is proposing a revised charter for the Web Performance Working Group. For more details, see:

http://www.w3.org/2013/01/webperf.html
http://lists.w3.org/Archives/Public/public-new-work/2013Mar/.html

Mozilla has the opportunity to send comments or objections through Thursday, April 11. Please reply to this thread if you think there's something we should say.

-David

-- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂
Proposed W3C Charters: groups within the XML activity
W3C is proposing revised charters for a collection of working groups in the XML area:

http://lists.w3.org/Archives/Public/public-new-work/2013Mar/0007.html
http://www.w3.org/XML/2013/exi-charter.html
http://www.w3.org/XML/2013/query-charter.html
http://www.w3.org/XML/2013/xml-core-charter.html
http://www.w3.org/XML/2013/xproc-charter.html
http://www.w3.org/XML/2013/xsl-charter.html
http://www.w3.org/XML/2013/xml-cg-charter.html

Mozilla has the opportunity to send comments or objections through Friday, April 26. Please reply to this thread if you think there's something we should say.

-David

-- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂
Proposed W3C Charter: Web and TV Interest Group
W3C is proposing a revised charter for the Web and TV Interest Group. For more details, see:

http://lists.w3.org/Archives/Public/public-new-work/2013Mar/0008.html
http://www.w3.org/2012/11/webTVIGcharter.html

Mozilla has the opportunity to send comments or objections through Friday, April 26. Please reply to this thread if you think there's something we should say.

-David

-- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂
Re: Proposal for using a multi-headed tree instead of inbound
On Wednesday 2013-04-03 17:31 -0400, Kartikaya Gupta wrote:
> 1. Take the latest green m-c change, commit your patch(es) on top of it, and push it to try.
> 2. If your try push is green, flag it for eventual merge to m-c and you're done.
> 3. If your try push is not green, update your patch(es) and go back to step 1.

This seems like it would lead to a substantial increase in build/test load -- one that I suspect we don't currently have the hardware to support. This is because it would require a full build/test run for every push, which we avoid today because many builds and tests get merged on inbound when things are behind. (I also don't feel like we need a full build/test run for every push, so it feels like unnecessary use of resources to me.)

-David

-- 턞 L. David Baron http://dbaron.org/ 턂 턢 Mozilla http://www.mozilla.org/ 턂