Re: Some data on mozilla-inbound

2013-04-23 Thread Axel Hecht

On 4/22/13 9:54 PM, Kartikaya Gupta wrote:

TL;DR:
* Inbound is closed 25% of the time
* Turning off coalescing could increase resource usage by up to 60% (but
probably less than this).
* We spend 24% of our machine resources on changes that are later backed
out, or changes that are doing the backout
* The vast majority of changesets that are backed out from inbound are
detectable on a try push


Do we know how many of these have been pushed to try, and 
passed/compiled what they'd fail later?


I expect some cost of regressions to come from merging/rebasing, and 
it'd be interesting to know how much of that you can see in the data 
window you looked at.


"has been pushed to try" is obviously tricky to find out, in particular 
on rebases, and possibly modified patches during the rebase.


Axel



Because of the large effect from coalescing, any changes to the current
process must not require running the full set of tests on every push.
(In my proposal this is easily accomplished with trychooser syntax, but
other proposals include rotating through T-runs on pushes, etc.).

--- Long version below ---

Following up from the infra load meeting we had last week, I spent some
time this weekend crunching various pieces of data on mozilla-inbound to
get a sense of how much coalescing actually helps us, how much backouts
hurt us, and generally to get some data on the impact of my previous
proposal for using a multi-headed tree. I didn't get all the data that I
wanted but as I probably won't get back to this for a bit, I thought I'd
share what I found so far and see if anybody has other specific pieces
of data they would like to see gathered.

-- Inbound uptime --

I looked at a ~9 day period from April 7th to April 16th. During this time:
* inbound was closed for 24.9587% of the total time
* inbound was closed for 15.3068% of the total time due to "bustage".
* inbound was closed for 11.2059% of the total time due to "infra".

Notes:
1) "bustage" and "infra" were determined by grep -i on the data from
treestatus.mozilla.org.
2) There is some overlap so bustage + infra != total.
3) I also weighted the downtime using the checkins-per-hour histogram from
joduinn's blog at [1], but this didn't have a significant impact: the
total, bustage, and infra downtime percentages moved to 25.5392%,
15.7285%, and 11.3748% respectively.
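
(Concretely, the weighting amounts to a weighted average of roughly the form

    \text{weighted downtime} = \frac{\sum_h w_h \, c_h}{\sum_h w_h}

where w_h is taken to be the checkins-per-hour weight for hour-of-week h from
the histogram at [1] and c_h is the fraction of hour h during which the tree
was closed; the unweighted figures above correspond to w_h = 1 for every hour.)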

-- Backout changes --

Next I did an analysis of the changes that landed on inbound during that
time period. The exact pushlog that I looked at (corresponding to the
same April 7 - April 16 time period) is at [2]. I removed all of the
merge changesets from this range, since I wanted to look at inbound in
as much isolation as possible.

In this range:
* there were a total of 916 changesets
* there were a total of 553 "pushes"
* 74 of the 916 changesets (8.07%) were backout changesets
* 116 of the 916 changesets (12.66%) were backed out
* removing all backouts and changes backed out removed 114 pushes (20.6%)

Of the 116 changesets that were backed out:
* 37 belonged to single-changeset pushes
* 65 belonged to multi-changeset pushes where the entire push was backed out
* 14 belonged to multi-changeset pushes where the changesets were
selectively backed out

Of the 74 backout changesets:
* 4 were for commit message problems
* 25 were for build failures
* 36 were for test failures
* 5 were for leaks/talos regressions
* 1 was for premature landing
* 3 were for unknown reasons

Notes:
1) There were actually 79 backouts, but I ignored 5 of them because they
backed out changes that happened prior to the start of my range.
2) Additional changes at the end of my range may have been backed out,
but the backouts were not in my range so I didn't include them in my
analysis.
3) The fact that 14 csets were selectively backed out is interesting to me
because it implies that somebody did some work to identify which changes
in the push were bad, and this naturally means that there is room to
save on doing that work.

-- Merge conflicts --

I also wanted to determine how many of these changes conflicted with
each other, and how far away the conflicting changes were. I got a
partial result here but I need to do more analysis before I have numbers
worth posting.

-- Build farm resources --

Finally, I used a combination of gps' mozilla-build-analyzer tool [3]
and some custom tools to determine how much machine time was spent on
building all of these pushes and changes.

I looked at all the build.json files [4] from the 6th of April to the
17th of April and pulled out all the jobs corresponding to the
"push" changesets in my range above. For this set of 553 changesets,
there were 500 (exactly!) distinct "builders". 111 of these had "-pgo"
or "_pgo" in the name, and I excluded them. I created a 553x389 matrix
with the remaining builders and filled in how much time was spent on
each changeset for each builder (in case of multiple jobs, I added the
times).

Then I assumed that any empty field in the 553x389 matrix was a re

Re: Some data on mozilla-inbound

2013-04-23 Thread Neil

Kartikaya Gupta wrote:

The vast majority of changesets that are backed out from inbound are 
detectable on a try push


Hopefully a push never burns all platforms because the developer tried 
it locally first, but stranger things have happened! But what I'm most 
interested in is whether patches are more likely to be backed out for 
build or test failures. Perhaps if we could optimise our use of Try then 
that would reduce the load on inbound. For example:


   * At first, the push is built on one fast and readily available
 platform (linux64 is often mentioned)
   * If this builds, then all platforms build
   * Only once all platforms have built are tests run

This would avoid running tests for pushes that are known not to build on 
all platforms.


--
Warning: May contain traces of nuts.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Jonathan Kew

On 23/4/13 09:58, Neil wrote:

Kartikaya Gupta wrote:


The vast majority of changesets that are backed out from inbound are
detectable on a try push


Hopefully a push never burns all platforms because the developer tried
it locally first, but stranger things have happened! But what I'm most
interested in is whether patches are more likely to be backed out for
build or test failures. Perhaps if we could optimise our use of Try then
that would reduce the load on inbound. For example:

* At first, the push is built on one fast and readily available
  platform (linux64 is often mentioned)
* If this builds, then all platforms build
* Only once all platforms have built are tests run

This would avoid running tests for pushes that are known not to build on
all platforms.



OTOH, it would significantly extend the time a developer has to wait 
before tryserver test results begin to appear. Which I think people 
would find discouraging.


JK

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Ed Morley

On 23 April 2013 09:58:41, Neil wrote:

Hopefully a push never burns all platforms because the developer tried
it locally first, but stranger things have happened!


This actually happens quite often. On occasion it's due to warnings as 
errors (switched off by default on local machines due to toolchain 
differences), but more often than not the developer didn't even try 
compiling locally :-/


Given that local machine time scales linearly with the rate at which we 
hire devs (unlike our automation capacity), I think we need to work out 
why (some) people aren't doing things like compiling locally and 
running their team's directory of tests before pushing. I would hazard 
a guess that if we improved incremental build times & created mach 
commands to simplify the edit-compile-test loop, then we could cut out 
many of these obvious inbound bustage cases.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: JavaScript reference changes: looking for opinions

2013-04-23 Thread Neil

Eric Shepherd wrote:

Currently, the JavaScript reference content for the global classes 
(String, Array, etc.) is divided up such that the class methods and 
properties and the prototype methods and properties are documented 
separately


Function doesn't appear to be divided, while Object appears to have both 
a prototype page and the prototype methods on the same page. I don't 
like this inconsistency either; however, a problem with String and Array 
is that they support generics which have the same names as the instance 
methods. Perhaps the methods of the global object could be prefixed, e.g.


String.fromCharCode(code, ...)
   Returns a string created by using the specified sequence of Unicode values.


String.charAt(string, pos)
charAt(pos)
   Returns the character at the specified index.

etc.


with links between them.


I looked and they're http: links too! Oops.

--
Warning: May contain traces of nuts.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fw: JavaScript reference changes: looking for opinions

2013-04-23 Thread Jim Mathies
I've noticed our MDN pages climbing in search results over the last year or 
so, which is great to see. If we do fold pages like these together we should 
be careful to keep the pages that rank higher in search results.


http://www.bing.com/search?form=MOZPSB&pc=MOZO&q=javascript+string+object

https://www.google.com/search?q=javascript+string+object&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:unofficial&client=firefox-nightly

both have this url in the top three:

https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/String

Jim


-Original Message-
From: Eric Shepherd
Sent: Monday, April 22, 2013 2:04 PM
Newsgroups: mozilla.dev.mdc, mozilla.dev.platform
To: dev-platform@lists.mozilla.org
Subject: JavaScript reference changes: looking for opinions

Currently, the JavaScript reference content for the global classes
(String, Array, etc.) is divided up such that the class methods and
properties and the prototype methods and properties are documented
separately, with links between them. For example, see:

https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/String


and

https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/String/prototype


While this might make sense to super-expert JavaScript folks (indeed,
it was their idea to do it this way), it's actually really confusing to
everyone else.

I'd like to propose we merge them back together, so that the stuff
currently documented on the "prototype" page is in the main body of the
class's documentation where most people would expect it to be. If
useful, we can come up with a badge to put next to items that are part
of the prototype (or not) to differentiate between them.

But the current organization is, well, kind of weird.

Any opinions on this pro or con before we actually try to find someone
to do the work?

--
Eric Shepherd
Developer Documentation Lead
Mozilla
Blog: http://www.bitstampede.com/
Twitter: @sheppy

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform 


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Tom Schuster
At the moment it's really just Jono working full time on this, and
terrence and other people reviewing. This stuff is actually quite easy
and you can expect really fast review times from our side.

In some parts of the code rooting could literally just mean replacing
JS::Value with JS::RootedValue and fixing the references to the
variable. It's really easy once you've done it a few times.
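
As a rough illustration (the function and property names are made up, and
this assumes the JS_GetProperty signature that takes a raw jsval out-param),
the change is often no more than this:

    // Before: an unrooted local that the GC cannot see or update.
    JSBool GetFoo(JSContext *cx, JSObject *obj)
    {
        JS::Value v;
        if (!JS_GetProperty(cx, obj, "foo", &v))
            return JS_FALSE;
        // ... use v ...
        return JS_TRUE;
    }

    // After: the local is rooted for its whole lifetime; APIs that still
    // want a raw jsval* get one via .address().
    JSBool GetFoo(JSContext *cx, JSObject *obj)
    {
        JS::RootedValue v(cx);
        if (!JS_GetProperty(cx, obj, "foo", v.address()))
            return JS_FALSE;
        // ... use v ...
        return JS_TRUE;
    }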

Here is a list of all files that still have rooting problems:
http://pastebin.mozilla.org/2340241
And the details for each and every problem:
https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt

We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
track the rooting progress; make sure to file every bug as blocking
this one. I would appreciate it if every module peer or owner would just
take a look at his/her module and try to fix some of the issues. If
you are unsure or need help, ask us on #jsapi.

Thanks,
Tom

On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan  wrote:
> On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole  wrote:
>
>> Our exact rooting work is at a spot right now where we could easily use
>> more hands to accelerate the process. The main problem is that the work
>> is easy and tedious: a hard sell for pretty much any hacker at mozilla.
>>
>
> It sounds worthwhile to encourage developers who aren't currently working
> on critical-path projects to pile onto the exact rooting project. Getting
> GGC over the line reaps some pretty large benefits and it's an
> all-or-nothing project, unlike say pursuing the long tail of WebIDL
> conversions.
>
> If that sounds right, put out a call for volunteers (by which I include
> paid staff) to help push on exact rooting, with detailed instructions. I
> know some people who could probably help.
>
> Rob
> --
> “If you love those who love you, what credit is that to you? Even sinners
> love those who love them. And if you do good to those who are good to you,
> what credit is that to you? Even sinners do that."
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread smaug

On 04/23/2013 04:07 PM, Tom Schuster wrote:

At the moment it's really just Jono working full time on this, and
terrence and other people reviewing. This stuff is actually quite easy
and you can expect really fast review times from our side.

In some parts of the code rooting could literally just mean to replace
JS::Value to JS::RootedValue and fixing the references to the
variable. It's really easy once you did it a few times.

Here is a list of all files that still have rooting problems:
http://pastebin.mozilla.org/2340241
And the details for each and every problem:
https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt

We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
track the rooting progress, make sure to file every bug as blocking
this one. I would appreciate if every module peer or owner would just
take a look at his/her module and tried to fix some of the issue. If
you are unsure or need help, ask us on #jsapi.

Thanks,
Tom



I found http://mxr.mozilla.org/mozilla-central/source/js/public/RootingAPI.h
quite useful, but there are a few things to clarify. For example, some code
uses HandleObject and some code Handle<JSObject*>, and having two ways to do
the same thing just makes the code harder to read.
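
As far as I can tell they are the same type (HandleObject is just a typedef
for Handle<JSObject*> in RootingAPI.h), so the difference is purely cosmetic.
For example, these two made-up declarations take exactly the same parameter
type:

    // HandleObject is a typedef for Handle<JSObject*>, so DoThing and
    // DoThingToo below have identical signatures apart from the name.
    bool DoThing(JSContext *cx, HandleObject obj);
    bool DoThingToo(JSContext *cx, Handle<JSObject*> obj);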



On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan  wrote:

On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole  wrote:


Our exact rooting work is at a spot right now where we could easily use
more hands to accelerate the process. The main problem is that the work
is easy and tedious: a hard sell for pretty much any hacker at mozilla.



It sounds worthwhile to encourage developers who aren't currently working
on critical-path projects to pile onto the exact rooting project. Getting
GGC over the line reaps some pretty large benefits and it's an
all-or-nothing project, unlike say pursuing the long tail of WebIDL
conversions.

If that sounds right, put out a call for volunteers (by which I include
paid staff) to help push on exact rooting, with detailed instructions. I
know some people who could probably help.

Rob
--
“If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to you,
what credit is that to you? Even sinners do that."
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Tom Schuster
You won't have to bother with HandleObject or RootedObject etc. These
are internal to js/ and we take care of those cases. Just use
JS::Handle<JSObject*> or JS::Rooted<JSObject*>. Thanks for pointing
people to this file.
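
A made-up example of the recommended spelling for code outside js/ (Inspect
and Caller are hypothetical; the point is just that a JS::Rooted<T> local
converts implicitly to JS::Handle<T> at call sites):

    // Hypothetical callee that only needs read access to the object.
    bool Inspect(JSContext *cx, JS::Handle<JSObject*> obj);

    bool Caller(JSContext *cx, JSObject *unrootedObj)
    {
        // Root the incoming pointer for the duration of this function...
        JS::Rooted<JSObject*> obj(cx, unrootedObj);
        // ...and hand callees a handle to that root.
        return Inspect(cx, obj);  // Rooted<T> converts implicitly to Handle<T>
    }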

PS: The correct handle for Jon Coppeard is of course jonco!

On Tue, Apr 23, 2013 at 3:58 PM, smaug  wrote:
> On 04/23/2013 04:07 PM, Tom Schuster wrote:
>>
>> At the moment it's really just Jono working full time on this, and
>> terrence and other people reviewing. This stuff is actually quite easy
>> and you can expect really fast review times from our side.
>>
>> In some parts of the code rooting could literally just mean to replace
>> JS::Value to JS::RootedValue and fixing the references to the
>> variable. It's really easy once you did it a few times.
>>
>> Here is a list of all files that still have rooting problems:
>> http://pastebin.mozilla.org/2340241
>> And the details for each and every problem:
>> https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt
>>
>> We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
>> track the rooting progress, make sure to file every bug as blocking
>> this one. I would appreciate if every module peer or owner would just
>> take a look at his/her module and tried to fix some of the issue. If
>> you are unsure or need help, ask us on #jsapi.
>>
>> Thanks,
>> Tom
>>
>
> I found http://mxr.mozilla.org/mozilla-central/source/js/public/RootingAPI.h
> quite useful, but there are few things to
> clarify. For example some code uses HandleObject and some code
> Handle and having two ways to do the same thing
> just makes the code harder to read.
>
>
>> On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan 
>> wrote:
>>>
>>> On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole  wrote:
>>>
 Our exact rooting work is at a spot right now where we could easily use
 more hands to accelerate the process. The main problem is that the work
 is easy and tedious: a hard sell for pretty much any hacker at mozilla.

>>>
>>> It sounds worthwhile to encourage developers who aren't currently working
>>> on critical-path projects to pile onto the exact rooting project. Getting
>>> GGC over the line reaps some pretty large benefits and it's an
>>> all-or-nothing project, unlike say pursuing the long tail of WebIDL
>>> conversions.
>>>
>>> If that sounds right, put out a call for volunteers (by which I include
>>> paid staff) to help push on exact rooting, with detailed instructions. I
>>> know some people who could probably help.
>>>
>>> Rob
>>> --
>>> “If you love those who love you, what credit is that to you? Even sinners
>>> love those who love them. And if you do good to those who are good to you,
>>> what credit is that to you? Even sinners do that."
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Till Schneidereit
There's a rooting guide on the wiki here:
https://developer.mozilla.org/en-US/docs/SpiderMonkey/GC_Rooting_Guide

It's not very thorough, but it's something.


On Tue, Apr 23, 2013 at 3:14 PM, Tom Schuster  wrote:

> You wont have to bother with HandleObject or RootedObject etc. These
> are internal to js/ and we take care of these cases.  Just use
> JS::Handle or JS::Rooted. Thanks for pointing
> people to this file.
>
> PS: The correct handle for Jon Coppeard is of course jonco!
>
> On Tue, Apr 23, 2013 at 3:58 PM, smaug  wrote:
> > On 04/23/2013 04:07 PM, Tom Schuster wrote:
> >>
> >> At the moment it's really just Jono working full time on this, and
> >> terrence and other people reviewing. This stuff is actually quite easy
> >> and you can expect really fast review times from our side.
> >>
> >> In some parts of the code rooting could literally just mean to replace
> >> JS::Value to JS::RootedValue and fixing the references to the
> >> variable. It's really easy once you did it a few times.
> >>
> >> Here is a list of all files that still have rooting problems:
> >> http://pastebin.mozilla.org/2340241
> >> And the details for each and every problem:
> >> https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt
> >>
> >> We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
> >> track the rooting progress, make sure to file every bug as blocking
> >> this one. I would appreciate if every module peer or owner would just
> >> take a look at his/her module and tried to fix some of the issue. If
> >> you are unsure or need help, ask us on #jsapi.
> >>
> >> Thanks,
> >> Tom
> >>
> >
> > I found
> http://mxr.mozilla.org/mozilla-central/source/js/public/RootingAPI.h
> > quite useful, but there are few things to
> > clarify. For example some code uses HandleObject and some code
> > Handle and having two ways to do the same thing
> > just makes the code harder to read.
> >
> >
> >> On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan <
> rob...@ocallahan.org>
> >> wrote:
> >>>
> >>> On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole 
> wrote:
> >>>
>  Our exact rooting work is at a spot right now where we could easily
> use
>  more hands to accelerate the process. The main problem is that the
> work
>  is easy and tedious: a hard sell for pretty much any hacker at
> mozilla.
> 
> >>>
> >>> It sounds worthwhile to encourage developers who aren't currently
> working
> >>> on critical-path projects to pile onto the exact rooting project.
> Getting
> >>> GGC over the line reaps some pretty large benefits and it's an
> >>> all-or-nothing project, unlike say pursuing the long tail of WebIDL
> >>> conversions.
> >>>
> >>> If that sounds right, put out a call for volunteers (by which I include
> >>> paid staff) to help push on exact rooting, with detailed instructions.
> I
> >>> know some people who could probably help.
> >>>
> >>> Rob
> >>> --
> >>> “If you love those who love you, what credit is that to you? Even
> >>> sinners love those who love them. And if you do good to those who are
> >>> good to you, what credit is that to you? Even sinners do that."
> >>> ___
> >>> dev-platform mailing list
> >>> dev-platform@lists.mozilla.org
> >>> https://lists.mozilla.org/listinfo/dev-platform
> >
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Kevin Gadd
Related nitpick:

> *Warning:* The information at SpiderMonkey Garbage Collection Tips and in
> the JSAPI User Guide is woefully out of date and should be ignored
> completely.

If that's true, why are the linked pages still around? Is it really hard to
remove the out of date/incorrect information? Seeing messages like this on
MDN alongside blatantly wrong information *without* a disclaimer causes me
to intrinsically distrust everything I read there. :)


On Tue, Apr 23, 2013 at 7:18 AM, Till Schneidereit
wrote:

> There's a rooting guide on the wiki here:
> https://developer.mozilla.org/en-US/docs/SpiderMonkey/GC_Rooting_Guide
>
> It's not very thorough, but it's something.
>
>
> On Tue, Apr 23, 2013 at 3:14 PM, Tom Schuster  wrote:
>
> > You wont have to bother with HandleObject or RootedObject etc. These
> > are internal to js/ and we take care of these cases.  Just use
> > JS::Handle or JS::Rooted. Thanks for pointing
> > people to this file.
> >
> > PS: The correct handle for Jon Coppeard is of course jonco!
> >
> > On Tue, Apr 23, 2013 at 3:58 PM, smaug  wrote:
> > > On 04/23/2013 04:07 PM, Tom Schuster wrote:
> > >>
> > >> At the moment it's really just Jono working full time on this, and
> > >> terrence and other people reviewing. This stuff is actually quite easy
> > >> and you can expect really fast review times from our side.
> > >>
> > >> In some parts of the code rooting could literally just mean to replace
> > >> JS::Value to JS::RootedValue and fixing the references to the
> > >> variable. It's really easy once you did it a few times.
> > >>
> > >> Here is a list of all files that still have rooting problems:
> > >> http://pastebin.mozilla.org/2340241
> > >> And the details for each and every problem:
> > >> https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt
> > >>
> > >> We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
> > >> track the rooting progress, make sure to file every bug as blocking
> > >> this one. I would appreciate if every module peer or owner would just
> > >> take a look at his/her module and tried to fix some of the issue. If
> > >> you are unsure or need help, ask us on #jsapi.
> > >>
> > >> Thanks,
> > >> Tom
> > >>
> > >
> > > I found
> > http://mxr.mozilla.org/mozilla-central/source/js/public/RootingAPI.h
> > > quite useful, but there are few things to
> > > clarify. For example some code uses HandleObject and some code
> > > Handle and having two ways to do the same thing
> > > just makes the code harder to read.
> > >
> > >
> > >> On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan <
> > rob...@ocallahan.org>
> > >> wrote:
> > >>>
> > >>> On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole 
> > wrote:
> > >>>
> >  Our exact rooting work is at a spot right now where we could easily
> > use
> >  more hands to accelerate the process. The main problem is that the
> > work
> >  is easy and tedious: a hard sell for pretty much any hacker at
> > mozilla.
> > 
> > >>>
> > >>> It sounds worthwhile to encourage developers who aren't currently
> > working
> > >>> on critical-path projects to pile onto the exact rooting project.
> > Getting
> > >>> GGC over the line reaps some pretty large benefits and it's an
> > >>> all-or-nothing project, unlike say pursuing the long tail of WebIDL
> > >>> conversions.
> > >>>
> > >>> If that sounds right, put out a call for volunteers (by which I
> include
> > >>> paid staff) to help push on exact rooting, with detailed
> instructions.
> > I
> > >>> know some people who could probably help.
> > >>>
> > >>> Rob
> > >>> --
> > >>> “If you love those who love you, what credit is that to you? Even
> > >>> sinners love those who love them. And if you do good to those who are
> > >>> good to you, what credit is that to you? Even sinners do that."
> > >>> ___
> > >>> dev-platform mailing list
> > >>> dev-platform@lists.mozilla.org
> > >>> https://lists.mozilla.org/listinfo/dev-platform
> > >
> > >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>



-- 
-kg
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Ehsan Akhgari

On 2013-04-23 9:07 AM, Tom Schuster wrote:

At the moment it's really just Jono working full time on this, and
terrence and other people reviewing. This stuff is actually quite easy
and you can expect really fast review times from our side.

In some parts of the code rooting could literally just mean to replace
JS::Value to JS::RootedValue and fixing the references to the
variable. It's really easy once you did it a few times.


Does this also apply to code holding on to JSObject*s?  I've been adding 
a whole bunch of those cases lately.



Here is a list of all files that still have rooting problems:
http://pastebin.mozilla.org/2340241
And the details for each and every problem:
https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt


Can the stuff in objdir/dom/bindings be fixed wholesale by changing the 
WebIDL codegen?


Cheers,
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Ehsan Akhgari
Hmm, another question.  Your list includes a bunch of stuff under 
tools/profiler, and I took a quick look and picked JSObjectBuilder.cpp. 
Changing the JS::Value's there to JS::RootedValue's causes compiler 
errors about conversion from jsvals when using macros such as 
INT_TO_JSVAL, STRING_TO_JSVAL, etc.  It's not clear to me what the 
correct fix there is.


Cheers,
Ehsan


On 2013-04-23 9:07 AM, Tom Schuster wrote:

At the moment it's really just Jono working full time on this, and
terrence and other people reviewing. This stuff is actually quite easy
and you can expect really fast review times from our side.

In some parts of the code rooting could literally just mean to replace
JS::Value to JS::RootedValue and fixing the references to the
variable. It's really easy once you did it a few times.

Here is a list of all files that still have rooting problems:
http://pastebin.mozilla.org/2340241
And the details for each and every problem:
https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt

We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
track the rooting progress, make sure to file every bug as blocking
this one. I would appreciate if every module peer or owner would just
take a look at his/her module and tried to fix some of the issue. If
you are unsure or need help, ask us on #jsapi.

Thanks,
Tom

On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan  wrote:

On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole  wrote:


Our exact rooting work is at a spot right now where we could easily use
more hands to accelerate the process. The main problem is that the work
is easy and tedious: a hard sell for pretty much any hacker at mozilla.



It sounds worthwhile to encourage developers who aren't currently working
on critical-path projects to pile onto the exact rooting project. Getting
GGC over the line reaps some pretty large benefits and it's an
all-or-nothing project, unlike say pursuing the long tail of WebIDL
conversions.

If that sounds right, put out a call for volunteers (by which I include
paid staff) to help push on exact rooting, with detailed instructions. I
know some people who could probably help.

Rob
--
“If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to you,
what credit is that to you? Even sinners do that."
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Gervase Markham
On 23/04/13 10:17, Ed Morley wrote:
> Given that local machine time scales linearly with the rate at which we
> hire devs (unlike our automation capacity), I think we need to work out
> why (some) people aren't doing things like compiling locally and running
> their team's directory of tests before pushing. I would hazard a guess
> that if we improved incremental build times & created mach commands to
> simplify the edit-compile-test loop, then we could cut out many of these
> obvious inbound bustage cases.

That would be the carrot. The stick would be finding some way of finding
out whether a changeset was pushed to try before it was pushed to m-i.
If a developer failed to push to try and then broke m-i, we could (in a
pre-commit hook) refuse to let them commit to m-i in future unless
they'd already pushed to try. For a week, on first offence, a month on
subsequent offences :-)

This, of course, is predicated on being able to detect in real time
whether a changeset being pushed to m-i has previously been pushed to try.

Gerv

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Chris AtLee

On 16:34, Tue, 23 Apr, Gervase Markham wrote:

On 23/04/13 10:17, Ed Morley wrote:

Given that local machine time scales linearly with the rate at which we
hire devs (unlike our automation capacity), I think we need to work out
why (some) people aren't doing things like compiling locally and running
their team's directory of tests before pushing. I would hazard a guess
that if we improved incremental build times & created mach commands to
simplify the edit-compile-test loop, then we could cut out many of these
obvious inbound bustage cases.


That would be the carrot. The stick would be finding some way of finding
out whether a changeset was pushed to try before it was pushed to m-i.
If a developer failed to push to try and then broke m-i, we could (in a
pre-commit hook) refuse to let them commit to m-i in future unless
they'd already pushed to try. For a week, on first offence, a month on
subsequent offences :-)

This, of course, is predicated on being able to detect in real time
whether a changeset being pushed to m-i has previously been pushed to try.


We've considered enforcing this using some cryptographic token. After 
you push to try and get good results, the system gives you a token you 
need to include in your commit to m-i.


Alternatively, you could indicate the try revision you pushed, and we 
could look up the results and refuse the commit based on your 
build/test results on try, or if your commit to m-i is "too different" 
from the push to try.


Cheers,
Chris


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Kyle Huey
On Tue, Apr 23, 2013 at 8:18 AM, Ehsan Akhgari wrote:

> Can the stuff in objdir/dom/bindings be fixed whole-sale by changing the
> WebIDL codegen?
>

Yes.  bz is looking into it.

- Kyle
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Justin Lebar
>> The ratio of things landed on inbound which turn out to be busted is really
>> worrying

On the one hand, we're told not to push to try too much, because that
wastes resources.

On the other hand, we're told not to burn m-i, because that wastes resources.

Should we be surprised when people don't get this right 100% of the time?

Instead of considering how to get people to strike a better
balance between wasting infra resources and burning inbound, I think
we need to consider what we can do to increase the acceptable margin
of error.

Note that we don't have enough capacity to turn around current try
requests within a reasonable amount of time.  Pushing to inbound is
the only way to get quick feedback on whether your patch works, these
days.  As I've said before, I'd love to see releng report on try
turnaround times, so we can hold someone accountable.  The data is
there; we just need to process it.

If we can't increase the amount of infra capacity we have, perhaps we
could use it more effectively.  We've discussed lots of ways we might
accomplish this on this newsgroup, and I've seen very few of them
tried.  Perhaps an important part of the problem is that we're not
able to innovate quickly enough on this front.

People are always going to make mistakes, and the purpose of processes
is to minimize the harm caused by those mistakes, not to embarrass or
cajole people into behaving better in the future.  As Jono would say,
it's not the user's fault.

On Tue, Apr 23, 2013 at 12:50 AM, Justin Lebar  wrote:
>> The ratio of things landed on inbound which turn out to be busted is really
>> worrying
>
>> * 116 of the 916 changesets (12.66%) were backed out
>
> If 13% is "really worrying", what do you think our goal should be?
>
> On Tue, Apr 23, 2013 at 12:39 AM, Ehsan Akhgari  
> wrote:
>> This was a fantastic read, it almost made me shed happy tears!  Thanks a lot
>> kats for doing this.
>>
>> The ratio of things landed on inbound which turn out to be busted is really
>> worrying, and it might be an indicator that (some?) developers have poor
>> judgement about how safe their patches are.  How hard would it be to gather a
>> list of the total number of patches being backed out plus the amount of time
>> that we spent building/testing those, hopefully in a style similar to
>> ?  If we had
>> such a list, perhaps we could reach out to the high offenders there and let
>> them know about the problem, and see if that changes these stats a couple of
>> weeks from now?
>>
>> Thanks!
>> Ehsan
>>
>>
>> On 2013-04-22 3:54 PM, Kartikaya Gupta wrote:
>>>
>>> TL;DR:
>>> * Inbound is closed 25% of the time
>>> * Turning off coalescing could increase resource usage by up to 60% (but
>>> probably less than this).
>>> * We spend 24% of our machine resources on changes that are later backed
>>> out, or changes that are doing the backout
>>> * The vast majority of changesets that are backed out from inbound are
>>> detectable on a try push
>>>
>>> Because of the large effect from coalescing, any changes to the current
>>> process must not require running the full set of tests on every push.
>>> (In my proposal this is easily accomplished with trychooser syntax, but
>>> other proposals include rotating through T-runs on pushes, etc.).
>>>
>>> --- Long version below ---
>>>
>>> Following up from the infra load meeting we had last week, I spent some
>>> time this weekend crunching various pieces of data on mozilla-inbound to
>>> get a sense of how much coalescing actually helps us, how much backouts
>>> hurt us, and generally to get some data on the impact of my previous
>>> proposal for using a multi-headed tree. I didn't get all the data that I
>>> wanted but as I probably won't get back to this for a bit, I thought I'd
>>> share what I found so far and see if anybody has other specific pieces
>>> of data they would like to see gathered.
>>>
>>> -- Inbound uptime --
>>>
>>> I looked at a ~9 day period from April 7th to April 16th. During this
>>> time:
>>> * inbound was closed for 24.9587% of the total time
>>> * inbound was closed for 15.3068% of the total time due to "bustage".
>>> * inbound was closed for 11.2059% of the total time due to "infra".
>>>
>>> Notes:
>>> 1) "bustage" and "infra" were determined by grep -i on the data from
>>> treestatus.mozilla.org.
>>> 2) There is some overlap so bustage + infra != total.
>>> 3) I also weighted the downtime using checkins-per-hour histogram from
>>> joduinn's blog at [1], but this didn't have a significant impact: the
>>> total, bustage, and infra downtime percentages moved to 25.5392%,
>>> 15.7285%, and 11.3748% respectively.
>>>
>>> -- Backout changes --
>>>
>>> Next I did an analysis of the changes that landed on inbound during that
>>> time period. The exact pushlog that I looked at (corresponding to the
>>> same April 7 - April 16 time period) is at [2]. I removed all of the
>>> merge changesets from this range, sinc

Re: Accelerating exact rooting work

2013-04-23 Thread smaug

On 04/23/2013 06:23 PM, Ehsan Akhgari wrote:

Hmm, another question.  Your list includes a bunch of stuff under 
tools/profiler, and I took a quick look and picked JSObjectBuilder.cpp.  
Changing
the JS::Value's there to JS::RootedValue's cause compiler errors about 
conversion from jsvals when using macros such as INT_TO_JSVAL, STRING_TO_JSVAL,
etc.  It's not clear to me what the correct fix there is.


Yeah, we need better documentation about when to use .address().
Neither http://mxr.mozilla.org/mozilla-central/source/js/public/RootingAPI.h
nor https://developer.mozilla.org/en-US/docs/SpiderMonkey/GC_Rooting_Guide
really has examples of when it is needed.
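
For example (a hypothetical snippet, assuming the JS_GetProperty signature
that still takes a raw jsval out-param):

    JS::RootedValue v(cx);

    // .address() hands out a raw JS::Value* for APIs that still take jsval
    // out-params; the value stays rooted for as long as |v| is in scope.
    if (!JS_GetProperty(cx, obj, "foo", v.address()))
        return false;

    // Plain assignment needs no .address(); the old macros just produce a
    // jsval, which Rooted accepts directly.
    v = INT_TO_JSVAL(42);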



Cheers,
Ehsan


On 2013-04-23 9:07 AM, Tom Schuster wrote:

At the moment it's really just Jono working full time on this, and
terrence and other people reviewing. This stuff is actually quite easy
and you can expect really fast review times from our side.

In some parts of the code rooting could literally just mean to replace
JS::Value to JS::RootedValue and fixing the references to the
variable. It's really easy once you did it a few times.

Here is a list of all files that still have rooting problems:
http://pastebin.mozilla.org/2340241
And the details for each and every problem:
https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt

We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
track the rooting progress, make sure to file every bug as blocking
this one. I would appreciate if every module peer or owner would just
take a look at his/her module and tried to fix some of the issue. If
you are unsure or need help, ask us on #jsapi.

Thanks,
Tom

On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan  wrote:

On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole  wrote:


Our exact rooting work is at a spot right now where we could easily use
more hands to accelerate the process. The main problem is that the work
is easy and tedious: a hard sell for pretty much any hacker at mozilla.



It sounds worthwhile to encourage developers who aren't currently working
on critical-path projects to pile onto the exact rooting project. Getting
GGC over the line reaps some pretty large benefits and it's an
all-or-nothing project, unlike say pursuing the long tail of WebIDL
conversions.

If that sounds right, put out a call for volunteers (by which I include
paid staff) to help push on exact rooting, with detailed instructions. I
know some people who could probably help.

Rob
--
“If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to you,
what credit is that to you? Even sinners do that."
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Boris Zbarsky

On 4/23/13 11:18 AM, Ehsan Akhgari wrote:

Does this also apply to code holding on to JSObject*s?


If you're holding them on the heap... sigh.  Just trace them and use 
fromMarkedLocation as needed; there is nothing better at the moment. 
You can't use JS::Rooted on the heap.
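
A rough sketch of that pattern (class and member names made up; the class's
trace hook, which must mark mObj, is assumed and not shown):

    class Widget
    {
        JSObject *mObj;   // heap-held, so it must be traced by this class

    public:
        // Expose the member as a handle without re-rooting it. This is only
        // valid because the trace hook keeps mObj marked (and updated).
        JS::Handle<JSObject*> Obj() const
        {
            return JS::Handle<JSObject*>::fromMarkedLocation(&mObj);
        }
    };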



Can the stuff in objdir/dom/bindings be fixed whole-sale by changing the
WebIDL codegen?


Yes, and it's being done.  See 
https://bugzilla.mozilla.org/show_bug.cgi?id=864727 and 
https://bugzilla.mozilla.org/show_bug.cgi?id=861022 which cover most of it.


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Kartikaya Gupta

On 13-04-23 11:41 , Chris AtLee wrote:

We've considered enforcing this using some cryptographic token. After
you push to try and get good results, the system gives you a token you
need to include in your commit to m-i.


... or you could just merge the cset directly from try to m-i or m-c. 
(i.e. my original proposal).


Cheers,
kats

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Kartikaya Gupta

On 13-04-23 00:39 , Ehsan Akhgari wrote:

How hard would it be to
gather a list of the total number of patches being backed out plus the
amount of time that we spent building/testing those, hopefully in a
style similar to
?


Not trivial, but not too difficult either. Do we have any evidence to 
show that the try highscores page has made an impact in reducing 
unnecessary try usage? Also I agree with Justin that if we do this it 
will be very much a case of sending mixed messages. The try highscores 
list says to people "don't land on try" and the backout highscores list 
would say to people "always test on try".


Cheers,
kats

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Kartikaya Gupta

On 13-04-23 03:57 , Axel Hecht wrote:

Do we know how many of these have been pushed to try, and
passed/compiled what they'd fail later?


I haven't looked at this. It would be useful to know but short of 
pulling patches and using some similarity heuristic or manually 
examining patches I can't think of a way to get this data.



I expect some cost of regressions to come from merging/rebasing, and
it'd be interesting to know how much of that you can see in the data
window you looked at.


This is something I did try to determine, by looking at the number of 
conflicts between patches in my data window. My algorithm was basically 
this:

1) Sync a tree to the last cset in the range
2) Iterate through each push backwards, skipping merges, backouts, and 
changes that are later backed out

3) For each of these pushes, try to qpush a backout of it.
4) If the attempted qpush fails, that means there is another change that 
landed since that one that there is a merge conflict with.


The problem here is that the farther back you go the more likely it is 
that you will run into conflicting changes, because an increasing 
portion of the data window is checked for conflicts when really you 
probably only want to test some small number of changes (~30?). Using 
this approach I got 129 conflicts, and as expected, the rate at which I 
encountered conflicts went up as I went farther back. I didn't get 
around to trying the sliding window approach which I believe will give a 
more representative (and much lower) count. My code for doing this is in 
the bottom half of [1] if you (or anybody else) wants to give that a shot.


kats

[1] 
https://github.com/staktrace/mozilla-tree-analyzer/blob/master/inbound-csets.sh 
- WARNING don't *run* anything in this repo because it may do 
destructive things. Ask me if you're not sure.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Gavin Sharp
On Tue, Apr 23, 2013 at 8:41 AM, Chris AtLee  wrote:
> We've considered enforcing this using some cryptographic token. After you
> push to try and get good results, the system gives you a token you need to
> include in your commit to m-i.

Sounds like the goal of this kind of solution would be to eliminate
the "developer made a bad judgement call" case, but it's not at all
clear to me that that problem is worse than the "developer overuses try
for trivial changes" or "developer needs to wait for try results
before pushing a trivial fix" problems.

It's also not at all clear to me that a 13% backout rate on inbound is
a problem, because there are a lot of factors at play. Those backouts
represent "wasted resources" (build machine time, sheriff time,
sometimes tree-closure time), but if the alternative is wasting
developer time (needing to wait for try results unnecessarily) and
tryserver build machine time, the tradeoff becomes less clear.
Obviously different perspectives here also impact your view of those
tradeoffs.

Gavin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Automatic tree clobbering is coming

2013-04-23 Thread Joshua Cranmer 🐧

On 4/17/2013 6:12 PM, Gregory Szorc wrote:
I agree that we should consider a compromise regarding the UI/UX of 
auto clobber. I have filed bug 863091.


I would like to say that I view the object directory as a cache of the 
output of the build system. Since it's a cache, cache rules apply and 
data may disappear at any time. This analogy works well for my 
developer workflow - I never put anything not derived from the build 
system in my object directory. But, I don't know what other people are 
doing. Could the anti-auto-clobberers please explain where this 
analogy falls apart for your workflow?


My ctags file is kept in the object directory, as was doxygen output. 
The thing that I most hate to lose is the rules for ctags generation, 
kept in the object directory as well because it's much less work than 
keeping it in the source directory and constantly playing patch merging 
and reorganization games. Treating the objdir as a cache implies as a 
secondary concern that there is "little" cost to blowing it away--this 
is simply not true, as you can see from how many people reacted to 
CLOBBER by scripting stuff to get that stuff ignored. The problem I have 
with CLOBBER in particular is that it's a system which is optimized for 
the build experience of the buildbots, which build several changesets a 
day; my workflow has me pull from *-central about once a week or so, 
which means that some configure change ends up having the same effect as 
a CLOBBER in practice with less pain.


It is not uncommon for people to have multiple objdirs for a single 
srcdir. In such situations, it makes sense to keep things that apply to 
a single objdir in that objdir.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Gavin Sharp
On Tue, Apr 23, 2013 at 9:28 AM, Kartikaya Gupta  wrote:
> Not trivial, but not too difficult either. Do we have any evidence to show
> that the try highscores page has made an impact in reducing unnecessary try
> usage?

It's been used by people like Ed Morley to reach out to individual
developers and notify them of their impact. I'm sure that's had a
positive effect, though it seems rather difficult to measure.

Gavin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accelerating exact rooting work

2013-04-23 Thread Andrew McCreight
- Original Message -
> Does this also apply to code holding on to JSObject*s?  I've been
> adding a whole bunch of those cases lately.

If you are telling the cycle collector about those JSObjects, then it should 
get fixed whenever CCed stuff is fixed.

Andrew
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread David Keeler
On 04/23/13 02:17, Ed Morley wrote:
> On 23 April 2013 09:58:41, Neil wrote:
>> Hopefully a push never burns all platforms because the developer tried
>> it locally first, but stranger things have happened!
> 
> This actually happens quite often. On occasion it's due to warnings as
> errors (switched off by default on local machines due to toolchain
> differences)

I would like to know a bit more about this. Is our list of supported
toolchains so diverse that building with one version versus another will
report so many false positives as to be useless?
I enabled warnings-as-errors on my local machine after pushing something
to inbound that failed to build because of this, and I've had no
problems since then. Enabling this by default seems like an easy way to
remove instances of this problem.

> but more often than not the developer didn't even try
> compiling locally :-/

So there are instances where developers didn't use the try servers and
also didn't compile locally at all before pushing to inbound? I don't
think we as a community should be okay with that kind of irresponsible
behavior.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Ed Morley

On 23/04/2013 17:28, Kartikaya Gupta wrote:

On 13-04-23 00:39 , Ehsan Akhgari wrote:

How hard would it be to
gather a list of the total number of patches being backed out plus the
amount of time that we spent building/testing those, hopefully in a
style similar to
?


Not trivial, but not too difficult either. Do we have any evidence to
show that the try highscores page has made an impact in reducing
unnecessary try usage? Also I agree with Justin that if we do this it
will be very much a case of sending mixed messages. The try highscores
list says to people "don't land on try" and the backout highscores list
would say to people "always test on try".


It's worth noting that when I've contacted developers in the top 10 of 
the tryserver usage leaderboard my message is not "do not use try", but 
instead suggestions like:
* "please do not use -p all -u all when you only made an android 
specific change"
* "you already did a |-p all -u all| run - on which mochitest-1 failed 
on all platforms, so please don't test every testsuite on every platform 
for the half dozen iterations you ran on Try thereafter" (at much as 
this sounds like an extreme example, there have been cases like this)

...
...
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Some data on mozilla-inbound

2013-04-23 Thread Boris Zbarsky

On 4/23/13 1:17 PM, David Keeler wrote:

I would like to know a bit more about this. Is our list of supported
toolchains so diverse that building with one version versus another will
report so many false positives as to be useless?


Yes.  For example a typical clang+ccache build of the tree with fatal 
warnings will fail unless you jump through deoptimize-ccache hoops, 
because things like "if (FOO(x))" will warn if FOO(x) expands to "(x == 5)".
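
A minimal illustration (FOO and f are made up; as I understand it, clang
suppresses the warning when the parentheses come from a macro expansion, but
ccache feeds it already-preprocessed source, so the macro context is lost):

    #define FOO(x) (x == 5)

    bool f(int y)
    {
        // Preprocessed, this is "if ((y == 5))": clang's
        // -Wparentheses-equality fires on the extra parentheses, which is
        // fatal with warnings-as-errors.
        if (FOO(y))
            return true;
        return false;
    }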


For another example msvc until recently didn't actually have warnings as 
errors enabled at all in many directories, so it didn't matter what you 
did with your local setup in msvc.



I enabled warnings-as-errors on my local machine after pushing something
to inbound that failed to build because of this, and I've had no
problems since then.


It _really_ depends on the exact compiler and toolchain you're using.


So there are instances where developers didn't use the try servers and
also didn't compile locally at all before pushing to inbound? I don't
think we as a community should be okay with that kind of irresponsible
behavior.


Agreed.

-Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Automatic tree clobbering is coming

2013-04-23 Thread Chris Lord

On 18/04/2013 11:02, Ed Morley wrote:

On 17/04/2013 20:51, Ms2ger wrote:

On 04/17/2013 09:36 PM, Gregory Szorc wrote:

It /could/, sure. However, I consider auto clobbering a core build
system feature (sheriffs were very vocal about wanting it). As such, it
needs to be part of client.mk. (Please correct me if I am wrong.)


This should say …about wanting it *on buildbot*. I have not heard
sheriffs being very vocal about anything to do with local builds. (Which
means that that particular request could have been solved with an opt-in
rather than an opt-out flag.)


Indeed, sheriffs were purely interested in the automation benefits.

If people are really against this, we should just:
* switch to opt-in rather than opt-out.
* make our automation opt-in.
* mention the mozconfig variable that needs to be set as part of the 
"needs clobber" message, so those in favour of auto-clobbering would 
only have to 'suffer' one interrupted local build cycle before setting 
it.


Ed


Having just read through the replies, it seems to be generally agreed 
that this would be better as an opt-in feature. Is anyone tasked with 
making it so? Is there a bug I can follow for it?


--Chris


Re: Some data on mozilla-inbound

2013-04-23 Thread Axel Hecht

On 4/23/13 6:35 PM, Kartikaya Gupta wrote:

On 13-04-23 03:57, Axel Hecht wrote:

Do we know how many of these have been pushed to try, and
passed/compiled what they'd fail later?


I haven't looked at this. It would be useful to know, but short of 
pulling the patches and using some similarity heuristic, or examining 
them manually, I can't think of a way to get this data.


I expect some cost of regressions to come from merging/rebasing, and
it'd be interesting to know how much of that you can see in the data
window you looked at.


This is something I did try to determine, by looking at the number of
conflicts between patches in my data window. My algorithm was basically
this:
1) Sync a tree to the last cset in the range.
2) Iterate through each push backwards, skipping merges, backouts, and
changes that are later backed out.
3) For each of these pushes, try to qpush a backout of it.
4) If the attempted qpush fails, then some change that landed after it
touches the same lines, i.e. there is a merge conflict.

The problem here is that the farther back you go the more likely it is
that you will run into conflicting changes, because an increasing
portion of the data window is checked for conflicts when really you
probably only want to test some small number of changes (~30?). Using
this approach I got 129 conflicts, and as expected, the rate at which I
encountered conflicts went up as I went farther back. I didn't get
around to trying the sliding window approach which I believe will give a
more representative (and much lower) count. My code for doing this is in
the bottom half of [1] if you (or anybody else) wants to give that a shot.
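
For concreteness, here is a rough sketch of the per-changeset check (my own 
reconstruction, not the actual script from [1]; it approximates the qpush 
step with a reverse diff fed to |hg import --no-commit|):

import subprocess

def backout_applies_cleanly(rev):
    """Return True if backing out `rev` would apply on the current tip
    without conflicts, i.e. nothing that landed later touches the same
    lines."""
    reverse_diff = subprocess.run(
        ["hg", "diff", "--reverse", "-c", rev],
        capture_output=True, text=True, check=True).stdout
    result = subprocess.run(
        ["hg", "import", "--no-commit", "-"],
        input=reverse_diff, text=True, capture_output=True)
    # Clean up whatever did apply before testing the next changeset
    # (leftover .rej files from failed imports are ignored here).
    subprocess.run(["hg", "revert", "--all", "--no-backup"],
                   capture_output=True)
    return result.returncode == 0

def count_conflicts(revs_newest_first):
    # revs_newest_first: the non-merge, non-backout changesets that were
    # not themselves backed out, newest first (step 2 above).
    return sum(1 for rev in revs_newest_first
               if not backout_applies_cleanly(rev))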


I expect that only a part of our programmatic merge conflicts are 
actually version-control merge conflicts. There are a lot of cases like 
a supposedly internal property in toolkit picking up a new use in 
browser, a define changing or disappearing, etc.


All of those invalidate, to some extent, the testing that has been done 
on the patch, yet they don't involve modifications to the same lines of 
code, which is all that version control catches.


Axel


Re: Rethinking the amount of system JS we use in Gecko on B2G

2013-04-23 Thread Justin Lebar
To close the loop on this thread, the consensus here seems to be that

1. We should continue to make JS slimmer.  This is a high priority for
B2G even setting aside the memory consumption of B2G's chrome JS,
since of course B2G runs plenty of content JS.

The memory profile of B2G is different from desktop -- small overheads
matter more on B2G, and the consequences of using too much memory are
more drastic.  We should keep this in mind when working on JS in the
future.

2. We should fix bug 829482 (run more shrinking GCs in workers).  This
will get us an easy 4-5mb in the main B2G process.  This is a
MemShrink:P1 and has been open for a while; I'd love some assistance
with it.

3. We should rewrite these main-process workers in C++, if and when we
have manpower.  Even with bug 829482 fixed, the workers will still be
some of the largest individual consumers of JS memory in B2G, and all
of the JS folks I spoke with said that they thought they'd be unable
to reduce the memory overhead of a worker significantly in the medium
term.

I filed bugs:

https://bugzilla.mozilla.org/show_bug.cgi?id=864927
https://bugzilla.mozilla.org/show_bug.cgi?id=864931
https://bugzilla.mozilla.org/show_bug.cgi?id=864932

4. It's worthwhile to at least look carefully at the biggest B2G
chrome compartments and see whether we can reduce their size one way
or another.  I filed a metabug:
https://bugzilla.mozilla.org/show_bug.cgi?id=864943

5. When writing new APIs, we should at least consider writing them in
C++.  JS should not be the default.  Where things are super-easy in JS
and super-annoying in C++, we should consider investing in our C++
infrastructure to make it more pleasant.

Since not everyone reads this newsgroup, I'd appreciate assistance
disseminating (5) in bugs.  At the very least, we should ask patch
authors to consider the alternatives before creating new JS modules
that are enabled on B2G.

I'm also going to post this summary to dev-b2g with a pointer back to
this newsgroup.

Thanks for your thoughts, everyone.

-Justin

On Mon, Apr 22, 2013 at 9:46 PM, Nicholas Nethercote
 wrote:
> On Mon, Apr 22, 2013 at 6:35 PM, Justin Dolske  wrote:
>>
>> That said, I think it's critically important that we're working to make JS an
>> acceptable -- nay, _excellent_ -- language/runtime for application
>> development for the long run. We can't tell a credible story about why
>> people should write HTML5 apps, if we're tearing out swaths of JS in our own
>> products. Sometimes dogfooding is unpleasant or hard, but that's the whole
>> point.
>
> There's a big difference between apps on Firefox OS, which are likely
> to have relatively short lifetimes and can be killed if they take up
> too much memory, and the main process.  Bad memory behaviour in the
> main process is a much bigger deal, and it's something that's
> happening right now with some frequency.
>
> Nick


Re: Some data on mozilla-inbound

2013-04-23 Thread Ehsan Akhgari

On 2013-04-23 12:50 AM, Justin Lebar wrote:

The ratio of things landed on inbound which turn out to be busted is
really worrying



* 116 of the 916 changesets (12.66%) were backed out


If 13% is "really worrying", what do you think our goal should be?


Less than that?  It's really hard to come up with hard numbers as goals 
here.


Ehsan



Re: Some data on mozilla-inbound

2013-04-23 Thread Nicholas Nethercote
On Mon, Apr 22, 2013 at 12:54 PM, Kartikaya Gupta  wrote:
> TL;DR:
> * Inbound is closed 25% of the time
> * Turning off coalescing could increase resource usage by up to 60% (but
> probably less than this).
> * We spend 24% of our machine resources on changes that are later backed
> out, or changes that are doing the backout
> * The vast majority of changesets that are backed out from inbound are
> detectable on a try push

Thanks for collecting real data!

A collage of thoughts follow.

- The 'inbound was closed for 15.3068% of the total time due to
"bustage"' number is an underestimate, in one sense.  When inbound is
closed at 10am California time, it's a lot more inconvenient to
developers than when it's busted at midnight California time.  More
than 3x, according to
http://oduinn.com/images/2013/blog_2013_02_pushes_per_hour.png.

- Having our main landing repo closed multiple times per day, for a
significant fraction of the time, feels clownshoes-ish to me.  For this
reason, my gut feeling is that we'll end up doing something like what
Kats is suggesting.  My gut feeling is also that it won't end up
changing the infrastructure load that much.

- Any landing system that makes life harder for sheriffs is a problem.
I'm not at all certain that Kats' proposal would do that, but that's
my main worry about it.

- A process whereby developers choose which tests to run on the
official landing branch (be it inbound, or something else) feels like
a bad idea.  It's far too easy to get wrong.

- Getting agreement on a significant process change is really
difficult.  Is it possible to set up a repo where a few people can
volunteer to try Kats' approach for a couple of weeks?  That would
provide invaluable experience and data.

Nick


Re: Some data on mozilla-inbound

2013-04-23 Thread Robert O'Callahan
On Wed, Apr 24, 2013 at 11:21 AM, Nicholas Nethercote <
n.netherc...@gmail.com> wrote:

> - The 'inbound was closed for 15.3068% of the total time due to
> "bustage"' number is an underestimate, in one sense.  When inbound is
> closed at 10am California time, it's a lot more inconvenient to
> developers than when it's busted at midnight California time.  More
> than 3x, according to
> http://oduinn.com/images/2013/blog_2013_02_pushes_per_hour.png.
>

Although I've been known to bust inbound, I also tend to check in around
2-3am PDT.

I think it's important to remember that the optimal bustage rate for
inbound is some value greater than zero, and that it varies depending on
the time of day. If inbound is never busted, then we're wasting try
resources testing patches that have a 0.99 probability of landing
safely. OTOH, whenever the bustage rate is high enough that it's
difficult to get things landed, or that the sheriffs' ability to detect
regressions is impaired, it's too high. That currently seems to be the
case, so it seems like a good idea to use a highscore list or something
like it to exert pressure to use try more until the situation is
resolved.
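
To make that tradeoff concrete, here's a toy model (my own illustration; the
costs are made up rather than measured, and it ignores the human cost of
closures, which is the part that currently hurts):

def expected_machine_cost(p_bust, c_land=1.0, c_try=1.0, c_backout=0.5):
    """Expected machine time per patch under two policies (arbitrary units)."""
    # Land directly: pay for the landing, and with probability p_bust also
    # pay for the backout plus the eventual re-landing.
    direct = c_land + p_bust * (c_backout + c_land)
    # Always do a full try run first, then land.
    with_try = c_try + c_land
    return direct, with_try

print(expected_machine_cost(p_bust=0.13))
# (1.195, 2.0) -- with these made-up costs, landing directly wins on
# machine time even at the current ~13% backout rate; the argument for
# requiring try is the human cost of bustage, not the machine cost.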

Rob
-- 
"If you love those who love you, what credit is that to you? Even
sinners love those who love them. And if you do good to those who are
good to you, what credit is that to you? Even sinners do that."


Re: Some data on mozilla-inbound

2013-04-23 Thread Kartikaya Gupta

On 13-04-23 19:21, Nicholas Nethercote wrote:

- The 'inbound was closed for 15.3068% of the total time due to
"bustage"' number is an underestimate, in one sense.  When inbound is
closed at 10am California time, it's a lot more inconvenient to
developers than when it's busted at midnight California time.  More
than 3x, according to
http://oduinn.com/images/2013/blog_2013_02_pushes_per_hour.png.


See my "note 3" under the "Inbound uptime" section. I used exactly that 
graph to weight the inbound downtime and there wasn't a significant 
difference.
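
(Concretely, what I mean by weighting amounts to a push-volume-weighted 
average of the per-hour closure fractions; a minimal sketch, with assumed 
inputs rather than the real data:)

def weighted_downtime(closed_frac, push_share):
    """closed_frac[h]: fraction of hour-of-day h that the tree was closed.
    push_share[h]: fraction of all pushes that land in hour h (sums to 1)."""
    return sum(c * w for c, w in zip(closed_frac, push_share))

The unweighted number is just the plain mean of closed_frac; weighting by 
push volume only moved the total from ~25.0% to ~25.5%.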



- Getting agreement on a significant process change is really
difficult.  Is it possible to set up a repo where a few people can
volunteer to try Kats' approach for a couple of weeks?  That would
provide invaluable experience and data.


Yeah, there are plans afoot to try this, pending sheriff approval.

Cheers,
kats


Re: Rethinking the amount of system JS we use in Gecko on B2G

2013-04-23 Thread azakai
On Monday, April 22, 2013 7:28:30 AM UTC-7, Justin Lebar wrote:
> 
> The issue isn't compilation of code; that doesn't stick out in the
> memory reports.  The issue seems to be mostly the overhead of the JS
> engine for each file (even if the file stores very little data, as
> BrowserElementParent does) and also the unavoidable inefficiency
> associated with assigning each worker its own runtime (in particular,
> the fact that this greatly increases fragmentation).
> 

Probably this doesn't make sense, but could some of the code running in JS 
workers be combined to live inside fewer actual workers? The main downside 
would obviously be less concurrency, but it sounds like it could avoid some 
memory overhead?

- Alon